If you haven’t heard of or used Kubernetes yet, I highly recommend taking a look (see the link below). I won’t take too much time here today to talk about the Kubernetes project because there is just too much to cover. Instead, I will be writing a series of posts about how to work with Kubernetes and sharing some tips and tricks that I have discovered in my experiences so far with the tool. Since the project is still very young and moving incredibly quickly, the best places to get information are the IRC channel (#google-containers), the mailing list, and the GitHub project. Please go look at the GitHub project if you are new to Kubernetes or are interested in learning more about it, especially the docs and examples sections.
As I said, updates and progress have been extremely fast paced, so it isn’t uncommon for things in the Kubernetes project to seem obsolete before they have even been implemented. For example, the command line tool for interacting with a Kubernetes cluster has already changed faces a few times, which was confusing to me when I first started out. The kubecfg tool is on the way out and the project maintainers are working on removing old references to it. On the flip side, the kubectl command is maturing quite nicely and will be around for a while, along with the subcommands that I will be describing.
Now that I have all the basic background stuff out of the way: the version of kubectl I am using for this demonstration is v0.9.1. If you just discovered Kubernetes, or have been using kubecfg (as explained above), you will want to get more familiar with kubectl because it is the preferred tool going forward, at least at this point.
There are a few handy subcommands that come baked into the kubectl command. The first is the resize subcommand, which allows you to scale the number of running containers being managed by Kubernetes up or down on the fly. Obviously this can be really powerful! The syntax is pretty straightforward, and below I have an example listed.
kubectl resize --current-replicas=6 --replicas=0 rc test-rc
The --current-replicas argument is optional, --replicas defines the *desired* number of replicas to have running, rc specifies that this is a replication controller, and finally, test-rc is the name of the replication controller to scale. After you resize your replication controller you can quickly check its status via the following command.
kubectl get pod
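Scaling back up works the same way. As a quick sketch (assuming the test-rc replication controller from above), bringing it back up to three replicas and then checking on the pods would look like this:

kubectl resize --replicas=3 rc test-rc
kubectl get pod

Note that I left --current-replicas off here since it is optional; when you do include it, the resize only goes through if the controller is currently at that size, which is a nice safety check when scripting.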
Another handy tool to have when working with Kubernetes is the ability to deploy new images as a rolling update.
kubectl rollingupdate test-rc -f test-rc-2.yml --update-period="10s"
The rollingupdate command takes a few arguments. The first is the name of the current replication controller that you would like to update. The second, passed with -f, is the YAML file describing the new replication controller that will replace it. The third, optional argument is --update-period, which allows a user to override the default amount of time to wait between spinning up a new container and spinning down an old one.
Below is an example of what your test-rc-2.yml file may look like.
kind: ReplicationController
apiVersion: v1beta1
id: test-rc-2
namespace: default
desiredState:
  replicas: 1
  replicaSelector:
    name: test-rc
    version: v2
  podTemplate:
    labels:
      name: test-rc
      version: v2
    desiredState:
      manifest:
        version: v1beta1
        id: test-rc
        containers:
          - name: test-image
            image: test/test:new-tag
            imagePullPolicy: PullAlways
            ports:
              - name: test-port
                containerPort: 8080
There are a few important things to notice. The first is that the id must be unique; it can’t be a name that is already in use by another replication controller. All of the label names should remain the same except for the version. The version label is used to signify that the new replication controller is running a new Docker image, and its value should be unique, which will help keep track of which image version is running.
Another thing to note: if your original replication controller did not contain a unique key (like version), then you will need to update the original replication controller first, adding a unique key, before attempting to run the rolling update.
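To sketch what that might look like (this is just an illustration patterned after the file above, with v1 as an example value, not anything from a real cluster): the original test-rc definition would get a version key added under both its replicaSelector and its pod template labels, with a value that differs from the one in the new controller.

replicaSelector:
  name: test-rc
  version: v1
podTemplate:
  labels:
    name: test-rc
    version: v1

With version: v1 on the old controller and version: v2 on the new one, the rolling update has a key it can use to tell the two sets of pods apart.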
If the selectors of the two replication controllers don’t contain a matching key with differing values, you will get an error similar to this.
test-rc.yml must specify a matching key with non-equal value in Selector for <selector name>
So that’s pretty much it for now. I will revisit this post again in the future as new flags and subcommands are added to kubectl for managing and updating replication controllers. I also plan on writing a few more posts about other aspects and areas of kubectl and running Kubernetes, so please check back soon!