Introducing Google Container Engine (GKE) node pools

May 26, 2016
Fabio Yeon

Google Cloud Platform

Editor's note: Updated May 27, 2016 with guidance on running nodes in multiple zones.

Google Container Engine (GKE) aims to be the best place to set up and manage your Kubernetes clusters. When creating a cluster, users have always been able to select options like the nodes’ machine type, disk size, etc., but those settings applied to all of the nodes, making the cluster homogeneous. Until now, it was very difficult to have a cluster with a heterogeneous machine configuration.

That’s where node pools come in: a new feature in Google Container Engine that’s now generally available. A node pool is simply a collection, or “pool,” of machines with the same configuration. Instead of a uniform cluster where all the nodes are the same, you can now have multiple node pools that better suit your needs. Imagine you created a cluster composed of n1-standard-2 machines and later realize that you need more CPU. You can now easily add a node pool of n1-standard-4 (or bigger) machines to your existing cluster.

All this happens through the new “node-pools” commands available via the gcloud command line tool. Let’s take a deeper look at using this new feature.

Creating your cluster


A node pool must belong to a cluster, and every cluster has a default node pool named “default-pool”. So, let’s create a new cluster (we assume you’ve set the project and zone defaults in gcloud):

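A minimal sketch of the command; we name the cluster “work”, which the rest of this post assumes (both flags below are optional):

    $ gcloud container clusters create work --machine-type=n1-standard-1 --num-nodes=3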

Like before, you can still specify node configuration options, such as “--machine-type” to specify a machine type or “--num-nodes” to set the initial number of nodes, as in the example above.

Creating a new node pool


Once the cluster has been created, you can see its node pools with the new “node-pools” top-level group of commands. (Note: you may need to update your gcloud installation via “gcloud components update” to get these new commands.)

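For example (the output is illustrative; exact columns, versions and defaults will vary):

    $ gcloud container node-pools list --cluster=work
    NAME          MACHINE_TYPE   DISK_SIZE_GB  NODE_VERSION
    default-pool  n1-standard-1  100           1.2.4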

Notice that you must now specify a new parameter, “--cluster”. Node pools belong to a cluster, so you must tell node-pools commands which cluster to operate on. You can also set it as the default in your config by calling:

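For example, to make “work” the default cluster for subsequent commands:

    $ gcloud config set container/cluster work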

Also, if you have an existing cluster on GKE, its nodes will have been automatically migrated into a node pool named “default-pool” that preserves the original cluster’s node configuration.

Let’s create a new node pool on our “work” cluster with a custom machine type of 2 CPUs and 12 GB of RAM:

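A sketch of the command; the pool name “high-mem-pool” is our choice, and “custom-2-12288” requests 2 vCPUs and 12288 MB (12 GB) of memory:

    $ gcloud container node-pools create high-mem-pool --cluster=work \
        --machine-type=custom-2-12288 --disk-size=200 --num-nodes=4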

This creates a new four-node pool of custom machine type VMs with 200 GB boot disks. Now, when you list your node pools, you get:

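Illustrative output:

    $ gcloud container node-pools list --cluster=work
    NAME           MACHINE_TYPE    DISK_SIZE_GB  NODE_VERSION
    default-pool   n1-standard-1   100           1.2.4
    high-mem-pool  custom-2-12288  200           1.2.4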

And if you list the nodes in kubectl:

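The node names below are illustrative; GKE generates them from the cluster and pool names plus random suffixes:

    $ kubectl get nodes
    NAME                                   STATUS    AGE
    gke-work-default-pool-8a2c35f1-akm9    Ready     12m
    gke-work-default-pool-8a2c35f1-b01f    Ready     12m
    gke-work-default-pool-8a2c35f1-c9d2    Ready     12m
    gke-work-high-mem-pool-4e7d21b0-05kq   Ready     1m
    gke-work-high-mem-pool-4e7d21b0-9xzw   Ready     1m
    gke-work-high-mem-pool-4e7d21b0-ja7r   Ready     1m
    gke-work-high-mem-pool-4e7d21b0-tt3c   Ready     1m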

With Kubernetes 1.2, the nodes in each node pool are also automatically assigned the node label “cloud.google.com/gke-nodepool=<node-pool-name>”. With node labels, it’s possible to have heterogeneous nodes within your cluster and to schedule your pods onto the specific nodes that meet their needs. Perhaps a set of pods needs a lot of memory: allocate a high-mem node pool and schedule them there. Or perhaps they need more local disk space: assign them to a node pool with plenty of local storage capacity. More configuration options for nodes are being considered.

More fun with node pools


There are also other, more advanced scenarios for node pools. Suppose you want to upgrade the nodes in your cluster to the latest Kubernetes release, but need finer-grained control of the transition (e.g., to perform A/B testing, or to migrate the pods slowly). When a new release of Kubernetes is available on GKE, simply create a new node pool: node pools are created at the same version as the cluster master, which is automatically updated to the latest Kubernetes release. Here’s how to create a new node pool at the new version:

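A sketch; no version flag is needed, since the new pool comes up at the master’s (upgraded) version. The pool name reflects the 1.2.4 release used as an example throughout this post:

    $ gcloud container node-pools create my-1-2-4-pool --cluster=work --num-nodes=3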

You can now go to “kubectl” and update your replication controller so that its pod template selects the new nodes with the label “cloud.google.com/gke-nodepool=my-1-2-4-pool”. Your pods will then be rescheduled from the old nodes onto the new pool’s nodes. After the verifications are complete, continue the transition with other pods, until all of the old nodes are effectively empty.
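A minimal sketch of such an update, assuming a replication controller named “frontend” (the controller name, labels and image are hypothetical); the key change is the nodeSelector in the pod template. Save something like this as frontend-rc.yaml:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: frontend
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: frontend
        spec:
          # Pin new pods to the freshly created node pool.
          nodeSelector:
            cloud.google.com/gke-nodepool: my-1-2-4-pool
          containers:
          - name: frontend
            image: gcr.io/my-project/frontend:v1

Then replace the controller and recycle its pods. Replacing the controller doesn’t touch running pods; deleting them (all at once here, or one at a time for a slower migration) makes the controller recreate them on the new pool’s nodes:

    $ kubectl replace -f frontend-rc.yaml
    $ kubectl delete pods -l app=frontend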

You can then delete your original node pool.
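For example, to delete “default-pool” (this removes the pool’s VMs from the cluster):

    $ gcloud container node-pools delete default-pool --cluster=work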

And voilà, all of your pods are now running on nodes with the latest version of Kubernetes!

Node pools across multiple zones


Many customers have requested the ability to run nodes in multiple zones to improve the availability of their applications in the unlikely event of a zone outage. Node pools support multi-zone clusters automatically. To create a multi-zone cluster, pass the “--additional-zones” flag to gcloud and specify one or more zones within the same region as your cluster:

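A sketch; the cluster and zone names are examples, and “--num-nodes=2” creates two nodes in each of the three zones:

    $ gcloud container clusters create multi-work --zone=us-central1-a \
        --additional-zones=us-central1-b,us-central1-c --num-nodes=2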

If you create additional node pools, they’ll automatically span all of the zones in your cluster, so nodes will be created in those additional zones as well. Note that the “--num-nodes” option is per zone; because the total number of nodes created multiplies with the number of zones, be aware that you may hit your quota limits.

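For example, on the three-zone cluster above, a pool created with “--num-nodes=4” (the pool name is our choice) results in 12 nodes in total:

    $ gcloud container node-pools create larger-pool --cluster=multi-work --num-nodes=4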

When you list your nodes in the Kubernetes API, you’ll see that they span all of the zones you specified and are automatically labeled with “failure-domain.beta.kubernetes.io/zone”:

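One way to see this is kubectl’s “-L” flag, which adds a column for a given label (output abridged; node names and ages are illustrative):

    $ kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
    NAME                                      STATUS    AGE    ZONE
    gke-multi-work-default-pool-1a2b3c4d-pq1  Ready     5m     us-central1-a
    gke-multi-work-default-pool-5e6f7a8b-xr2  Ready     5m     us-central1-b
    gke-multi-work-default-pool-9c0d1e2f-zt3  Ready     5m     us-central1-c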

Conclusion


The new node pools feature in GKE enables more powerful and flexible scenarios for your Kubernetes clusters. As always, we’d love to hear your feedback and help guide us on what you’d like to see in the product.
