Google Kubernetes Engine: 5 Key Features and Getting Started


Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications, provided as part of Google Cloud Platform.

October 31, 2023

What Is Google Kubernetes Engine (GKE)? 

Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications, provided as part of Google Cloud Platform. GKE takes the power of Kubernetes, a popular open-source platform designed to automate deploying, scaling, and managing containerized applications, and provides it as a managed service on Google infrastructure. 

GKE was the world’s first managed Kubernetes service. Google originally developed Kubernetes and later donated it to the Cloud Native Computing Foundation (CNCF), and GKE has the longest track record among cloud-based Kubernetes services.

This is part of a series of articles about container platforms


Core Features of GKE 

1. Managed Kubernetes

GKE provides automated upgrades, repairs, and scaling, allowing you to focus on deploying and running your applications without worrying about the underlying infrastructure.

GKE automatically takes care of scaling and managing your clusters. It also offers auto-repair: if a node fails its health checks, GKE automatically repairs or replaces it, keeping your applications running on healthy nodes with the capacity they need.

Furthermore, GKE provides automatic upgrades, keeping your clusters up to date with the latest Kubernetes version so you have access to the latest features and security patches. This feature alone can save developers and operations teams significant effort.
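
Auto-repair and auto-upgrade can be controlled per node pool. The sketch below is illustrative only; the cluster name, node pool name, and zone are placeholders, and both settings are typically on by default in current GKE versions.

# Illustrative only: enable auto-repair and auto-upgrade on a node pool.
# "my-cluster", "default-pool", and the zone are placeholder values.
gcloud container node-pools create default-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --enable-autorepair \
  --enable-autoupgrade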

2. Advanced Cluster Management

With GKE, you have complete control over your Kubernetes clusters. You can easily create, configure, and manage your clusters from the Google Cloud Console or using the GKE API.

GKE’s cluster management also includes built-in logging and monitoring using Google Cloud Logging and Cloud Monitoring. These tools provide you with detailed insights into your application’s performance and health, helping you quickly identify and resolve issues.
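
As a quick illustration, container logs from a cluster can be queried with the gcloud CLI; the filter below is a minimal sketch, and the cluster and project names are placeholders.

# Read recent container logs for a specific cluster from Cloud Logging.
# "my-cluster" and "my-project" are placeholder values.
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.cluster_name="my-cluster"' \
  --limit=10 \
  --project=my-project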

GKE’s cluster management capabilities extend to security and compliance as well. With GKE, you can enforce role-based access control (RBAC), use Google’s private network, and take advantage of Google Cloud’s compliance certifications.

3. Automated Scalability

GKE provides automated scalability, allowing your applications to handle varying loads seamlessly. With GKE’s fully managed Cluster Autoscaler, you can automatically adjust the size of your cluster based on the demands of your workloads. This means your applications can scale up during periods of high demand and scale down during periods of low demand, helping you optimize resource usage and reduce costs.
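
For illustration, the cluster autoscaler can be enabled when a Standard cluster is created. The following is a minimal sketch; the cluster name, zone, and node limits are placeholder values you would adjust for your workloads.

# Create a cluster whose default node pool scales between 1 and 5 nodes.
# The name, zone, and node counts are placeholders.
gcloud container clusters create my-cluster \
  --zone=us-central1-a \
  --num-nodes=3 \
  --enable-autoscaling \
  --min-nodes=1 \
  --max-nodes=5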

Additionally, GKE manages the Vertical Pod Autoscaling feature in Kubernetes, which automatically adjusts the CPU and memory resources for your containers based on their requirements. This feature is particularly useful for workloads with unpredictable resource demands, as it eliminates the need for manual tuning and ensures your applications always have the resources they need to perform optimally.
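
As a rough sketch, vertical Pod autoscaling is enabled at the cluster level and then configured per workload with a VerticalPodAutoscaler object; the cluster name, zone, and the target Deployment name below are placeholders.

# Enable vertical Pod autoscaling on an existing cluster (placeholder names).
gcloud container clusters update my-cluster \
  --zone=us-central1-a \
  --enable-vertical-pod-autoscaling

# Create a VerticalPodAutoscaler that manages resources for a Deployment
# named "my-app" (a placeholder), letting GKE apply its recommendations.
kubectl apply -f - <<EOF
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"
EOF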

4. Multi-Cluster Support

With GKE, you can manage multiple clusters across multiple regions and even multiple clouds from a single control plane. This feature provides you with global coverage, high availability, and the flexibility to run your applications where they make the most sense for your business.

GKE also simplifies the management of multi-cluster environments. With GKE’s Anthos platform, you can manage all your clusters, including those deployed on-premises or in other clouds, from a single, unified interface. This enhances visibility and control across your entire hybrid infrastructure.

5. Integrated Developer Tools

Finally, GKE offers a range of integrated developer tools, including:

  • Cloud Code: Extensions for popular IDEs such as Visual Studio Code and IntelliJ that provide a comfortable and efficient environment for building Kubernetes applications. 
  • Cloud Build: A fully managed CI/CD platform that automates the build and deploy process, helping you deliver high-quality code faster; see the sketch after this list. 
  • Cloud Source Repositories: A private Git repository service that works with Cloud Build to trigger builds, tests, and deployments.
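
As a minimal Cloud Build sketch, a container image can be built from the current directory and pushed to your project's registry with a single command; the project ID, image name, and tag are placeholders.

# Build the image defined by the local Dockerfile and push it to the registry.
# "my-project", "my-app", and "v1" are placeholder values.
gcloud builds submit --tag gcr.io/my-project/my-app:v1 .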

How Does Google Kubernetes Engine Work? 

The diagram below illustrates the basic architecture of GKE. GKE sets up Kubernetes nodes on Google Cloud infrastructure, deploys a control plane for your cluster, and helps you manage persistent storage.

GKE architecture

Source: Google Cloud

Nodes, Pods, Services, and Deployments

Nodes are the worker machines in Kubernetes. A node may be a Google Cloud VM or physical machine, depending on the cluster. Each node contains the services necessary to run pods, which are managed by the control plane. Pods are the smallest unit in the Kubernetes object model that you create or deploy. They can contain one or more containers, such as Docker containers.

Services in Kubernetes are an abstract way to expose an application running on a set of pods as a network service. Deployments, on the other hand, describe the desired state of a set of pods managed through a ReplicaSet. They can be used to create new resources or to replace existing ones with new ones.
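
To make these objects concrete, here is an illustrative Deployment and Service; the names, labels, image path, and ports are all placeholder values, not a prescribed configuration.

# Apply a Deployment (two replicas of a web container) and a Service that
# exposes it through a load balancer. All names and the image are placeholders.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: gcr.io/my-project/hello-web:v1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  type: LoadBalancer
  selector:
    app: hello-web
  ports:
  - port: 80
    targetPort: 8080
EOF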

Control Plane and Data Plane

The Control Plane and Data Plane are two crucial components of GKE. The Control Plane is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. It also manages networking rules and load balancing.

The Data Plane, on the other hand, comprises the compute resources used by applications. It also includes the underlying storage resources. In GKE, the Control Plane is managed by Google, while you maintain control over the Data Plane.

Persistent Storage Options

GKE offers a variety of persistent storage options. These options allow you to store data that survives even if the pod that created it does not. Some of these options include persistent volumes, persistent volume claims, and storage classes. GKE integrates with Compute Engine Persistent Disks, providing a high-performance, readily available storage solution.
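
For illustration, a PersistentVolumeClaim is usually all you need to request a Compute Engine Persistent Disk; the claim name, size, and storage class below are placeholders, and the default storage class name can vary by cluster version.

# Request a 10 GiB persistent disk through the cluster's storage class.
# "my-data", the size, and "standard-rwo" are placeholder/assumed values.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 10Gi
EOF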

Related content: Read our guide to container deployment

Google Kubernetes Engine Pricing 

In Google Kubernetes Engine Standard Edition, pricing is based on the Google Cloud instances used in your cluster, the associated storage and networking costs, and an additional charge of $0.10 per cluster per hour for running the control plane. 

For example, if you run a cluster with three n1-standard-1 instances in the us-east4 region for a month, your total cost would be around $117 for the instances, plus roughly $73 for the control plane ($0.10 per hour for about 730 hours).

There are also two other pricing models:

  • Enterprise Edition: Includes multi-team and multi-cluster functionality, with advanced security, service mesh, and improved console experience. Priced at $0.0083 per vCPU per hour.
  • Autopilot Mode: Flat fee of $0.10 per hour per cluster, with a charge for CPU, memory, and ephemeral storage resources used by each pod you run within your clusters.

Google Cloud pricing is subject to change; for more details and up-to-date pricing see the official pricing page. Google provides a pricing calculator, which you can use to estimate your costs. 

Security in GKE

Security is a top priority in GKE, and there are several security mechanisms available:

Integrated Role-Based Access Control (RBAC)

RBAC in GKE allows you to regulate who has access to which resources. It does this by defining roles, which are sets of permissions to perform certain operations, and binding those roles to users, groups, or service accounts. This way, you can ensure that only authorized identities can act on specific resources.
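
As a minimal sketch, Kubernetes RBAC in GKE can be configured with kubectl; the namespace, role name, and user email below are placeholders.

# Create a read-only role for Pods in the "dev" namespace and bind it to a user.
# The namespace, names, and email address are placeholder values.
kubectl create role pod-reader \
  --namespace=dev \
  --verb=get,list,watch \
  --resource=pods

kubectl create rolebinding pod-reader-binding \
  --namespace=dev \
  --role=pod-reader \
  --user=dev-user@example.com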

Private Clusters

Private clusters in GKE provide enhanced security by ensuring that nodes have internal IP addresses only. This means they aren’t reachable from the public internet; outbound access, for example to pull external images or updates, can still be provided through mechanisms such as Cloud NAT or Private Google Access.
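
A private cluster can be created from the CLI; the sketch below is illustrative only, with a placeholder name, zone, and control plane CIDR range that you would choose to fit your network layout.

# Create a cluster whose nodes receive internal IP addresses only.
# The cluster name, zone, and CIDR range are placeholder values.
gcloud container clusters create private-cluster \
  --zone=us-central1-a \
  --enable-private-nodes \
  --enable-ip-alias \
  --master-ipv4-cidr=172.16.0.0/28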

Network Policies

Network policies in GKE allow you to regulate how groups of pods communicate with each other and other network endpoints. By defining network policies, you can isolate your pods and protect them from unwanted traffic.
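
Here is an illustrative policy, assuming network policy enforcement is enabled on the cluster; the labels and object name are placeholders. It allows ingress to the "backend" pods only from pods labeled app=frontend.

# Allow traffic to backend pods only from frontend pods; all other ingress
# to the selected pods is denied. Labels and the policy name are placeholders.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF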

Vulnerability Scanning

GKE’s vulnerability scanning is another crucial security feature. It scans container images for vulnerabilities, providing you with a comprehensive report of potential security issues. This allows you to mitigate these vulnerabilities before they become a problem. This feature is part of Google’s Security Posture Dashboard, which might incur additional charges.

Getting Started with Google Kubernetes Engine 

Here are the main steps involved in creating your first GKE cluster.

1. Setting Up a Google Cloud Account

The first step to mastering GKE is setting up a Google Cloud account. Start by navigating to the Google Cloud homepage. If you don’t already have a Google account, you’ll need to create one. Once you’re logged in, you’ll be prompted to start a free trial. This includes $300 worth of credit for any Google Cloud product, including GKE.

After starting the trial, you’ll be asked to create your first project. A project in Google Cloud is a unique workspace where all your GKE resources will reside. Remember to give it a meaningful name and make a note of the project ID as you’ll need it later.

2. Enable Kubernetes Engine API

Once your Google Cloud account is set up and your project is created, the next step is to enable the Kubernetes Engine API. This API is the interface through which you’ll interact with the GKE service.

In the Google Cloud Console, go to the API & Services page, then click on Enable APIs and Services. Here, you’ll find a search bar where you can type Kubernetes Engine API. Click on the API and then select Enable. This process may take a few minutes.
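
If you prefer the command line, the same API can be enabled with gcloud; the project ID below is a placeholder.

# Enable the Kubernetes Engine API for your project ("my-project" is a placeholder).
gcloud services enable container.googleapis.com --project=my-project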

3. Install and Set Up gcloud CLI

The next step in mastering GKE is installing and setting up the Google Cloud Command Line Interface (gcloud CLI). This tool allows you to manage your GKE resources from your local machine’s command line.

To install the gcloud CLI, navigate to the Google Cloud SDK documentation and follow the installation instructions for your operating system. 

After installation, authenticate your account by executing the command gcloud auth login and follow the prompts. Then, set your default project with the command gcloud config set project [PROJECT_ID], replacing [PROJECT_ID] with the ID of your project.
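
Put together, the initial setup looks roughly like this; the project ID is a placeholder, and the last command assumes you installed gcloud through its component manager (on some systems the auth plugin is installed as an OS package instead).

# Authenticate, pick a default project, and install the kubectl auth plugin.
# "my-project" is a placeholder for your actual project ID.
gcloud auth login
gcloud config set project my-project
gcloud components install gke-gcloud-auth-plugin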

4. Creating a GKE Cluster

Now that we have our Google Cloud account set up, the Kubernetes Engine API enabled, and the gcloud CLI installed, we can move on to creating our first GKE cluster.

In the Google Cloud Console, navigate to the Kubernetes Engine page and click on Create Cluster. Here, you’ll be able to configure your cluster’s settings, such as its name, location, and the number of nodes.

A GKE cluster consists of a control plane and one or more worker machines called nodes. The control plane manages the cluster, while the nodes run your actual applications. For most beginners, the default settings will suffice.
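
The same cluster can also be created from the gcloud CLI; this is a minimal sketch with a placeholder name, zone, and node count, followed by fetching credentials so kubectl can connect to the new cluster.

# Create a small zonal cluster and configure kubectl to talk to it.
# The cluster name, zone, and node count are placeholder values.
gcloud container clusters create my-first-cluster \
  --zone=us-central1-a \
  --num-nodes=3

gcloud container clusters get-credentials my-first-cluster \
  --zone=us-central1-a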

5. Deploying an Application

With our GKE cluster now set up, we can finally deploy an application. We’ll package the application into a container using Docker, and deploy it to the nodes of our GKE cluster.

First, you’ll need to create a Dockerfile for your application. This file contains the instructions for building the application’s container image. Once your Dockerfile is ready, use the docker build command to build the container image.

Next, upload your container image to the Google Container Registry (GCR) using the docker push command. GCR is a private registry for your Docker images, with access controlled by your Google Cloud project’s permissions.

Finally, use the kubectl create command to create a new deployment on your GKE cluster. The deployment instructs Kubernetes to pull the container image from the GCR and run it on the cluster’s nodes.
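
End to end, the deployment flow looks roughly like the sketch below; the project ID, image name, tag, and ports are placeholders, and the app is assumed to listen on port 8080.

# Build and push the image, then run it on the cluster behind a load balancer.
# "my-project", "hello-app", "v1", and the ports are placeholder values.
gcloud auth configure-docker
docker build -t gcr.io/my-project/hello-app:v1 .
docker push gcr.io/my-project/hello-app:v1

kubectl create deployment hello-app \
  --image=gcr.io/my-project/hello-app:v1
kubectl expose deployment hello-app \
  --type=LoadBalancer \
  --port=80 \
  --target-port=8080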

And there you have it—you’ve just deployed your first application on Google Kubernetes Engine!