Kubernetes Deep-dive
Written by Bill Call


Before You Begin

Before you begin this walkthrough, ensure you are logged in to the VMware Enterprise PKS desktop by following the instructions found here. If you ran out of time and are coming back to this lab later, your clusters might have been deleted, so you will need to authenticate and recreate the k8s cluster as described in the VMware Enterprise PKS - Quick Start Guide.

Your Lab Kubernetes Cluster

The command line tool used to interact with Kubernetes clusters is kubectl. If you took Module 2 of the lab, you have some familiarity with using kubectl; we will dive deeper here. While you can use curl and other programs to communicate with Kubernetes at the API level, the kubectl command makes interacting with the cluster from the command line easy, packaging up your requests and making the API calls for you. In this section you will become familiar with some of the basic kubectl commands and get comfortable with a few of the constructs we described in the overview section. You will focus on system-level components before moving on to applications. Module 2 already deployed a k8s cluster. The cluster contains three nodes - one master and two workers. Let's take a look at what we have deployed.

Note: This lab assumes that k8s cluster creation was already performed in Module 2. If you are starting with this module and have not gone through Module 2, please perform the following steps from Module 2 before continuing (the corresponding commands are sketched after this list):

  1. Login to PKS Cluster

  2. Deploy a Kubernetes Cluster

  3. Get Kubernetes Cluster Credentials
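
If you need to redo those steps, the commands look roughly like this - a sketch only, since the PKS API hostname, cluster name, external hostname, and plan come from your lab guide:

    pks login -a https://<pks-api-host> -u <username> -p <password> -k
    pks create-cluster my-cluster --external-hostname my-cluster.corp.local --plan small
    pks get-credentials my-cluster

The -k flag skips TLS certificate verification, which is acceptable in a lab but not in production.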

Check Cluster Components


Let's start getting familiar with using the Kubernetes CLI. You will start using the "get" command to view system level components of your Kubernetes cluster.

  1. Type kubectl get nodes
    View the availability of each of the nodes in your cluster and verify that each node is in "Ready" status.

  2. Type kubectl get cs
    View the status of the system components. The scheduler is responsible for placement of pods on nodes and etcd stores all of the persistent state for the cluster. Verify that all components are "Healthy".

  3. Type kubectl get pods --namespace=kube-system
    Kubernetes can run its system services as pods. With PKS-deployed clusters, the master components run as processes managed by BOSH, while some of the supporting services run as pods. Let's take a look at those pods. Heapster aggregates cluster-wide monitoring and event data, which is then pushed to InfluxDB for backend storage. Kubernetes also provides its own internal DNS server, used to provide domain names for communication between Kubernetes services. The Dashboard is the Kubernetes management UI.

  4. Type kubectl get pods --namespace=kube-system -o wide
    The -o wide option to get pods provides more information for you. Note that this option is available on many commands to expand the output. Try it out. Notice that you see the IP address associated with each pod. Kubernetes network architecture expects that all pods can talk to each other without NAT. There are many ways to accomplish this. In our lab we have implemented NSX-T to provide logical networking. NSX-T is a new version of NSX that implements overlay networking down to the container level and is included with PKS.
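
For reference, the wide output looks something like this. This is illustrative only: the pod names, hash suffixes, ages, IPs, and node names shown here are made up and will differ in your lab:

    NAME                                    READY   STATUS    RESTARTS   AGE   IP           NODE
    heapster-6d5f964dbd-2ztl4               1/1     Running   0          1d    172.16.1.2   0faf789a-18db-4b3f-a91a
    kube-dns-66d8df485-jxwpc                3/3     Running   0          1d    172.16.1.3   0faf789a-18db-4b3f-a91a
    kubernetes-dashboard-5f4b59b97f-s22nh   1/1     Running   0          1d    172.16.1.4   6b1d53a6-6cf3-4b23-a9b8
    monitoring-influxdb-54759946d4-l7fmx    1/1     Running   0          1d    172.16.1.5   6b1d53a6-6cf3-4b23-a9b8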


That's it for the system services. Let's move on to Namespaces.

Namespaces and CLI context

Namespaces are intended for use in environments with many users spread across multiple teams or projects. A namespace provides a scope for names: names of resources need to be unique within a namespace, but not across namespaces. Namespaces are a way to divide cluster resources between multiple users. As Kubernetes continues to evolve, namespaces will provide true multi-tenancy for your cluster; they are only partially there at this point. You can reference objects in a namespace temporarily by applying the --namespace flag on the command line, or permanently by setting the context for your environment. You will do both in this section.

Before interacting with your cluster you must configure kubectl to point to your cluster and provide the namespace, along with any authentication needed. In our case, we created the context in the last section using the pks get-credentials command. That command updated the file C:\Users\<username>\.kube\config to hold the kubectl configuration info. By setting up the config file, you remove the need to include that information on each kubectl command. The cluster config names the cluster and points kubectl to a specific certificate and API server for the cluster.

Verify Config Is Correct Directly In Config File


The set-context step (performed for you by pks get-credentials) creates the config file that kubectl uses to interact with the cluster. Our file is fairly simple since we only have one cluster and one namespace. In production environments you might see keys or certs, as well as specific user and cluster settings that explicitly define the context for interacting with a particular cluster. View the contents of the config file.

  1. Type cat C:\Users\<username>\.kube\config in PowerShell
    Note: Replace <username> with your TestDrive username
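
The file will look something like this - a trimmed sketch, not the exact lab file. Your cluster name, server address, and credentials will differ, and the user entry may hold a token or a client certificate:

    apiVersion: v1
    kind: Config
    clusters:
    - name: my-cluster
      cluster:
        certificate-authority-data: <base64-encoded CA cert>
        server: https://my-cluster.corp.local:8443
    contexts:
    - name: my-cluster
      context:
        cluster: my-cluster
        user: my-cluster-user
    current-context: my-cluster
    users:
    - name: my-cluster-user
      user:
        token: <bearer token>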

Verify Config With kubectl


You don't actually have to cat the config directly to see the configuration. kubectl provides a command to do that.

  1. Type kubectl config view
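
A few related subcommands are worth knowing. These are all standard kubectl; the context and namespace names here are placeholders:

    kubectl config current-context                          # show which context is active
    kubectl config get-contexts                             # list all contexts in the config file
    kubectl config use-context my-cluster                   # switch to a different context
    kubectl config set-context my-cluster --namespace=dev   # change the default namespace for a context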

Namespaces

Let's take a look at the namespaces in our cluster. The ones we care about for this lab are kube-system and default. As we have previously seen, kube-system contains the Kubernetes cluster system objects; default is where we will deploy our applications.

  1. Type kubectl get namespaces
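
The output will look something like this (a PKS-deployed cluster may show additional namespaces, and the ages will differ):

    NAME          STATUS    AGE
    default       Active    1d
    kube-public   Active    1d
    kube-system   Active    1d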


Now we will see how the --namespace flag changes the output of the get commands. Our current context uses the default namespace, and you have not created any application pods yet, so no resources are found. As you saw previously, the kube-system namespace is running the Kubernetes cluster system services.

  1. Type kubectl get pods

  2. Type kubectl get pods --namespace=kube-system


Deployments, Pods and Services

So far you have interacted with your Kubernetes cluster in the context of system services. You looked at pods that make up kube-system, set your CLI context and got some familiarity with CLI constructs. Now you will see how these relate to actually deploying an application. First a quick review on a couple of Kubernetes object definitions.

• Pod - A group of one or more containers (such as Docker containers), the shared storage for those containers, and options about how to run the containers. A pod's contents are always co-located and co-scheduled, and run in a shared context.

• Service - Kubernetes pods are ephemeral. When they die, they are recreated - not restarted. Replication controllers in particular create and destroy pods dynamically (e.g. when scaling up or down or when doing rolling updates). While each pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of pods (let's call them backends) provides functionality to other pods (let's call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set? A Kubernetes Service is an abstraction which defines a logical set of pods and a policy by which to access them - sometimes called a microservice. The set of pods targeted by a Service is (usually) determined by a Label Selector. Not only does a Service provide discovery of the underlying pods, it also handles East/West load balancing across them through the kube-proxy process running on each node.

• Deployment - Provides declarative updates for pods and replica sets (the next-generation replication controller). You only need to describe the desired state in a Deployment object, and the deployment controller will change the actual state to the desired state at a controlled rate for you. You can define deployments to create new replica sets, or remove existing deployments and adopt all of their resources with new deployments.

As a reminder, Module 1 of this lab goes into a more detailed explanation of these components.

Defining Desired Application State

In PowerShell, navigate to the folder containing the various k8s application manifest files.

  1. Type cd C:\PKS\apps
    This command changes the current working directory so that we can access the application manifest files in the subsequent sections

View Yaml Details

Central to Kubernetes are the process control loops that attempt to continuously reconcile the actual state of the system with the desired state. The desired state is defined in object specifications that can be presented to the system from yaml or json specification files. You are going to deploy a simple nginx web server. The yaml file specification will create a Deployment with a set of pods and a service. Let's see how that works.

  1. Type cat nginx.yaml
    This prints the contents of the file "nginx.yaml" in PowerShell

Yaml: Deployment Spec


Let's break apart the components of this file. A reconstructed sketch of the Deployment spec follows the list.

  1. Every specification includes the version of the API to use. The first spec is the deployment, which includes the "PodSpec" and replica set.

  2. The deployment name is nginx
    Notice that it has a Label, app: nginx. Labels are key:value pairs that are used to specify identifying attributes of objects and are used extensively in Kubernetes for grouping. You will see one example with the service creation in the following steps.

  3. Replicas specifies the desired state for the number of pods defined in the spec section that should be running at one time. In this case, 3 pods will be started. (Note: the scheduler will attempt to place them on separate nodes for availability, but this is best effort.)

  4. The pods also get their own label. This is used for, among other things, service endpoint discovery.

  5. This pod is made up of a single container that will be instantiated based on the nginx:V1 image stored in the harbor.vmwdemo.int private registry

  6. The container will expose port 80. Note that this is the container port, not the host port that provides external access to the container. More on that in a minute.
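
Putting those pieces together, the Deployment portion of nginx.yaml looks roughly like the sketch below. This is a reconstruction from the points above, not a copy of the lab file: the apiVersion may differ with your Kubernetes version, and the project path within the harbor.vmwdemo.int registry is an assumption.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      replicas: 3                  # desired number of pods
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx             # pod label used for service endpoint discovery
        spec:
          containers:
          - name: nginx
            image: harbor.vmwdemo.int/library/nginx:V1   # "library" project path is an assumption
            ports:
            - containerPort: 80    # container port, not an externally accessible host port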

Yaml: Service Spec

The next spec is for the Service. In addition to the name and label, the spec itself has two very important components (a sketch of the Service spec follows the list):

  1. Type: LoadBalancer
    By specifying LoadBalancer, NSX will create a logical load balancer and associate an external IP to provide access to the service. Access to services internal to the cluster - like a frontend web server trying to update a backend database - is done via a ClusterIP and/or internal DNS name. The internal DNS name is based on the name defined for this service.

  2. Selector: app:nginx
    This is the label that the service uses to get the pods that it routes to.
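
The Service portion would look roughly like this - again a sketch based on the points above rather than the exact lab file:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      type: LoadBalancer     # tells NSX to create a logical load balancer with an external IP
      ports:
      - port: 80
      selector:
        app: nginx           # routes traffic to pods carrying this label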

Deploy Nginx Application


The nginx.yaml file defines the desired state for the deployment of this application, but we haven't yet said what the application actually does. Nginx is an application that can act as a web server or reverse proxy server. You will deploy the application, look at its running components, and verify that the web server is running through your browser.

  1. Type kubectl create -f nginx.yaml

  2. Type kubectl get deployment
    Notice that the nginx deployment has a desired state of three pods and the current state is three running pods.

  3. Type kubectl get pods
    Notice that you have three running pods. Try the -o wide option to see which nodes they are on and their internal IP addresses. (Sample output of get deployment is sketched below.)
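
Output similar to the following is typical for get deployment; the exact columns vary with kubectl version, and the age will differ:

    NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx     3         3         3            3           1m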

View the Service for Nginx

We have running pods, but no way to access the service from our network. Remember that the pod IP addresses are private to the cluster (actually, we break that rule because of the lab setup, but generally it holds). Also, what happens if the replication controller has to restart one of them and the IP changes? That is why we need the service: to discover our application endpoints.

  1. Type kubectl get svc
    Notice that the Service has a ClusterIP. This is an internal IP; generally you would not be able to access the service through this IP unless you are another service internal to the cluster. NSX has created a load balancer and allocated an external IP (192.168.24.15) that allows you to access the service and be routed to your service endpoints (pods). Your external IP may be different.
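
The output will be along these lines; the ClusterIP values, ages, and PORT(S) formatting shown here are illustrative, and your external IP may differ:

    NAME         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)   AGE
    kubernetes   ClusterIP      10.100.200.1     <none>          443/TCP   1d
    nginx        LoadBalancer   10.100.200.200   192.168.24.15   80/TCP    2m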

Access Nginx Web Server

  1. Click on Google Chrome

  2. Enter http://192.168.24.15 (or whatever external IP you saw in the previous command)

If you see the "Welcome to nginx" page, your Web Server is running.

Replica Sets and Labels


As discussed previously with services, labels are very important for grouping objects in Kubernetes. Let's see how that works with replica sets.

  1. Type kubectl get rs -o wide

  2. Type kubectl get pods -l app=nginx
    Notice that the selector is based on the app=nginx label, so pods with that label are tracked and restarted by this replica set.
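
Label selectors work across most get commands. Here are a few standard variations, using the label key and value from this lab:

    kubectl get pods -l app=nginx --show-labels   # include each pod's labels in the output
    kubectl get pods -l 'app in (nginx)'          # set-based selector syntax
    kubectl get rs -l app=nginx                   # the same selector applied to replica sets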

Scale our Application Up

Applications may need to be scaled up or down to improve performance or availability. Kubernetes can do that with no application downtime by adding or removing pods. Remember that the success of scaling is dependent upon the underlying application's ability to support it.
Let's scale our deployment and see what happens. Remember that scaling changes the desired state for our app; the replication controller will notice a difference between desired state and current state, then add replicas.

  1. Type kubectl scale deployment nginx --replicas 5

  2. Type kubectl get pods -l app=nginx

You may have to execute get pods more than once to see the new running pods, but you have gone from an application that had three copies of the nginx web server running to five replicas. The service automatically knows about the new endpoints, and nsx-kube-proxy has updated the control flows to provide internal load balancing across the new pods. Pretty cool!!
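
Note that kubectl scale is an imperative shortcut. The declarative alternative is to edit the replicas: field in nginx.yaml and re-apply the file. Since we created the deployment with kubectl create rather than kubectl apply, the first apply may print a warning about a missing last-applied-configuration annotation, which is harmless here:

    kubectl apply -f nginx.yaml
    kubectl get pods -l app=nginx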

Scale our Application Back Down

You can also remove unneeded capacity by reducing the number of replicas in your deployment.

  1. Type kubectl scale deployment nginx --replicas 2

  2. Type kubectl get pods -l app=nginx

Delete Our Application

Now let's delete our deployment. It's very simple: just reference the same spec file you used to create the deployment.

  1. Type kubectl delete -f nginx.yaml
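
To confirm the cleanup, list the objects again; after a short delay the deployment, replica set, service, and pods should all be gone (the built-in kubernetes service in the default namespace will remain):

    kubectl get deployments,replicasets,services,pods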
