VMware TKGI (formerly Enterprise PKS) Deep-dive
Written by Bill Call
Updated over a week ago


Before you begin

Before you begin this walkthrough, ensure you are logged onto the VMware Tanzu desktop by following the instructions found here.

Introduction

In this module you will see how to operationalize Kubernetes through VMware Enterprise PKS. What does that mean? Let's start by looking at what Kubernetes does well. It allows developers to easily deploy applications at scale. It handles the scheduling of workloads (via pods) across a set of infrastructure nodes. It provides an easy-to-use mechanism to increase availability and scale by allowing multiple replicas of application pods, while monitoring those replicas to ensure that the desired state (number of replicas) and the actual state of the application coincide. Kubernetes also facilitates reduced application downtime through rolling upgrades of application pods.

PKS provides similar capabilities for the Kubernetes platform itself. Platform engineering teams are increasingly tasked with providing a Kubernetes "dialtone" service for their development teams. Kubernetes is not a simple platform to manage, so the challenge becomes how to accomplish this without architect-level knowledge of the platform. Through PKS, platform engineering teams can deliver Kubernetes clusters through a single API call or CLI command. Health monitoring is built into the platform, so if a service fails or a VM crashes, PKS detects that outage and rebuilds the cluster. As resources become constrained, clusters can be scaled out to relieve the pressure. Upgrading Kubernetes is not as easy as upgrading the application pods running on the cluster; PKS provides rolling upgrades of the Kubernetes cluster itself. The platform is integrated with the vSphere ecosystem, so platform engineers can use the tools they are familiar with to manage these new environments. Lastly, PKS includes licensed and supported Kubernetes, the Harbor enterprise container registry, and NSX-T, and is available on vSphere and public cloud platforms.

That last paragraph sounded like a marketing message, so let's net this out. PKS gives you the latest version of Kubernetes (we have committed to constant compatibility with Google Kubernetes Engine (GKE), so you can always be up to date), an easy-to-consume interface for deploying Kubernetes clusters, scale-out capability, health monitoring with automated remediation, rolling upgrades, and an enterprise container registry with Notary image signing and Clair vulnerability scanning. All of this is deployed while leveraging NSX-T logical networking from the VMs down to the Kubernetes pods. Let's jump in.

Our Lab Environment

During the course of this lab, we will deploy our own Kubernetes cluster and spin up applications. Open PowerShell on the desktop and log in using the credentials for your TESTDRIVE account.

Login to PKS Cluster

After logging in to the RDSH, open PowerShell. Use the following command to log in to your PKS environment. The username and password are the same as those used for the TESTDRIVE account.

  1. Type pks login -a pks-api.vmwtd.com -u <username> -p <password> -k
    NOTE: Please use your PKS credentials as shown in the TestDrive page under PKS

1.png

Deploy a Kubernetes Cluster

Issue the following command in the same window to spawn a k8s cluster.

NOTE: The domain name "*.vmwdemo.int" should remain the same, as this is the domain name for this environment. Please follow the naming convention below:

  • Name: <username>-<num>
    example: eval-1 where eval is the username and 1 is the first cluster

  • External Hostname (the "-e" flag in the command): <username>-<num>.vmwtd.com
    example: eval-1.vmwtd.com where eval is the username and 1 is the first cluster

  1. Type pks create-cluster <username>-1 -e <username>-1.vmwtd.com --plan small

NOTE: Please do NOT create more than 2 k8s clusters.

2.png

This will start the provisioning of the Kubernetes cluster in the background. The creation and configuration of the k8s master and worker nodes will take approximately 10 to 15 minutes to complete. You can check the status of the cluster by issuing the following commands. This is a shared PKS environment, so other users will be using the setup in parallel; these commands might take a little longer in case of heavy usage by other users.

  1. Type pks clusters
    This command lists all the k8s clusters created by your user

  2. Type pks cluster <username>-1
    Displays details for a specific k8s cluster. <username>-1 will be the cluster name

NOTE: Cluster creation can take around 20 minutes when multiple users are using this setup simultaneously. Please wait for the cluster to show the status "succeeded" before moving to the next section.

3.png
4.png
5.png

Get k8s Cluster Credentials

PKS provides a way to fetch k8s cluster credentials. Run the following command to fetch credentials for your newly created cluster.

  1. Type pks get-credentials <cluster-name>
    Fetches and populates the kubeconfig file for the created cluster

  2. Type kubectl get pods --all-namespaces
    Lists all pods running in all namespaces

6.png

Connect to Kubernetes Dashboard

The easiest way to get to the k8s dashboard is through kube proxy. We will run the proxy command in PowerShell on the RDSH. This redirects all traffic destined for 127.0.0.1:{port_number} on this jumphost to the k8s master node. Run the following command to start kube proxy:

  1. Type kubectl.exe proxy --port=0

7.png

Running the above command starts a local proxy to the Kubernetes API, and the dashboard becomes reachable through an available port on localhost. In our case, this turned out to be 8001.

  • In Google Chrome, navigate to 127.0.0.1:{port_number}/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

8.png

The following screen will open. Kubernetes allows authentication to the UI using a kubeconfig file or a token. We will use the kubeconfig file for authentication in this lab.

9.png

Navigate to C:\Users\<username>\.kube and choose the config file. Note that you have to use your own username to locate your kubeconfig file.

10.png
11.png
12.png
  • Click on Nodes

    You can see that our cluster contains one worker node, and it is consuming very little resource at this point. Your node names will be slightly different because a unique ID is generated with each new cluster creation. Let's drill in.

14.png
  • Click on the Name of a Node to view details.

13.png
15.png
16.png

Now you can get detailed information on your node. Take some time to move around and get familiar with the information available through the dashboard. For those of you who have been involved with Kubernetes over the last year, you can see that the dashboard has become more useful with each release. We are now going to focus on the CLI, so you can stop the "kubectl.exe proxy" command by pressing "Ctrl + C".

Cluster Scale, Health Monitoring and Troubleshooting

In this section we will see how PKS allows the addition of more resources to a cluster by scaling out the number of worker nodes. We will test cluster resiliency by killing one of the nodes and dig into some BOSH commands to monitor and troubleshoot the cluster.

Scale Cluster With PKS

Let's get details on our previously deployed cluster again.

  1. Type pks cluster <cluster-name>
    Note that "Worker Instances" field shows 1

5.png

PKS allows clusters to be scaled out with a single CLI command.

  1. Type pks resize <cluster-name> -n 2
    The -n flag sets the number of k8s worker nodes for a cluster. We are increasing the number of worker nodes of this k8s cluster from 1 to 2

    Note:
    This is a shared environment, so please limit the number of k8s worker nodes (-n) to below 5 so that other users are not affected

  2. Type pks cluster <cluster-name>
    Note that the cluster is updating and Worker Instances now shows 2

17.png

This command causes a new worker node VM to be provisioned, and its kubelet is registered with the Kubernetes master. It becomes very easy to add resources on demand.

Health Monitoring

PKS leverages BOSH to provision infrastructure. BOSH gets periodic heartbeats from the nodes/VMs it has provisioned and ensures that all required processes on these nodes are running. If BOSH detects a failure, it heals the nodes in the background in various ways. For example, if a node/VM is not responding, BOSH will detach its persistent disk, spin up a new VM, attach the persistent disk to the new VM, and start the required processes on that node. All of this is done in the background, and the issue is resolved without user intervention.

We are going to use the BOSH CLI directly to monitor this activity.

Note: This is a shared lab, so users are not given access to infrastructure resources like BOSH, Ops Manager, NSX-T (read-only), vCenter (read-only), etc. Hence, HA tests involving disaster recovery scenarios are not part of this lab.

1. PKS admins can monitor the different tasks being performed in the background (for example, with the bosh tasks command).

18.png
19.png

2. Check the different VMs that make up this deployment (for example, with bosh vms)

20.png

Each Kubernetes cluster that we create is considered a BOSH deployment. A detailed discussion of BOSH is beyond the scope of this lab, but it's important to know that the PKS API is abstracting calls to the underlying BOSH API.

Additional Troubleshooting

In this environment, access to BOSH is not provided because this is a shared PKS deployment.

BOSH provides commands for SSHing into the cluster VMs and capturing the Kubernetes log files.

1. PKS admins can log into each k8s VM for advanced troubleshooting (for example, with bosh ssh)

21.png

2. PKS admins can collect logs for each k8s cluster or individual nodes for advanced debugging. The command shown below collects logs from a k8s cluster into a single tarball.

22.png

Persistent Volumes and Kubernetes Storage Policy

Although it is relatively easy to run stateless microservices using container technology, stateful applications require slightly different treatment. There are multiple factors that need to be considered when handling persistent data with containers, such as:

  • Kubernetes pods are ephemeral by nature, so the data that needs to be persisted has to survive through the restart/re-scheduling of a pod.

  • When pods are re-scheduled, they can die on one host and might get scheduled on a different host. In such a case the storage should also be shifted and made available on the new host for the pod to start gracefully.

  • The application should not have to worry about the volume and data. The underlying infrastructure should handle the complexity of unmounting and mounting.

  • Certain applications have a strong sense of identity (e.g., Kafka, Elastic), and the disk used by a container with a certain identity is tied to it. It is important that if a pod with a certain ID gets re-scheduled for some reason, the disk associated with that ID is re-attached to the new pod instance.

  • PKS leverages vSphere Storage for Kubernetes to allow pods to use enterprise-grade persistent storage.

Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, iSCSI, VVol, VMFS or NFS datastores.

Kubernetes volumes are defined in Pod specifications. They reference VMDK files and these VMDK files are mounted as volumes when the container is running. When the Pod is deleted, the Kubernetes volume is unmounted and the data in VMDK files persists.

PKS deploys Kubernetes clusters with the vSphere storage provider already configured. In Module 4 you will upgrade an existing application to add persistent volumes and see that even after deleting your pods and recreating them, the application data persists. In order to use a Persistent Volume (PV), the user needs to create a Persistent Volume Claim (PVC), which is simply a request for a PV. A claim must specify the access mode and storage capacity; once a claim is created, a PV is automatically bound to it. Kubernetes will bind a PV to a PVC based on access mode and storage capacity, but a claim can also specify a volume name, selectors, and a storage class for a better match. This design of PVs and PVCs not only abstracts storage provisioning and consumption but also ensures security through access control.

Static Persistent Volumes require that a vSphere administrator manually create a virtual disk (VMDK) on a datastore, then create a Persistent Volume that abstracts the VMDK. A developer would then make use of the volume by specifying a Persistent Volume Claim, as in the sketch below.
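
As a rough sketch of that static workflow (the datastore path, names, and sizes below are hypothetical placeholders, not values from this lab), the vSphere admin's VMDK is wrapped in a PersistentVolume and a developer then claims it:

# Hypothetical static PV backed by a pre-created VMDK
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-vsphere-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[Datastore1] kubevols/static-disk.vmdk"  # VMDK created manually by the vSphere admin
    fsType: ext4
---
# The claim a developer uses to consume the volume above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-vsphere-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi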

Dynamic Volume Provisioning

With PVs and PVCs alone, storage can only be provisioned statically, i.e. the PV needs to be created before a pod claims it. With the StorageClass API, however, Kubernetes enables dynamic volume provisioning. This avoids pre-provisioning of storage; storage is provisioned automatically when a user requests it. The VMDKs are also cleaned up when the Persistent Volume Claim is removed.

The StorageClass API object specifies a provisioner and parameters, which are used to decide which volume plugin should be used and which provisioner-specific parameters to configure.
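
For orientation, a minimal vSphere StorageClass along these lines - roughly what the redis-sc.yaml used in the next section is expected to contain, though the exact name and parameters in the lab file may differ - looks like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin-disk                # assumed name; the lab later refers to a "thin-disk" storage class
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin               # thin (default), zeroedthick, or eagerzeroedthick
  # datastore: VSANDatastore     # optional: datastore to provision PVs from
  # storagePolicyName: gold      # optional: SPBM policy to apply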

Create a Storage Class

Let's start by creating a Storage Class

23.png
  1. Type cd C:\PKS\apps

  2. Type cat redis-sc.yaml
    The YAML defines the vSphere volume provisioner and the set of parameters the driver supports. vSphere allows the following parameters:

    • diskformat, which can be thin (the default), zeroedthick, or eagerzeroedthick

    • datastore, an optional field which can be a VMFS or vSAN datastore. This allows the user to select the datastore to provision the PV from; if not specified, the default datastore from the vSphere config file is used

    • storagePolicyName, an optional field which is the name of the SPBM policy to be applied. The newly created persistent volume will have the SPBM policy configured with it

    • vSAN storage capability parameters (cacheReservation, diskStripes, forceProvisioning, hostFailuresToTolerate, iopsLimit and objectSpaceReservation) are supported by the vSphere provisioner for vSAN storage. The persistent volume created with these parameters will have these vSAN storage capabilities configured with it

  3. Type kubectl apply -f redis-sc.yaml
    Let's apply this yaml to create the storage class

  4. Type kubectl get sc

Create a Persistent Volume Claim

Dynamic provisioning involves defining a Persistent Volume Claim that refers to a storage class. redis-slave-claim is our persistent volume claim, and it uses the thin-disk storage class that we just created.
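
A sketch of what redis-slave-claim.yaml likely resembles is shown below (the requested size is an assumption; the key part is the reference to the storage class):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-slave-claim
spec:
  storageClassName: thin-disk    # assumed to match the class created from redis-sc.yaml
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi               # illustrative size; the lab's actual claim may differ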

  1. Type cat redis-slave-claim.yaml
    Let's create our Persistent Volume Claim

  2. Type kubectl apply -f redis-slave-claim.yaml

24.png
  1. Type kubectl get pvc
    This shows that our Persistent Volume Claim was created and bound to a volume. The volume is a vSphere VMDK. Let's look at it in more detail.

  2. Type kubectl describe pvc redis-slave-claim
    Here you can see that the provisioning of the volume succeeded. Let's go to vCenter in the next section and see the volume.

25.png

View The Volume in vCenter

26.png
  1. Connect to the vCenter client and click on the Storage icon
    URL: https://vca-1.vmwtd.com/ui
    Username: pksdemo@vsphere.local
    Password: PKSdemo123!

  2. Select datastore: SL02SL257150-4

  3. Select the kubevols folder

  4. Here is the Persistent Volume you just created. Note that the volumeID in the kubectl describe output maps to the VMDK name.

Also note that it was thin provisioned based on the storage class specification we used. You will see how to mount this volume in your pod as part of Module 4.

NSX Network and Security Policy

PKS includes software-defined networking with NSX. NSX supports logical networking from the Kubernetes cluster VMs down to the pods themselves, providing a single network management and control plane for your container-based applications. This section will not be an exhaustive look at all of the NSX Kubernetes integration - for that, check out VMware Hands-on Lab (HOL) 1827 - but will focus on a few examples. Also, this section assumes some knowledge of Kubernetes, kubectl and YAML configuration files. For an introduction to some of that, you might want to take modules 3 and 4 of this lab before tackling the networking and security.

Namespaces

PKS-deployed clusters include an NSX system component that watches for new namespaces to be created. When that happens, NSX creates a new Logical Switch and Logical Router and allocates a private network for the pods that will later be attached to that switch. Note that the default is to create a NAT'd network; however, you can override that when creating the namespace to specify a routed network. Let's see what happens when we create a namespace.

Create Namespace

27.png

We will now create a new namespace and set the context so that the CLI is pointed to the new namespace. Return to the PowerShell session you were using earlier.

  1. Type kubectl create namespace guestbook

  2. Type kubectl get namespace

  3. Type kubectl config set-context <cluster-name> --namespace guestbook

This command changes the context for kubectl so that the default namespace used is the new guestbook namespace. It keeps you from having to specify the namespace on each command.

View New Objects With NSX Manager

28.png
  1. Click on Google Chrome Browser

  2. Navigate to URL: https://nsxtmgr-1.vmwtd.com/nsx/#/app/advanced
    Username: audit
    Password: gpe@uD1T1!!

  3. Click Log in

View Logical Router Created Automatically

29.png
  1. Click on Routing

  2. Click on T1 Router created for the guestbook namespace

There are T1 routers created for each of our namespaces, and the guestbook T1 router was automatically added when we created the namespace. If you click on Switching, you will see a similar list of Logical Switches. When pods are deployed, ports are created on the appropriate switch and an IP from the pool is assigned to the pod.

Kubernetes Network Policy and Microsegmentation

Using Network Policy, users can define firewall rules to allow traffic into a Namespace, and between Pods. The network policy is a Namespace property. Network Admins can define policy in NSX through labels that can then be applied to pods. Here we will show how the Kubernetes Network Policy definition causes the firewall rules to be automatically generated in NSX. By default, pods are non-isolated; they accept traffic from any source. Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic. In our case, we will add a policy to only allow access to our guestbook app from pods in a namespace with label app: redis.
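
To make the mechanics concrete, here is a sketch of a policy along the lines described in the next steps. The labels and name are illustrative - not the literal contents of nsx-demo-policy.yaml - and the commented-out ipBlock mirrors the note below about ingress from 0.0.0.0/0 being disabled:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nsx-demo-policy        # illustrative name
  namespace: guestbook
spec:
  podSelector:
    matchLabels:
      app: nginx               # the pods this policy isolates
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: db          # only pods labeled app: db may connect
        # - ipBlock:
        #     cidr: 0.0.0.0/0  # commented out, so traffic from outside the cluster is dropped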

Create Network Policy

30.png

We will first check that there are no Network Policies created for this Namespace

  1. Type kubectl get NetworkPolicy
    Next we look at the network policy we want to create. This one establishes a rule governing connectivity to pods with the label app: nginx from pods with the label app: db. The default is to drop traffic, so any pod that the policy does not explicitly allow will not be able to connect to the nginx pods

  2. Type cat nsx-demo-policy.yaml
    Let's apply that network policy. Notice that ingress from 0.0.0.0/0 is commented out, which means that we should not be able to reach the Guestbook app from our RDSH host, as NSX will drop that traffic

  3. Type kubectl apply -f nsx-demo-policy.yaml
    Let's see what we created

  4. Type kubectl get NetworkPolicy

View Firewall Rules Created Automatically

31.png

From NSX-Mgr we can see that rules have been created based on our policy. NSX has dynamically created Source and Destination security groups and will apply the right policy.

  1. Click on Firewall

  2. Note the Network Policy name and that its scope is the namespace we created it in.

Traceflow

NSX provides the capability to do detailed packet tracing across VMs and between pods. You can tell where a packet might have been dropped between two pods that you have deployed. We will deploy two pods: one labeled app=nginx and one labeled app=db. Our network policy should prevent communication between the two. Let's create the pods.

32.png
  1. Type kubectl apply -f guestbook-all-in-one.yaml

Retrieve the names of the pods for the next section.

Screen_Shot_2018-03-16_at_2.25.23_PM.png
  1. Type kubectl get pods

  2. Note down the pod name of the Redis master pod
    In the example above, the name of the Redis master pod is "redis-master-6685dc5cfc-m5rfj". This name will be different for you. It will be used in the next section

  3. Note down the pod name of the frontend pod
    In the example above, the name of the frontend pod is "frontend-8657d5d8f9-fklmg". This name will be different for you. It will be used in the next section

Configure Traceflow Source

Return to NSX-Mgr in the Browser

33.png
  1. Click on Tools

  2. Select Traceflow

  3. Under Source, choose Logical Port and paste the name of the Redis pod (as noted in the previous section) to find the port for your Redis pod

  4. Under Destination, choose Logical Port and paste the name of the frontend pod (as noted in the previous section) to find the port for your frontend pod

  5. Click Trace

Verify Packets Are Dropped

34.png
  1. The packet was dropped by the firewall.

Let's now apply an updated network policy that allows all ingress traffic.

Updated Network Policy

35.png
  1. Type cat nsx-demo-policy-allow.yaml
    Note that this policy is similar to our deployed policy (same name and match criteria) except we have changed the ingress rule to allow all traffic; see the sketch after these steps

  2. Type kubectl apply -f nsx-demo-policy-allow.yaml
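
For reference, the allow-all variant typically keeps the same name and pod selector but replaces the ingress rule with an empty one, which matches traffic from every source - a sketch, not the literal contents of nsx-demo-policy-allow.yaml:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nsx-demo-policy        # same name and selector as before (illustrative)
  namespace: guestbook
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
  ingress:
    - {}                       # an empty rule allows ingress from all sources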

Re-Trace Your Application

36.png
  1. Click the Re-Trace button

  2. Once the network policy was updated, the packet made it to its destination successfully.

Traceflow is a very powerful capability that can also trace traffic flow from VM to pod, VM to VM, and IP to IP. Try out a few more traces on your own.

Accessing the Guestbook App

37.png
  1. Type kubectl get svc
    This command returns the services running in this namespace. NSX-T has a native load balancer, so we can reach the Guestbook app directly from the RDSH, as sketched below.
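
As a rough illustration of how that works (the service name, labels, and ports are assumptions, not the literal contents of guestbook-all-in-one.yaml), a Service of type LoadBalancer is what triggers NSX-T to allocate a virtual server and external IP:

apiVersion: v1
kind: Service
metadata:
  name: frontend               # assumed name of the guestbook UI service
  namespace: guestbook
spec:
  type: LoadBalancer           # NSX-T's native load balancer provides the external IP
  selector:
    app: guestbook             # assumed labels on the frontend pods
    tier: frontend
  ports:
    - port: 80
      targetPort: 80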

Cleanup Deployments

38.png
  1. Type kubectl delete -f nsx-demo-policy-allow.yaml

  2. Type kubectl delete -f guestbook-all-in-one.yaml

  3. Type kubectl config set-context <username>-1 --namespace default

  4. Type kubectl delete namespace guestbook

These commands delete the network policy, the guestbook app, and the namespace, and return the kubectl context to the default namespace.

Harbor Enterprise Container Registry

The application deployments in this lab make use of a private container registry. We are using software from a VMware open source project called Harbor as our registry. Harbor is included as an enterprise-supported product with VMware Enterprise PKS. In this section, you will become familiar with the core capabilities of Harbor. You will create a project and see how to push and pull images from its repos. You will also enable content trust so that images are signed by the publisher and only signed images may be pulled from the project repo. You will also be introduced to the vulnerability scanning capability of Harbor. Most organizations will use a private registry rather than the public Docker Hub to improve security and latency for their applications. Although Harbor can be deployed as a highly available application, we have not done that for this lab.

Login to Harbor UI

Screen_Shot_2019-08-23_at_8.59.36_AM.png
40.png
  1. Click on Google Chrome

  2. Go to harbor.vmwtd.com

  3. Log in to Harbor by creating a new account via Sign up

View Projects and Repositories

Harbor organizes images into a set of projects and repositories within those projects. Repositories can have one or more images associated with them, and each image is tagged. Projects can have RBAC (Role Based Access Control) and replication policies associated with them so that administrators can regulate access to images and create image distribution pipelines across registries that might be geographically dispersed. You should now be at a summary screen that shows all of the projects in this registry.

41.png

The library project contains six repositories and has no access control; it is available to the public.

  1. Click on library to see the repos

You now see six different repos. The restreview repos will be used in Module 4 to deploy our restaurant review application.

View Restreview-ui Repo Images

42.png

1. Click on the library/restreview-ui repo

View Image Vulnerability Summary

43.png

Notice that there are two images. During lab preparation, two versions of the same image were uploaded so that we could upgrade our application in Module 4. Vulnerability scanning is part of the PKS-deployed Harbor registry.

Click on either of the images to see its vulnerability threat report.

View Image Vulnerability Report

44.png

Each vulnerability has details, along with the package containing it, and the correct package version to fix the vulnerability.

For more information on Harbor, please visit https://github.com/vmware/harbor
