Introduction to Kubernetes: kubectl

Creating Python Microservices, Part 5

Over the past few installments, we’ve created a Python microservice, dockerized the result, created a Container Registry, and set up the Azure Kubernetes Service. We now have a container serving a Flask app to the internet from within a Kubernetes cluster. Let’s try out some Kubernetes features via kubectl.

Code for this can be found on GitHub. Or you can use this template as a starting point.

List the pods

You can list all the pods that are running:

kubectl get pods
=>
NAME                                               READY     STATUS             RESTARTS   AGE
demo-python-flask-deployment-7577b588bb-g6d9q      1/1       Running            0          5m

The STATUS column tells us the pod is Running, and the READY column shows that one of the pod’s one container is ready.
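
If you need a bit more detail, kubectl’s standard -o wide output format (not used above, but part of stock kubectl) also shows each pod’s IP address and the node it was scheduled onto:

kubectl get pods -o wide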

Look at the pod’s metadata

If it isn’t running (maybe it has a status like CrashLoopBackOff), you may want to start by looking at the pod’s metadata. The kubectl describe pod command will tell you which node a pod is scheduled on, list its environment variables, and show recent events (scheduling, image pulls, container starts), among other things:

kubectl describe pod <pod-name>
=>
...
Events:
  Type    Reason     Age   From                               Message
  ----    ------     ----  ----                               -------
  Normal  Scheduled  25m   default-scheduler                  Successfully assigned default/demo-python-flask-deployment-7577b588bb-g6d9q to aks-agentpool-38491153-0
  Normal  Pulling    25m   kubelet, aks-agentpool-38491153-0  pulling image "mydemoregistry.azurecr.io/pythondemo:0.0.1"
  Normal  Pulled     25m   kubelet, aks-agentpool-38491153-0  Successfully pulled image "mydemoregistry.azurecr.io/pythondemo:0.0.1"
  Normal  Created    25m   kubelet, aks-agentpool-38491153-0  Created container
  Normal  Started    25m   kubelet, aks-agentpool-38491153-0  Started container

You can also describe other Kubernetes objects, such as a deployment or a service.
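
For example, using the object names that appear later in this article, you can inspect the deployment and the service the same way:

kubectl describe deployment demo-python-flask-deployment
kubectl describe service pythondemo-flask-service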

Check a pod’s logs

If the pod is running, your app will likely have written some logs to stdout:

kubectl logs <pod-name>
=> 
Checking for script in /app/prestart.sh
Running script /app/prestart.sh
# ... lots of startup logs ...
spawned uWSGI master process (pid: 10)
spawned uWSGI worker 1 (pid: 12, cores: 1)
spawned uWSGI worker 2 (pid: 13, cores: 1)
running "unix_signal:15 gracefully_kill_them_all" (master-start)...
2019-06-12 17:53:44,699 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-06-12 17:53:44,699 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[pid: 13|app: 0|req: 1/1] 10.240.0.4 () {42 vars in 742 bytes} [Wed Jun 12 17:57:36 2019] GET / => generated 28 bytes in 2 msecs (HTTP/1.1 200) 2 headers in 71 bytes (1 switches on core 0)
10.240.0.4 - - [12/Jun/2019:17:57:36 +0000] "GET / HTTP/1.1" 200 28 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.80 Safari/537.36" "-"
# ... more log entries omitted

We can see logs indicating that two uWSGI workers are running, and logs showing that I accessed the service from a Chrome browser.
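
A few standard kubectl logs flags are worth knowing beyond the basic form above: -f streams new log lines as they arrive, --tail limits how much history is returned, and --previous shows the logs of the prior container instance after a crash:

kubectl logs -f --tail=20 <pod-name>
kubectl logs --previous <pod-name>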

Execute commands on a pod

Kubernetes gives you remote access to pods out of the box; you don’t have to configure sshd or remote desktop. You can open an interactive shell like this:

kubectl exec -it <pod-name> -- bash
=>

root@demo-python-flask-deployment-7577b588bb-g6d9q:/app# ls -al
total 52
drwxr-xr-x 1 root root 4096 Jun 10 15:29 .
drwxr-xr-x 1 root root 4096 Jun 12 17:53 ..
drwxr-xr-x 3 root root 4096 Jun  6 22:18 .pytest_cache
-rwxr-xr-x 1 root root 1364 Jun 10 15:22 Dockerfile
drwxr-xr-x 1 root root 4096 Jun 10 15:01 app
drwxr-xr-x 2 root root 4096 Jun 10 15:21 bak
drwxr-xr-x 3 root root 4096 Jun  6 22:20 mypkg
-rw-r--r-- 1 root root  206 May 17 02:55 prestart.sh
-rwxr-xr-x 1 root root   69 Jun  6 22:02 requirements.txt
drwxr-xr-x 5 root root 4096 Jun  6 22:36 tests
-rwxr-xr-x 1 root root   88 Jun 10 15:28 uwsgi.ini

… or you can execute a single command remotely:

kubectl exec <pod-name> -- printenv

Here’s how to execute a command with parameters:

kubectl exec <pod-name> -- bash -c "uname -a"
=>

Linux demo-python-flask-deployment-7577b588bb-g6d9q 4.15.0-1037-azure #39~16.04.1-Ubuntu SMP Tue Jan 15 17:20:47 UTC 2019 x86_64 GNU/Linux
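
You can also copy files between your machine and a pod with kubectl cp (it needs a tar binary inside the container; most full Linux base images, like the one this app appears to use, include one). For example, to pull down the uwsgi.ini we saw in the directory listing above:

kubectl cp <pod-name>:/app/uwsgi.ini ./uwsgi.ini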

Restart a pod

The easiest way to restart a pod is to simply delete it and let the deployment replace it.

kubectl delete pod <pod-name>

Because the deployment still wants one replica, Kubernetes will create a replacement as soon as you delete the pod. After you request the pod’s deletion, you can monitor the pods’ statuses with kubectl get pods --watch and see the new pod come up while the old one terminates.
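
If your kubectl is version 1.15 or newer, there is also a one-liner that performs a rolling restart of every pod in a deployment, so you don’t have to delete pods by hand:

kubectl rollout restart deployment demo-python-flask-deployment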

Scale Out

If someday we decide that our single pod is not sufficient to handle the incoming traffic, we can just add more pods. Fortunately this is easy to do; just tell the deployment that we need more replicas. Set replicas in the spec section of your deployment manifest:

# ...
spec:
  replicas: 2  # <-- create two pods  
  selector:
    matchLabels:
      app: demo-python-flask
# ...

Then apply your changes and watch while Kubernetes reconfigures your deployment:

kubectl apply -f .\deploy-demo.yml 
=>
deployment.apps "demo-python-flask-deployment" configured
service "pythondemo-flask-service" unchanged

kubectl get pods --watch
=>
NAME                                               READY     STATUS              RESTARTS   AGE
demo-python-flask-deployment-7577b588bb-g6d9q      1/1       Running             0          43m
demo-python-flask-deployment-7577b588bb-jb6ks      0/1       ContainerCreating   0          3s
...

After a few minutes, you’ll see that you now have two pods running side-by-side.
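
If you’d rather not edit the manifest, the same change can be made imperatively with kubectl scale; just keep in mind that the next kubectl apply of the YAML will reset the replica count to whatever the file says:

kubectl scale deployment demo-python-flask-deployment --replicas=2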

Remove our Microservice

kubectl delete will remove the service (undoing its network configuration), delete the deployment, and terminate the deployment’s pods:

kubectl delete -f .\deploy-demo.yml
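
You can confirm that everything is gone by listing the objects the manifest had created:

kubectl get deployments,services,pods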

Change contexts

Ultimately you will probably have several different clusters. For the moment there’s only one, but you can verify that your current “context” is demo:

kubectl config current-context

=> demo
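
Once you do have more than one cluster, the standard kubectl config subcommands list the available contexts and switch between them:

kubectl config get-contexts
kubectl config use-context demo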

Next Steps

So far this works well enough. If we stopped now, we could allocate a static IP address in our resource group, point a hostname at it, point the service’s LoadBalancer at that address, and we’d have a working web site. But publishing the public endpoint this way ties the responsibility for security to the deployment of the microservice. Your microservice developers probably don’t want to deal with security and networking concerns like issuing SSL certificates or configuring firewalls and load balancers. In Part 6 we’ll look at how to separate the concerns of deploying the microservice from the concerns of making it publicly available, so that your coders can concentrate on coding without getting bogged down in devops details.

Clean Up

When you’re done you can delete your registry and your cluster by deleting the resource groups ContainerRegistryDemo and KubernetesDemo.
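
If you kept the resource group names from earlier in the series, the Azure CLI can do this for you; each command prompts for confirmation and then deletes everything in that group:

az group delete --name ContainerRegistryDemo
az group delete --name KubernetesDemo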