Kubernetes Tipps, Tricks and Reads

Kubernetes Tidbits

Let’s face it: most of us are not using the Kubernetes CLI every day, and some might even argue that factual knowledge is overrated. This post is mostly a reminder for myself; I’d like to list some little helpers that improve your Kubernetes command-line skills:

Kubernetes Contexts

Show all available contexts (e.g. Minikube, GKE, Oracle Wercker):

$ kubectl config get-contexts

YAML Output

Get the output of a deployment in more readable YAML format:

$ kubectl get deployment my-nginx -o yaml
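If you only need a single field rather than the whole document, jsonpath output is handy; a quick sketch using the same my-nginx deployment as above:

```shell
# Extract just one field instead of the full YAML, e.g. the replica count
# (assumes the my-nginx deployment from above exists)
kubectl get deployment my-nginx -o jsonpath='{.spec.replicas}'
```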

Set custom namespace as default

Set default namespace, e.g. when working in a shared cluster.

$ kubectl config set-context $(kubectl config current-context) --namespace=XYZ
# Validate it
$ kubectl config view | grep namespace:
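Grepping the whole config works, but a more targeted check is jsonpath on the minified view, which shows only the current context:

```shell
# Show only the namespace of the current context
kubectl config view --minify -o jsonpath='{..namespace}'
```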

Other resources

I will add more here for sure. These days I am using K8s a lot 🙂 Also check out the following resources:

To be continued

Kubernetes from the kubectl Command Line

Oracle Container Engine (OCE)

You most likely read the news that Oracle joined the CNCF and now offers a managed Kubernetes service named Oracle Container Engine. You can use OCE nicely integrated with the Wercker CI/CD platform, or alternatively from the command line.

OCE with Wercker

I will present about using Kubernetes together with Wercker at the CODE conference in New York City, so stay tuned for slides and possibly a recording.

OCE with standard kubectl CLI

OCE is standard upstream Kubernetes, so with an existing kubectl client that is correctly pointing to your OCE instance you can try your first K8s steps from the CLI. Here is a quick primer.

The first thing to note is that you should set your namespace if you are using the OCE trial. The reason is that there is a shared cluster for the trial, and different users are assigned different namespaces. Don’t worry: if you follow the Wercker example given to the trial participants, the namespace will be set correctly.

Set your namespace, replace fmunz with your namespace in the command below:

$ kubectl config set-context $(kubectl config current-context) --namespace=fmunz

Create a deployment and scale it to 3 replicas:

$ kubectl run microg --image=fmunz/microg --port 5555
$ kubectl scale --replicas=3 deployment/microg

Note that so far you only have pods running, but no service, so your container will not be reachable from the outside. Now expose the deployment as a service via a NodePort:

$ kubectl expose deployment microg --type=NodePort
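The same NodePort service can also be written declaratively; a sketch assuming the run=microg label that kubectl run attaches to the deployment’s pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: microg
spec:
  type: NodePort
  selector:
    run: microg        # label set by kubectl run
  ports:
  - port: 5555
    targetPort: 5555
```

Save it as e.g. microg-svc.yaml and apply it with kubectl create -f microg-svc.yaml.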

Maybe the most difficult question is how to access the service. You can find the node IPs of the pods using the -o wide flag when retrieving information about the pods:

$ kubectl get pods -o wide

NAME                                  READY     STATUS    RESTARTS   AGE       IP              NODE
microg-858154966-bfhg9                1/1       Running   0          2h        10.244.56.146   129.213.30.58
microg-858154966-k4v07                1/1       Running   0          2h        10.244.93.146   129.213.58.116
microg-858154966-p1tn9                1/1       Running   0          2h        10.244.99.40    129.213.36.50

Pick one of the NODE IPs, e.g. 129.213.30.58. Next, retrieve the NodePort that was created.

$ kubectl describe service microg | grep NodePort
Type:                     NodePort
NodePort:                 <unset>  32279/TCP
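Instead of grepping the describe output, the assigned port can also be extracted directly with jsonpath; a sketch using the microg service from above:

```shell
# Extract the assigned NodePort of the microg service
kubectl get service microg -o jsonpath='{.spec.ports[0].nodePort}'
```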

Now you can simply combine port and IP for the URL to access the service:

$ curl -s 129.213.30.58:32279

Which will show the following service response:

{"date":"Tuesday, 27-Feb-18 15:12:47 UTC","ip":"10.244.99.40","rel":"v1.0","cnt":174}

Hit it a couple more times to investigate the load balancing.
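To watch the load balancing in action, a small loop helps; the node IP and port below are the example values from above and will differ in your cluster:

```shell
# Hit the service 10 times; the "ip" field in each JSON response
# shows which pod served the request
for i in $(seq 1 10); do
  curl -s 129.213.30.58:32279
  echo
done
```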

I will be speaking at CODE in New York!

Stay tuned for more details, but my presentation about Kubernetes was accepted for the CODE conference 2018 in New York City, March 8th. That is of course fantastic news 🙂

In more detail: I will present about the evolution of containers, from Docker, to Swarm, to container orchestration systems, Kubernetes, and managed Kubernetes (such as Oracle Container Engine or others). At the end I guess you will agree that Kubernetes is great and getting better every day, but you won’t want to manage your own Kubernetes cluster. Interestingly enough, Bob Quillin summarised my CODE presentation as the new Oracle strategy really well.

Oracle CODE New York

Of course we will have a lot of fun live coding with Mini, the Raspi cluster, again. I plan to demo the setup of the cluster, service deployment, load balancing, failover, etc. All this live on stage, with hopefully a really big screen for the projection.

New DZone publication: Serverless with Fn Project on Kubernetes

Today I realised that my Serverless with Fn on Kubernetes article was published on DZone. That is great news. I am not sure why, but I hadn’t paid too much attention to DZone; lately I realised that so much good content is published there. E.g. check out the refcards!

Serverless with Fn Project on Kubernetes for Docker (Mac)

Docker for Mac

Last week I deployed Fn Project on Kubernetes as a quick smoke test. Fn is the new serverless platform that was open sourced at JavaOne 2017. Running it on Kubernetes is easier than ever because Docker directly supports Kubernetes now, as announced at the last DockerCon. In the end it just worked without any issues.

To reproduce the steps, first of all make sure the latest version of Docker with Kubernetes support is installed properly and Kubernetes is enabled (in my case this is 17.12.0-ce-mac45 from the edge channel).

Prerequisites and Checks

List the images of running Docker containers. This should show you the containers required for K8s if you enabled it in the Docker console under preferences:

$ docker container ls --format "table {{.Image}}"

Next, check if there are existing contexts. For example, I have minikube and GKE configured as well. Make sure the * (asterisk) is set to docker-for-desktop:

$ kubectl config get-contexts
CURRENT   NAME                                         CLUSTER                                      AUTHINFO                                     NAMESPACE
*         docker-for-desktop                           docker-for-desktop-cluster                   docker-for-desktop                           
          gke_fmproject-194414_us-west2-a_fm-cluster   gke_fmproject-194414_us-west2-a_fm-cluster   gke_fmproject-194414_us-west2-a_fm-cluster   
          minikube                                     minikube                                     minikube                                  

If it is not set correctly, you can point kubectl to the correct Kubernetes cluster with the following command:

$ kubectl config use-context docker-for-desktop

Also you can see the running nodes:

$ kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    9d        v1.8.2

Check out the cluster, it just consists of a single node:

$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

Setup

To get better visibility into K8s I recommend installing the Kubernetes Dashboard:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

The dashboard is running in the kube-system namespace and you can check this with the following command:

$ kubectl get pods --namespace=kube-system

Enable Port Forwarding for the dashboard

Enable port forwarding to port 8443 with the following command and make sure to use the correct pod name:

$ kubectl port-forward kubernetes-dashboard-7798c48646-ctrtl 8443:8443 --namespace=kube-system

With a web browser connect to https://localhost:8443. When asked, allow access for the untrusted site and click on “Skip”.
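Since the dashboard pod name has a generated suffix, you can also look it up dynamically instead of hard-coding it; a sketch assuming the standard k8s-app=kubernetes-dashboard label from the deployment YAML:

```shell
# Look up the generated dashboard pod name instead of hard-coding the suffix
POD=$(kubectl get pods --namespace=kube-system \
  -l k8s-app=kubernetes-dashboard \
  -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward "$POD" 8443:8443 --namespace=kube-system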

Alternative to Port Forward: Proxy

Alternatively, you could access it via the proxy service:

$ kubectl proxy

Then use the following URL in the browser:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Microservice smoke test

The following steps are not necessary to run Fn Project; feel free to skip them entirely. However, I first deployed a small microservice to see if Kubernetes was running fine for me on my Mac. To copy what I did, you could follow the steps for load balancing a microservice with K8s.

Fn on Kubernetes

Helm

Make sure your Kubernetes cluster is up and running and working correctly. We will use the K8s package manager Helm to install Fn.

Install Helm

Follow the instructions to [install Helm](https://docs.helm.sh/using_helm/#installing-helm) on your system; e.g. on a Mac it can be done with brew. Helm will talk to Tiller, a deployment on the K8s cluster.

Init Helm and provision Tiller

$ helm init
$HELM_HOME has been configured at /Users/frank/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Install Fn

You can simply follow the instructions about installing Fn on Kubernetes. I put the steps here for completeness. First, let’s clone the fn-helm repo from GitHub:

$ git clone https://github.com/fnproject/fn-helm.git && cd fn-helm

Install chart dependencies (from requirements.yaml):

$ helm dep build fn

Then install the chart. I chose the release name fm-release:

$ helm install --name fm-release fn

Then make sure to set the FN_API_URL as described in the output of the command above.
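The exact export command is printed by helm install, so use that one; purely as a rough sketch (the service name fm-release-fn-api and the port are assumptions, check your own output):

```shell
# Sketch only: the real command is printed by `helm install`
# (service name fm-release-fn-api and port 80 are assumptions)
export FN_API_URL=http://$(kubectl get svc fm-release-fn-api \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):80
```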

This should be it! You should see the following deployment from the K8s console.

Try to run a function. For more details, check the Fn Helm instructions on GitHub.

Summary

Installing Fn on K8s with Helm should work on any Kubernetes cluster. Give it a try yourself, code some functions and run them on Fn / Kubernetes. Feel free to check out my Serverless slides.