New DZone publication: Serverless with Fn Project on Kubernetes

Today I realised that my Serverless with Fn on Kubernetes article was published on DZone. That is great news. I haven’t paid too much attention to DZone so far, but I realised lately that a lot of good content is published there. E.g. check out the refcards!

Serverless with Fn Project on Kubernetes for Docker (Mac)

Docker for Mac

Last week I deployed Fn Project on Kubernetes as a quick smoke test. Fn is the new serverless platform that was open sourced at JavaOne 2017. Running it on Kubernetes is easier than ever because Docker now supports Kubernetes directly, as announced at the last DockerCon. In the end it just worked without any issues.

To reproduce the steps, first of all make sure the latest version of Docker with Kubernetes support is installed properly and Kubernetes is enabled (in my case this is 17.12.0-ce-mac45 from the edge channel).

Prerequisites and Checks

List the images of the running Docker containers. This should show you the containers required for K8s if you enabled Kubernetes in the Docker preferences:

$ docker container ls --format "table {{.Image}}"

Next, check if there are existing contexts. For example, I have minikube and GKE configured as well. Make sure the * (asterisk) is set to docker-for-desktop:

$ kubectl config get-contexts
CURRENT   NAME                                         CLUSTER                                      AUTHINFO                                     NAMESPACE
*         docker-for-desktop                           docker-for-desktop-cluster                   docker-for-desktop                           
          gke_fmproject-194414_us-west2-a_fm-cluster   gke_fmproject-194414_us-west2-a_fm-cluster   gke_fmproject-194414_us-west2-a_fm-cluster   
          minikube                                     minikube                                     minikube                                  

If it is not set correctly, you can point kubectl to the correct Kubernetes cluster with the following command:

$ kubectl config use-context docker-for-desktop
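
To double-check the switch, you can also print the currently active context:

$ kubectl config current-context
docker-for-desktop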

You can also list the running nodes:

$ kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    9d        v1.8.2

Check out the cluster info; it consists of just a single node:

$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

Setup

To get better visibility into K8s I recommend installing the Kubernetes Dashboard:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

The dashboard is running in the kube-system namespace and you can check this with the following command:

$ kubectl get pods --namespace=kube-system

Enable Port Forwarding for the dashboard

Enable port forwarding to port 8443 with the following command and make sure to use the correct pod name:

$ kubectl port-forward kubernetes-dashboard-7798c48646-ctrtl 8443:8443 --namespace=kube-system
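
Since the random suffix of the pod name changes with every deployment, you can also look the name up by label instead of copying it manually. This is just a convenience sketch and assumes the dashboard deployment carries the usual k8s-app=kubernetes-dashboard label:

$ kubectl port-forward $(kubectl get pods --namespace=kube-system -l k8s-app=kubernetes-dashboard -o jsonpath='{.items[0].metadata.name}') 8443:8443 --namespace=kube-system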

With a web browser, connect to https://localhost:8443. When the certificate warning appears, allow access to the untrusted site and click “Skip” on the login screen.

Alternative to Port Forward: Proxy

Alternatively, you can access the dashboard via kubectl proxy:

$ kubectl proxy

Then use the following URL in the browser:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Microservice smoke test

The following steps are not necessary to run Fn Project. However, I first deployed a small microservice to see if Kubernetes was running fine on my Mac. Feel free to skip this entirely. To reproduce what I did, you can follow the steps for load balancing a microservice with K8s.

Fn on Kubernetes

Helm

Make sure your Kubernetes cluster is up and running correctly. We will use the K8s package manager Helm to install Fn.

Install Helm

Follow the instructions to install Helm (https://docs.helm.sh/using_helm/#installing-helm) on your system; on a Mac it can be done with brew. Helm talks to Tiller, a deployment on the K8s cluster.
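
On a Mac that boils down to a single command (assuming the Homebrew formula is still called kubernetes-helm, as it was at the time of writing):

$ brew install kubernetes-helm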

Init Helm and provision Tiller

$ helm init
$HELM_HOME has been configured at /Users/frank/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Install Fn

You can simply follow the instructions about installing Fn on Kubernetes. I put the steps here for completeness. First, let’s clone the fn-helm repo from GitHub:

$ git clone https://github.com/fnproject/fn-helm.git && cd fn-helm

Install chart dependencies (from requirements.yaml):

$ helm dep build fn

Then install the chart. I chose the release name fm-release:

$ helm install --name fm-release fn

Then make sure to set the FN_API_URL environment variable as described in the output of the command above.
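
For illustration, setting the variable typically looks something like the snippet below. The service name fm-release-fn-api is an assumption derived from my release name – take the exact command from the NOTES output of helm install:

# sketch only: the exact service name depends on your release name, see the helm install output
$ export FN_API_URL=http://$(kubectl get svc --namespace default fm-release-fn-api -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):80
$ echo $FN_API_URL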

This should be it! You should now see the corresponding deployment in the K8s dashboard.

Try to run a function. For more details check the Fn Helm instructions on GitHub.
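
A minimal smoke test with the Fn CLI could look like this (app name, runtime and registry are just illustrative; the syntax is that of the Fn CLI at the time of writing):

# FN_REGISTRY must point to a registry your cluster can pull from, e.g. your Docker Hub ID
$ export FN_REGISTRY=DOCKER_ID
$ mkdir hello && cd hello
$ fn init --runtime go
$ fn deploy --app myapp
$ fn call myapp /hello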

Summary

Installing Fn on K8s with Helm should work on any Kubernetes cluster. Give it a try yourself, code some functions and run them on Fn / Kubernetes. Feel free to check out my Serverless slides.

Analytics and Data Summit 2018: Serverless and Machine Learning + Open Source Big Data in the Cloud

The year has just started and here is the first piece of good news already: My presentation about “Serverless Architectures and Machine Learning” was accepted for the Analytics and Data Summit 2018 (formerly the BIWA conference). The presentation will include a live demo with Fn Project.

In addition, I will give another presentation together with Edelweiss Kammermann about Open Source Big Data in the Cloud (with Hadoop, Hive, Spark and Kafka live demos). IMHO two fabulous topics – I am looking forward to seeing you there!

A Serverless / FaaS Classification

At the time of writing there are more than a dozen FaaS frameworks or platforms available. These can be classified into three different categories based on their objective and reach.

The three categories are as follows:

  1. Complexity:
    Reduce the complexity of a particular vendor’s cloud-based FaaS implementation, e.g. the configuration of the API gateway and access management that is required for a REST-based serverless function. A typical example for this category is AWS Chalice.
  2. Portability:
    Provide an abstraction framework for portability and ease of use on top of the FaaS implementations of various public cloud providers. A popular example is the serverless.com framework.
  3. Standards:
    Provide a standards-based serverless platform or framework that abstracts running functions from the operation of servers. These frameworks are typically developed without a particular cloud provider in mind. When running such a framework on top of IaaS, servers are abstracted away and automated scaling is possible, but no true pay-per-invocation pricing is achieved due to the IaaS pricing model. Examples for this category are OpenFaaS and Fn Project.


Fn Project in Public Clouds (aka Serverless on IaaS?)

Fn in Public Clouds (IaaS)

Fn Project is a cloud-agnostic FaaS platform and a common question is how to use Fn in public clouds. Similar to the local installation that we used in the Oracle blog posting (link soon), it can also be installed on any public cloud IaaS. For most IaaS offerings it is enough to pass the installation command directly to the creation of a compute instance as so-called user data – commands that are executed when the instance is provisioned. Also, when running Fn in a public cloud, don’t forget to enable access rules for the Fn server allowing port 8080 – depending on your requirements – either from your own IP address or from all public IP addresses.
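
As an illustration, a user-data script along these lines could install the Fn CLI and start the Fn server at provisioning time. This is only a sketch: it assumes the instance image already has Docker installed and uses the official Fn install script:

#!/bin/bash
# install the Fn CLI via the official install script
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh
# start the Fn server (requires Docker on the instance) in the background on port 8080
nohup fn start > /var/log/fn.log 2>&1 &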

Once the Fn server is running on your favourite cloud provider, you can deploy the recommendation engine mock example mentioned in the posting above in two different ways.

Deploy Your Fn Function in the Cloud

# example 1 (for teaching purposes only; in production use the approach below)
# note: run these commands on the cloud instance

$ fn apps create advtravel
$ fn routes create advtravel /fn-recommend DOCKER_ID/recommend:0.0.2


Another, probably even more useful, way to deploy the function is to set the FN_API_URL environment variable locally, point it to the remote cloud instance, and run the local fn deploy command against it.

# example 2 (easier, what you'd do in real life)
$ export FN_API_URL=URLofRemoteCloudInstance
$ fn deploy --app advtravel 

Note that with the commands above you never had to copy the function or the container image over to the cloud instance. When the function is invoked for the first time, Fn pulls the Docker container from the registry, stores it locally, and then simply runs the function.

Test Fn in the Cloud

Once Fn is running in the cloud and your application is deployed, you can access the application from a local machine using the command line or Postman. The invocation is the same as in the local example, just replace localhost with the public IP address of your cloud instance:

$ curl -X POST --data @testdata/syd.json PUBLIC_IP:8080/r/advtravel/fn-recommend 

Real FaaS?

Obviously, when running Fn Project on IaaS you do not get the true pay-per-invocation benefit of a FaaS implemented by the cloud provider as PaaS. You do get automated scalability to some degree, since it is built into the Fn load balancer (Fn LB). In the end, running Fn on IaaS is only serverless from the developer’s perspective.

It will be interesting to see if a cloud platform (most likely Oracle, since Fn Project is driven a lot by Oracle) will provide a proper FaaS service with pay per invocation and automated scalability that is compatible with open source Fn Project.

Fn Cloud Demo

A recorded live demo from the Devoxx conference about deploying Fn on IaaS can be seen here.

Get the demo app used in the webcast from here.