How Kubernetes, Kong, and Docker simplify microservices for everyone.

Even your boss can deploy microservices.

Deployment is for everyone

At our recent Zuhlke UK training camp, Wolfgang (UK CEO) wanted a better understanding of working with microservices in a modern cloud environment, so he asked Kevin, one of our Lead Consultants, to show him the ropes. What surprised Wolfgang was how easy it is, with the right components, to deploy and manage services automatically in the Cloud. If you’re thinking of adopting DevOps and bridging the gap between development and operations, the platform and tooling are no longer your blockers.

In this post we will show you the steps to configure, deploy, and monitor a microservice in a cloud infrastructure using some common components. First, we’ll review our choices:

Docker is the most popular implementation of containers, a technique for packaging applications that isolates them from local dependencies wherever they run. A container is like a very lightweight virtual machine that contains only the application and the libraries and settings it needs. With Docker we can build a container image that runs cleanly wherever it’s deployed, whether to development, integration, or production. The significance of images is that they’re cheap and immutable, so we can treat them like binaries that can be managed in a build pipeline.

Kubernetes is the most popular container orchestration system. A single container is easy to work with, but scaling up is too complicated to manage by hand. Kubernetes provides automated deployment, scaling, and management of containerized applications. It implements most of the basic system administration that was traditionally done by hand or with local scripting, so that a team can focus on developing functionality rather than restarting flaky batch jobs.

Kong is an API Gateway, an intermediary that protects services on the public internet from malicious activity, such as unauthorized access and denial of service attacks. It implements functionality that should be common across most services, and is easier to manage and automate than trying to implement locally. There are many alternatives, such as Linkerd, Istio, Netflix’s Zuul, or the cloud vendors’ built-in equivalents.

The critical features of this approach are that it’s scriptable, which means that everything can be automated, and that it scales, down as well as up. This means that teams can have a consistent experience whether they’re deploying locally for experimentation and development, deploying to an internal cluster for integration, or deploying to the cloud for production. Docker and Kubernetes insulate the team from platform-specific detail that’s not immediately relevant.

A significant recent shift is that the major cloud providers now support Docker and Kubernetes natively, which avoids having to set up a cluster by hand. From a technical point of view, the easiest option now is to do everything in the cloud, automating out everything that’s brittle or repetitive.

For this example, we’ll deploy a simple microservice, wrapped as a Docker image, into Kubernetes running on AWS, and secured by Kong. This is how the pieces fit together:

Creating a Docker image

First, we need to create a microservice and package it in a container image. We’ll start with a minimal “Hello World” service written in Java; there are many techniques and frameworks to help with this so we won’t go into the detail. For our purposes, all we need to know is that we’re going to produce an executable jar file: hello-world-1.0.0.jar
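
If you’d like a concrete starting point, here is a minimal sketch of such a service using only the JDK’s built-in HTTP server; the class name and URL path are illustrative, and in a real project you’d more likely use a framework such as Spring Boot or Dropwizard:

// HelloWorld.java - minimal illustrative service, plain JDK 8
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HelloWorld {
    public static void main(String[] args) throws Exception {
        // listen on port 8080, the port we expose from the container later
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/helloworld", exchange -> {
            byte[] body = "Hello World".getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();  // serves GET /helloworld until the process is stopped
    }
}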

We define the features of our Docker image in a Dockerfile:

# Dockerfile
FROM openjdk:8u131-jre-alpine
RUN addgroup -S java0 && adduser -S -G java0 java0
COPY build/hello-world-1.0.0.jar /app.jar
EXPOSE 8080
USER java0
ENTRYPOINT [ "sh", "-c", "java -jar /app.jar" ]

This Dockerfile is derived FROM the official OpenJDK distribution installed on a small Linux distribution called Alpine. An important feature of Dockerfiles is that they’re composable: there’s a large collection of common Dockerfiles and images available for extension, and this OpenJDK parent image is published on Docker Hub. There’s guidance on how to write Dockerfiles on the Docker site.

The rest of the script is:
– RUN the unix commands to create a new java0 user, to avoid running as a privileged user;
– COPY the built application jar to the image as /app.jar;
– EXPOSE the relevant HTTP port to allow clients to connect to our service; and,
– declare an ENTRYPOINT for the image that starts the service, running as user java0.

We build the Docker image this file defines and tag it for uploading by running this command:

docker build -t <your-docker-username>/hello-world:v1 .
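
Before pushing the image anywhere, it’s worth a quick local smoke test; the container name below is arbitrary, and we assume the service listens on 8080 as declared in the Dockerfile:

# run the image locally in the background and call it (an illustrative check)
docker run --rm -d -p 8080:8080 --name hello-test <your-docker-username>/hello-world:v1
curl http://localhost:8080/helloworld
docker stop hello-test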

Now we log in and push this new image to our Docker repository, to make it available to the rest of our environment.

docker login
docker push <your-docker-username>/hello-world:v1

Deploying our microservice on Kubernetes

We start by setting up a Kubernetes cluster on AWS using kops, a command-line tool for administering Kubernetes clusters; the kops documentation has instructions for creating a cluster in AWS (one difference is that we deploy into the eu-west-1 region).
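
As a rough sketch, the cluster creation boils down to a few kops commands; the cluster name and the S3 state bucket below are placeholders you’d replace with your own:

# kops keeps cluster state in an S3 bucket (placeholder name)
export KOPS_STATE_STORE=s3://<your-kops-state-bucket>

# create a small cluster in eu-west-1 and check that it comes up
kops create cluster --name=hello.example.com --zones=eu-west-1a --yes
kops validate cluster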

To deploy our service, once again we describe the result we want to achieve in a file, in this case hello-world-deployment.yml. It’s long enough that we’ll take it in two parts.

First, we define a Deployment which ensures that there are always two instances running of the Docker image we just created, accessible within the cluster at port 8080. Each instance runs in a separate pod, which is a group of one or more containers with shared storage and network. We want two pods to support load-balancing.

# hello-world-deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: hello-world
        image: <your-docker-username>/hello-world:v1
        ports:
        - containerPort: 8080

Then we define a Service, an abstraction that defines a logical set of pods and a policy for accessing them, to provide access to the application we’ve just created.

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: web
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 8080

kubectl is the command line tool for controlling Kubernetes installations. This command uses the specification in our deployment file to spin up our application.

kubectl create -f hello-world-deployment.yml
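
We can check that the two replicas have come up and that the service has been created; these are standard kubectl queries, using the app: web label from the deployment:

# confirm the deployment, its two pods, and the service are all running
kubectl get deployments
kubectl get pods -l app=web
kubectl get service hello-world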

Protecting our services with Kong

The final piece before going public with our service is to deploy an API Gateway, in our case using Kong. An API Gateway provides a secure front door to protect our infrastructure and services, and is essential before opening a system to the internet. We deploy Kong into our Kubernetes cluster following Kong’s instructions (starting from step 3, because we’ve already got a Kubernetes cluster running).

With Kong in place, we can create a rule to allow GET and POST requests on port 8080 to our “hello world” service:

curl -i -X POST \
   --url http://a99...eu-west-1.elb.amazonaws.com:8001/apis/ \
   --data 'name=hello-world-api' \
   --data 'methods=GET,POST' \
   --data 'upstream_url=http://hello-world:8080'

Kubernetes will load-balance client requests across the two replica pods that we specified in our deployment file, so we should have some resilience if our service suddenly becomes successful.
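
And if two replicas turn out not to be enough, scaling is a one-line change; this can be done ad hoc with kubectl, or by updating the replicas count in the deployment file and re-applying it:

# scale the hello-world deployment from 2 to 4 pods
kubectl scale deployment hello-world --replicas=4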

The Kong distribution also includes a set of standard plugins, such as authentication, configuration, logging, analytics, request transformation, and rate limiting. We decide, for example, that we’re concerned about being swamped by an out-of-control bot, so we introduce rate limiting per client. Again, this is a one-line script:

curl -i -X POST \
   --url http://a99...eu-west-1.elb.amazonaws.com:8001/apis/hello-world-api/plugins \
   --data "name=rate-limiting" \
   --data "config.second=100"

Finally, we can call our service and get a result:

curl http://a99...eu-west-1.elb.amazonaws.com:8000/helloworld
Hello World

This call uses Kong’s public port 8000. Kong forwards the request to the Hello World service within the cluster.
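
If you want to see what Kong now has configured, the admin port we used earlier will list the registered APIs, and repeating the public call with -i shows the rate-limiting headers that the plugin adds to each response (the exact header names vary between Kong versions):

# list the APIs registered with Kong (admin port 8001)
curl http://a99...eu-west-1.elb.amazonaws.com:8001/apis/

# -i prints the response headers, including the rate-limit counters
curl -i http://a99...eu-west-1.elb.amazonaws.com:8000/helloworld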

You can too

The lesson here, for those of us who have been through a couple of generations of distributed systems, is just how easy it has become to develop, deploy, and monitor complex systems. The results of over a decade of effort from the major internet companies, based on stable protocols and open source software, have been made available to the public as either tooling or services. The convergence of several key technologies, such as containers, means that setting up and supporting relatively large installations is now a commodity skill rather than a rare one.

This should change the balance of how organisational IT works. It means that development teams can do a lot more to look after their own infrastructure than previously, that operations work is less about “changing tapes” and more about providing support and guidance, and that costs become explicit, allowing better trade-offs. With this in place, deployment and monitoring become part of the development process and should be included from the beginning.

As always, major technical shifts imply organisational shifts. This requires development teams to become more sophisticated in their understanding of distributed systems and, usually, to take on more production support. It requires operations groups to function at a higher level, facilitating development teams rather than acting as gatekeepers, and paying more attention to large-scale systemic issues rather than individual services. And, of course, everyone needs a better understanding of security.

Or, to put it another way, if our CEO can do this, then you can too.

Raul Rodriguez, Software Engineer

Steve Freeman, Distinguished Consultant
