An open source system for automating deployment, scaling, and operations of applications.

Wednesday, May 31, 2017

Draft: Kubernetes container development made easy

Today's post is by Brendan Burns, Director of Engineering at Microsoft Azure and Kubernetes co-founder.

About a month ago Microsoft announced the acquisition of Deis to expand our expertise in containers and Kubernetes. Today, I’m excited to announce a new open source project derived from this newly expanded Azure team: Draft. 

While the strengths of Kubernetes for deploying and managing applications at scale are by now well understood, the process of developing a new application for Kubernetes is still too hard. It’s harder still if you are new to containers, Kubernetes, or developing cloud applications.

Draft fills this gap. As its name implies, it is a tool that helps you begin that first draft of a containerized application running in Kubernetes. When you first run the draft tool, it automatically discovers the code that you are working on and builds out the scaffolding to support containerizing your application. Using heuristics and a variety of pre-defined project templates, Draft creates an initial Dockerfile to containerize your application, as well as a Helm chart to enable your application to be deployed and maintained in a Kubernetes cluster. Teams can even bring their own Draft project templates to customize the scaffolding that the tool builds.

But the value of Draft extends beyond simply scaffolding in a few files to help you create your application. Draft also deploys a server into your existing Kubernetes cluster that is automatically kept in sync with the code on your laptop. Whenever you make changes to your application, the Draft daemon on your laptop synchronizes that code with the Draft server in Kubernetes, and a new container is built and deployed automatically without any user action required. Draft enables the “inner loop” development experience for the cloud.

Of course, as is the expectation with all infrastructure software today, Draft is available as an open source project, and it is itself in “draft” form :) We eagerly invite the community to come and play around with Draft today; we think it’s pretty awesome, even in this early form. But we’re especially excited to see how we can develop a community around Draft to make it even more powerful for all developers of containerized applications on Kubernetes.

To give you a sense for what Draft can do, here is an example drawn from the Getting Started page in the GitHub repository.

There are multiple example applications included within the examples directory. For this walkthrough, we'll use the python example application, which uses Flask to provide a very simple Hello World webserver.

$ cd examples/python

Draft Create

We need some "scaffolding" to deploy our app into a Kubernetes cluster. Draft can create a Helm chart, a Dockerfile and a draft.toml with draft create:

$ draft create
--> Python app detected
--> Ready to sail
$ ls
Dockerfile  app.py  chart/  draft.toml  requirements.txt

The chart/ and Dockerfile assets created by Draft default to a basic Python configuration. This Dockerfile harnesses the python:onbuild image, which will install the dependencies in requirements.txt and copy the current directory into /usr/src/app. To align with the service values in chart/values.yaml, the Dockerfile exposes port 80 from the container.
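
For context, the service section of the generated chart/values.yaml looks roughly like the following sketch (field names and values are illustrative and may vary between Draft versions); the port the Dockerfile exposes has to line up with the chart's internal service port:

# illustrative sketch of a Draft-generated chart/values.yaml, not verbatim output
replicaCount: 1
image:
  pullPolicy: IfNotPresent
service:
  name: python
  type: LoadBalancer
  externalPort: 80   # port exposed by the Kubernetes Service
  internalPort: 80   # container port the Service targets; matches the Dockerfile's EXPOSE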

The draft.toml file contains basic configuration for the application, like its name, the namespace it will be deployed to, and whether to deploy the app automatically when local files change.

$ cat draft.toml
[environments]
 [environments.development]
   name = "tufted-lamb"
   namespace = "default"
   watch = true
   watch_delay = 2

Draft Up

Now we're ready to deploy app.py to a Kubernetes cluster.
Draft handles all of the following with a single draft up command:
  • reads configuration from draft.toml
  • compresses the chart/ directory and the application directory as two separate tarballs
  • uploads the tarballs to draftd, the server-side component
  • draftd then builds the docker image and pushes the image to a registry
  • draftd instructs helm to install the Helm chart, referencing the Docker registry image just built
With the watch option set to true, we can let this run in the background while we make changes later on…

$ draft up
--> Building Dockerfile
Step 1 : FROM python:onbuild
onbuild: Pulling from library/python
...
Successfully built 38f35b50162c
--> Pushing docker.io/microsoft/tufted-lamb:5a3c633ae76c9bdb81b55f5d4a783398bf00658e
The push refers to a repository [docker.io/microsoft/tufted-lamb]
...
5a3c633ae76c9bdb81b55f5d4a783398bf00658e: digest: sha256:9d9e9fdb8ee3139dd77a110fa2d2b87573c3ff5ec9c045db6009009d1c9ebf5b size: 16384
--> Deploying to Kubernetes
   Release "tufted-lamb" does not exist. Installing it now.
--> Status: DEPLOYED
--> Notes:
    1. Get the application URL by running these commands:
    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
          You can watch the status of by running 'kubectl get svc -w tufted-lamb-tufted-lamb'
 export SERVICE_IP=$(kubectl get svc --namespace default tufted-lamb-tufted-lamb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
 echo http://$SERVICE_IP:80

Watching local files for changes...

Interact with the Deployed App

Using the handy output that follows successful deployment, we can now contact our app. Note that it may take a few minutes before the load balancer is provisioned by Kubernetes. Be patient!

$ export SERVICE_IP=$(kubectl get svc --namespace default tufted-lamb-tufted-lamb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl http://$SERVICE_IP

When we curl the service, we see our app in action! A beautiful "Hello, World!" greets us.

Update the App

Now, let's change the "Hello, World!" output in app.py to output "Hello, Draft!" instead:

$ cat <<EOF > app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
   return "Hello, Draft!\n"

if __name__ == "__main__":
   app.run(host='0.0.0.0', port=8080)
EOF

Draft Up(grade)

Now, if we watch the terminal where we initially ran draft up, we'll see that Draft notices the local changes and calls draft up again. Draft then determines that the Helm release already exists, so it performs a helm upgrade rather than attempting another helm install:

--> Building Dockerfile
Step 1 : FROM python:onbuild
...
Successfully built 9c90b0445146
--> Pushing docker.io/microsoft/tufted-lamb:f031eb675112e2c942369a10815850a0b8bf190e
The push refers to a repository [docker.io/microsoft/tufted-lamb]
...
--> Deploying to Kubernetes
--> Status: DEPLOYED
--> Notes:
    1. Get the application URL by running these commands:
    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
          You can watch the status of by running 'kubectl get svc -w tufted-lamb-tufted-lamb'
 export SERVICE_IP=$(kubectl get svc --namespace default tufted-lamb-tufted-lamb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
 echo http://$SERVICE_IP:80

Now when we run curl http://$SERVICE_IP, we see "Hello, Draft!": our first app has been deployed and updated in our Kubernetes cluster via Draft!
We hope this gives you a sense of everything that Draft can do to streamline development for Kubernetes. Happy drafting!

--Brendan Burns, Director of Engineering, Microsoft Azure



  • Post questions (or answer questions) on Stack Overflow
  • Join the community portal for advocates on K8sPort
  • Follow us on Twitter @Kubernetesio for latest updates
  • Connect with the community on Slack
  • Get involved with the Kubernetes project on GitHub






Managing microservices with the Istio service mesh


Today’s post is by the Istio team showing how you can get visibility, resiliency, security and control for your microservices in Kubernetes. 

Services are at the core of modern software architecture. Deploying a series of modular, small (micro-)services rather than big monoliths gives developers the flexibility to work in different languages, technologies, and release cadences across the system, resulting in higher productivity and velocity, especially for larger teams.

With the adoption of microservices, however, new problems emerge due to the sheer number of services that exist in a larger system. Problems that had to be solved once for a monolith, like security, load balancing, monitoring, and rate limiting, now need to be handled for each service.

Kubernetes and Services

Kubernetes supports a microservices architecture through the Service construct. A Service allows developers to abstract away the functionality of a set of Pods and expose it to other developers through a well-defined API. It lets them give a name to this level of abstraction and perform rudimentary L4 load balancing. But it doesn’t help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting, circuit breaking, and so on.
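
For illustration, here is a minimal Service manifest (the names are hypothetical, not part of the Bookinfo example below) that gives a stable name to a set of Pods and load-balances L4 traffic across them:

apiVersion: v1
kind: Service
metadata:
  name: my-backend           # hypothetical name for the abstraction
spec:
  selector:
    app: my-backend          # traffic is spread across all Pods carrying this label
  ports:
  - port: 80                 # port clients use when calling the Service by name
    targetPort: 8080         # port the backing Pods listen on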

Istio, announced last week at GlueCon 2017, addresses these problems in a fundamental way through a service mesh framework. With Istio, developers can implement the core logic for their microservices and let the framework take care of the rest: traffic management, discovery, service identity and security, and policy enforcement. Better yet, this can also be done for existing microservices without rewriting or recompiling any of their parts. Istio uses Envoy as its runtime proxy component and provides an extensible intermediation layer that allows global cross-cutting policy enforcement and telemetry collection.

The current release of Istio is targeted at Kubernetes users and is packaged so that you can install it in a few lines and get visibility, resiliency, security, and control for your microservices in Kubernetes out of the box.

In a series of blog posts, we'll look at a simple application that is composed of 4 separate microservices. We'll start by looking at how the application can be deployed using plain Kubernetes. We'll then deploy the exact same services into an Istio-enabled cluster without changing any of the application code -- and see how we can observe metrics. 

In subsequent posts, we’ll focus on more advanced capabilities such as HTTP request routing, policy, identity and security management.

Example Application: BookInfo

We will use a simple application called BookInfo, which displays information, reviews, and ratings for books in a store. The application is composed of four microservices written in different languages:

[Figure: overview of the BookInfo application and its four microservices]

Since the container images for these microservices are all available on Docker Hub, all we need to deploy this application in Kubernetes are the yaml configurations.

It’s worth noting that these services have no dependencies on Kubernetes or Istio, but they make an interesting case study. In particular, the multitude of services, languages, and versions of the reviews service makes it an interesting service mesh example. More information about this example can be found here.

Running the Bookinfo Application in Kubernetes
In this post we’ll focus on the v1 version of the app:

[Figure: the v1 version of the Bookinfo application]
Deploying it with Kubernetes is straightforward, no different from deploying any other service. The Service and Deployment resources for the productpage microservice look like this:

apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  type: NodePort
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        track: stable
    spec:
      containers:
      - name: productpage
        image: istio/examples-bookinfo-productpage-v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080


The other two services that we will need to deploy if we want to run the app are details and reviews-v1. We don’t need to deploy the ratings service at this time, because v1 of the reviews service doesn’t use it. The remaining services follow essentially the same pattern as productpage; the yaml files for all services can be found here.
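
For example, a Service and Deployment for the details microservice would follow the same shape (a sketch only; the image name is assumed to mirror the productpage naming convention):

apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        track: stable
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1   # assumed to follow the productpage image naming
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080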

To run the services as an ordinary Kubernetes app:
kubectl apply -f bookinfo-v1.yaml

To access the application from outside the cluster we’ll need the NodePort address of the productpage service:

export BOOKINFO_URL=$(kubectl get po -l app=productpage -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc productpage -o jsonpath={.spec.ports[0].nodePort})

We can now point a browser at http://$BOOKINFO_URL/productpage and see the Bookinfo product page.



Running the Bookinfo Application with Istio

Now that we’ve seen the app, we’ll adjust our deployment slightly to make it work with Istio. We first need to install Istio in our cluster. To see all of the metrics and tracing features in action, we also install the optional Prometheus, Grafana, and Zipkin addons. We can now delete the previous app and start the Bookinfo app again using the exact same yaml file, this time with Istio:

kubectl delete -f bookinfo-v1.yaml
kubectl apply -f <(istioctl kube-inject -f bookinfo-v1.yaml)

Notice that this time we use the istioctl kube-inject command to modify bookinfo-v1.yaml before creating the deployments. It injects the Envoy sidecar into the Kubernetes pods as documented here. Consequently, all of the microservices are packaged with an Envoy sidecar that manages incoming and outgoing traffic for the service.
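
Conceptually, each injected Pod template now carries an Envoy sidecar alongside the application container, along these lines (a simplified sketch, not the exact spec generated by istioctl kube-inject; the proxy image name and arguments here are illustrative):

spec:
  containers:
  - name: productpage
    image: istio/examples-bookinfo-productpage-v1
    ports:
    - containerPort: 9080
  - name: proxy                 # Envoy sidecar added by injection (simplified)
    image: istio/proxy          # illustrative image name; the real spec is filled in by kube-inject
    args: ["proxy", "sidecar"]  # illustrative arguments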

In the Istio service mesh we will not want to access the application productpage directly, as we did in plain Kubernetes. Instead, we want an Envoy sidecar in the request path so that we can use Istio’s management features (version routing, circuit breakers, policies, etc.) to control external calls to productpage, just like we can for internal requests. Istio’s Ingress controller is used for this purpose.

To use the Istio Ingress controller, we need to create a Kubernetes Ingress resource for the app, annotated with kubernetes.io/ingress.class: "istio", like this:

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bookinfo
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /productpage
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /login
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /logout
        backend:
          serviceName: productpage
          servicePort: 9080
EOF

The resulting deployment, with Istio and the v1 version of the Bookinfo app, looks like this:

[Figure: the Bookinfo v1 deployment with Istio]
This time we will access the app using the NodePort address of the Istio Ingress controller:

export BOOKINFO_URL=$(kubectl get po -l istio=ingress -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc istio-ingress -o jsonpath={.spec.ports[0].nodePort})

We can now load the page at http://$BOOKINFO_URL/productpage and once again see the running app; for the user, there should be no difference from the previous deployment without Istio.

However, now that the application is running in the Istio service mesh, we can immediately start to see some benefits. 

Metrics collection

The first thing we get from Istio out of the box is the collection of metrics in Prometheus. These metrics are generated by the Istio filter in Envoy, collected according to default rules (which can be customized), and then sent to Prometheus. The metrics can be visualized in the Istio dashboard in Grafana. Note that while Prometheus is the default metrics backend out of the box, Istio allows you to plug in others, as we’ll demonstrate in future blog posts.

To demonstrate, we'll start by running the following command to generate some load on the application:

wrk -t1 -c1 -d20s http://$BOOKINFO_URL/productpage

We obtain Grafana’s NodePort URL:

export GRAFANA_URL=$(kubectl get po -l app=grafana -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc grafana -o jsonpath={.spec.ports[0].nodePort})

We can now open a browser at http://$GRAFANA_URL/dashboard/db/istio-dashboard and examine the various performance metrics for each of the Bookinfo services:

[Figure: Istio dashboard in Grafana showing metrics for the Bookinfo services]

Distributed tracing

The next thing we get from Istio is call tracing with Zipkin. We obtain its NodePort URL:

export ZIPKIN_URL=$(kubectl get po -l app=zipkin -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc zipkin -o jsonpath={.spec.ports[0].nodePort})

We can now point a browser at http://$ZIPKIN_URL/ to see request trace spans through the Bookinfo services.


Although the Envoy proxies send trace spans to Zipkin out of the box, to leverage its full potential applications need to be Zipkin-aware and forward some headers to tie the individual spans together. See zipkin-tracing for details.

Holistic view of the entire fleet

The metrics that Istio provides are much more than just a convenience. They provide a consistent view of the service mesh by generating uniform metrics throughout. We don’t have to worry about reconciling different types of metrics emitted by various runtime agents, or about adding arbitrary agents to gather metrics for legacy uninstrumented apps. We also no longer have to rely on the development process to properly instrument the application to generate metrics. The service mesh sees all the traffic, even into and out of legacy "black box" services, and generates metrics for all of it.

Summary

The demo above showed how, in a few steps, we can launch Istio-backed services and observe L7 metrics on them. Over the coming weeks we’ll follow up with demonstrations of more Istio capabilities, such as policy management and HTTP request routing.

Google, IBM, and Lyft joined forces to create Istio based on our common experiences building and operating large and complex microservice deployments for internal and enterprise customers. Istio is an industry-wide community effort. We’ve been thrilled to see the enthusiasm from industry partners and the insights they have brought. As we take the next step and release Istio into the wild, we cannot wait to see what the broader community of contributors will bring to it.

If you’re using, or considering using, a microservices architecture on Kubernetes, we encourage you to give Istio a try, learn more about it at istio.io, let us know what you think, or better yet, join the developer community to help shape its future!
--On behalf of the Istio team: Frank Budinsky, Software Engineer at IBM; Andra Cismaru, Software Engineer at Google; and Israel Shalom, Product Manager at Google.


  • Get involved with the Kubernetes project on GitHub 
  • Post questions (or answer questions) on Stack Overflow 
  • Connect with the community on Slack
  • Follow us on Twitter @Kubernetesio for latest updates