An open source system for automating deployment, scaling, and operations of applications.

Friday, December 23, 2016

Kubernetes supports OpenAPI

Editor’s note: this post is part of a series of in-depth articles on what's new in Kubernetes 1.5 

OpenAPI allows API providers to define their operations and models, and enables developers to automate their tools and generate their favorite language’s client to talk to that API server. Kubernetes has supported Swagger 1.2 (an older version of the OpenAPI spec) for a while, but the spec was incomplete and invalid, making it hard to generate tools/clients based on it. 

In Kubernetes 1.4, we introduced alpha support for the OpenAPI spec (formerly known as Swagger 2.0 before it was donated to the OpenAPI Initiative) by upgrading the current models and operations. Beginning in Kubernetes 1.5, support for the OpenAPI spec has been completed by auto-generating the spec directly from Kubernetes source, which will keep the spec--and documentation--completely in sync with future changes in operations/models.

The new spec enables better API documentation, and we have even introduced a supported Python client.

The spec is modular, divided by GroupVersion: this is future-proof, since we intend to allow separate GroupVersions to be served out of separate API servers.

The structure of the spec is explained in detail in the OpenAPI spec definition. We used operation tags to separate each GroupVersion and filled in as much information as we could about paths/operations and models. For a specific operation, all parameters, the method of the call, and the responses are documented. 

For example, the OpenAPI spec for reading a pod’s information is:

{
 ...
 "paths": {
  "/api/v1/namespaces/{namespace}/pods/{name}": {
   "get": {
    "description": "read the specified Pod",
    "consumes": [
     "*/*"
    ],
    "produces": [
     "application/json",
     "application/yaml",
     "application/vnd.kubernetes.protobuf"
    ],
    "schemes": [
     "https"
    ],
    "tags": [
     "core_v1"
    ],
    "operationId": "readCoreV1NamespacedPod",
    "parameters": [
     {
      "uniqueItems": true,
      "type": "boolean",
      "description": "Should the export be exact.  Exact export maintains cluster-specific fields like 'Namespace'.",
      "name": "exact",
      "in": "query"
     },
     {
      "uniqueItems": true,
      "type": "boolean",
      "description": "Should this value be exported.  Export strips fields that a user can not specify.",
      "name": "export",
      "in": "query"
     }
    ],
    "responses": {
     "200": {
      "description": "OK",
      "schema": {
       "$ref": "#/definitions/v1.Pod"
      }
     },
     "401": {
      "description": "Unauthorized"
     }
    }
   }
  }
 },
 ...
}

Using this information and the URL of `kube-apiserver`, one should be able to make a call to the given URL (/api/v1/namespaces/{namespace}/pods/{name}) with parameters such as `name`, `exact`, and `export` to get the pod’s information. Client library generators would also use this information to create an API function call for reading the pod’s information. For example, the Python client makes it easy to call this operation like this:

from kubernetes import client, config

# load the API server address and credentials from the local kubeconfig
config.load_kube_config()
ret = client.CoreV1Api().read_namespaced_pod(name="pods_name", namespace="default")
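
The returned object is a typed model generated from the same spec, so fields can be accessed directly. A small illustrative continuation of the example above (attribute names mirror the v1.Pod model):

# assumes 'ret' from the previous snippet
print(ret.metadata.name, ret.status.phase)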

A simplified version of the generated read_namespaced_pod can be found here.
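
For illustration, here is a rough sketch of what such a generated call boils down to (a hypothetical, simplified stand-in for the generated code, assuming the API server address and a bearer token are already known):

import requests

def read_namespaced_pod(host, token, name, namespace, exact=None, export=None):
    # substitute the path parameters into the path template from the spec
    url = host + "/api/v1/namespaces/{namespace}/pods/{name}".format(
        namespace=namespace, name=name)
    # pass only the optional query parameters that were actually set
    params = {k: v for k, v in {"exact": exact, "export": export}.items() if v is not None}
    resp = requests.get(
        url,
        params=params,
        # "application/json" is one of the media types listed under "produces" in the spec
        headers={"Authorization": "Bearer " + token, "Accept": "application/json"},
    )
    resp.raise_for_status()  # e.g. 401 Unauthorized
    return resp.json()       # the body matches the v1.Pod schema referenced by the 200 response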

The swagger-codegen document generator can also create documentation using the same information:

GET /api/v1/namespaces/{namespace}/pods/{name}
(readCoreV1NamespacedPod)
read the specified Pod
Path parameters
name (required)
Path Parameter — name of the Pod
namespace (required)
Path Parameter — object name and auth scope, such as for teams and projects
Consumes
This API call consumes the following media types via the Content-Type request header:
  • */*

Query parameters
pretty (optional)
Query Parameter — If 'true', then the output is pretty printed.
exact (optional)
Query Parameter — Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'.
export (optional)
Query Parameter — Should this value be exported. Export strips fields that a user can not specify.
Return type
v1.Pod

Produces
This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.
  • application/json
  • application/yaml
  • application/vnd.kubernetes.protobuf
Responses
200
OK v1.Pod
401
Unauthorized


There are two ways to access the OpenAPI spec:
  • From `kube-apiserver`/swagger.json. This file contains all enabled GroupVersions (routes and models) and is the most up-to-date spec for a specific `kube-apiserver`.
  • From the Kubernetes GitHub repository, with all core GroupVersions enabled. You can access it on master or for a specific release (for example, the 1.5 release).
There are numerous tools that work with this spec. For example, you can use the swagger editor to open the spec file and render documentation, as well as generate clients; or you can directly use swagger-codegen to generate documentation and clients. The clients this generates will mostly work out of the box--but you will need some support for authorization and some Kubernetes-specific utilities. Use the Python client as a template to create your own client. 
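
As a quick sketch of the first option (assuming `kubectl proxy` is forwarding the API server on its default local port 8001):

import json
import urllib.request

# fetch the spec from a kube-apiserver exposed locally via `kubectl proxy`
body = urllib.request.urlopen("http://127.0.0.1:8001/swagger.json").read()
spec = json.loads(body.decode("utf-8"))
print(spec["info"]["title"], spec["info"]["version"])
print(len(spec["paths"]), "paths and", len(spec["definitions"]), "models")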

If you want to get involved in the development of OpenAPI support or client libraries, or to report a bug, you can get in touch with the developers at SIG-API-Machinery.

--Mehdy Bohlool, Software Engineer, Google





Thursday, December 22, 2016

Cluster Federation in Kubernetes 1.5

Editor’s note: this post is part of a series of in-depth articles on what's new in Kubernetes 1.5

In the latest Kubernetes 1.5 release, you’ll notice that support for Cluster Federation is maturing. That functionality was introduced in Kubernetes 1.3, and the 1.5 release includes a number of new features, including an easier setup experience and a step closer to supporting all Kubernetes API objects.

A new command line tool called ‘kubefed’ was introduced to make getting started with Cluster Federation much simpler. Also, alpha level support was added for Federated DaemonSets, Deployments and ConfigMaps. In summary:
  • DaemonSets guarantee that a given pod is always present on every node, including new nodes as they are added to the cluster (more info).
  • Deployments describe the desired state of Replica Sets (more info). 
  • ConfigMaps hold configuration values consumed by Replica Sets (which greatly improves image reusability, as their parameters can be externalized - more info). 
Federated DaemonSets, Federated Deployments, and Federated ConfigMaps take the qualities of the base concepts to the next level. For instance, Federated DaemonSets guarantee that a pod is deployed on every node of a newly added cluster.

But what actually is “federation”? Let’s explain it through the needs it satisfies. Imagine a service that operates globally. Naturally, all its users expect to get the same quality of service, whether they are located in Asia, Europe, or the US. What this means is that the service must respond equally fast to requests at each location. This sounds simple, but there’s a lot of logic involved behind the scenes. This is what Kubernetes Cluster Federation aims to do.

How does it work? One of the Kubernetes clusters must become a master by running a Federation Control Plane. In practice, this is a controller that monitors the health of other clusters and provides a single entry point for administration. The entry point behaves like a typical Kubernetes cluster: it allows you to create Replica Sets, Deployments, and Services, but the federation control plane passes the resources on to the underlying clusters. This means that if we ask the federation control plane to create a Replica Set with 1,000 replicas, it will spread the request across all underlying clusters. If we have 5 clusters, then by default each will get its share of 200 replicas.
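
As a rough illustration of that default equal-spread behavior (a sketch only, not the actual federation controller code):

def spread_replicas(total, clusters):
    # split the replica count evenly; any remainder goes to the first few clusters
    base, remainder = divmod(total, len(clusters))
    return {name: base + (1 if i < remainder else 0)
            for i, name in enumerate(clusters)}

# 1,000 replicas across 5 clusters -> 200 replicas each
print(spread_replicas(1000, ["us", "eu-1", "eu-2", "asia-1", "asia-2"]))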

This on its own is a powerful mechanism. But there’s more. It’s also possible to create a Federated Ingress. Effectively, this is a global application-layer load balancer. Thanks to an understanding of the application layer, it allows load balancing to be “smarter” -- for instance, by taking into account the geographical location of clients and servers, and routing the traffic between them in an optimal way.

In summary, with Kubernetes Cluster Federation we can facilitate administration of all the clusters (a single access point) and also optimize content delivery around the globe. In the following sections, we will show how it works.

Creating a Federation Plane

In this exercise, we will federate a few clusters. For convenience, all commands have been grouped into 6 scripts available here:
  • 0-settings.sh
  • 1-create.sh
  • 2-getcredentials.sh
  • 3-initfed.sh
  • 4-joinfed.sh
  • 5-destroy.sh
First we need to define several variables (0-settings.sh):
$ cat 0-settings.sh && . 0-settings.sh
# this project creates 3 clusters in 3 zones. FED_HOST_CLUSTER points to the one that will be used to deploy the federation control plane
export FED_HOST_CLUSTER=us-east1-b

# Google Cloud project name
export FED_PROJECT=<YOUR PROJECT e.g. company-project>

# DNS suffix for this federation. Federated Service DNS names are published with this suffix. This must be a real domain name that you control and is programmable by one of the DNS providers (Google Cloud DNS or AWS Route53)
export FED_DNS_ZONE=<YOUR DNS SUFFIX e.g. example.com>

Next, get the kubectl and kubefed binaries (for installation instructions, refer to the guides here and here).
Now the setup is ready to create a few Google Container Engine (GKE) clusters with gcloud container clusters create (1-create.sh). In this case, one is in the US, one in Europe, and one in Asia.
$ cat 1-create.sh && . 1-create.sh
gcloud container clusters create gce-us-east1-b --project=${FED_PROJECT} --zone=us-east1-b --scopes cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite

gcloud container clusters create gce-europe-west1-b --project=${FED_PROJECT} --zone=europe-west1-b --scopes cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite

gcloud container clusters create gce-asia-east1-a --project=${FED_PROJECT} --zone=asia-east1-a --scopes cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite

The next step is fetching the kubectl configuration with gcloud -q container clusters get-credentials (2-getcredentials.sh). The configurations will be used to set the current context for kubectl commands.
$ cat 2-getcredentials.sh && . 2-getcredentials.sh
gcloud -q container clusters get-credentials gce-us-east1-b --zone=us-east1-b --project=${FED_PROJECT}

gcloud -q container clusters get-credentials gce-europe-west1-b --zone=europe-west1-b --project=${FED_PROJECT}

gcloud -q container clusters get-credentials gce-asia-east1-a --zone=asia-east1-a --project=${FED_PROJECT}

Let’s verify the setup:

$ kubectl config get-contexts
CURRENT   NAME                                                         CLUSTER                                                      AUTHINFO                                                     NAMESPACE
*         gke_container-solutions_europe-west1-b_gce-europe-west1-b   gke_container-solutions_europe-west1-b_gce-europe-west1-b   gke_container-solutions_europe-west1-b_gce-europe-west1-b
          gke_container-solutions_us-east1-b_gce-us-east1-b           gke_container-solutions_us-east1-b_gce-us-east1-b           gke_container-solutions_us-east1-b_gce-us-east1-b
          gke_container-solutions_asia-east1-a_gce-asia-east1-a       gke_container-solutions_asia-east1-a_gce-asia-east1-a       gke_container-solutions_asia-east1-a_gce-asia-east1-a

We have 3 clusters. One, indicated by the FED_HOST_CLUSTER environment variable, will be used to run the federation control plane. For this, we will use the kubefed init federation command (3-initfed.sh).
$ cat 3-initfed.sh && . 3-initfed.sh
kubefed init federation --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER} --dns-zone-name=${FED_DNS_ZONE}

You will notice that after executing the above command, a new kubectl context has appeared:
$ kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
...
          federation   federation

The federation context will become our administration entry point. Now it’s time to join clusters (4-joinfed.sh):
$ cat 4-joinfed.sh && . 4-joinfed.sh
kubefed --context=federation join cluster-europe-west1-b --cluster-context=gke_${FED_PROJECT}_europe-west1-b_gce-europe-west1-b --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER}

kubefed --context=federation join cluster-asia-east1-a --cluster-context=gke_${FED_PROJECT}_asia-east1-a_gce-asia-east1-a --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER}

kubefed --context=federation join cluster-us-east1-b --cluster-context=gke_${FED_PROJECT}_us-east1-b_gce-us-east1-b --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER}

Note that cluster gce-us-east1-b is used here to run the federation control plane and also to work as a worker cluster. This circular dependency helps to use resources more efficiently and it can be verified by using the kubectl --context=federation get clusters command:
$ kubectl --context=federation get clusters
NAME                        STATUS    AGE
cluster-asia-east1-a        Ready     7s
cluster-europe-west1-b      Ready     10s
cluster-us-east1-b          Ready     10s

We are good to go.

Using Federation To Run An Application

In our repository you will find instructions on how to build a Docker image with a web service that displays the container’s hostname and the Google Cloud Platform (GCP) zone.

An example output might look like this:

{"hostname":"k8shserver-6we2u","zone":"europe-west1-b"}

Now we will deploy the Replica Set (k8shserver.yaml):
$ kubectl --context=federation create -f rs/k8shserver

And a Federated Service (k8shserver.yaml):
$ kubectl --context=federation create -f service/k8shserver

As you can see, the two commands refer to the “federation” context, i.e. to the federation control plane. After a few minutes, you will see that the underlying clusters are running the Replica Set and the Service.

Creating The Ingress

After the Service is ready, we can create the Ingress - the global load balancer. The command looks like this:

kubectl --context=federation create -f ingress/k8shserver.yaml

The contents of the file point to the service we created in the previous step:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8shserver
spec:
  backend:
    serviceName: k8shserver
    servicePort: 80


After a few minutes, we should get a global IP address:
$ kubectl --context=federation get ingress
NAME         HOSTS     ADDRESS          PORTS     AGE
k8shserver   *         130.211.40.125   80        20m
Effectively, the response of:


$ curl 130.211.40.125

depends on the location of the client. Something like this would be expected in the US:

{"hostname":"k8shserver-w56n4","zone":"us-east1-b"}

Whereas in Europe, we might have:

{"hostname":"k8shserver-z31p1","zone":"eu-west1-b"}


Please refer to this issue for additional details on how everything we've described works.


Demo


Summary

Cluster Federation is actively being worked on and is not yet generally available: some APIs are in beta and others are in alpha. Some features are still missing; for instance, cross-cloud load balancing is not supported (Federated Ingress currently works only on Google Cloud Platform, as it depends on GCP HTTP(S) Load Balancing).

Nevertheless, as the functionality matures, it will become an enabler for all companies that aim at global markets but currently cannot afford the sophisticated administration techniques used by the likes of Netflix or Amazon. That’s why we are watching the technology closely, hoping that it soon fulfills its promise.

PS. When done, remember to destroy your clusters:


$ . 5-destroy.sh


--Lukasz Guminski, Software Engineer at Container Solutions. Allan Naim, Product Manager, Google



Wednesday, December 21, 2016

Windows Server Support Comes to Kubernetes

Editor’s note: this post is part of a series of in-depth articles on what's new in Kubernetes 1.5

Extending the theme of giving users choice, the Kubernetes 1.5 release includes support for Windows Server. With more than 80% of enterprise apps running Java on Linux or .NET on Windows, Kubernetes is previewing capabilities that extend its reach to the vast majority of enterprise workloads. 

The new Kubernetes support for Windows Server 2016 and Windows Containers is a public preview that includes the following features:
  • Containerized Multiplatform Applications - Applications developed in operating system neutral languages like Go and .NET Core were previously impossible to orchestrate between Linux and Windows. Now, with support for Windows Server 2016 in Kubernetes, such applications can be deployed on both Windows Server as well as Linux, giving the developer choice of the operating system runtime. This capability has been desired by customers for almost two decades. 
  • Support for Both Windows Server Containers and Hyper-V Containers - There are two types of containers in Windows Server 2016. Windows Server Containers are similar to Docker containers on Linux and use kernel sharing. The other type, Hyper-V Containers, is more lightweight than a virtual machine while offering greater isolation, its own copy of the kernel, and direct memory assignment. Kubernetes can orchestrate both of these container types. 
  • Expanded Ecosystem of Applications - One of the key drivers of introducing Windows Server support in Kubernetes is to expand the ecosystem of applications supported by Kubernetes: IIS, .NET, Windows Services, ASP.NET, .NET Core, are some of the application types that can now be orchestrated by Kubernetes, running inside a container on Windows Server.
  • Coverage for Heterogeneous Data Centers - Organizations already use Kubernetes to host tens of thousands of application instances across Global 2000 and Fortune 500 companies. This support will allow them to expand Kubernetes to the large footprint of Windows Server. 

The process to bring Windows Server to Kubernetes has been a truly multi-vendor effort, championed by the Windows Special Interest Group (SIG) - Apprenda, Google, Red Hat and Microsoft were all involved in bringing Kubernetes to Windows Server. On the community effort to bring Kubernetes to Windows Server, Taylor Brown, Principal Program Manager at Microsoft, stated: “This new Kubernetes community work furthers Windows Server container support options for popular orchestrators, reinforcing Microsoft’s commitment to choice and flexibility for both Windows and Linux ecosystems.”

Guidance for Current Usage

Where to use Windows Server support?
Right now, organizations should start testing Kubernetes on Windows Server and provide feedback. Most organizations take months to set up hardened production environments, and general availability is expected within the next few releases of Kubernetes.
What works?
Most Kubernetes constructs, such as Pods, Services, and Labels, work with Windows Containers.
What doesn’t work yet?
  • The Pod abstraction is not the same due to networking namespaces: Windows containers in a single pod cannot communicate over localhost, whereas Linux containers can share a networking stack by being placed in the same network namespace.
  • DNS capabilities are not fully implemented
  • UDP is not supported inside a container
When will it be ready for all production workloads (general availability)?
The goal is to refine networking and the other areas that need work so that Kubernetes users get production-ready support for Windows Server 2016 - including the Windows Nano Server and Windows Server Core installation options - in the next couple of releases.

Technical Demo




Roadmap

Support for Windows Server-based containers is in alpha for Kubernetes 1.5, but the community is not stopping there. Customers want enterprise-hardened container scheduling and management for their entire technology portfolio. That has to include full feature parity between Linux and Windows Server in production. The Windows Server SIG will deliver that parity within the next one or two releases of Kubernetes through a few key areas of investment:
  • Networking - the SIG will continue working side by side with Microsoft to enhance the networking backbone of Windows Server Containers, specifically around lighting up container mode networking and native network overlay support for container endpoints. 
  • OOBE - Improving the out-of-box experience: setup, deployment, and diagnostics for a Windows Server node, including the ability to deploy to any cloud (Azure, AWS, GCP)
  • Runtime Operations - the SIG will play a key part in defining the monitoring interface of the Container Runtime Interface (CRI), leveraging it to provide deep insight and monitoring for Windows Server-based containers
Get Started

To get started with Kubernetes on Windows Server 2016, please visit the GitHub guide for more details.
If you want to help with Windows Server support, please connect with the Windows Server SIG or directly with Michael Michael, the SIG lead, on GitHub.

--Michael Michael, Senior Director of Product Management, Apprenda 


Kubernetes on Windows Server 2016 Architecture