An open source system for automating deployment, scaling, and operations of applications.

Thursday, November 16, 2017

Certified Kubernetes Conformance Program: Launch Celebration Round Up


This week the CNCF certified the first group of Kubernetes® offerings under the Certified Kubernetes Conformance Program. These first certifications follow a beta phase during which we invited participants to submit conformance results. The community response was overwhelming: CNCF certified offerings from 32 vendors!

The new Certified Kubernetes Conformance Program gives enterprise organizations the confidence that workloads running on any Certified Kubernetes distribution or platform will work correctly on other Certified Kubernetes distributions or platforms. A Certified Kubernetes product guarantees that the complete Kubernetes API functions as specified, so users can rely on a seamless, stable experience.

Here’s what the world had to say about the Certified Kubernetes Conformance Program.

Press coverage:
Community blog round-up:

Visit https://www.cncf.io/certification/software-conformance for more information about the Certified Kubernetes Conformance Program, and learn how you can join a growing list of Certified Kubernetes providers.

“Cloud Native Computing Foundation”, “CNCF” and “Kubernetes” are registered trademarks of The Linux Foundation in the United States and other countries. “Certified Kubernetes” and the Certified Kubernetes design are trademarks of The Linux Foundation in the United States and other countries.

Wednesday, November 15, 2017

Kubernetes is Still Hard (for Developers)


Kubernetes has made the Ops experience much easier, but how does the developer experience compare? Ops teams can deploy a Kubernetes cluster in a matter of minutes. But developers need to understand a host of new concepts before beginning to work with Kubernetes. This can be a tedious and manual process, but it doesn’t have to be. In this talk, Michelle Noorali, co-lead of SIG-Apps, reimagines the Kubernetes developer experience. She shares her top 3 tips for building a successful developer experience including:
  1. A framework for thinking about cloud native applications
  2. An integrated experience for debugging and fine-tuning cloud native applications
  3. A way to get a cloud native application out the door quickly

Interested in learning how far the Kubernetes developer experience has come? Join us at KubeCon in Austin on December 6-8. Register Now >>

Check out Michelle’s keynote to learn about exciting new updates from CNCF projects.

Friday, November 3, 2017

Securing Software Supply Chain with Grafeas

Editor's note: This post is written by Kelsey Hightower, Staff Developer Advocate at Google, and Sandra Guo, Product Manager at Google.

Kubernetes has evolved to support increasingly complex classes of applications, enabling the development of two major industry trends: hybrid cloud and microservices. With increasing complexity in production environments, customers—especially enterprises—are demanding better ways to manage their software supply chain with more centralized visibility and control over production deployments.

On October 12th, Google and partners announced Grafeas, an open source initiative to define a best practice for auditing and governing the modern software supply chain. With Grafeas (“scribe” in Greek), developers can plug components of the CI/CD pipeline into a central source of truth for tracking and enforcing policies. Google is also working on Kritis (“judge” in Greek), allowing DevOps teams to enforce deploy-time image policy using metadata and attestations stored in Grafeas.

Grafeas allows build, auditing, and compliance tools to exchange comprehensive metadata on container images using a central API. This allows enforcing policies that provide central control over the software supply chain.



Example application: PaymentProcessor


Let’s consider a simple application, PaymentProcessor, that retrieves, processes, and updates payment info stored in a database. This application is made up of two containers: a standard Ruby container and one with custom logic.


Due to the sensitive nature of the payment data, the developers and DevOps team really want to make sure that the code meets certain security and compliance requirements, with detailed records on the provenance of this code. There are CI/CD stages that validate the quality of the PaymentProcessor release, but there is no easy way to centrally view/manage this information:



Visibility and governance over the PaymentProcessor Code

Grafeas provides an API for customers to centrally manage metadata created by various CI/CD components and enables deploy time policy enforcement through a Kritis implementation.



Let’s consider a basic example of how Grafeas can provide deploy time control for the PaymentProcessor app using a demo verification pipeline.

Assume that a PaymentProcessor container image has been created and pushed to Google Container Registry. This example uses the gcr.io/exampleApp/PaymentProcessor container for testing. As the QA engineer, you want to create an attestation certifying this image for production usage. Instead of trusting an image tag like 0.0.1, which can be reused and later point to a different container image, trust the image digest so the attestation is tied to the full image contents.
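The later steps read the image digest from a file named image-digest.txt. One way to capture it is sketched below; this assumes the image has already been pulled locally with Docker, and the 0.0.1 tag is purely illustrative:

docker inspect --format='{{index .RepoDigests 0}}' \
 gcr.io/hightowerlabs/payment:0.0.1 > image-digest.txt
cat image-digest.txt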

1. Set up the environment

Generate a signing key:


gpg --quick-generate-key --yes qa_bob@example.com
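The following commands reference ${GPG_KEY_ID}. One way to capture it is sketched below; the awk expression assumes the long key-ID listing format of GnuPG 2.x:

GPG_KEY_ID="$(gpg --list-keys --keyid-format long qa_bob@example.com \
 | awk '/^pub/{split($2, a, "/"); print a[2]}')"
echo ${GPG_KEY_ID}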

Export the signer's public key:


gpg --armor --export qa_bob@example.com > ${GPG_KEY_ID}.pub

Create the ‘qa’ AttestationAuthority note via the Grafeas API:


curl -X POST \
 "http://127.0.0.1:8080/v1alpha1/projects/image-signing/notes?noteId=qa" \
 -d @note.json

Create the Kubernetes ConfigMap for admissions control and store the QA signer's public key:


kubectl create configmap image-signature-webhook \
 --from-file ${GPG_KEY_ID}.pub
kubectl get configmap image-signature-webhook -o yaml

Set up an admissions control webhook to require QA signature during deployment.


kubectl apply -f kubernetes/image-signature-webhook.yaml

2. Attempt to deploy an image without QA attestation

Before the image is QA attested, attempt to run it. The pod manifest in pods/paymentProcessor.yaml references the image by digest:


apiVersion: v1
kind: Pod
metadata:
 name: payment
spec:
 containers:
   - name: payment
     image: "gcr.io/hightowerlabs/payment@sha256:aba48d60ba4410ec921f9d2e8169236c57660d121f9430dc9758d754eec8f887"

Create the paymentProcessor pod:


kubectl apply -f pods/paymentProcessor.yaml

Notice the paymentProcessor pod was not created and the following error was returned:


The  "" is invalid: : No matched signatures for container image: gcr.io/hightowerlabs/payment@sha256:aba48d60ba4410ec921f9d2e8169236c57660d121f9430dc9758d754eec8f887

3. Create an image signature

Assuming the image digest is stored in image-digest.txt, sign the image digest:


gpg -u qa_bob@example.com \
 --armor \
 --clearsign \
 --output=signature.gpg \
 image-digest.txt

4. Upload the signature to the Grafeas API

Generate a pgpSignedAttestation occurrence from the signature:


cat > occurrence.json <<EOF
{
 "resourceUrl": "$(cat image-digest.txt)",
 "noteName": "projects/image-signing/notes/qa",
 "attestation": {
   "pgpSignedAttestation": {
      "signature": "$(cat signature.gpg)",
      "contentType": "application/vnd.gcr.image.url.v1",
      "pgpKeyId": "${GPG_KEY_ID}"
   }
 }
}
EOF

Upload the attestation through the Grafeas API:


curl -X POST \
 'http://127.0.0.1:8080/v1alpha1/projects/image-signing/occurrences' \
 -d @occurrence.json


5. Verify QA attestation during a production deployment


Attempt to run the image in paymentProcessor.yaml now that it has the correct attestation in the Grafeas API:


kubectl apply -f pods/paymentProcessor.yaml
pod "PaymentProcessor" created

With the attestation added, the pod is created because the execution criteria are met.

For more detailed information, see this Grafeas tutorial.

Summary

The demo above showed how you can integrate your software supply chain with Grafeas and gain visibility and control over your production deployments. However, the demo verification pipeline by itself is not a full Kritis implementation. In addition to basic admission control, Kritis provides additional support for workflow enforcement, multi-authority signing, breakglass deployment and more. You can read the Kritis whitepaper for more details. The team is actively working on a full open-source implementation. We’d love your feedback!

In addition, a hosted alpha implementation of Kritis, called Binary Authorization, is available on Google Container Engine and will be available for broader consumption soon.

Google, JFrog, and other partners joined forces to create Grafeas based on our common experiences building secure, large, and complex microservice deployments for internal and enterprise customers. Grafeas is an industry-wide community effort.

To learn more about Grafeas and contribute to the project:
We hope you join us!
The Grafeas Team

Thursday, November 2, 2017

Containerd Brings More Container Runtime Options for Kubernetes

Editor's note: Today's post is by Lantao Liu, Software Engineer at Google, and Mike Brown, Open Source Developer Advocate at IBM.

A container runtime is software that executes containers and manages container images on a node. Today, the most widely known container runtime is Docker, but there are other container runtimes in the ecosystem, such as rkt, containerd, and lxd. Docker is by far the most common container runtime used in production Kubernetes environments, but Docker’s smaller offspring, containerd, may prove to be a better option. This post describes using containerd with Kubernetes.

Kubernetes 1.5 introduced an internal plugin API named Container Runtime Interface (CRI) to provide easy access to different container runtimes. CRI enables Kubernetes to use a variety of container runtimes without the need to recompile. In theory, Kubernetes could use any container runtime that implements CRI to manage pods, containers and container images.

Over the past 6 months, engineers from Google, Docker, IBM, ZTE, and ZJU have worked to implement CRI for containerd. The project is called cri-containerd, which had its feature complete v1.0.0-alpha.0 release on September 25, 2017. With cri-containerd, users can run Kubernetes clusters using containerd as the underlying runtime without Docker installed.


containerd


Containerd is an OCI-compliant core container runtime designed to be embedded into larger systems. It provides the minimum set of functionality required to execute containers and manage images on a node. It was initiated by Docker Inc. and donated to CNCF in March of 2017. The Docker engine itself is built on top of earlier versions of containerd, and will soon be updated to the newest version. Containerd is close to a feature-complete stable release, with 1.0.0-beta.1 available right now.

Containerd has a much smaller scope than Docker, provides a Go client API, and is more focused on being embeddable. The smaller scope results in a smaller codebase that’s easier to maintain and support over time, matching Kubernetes requirements as shown in the following comparison:




Containerd scope vs. Kubernetes requirements:

  • Container Lifecycle Management (In). Kubernetes requirement: container Create/Start/Stop/Delete/List/Inspect. (✔️)
  • Image Management (In). Kubernetes requirement: image Pull/List/Inspect. (✔️)
  • Networking (Out). Containerd provides no concrete network solution; the user can set up a network namespace and put containers into it. Kubernetes requirement: Kubernetes networking deals with pods, rather than containers, so container runtimes should not provide complex networking solutions that don't satisfy those requirements. (✔️)
  • Volumes (Out). Containerd provides no volume management; the user can set up a host path and mount it into a container. Kubernetes requirement: Kubernetes manages volumes, and container runtimes should not provide internal volume management that may conflict with Kubernetes. (✔️)
  • Persistent Container Logging (Out). Containerd does not persist container logs; container STDIO is provided as FIFOs, which can be redirected/decorated as required. Kubernetes requirement: Kubernetes has specific requirements for persistent container logs, such as format and path, so container runtimes should not persist an unmanageable container log. (✔️)
  • Metrics (In). Containerd provides container and snapshot metrics as part of the API. Kubernetes requirement: Kubernetes expects the container runtime to provide container metrics (CPU, memory, writable layer size, etc.) and image filesystem usage (disk, inode usage, etc.). (✔️)

Overall, from a technical perspective, containerd is a very good alternative container runtime for Kubernetes.


cri-containerd


Cri-containerd is exactly that: an implementation of CRI for containerd. It operates on the same node as the Kubelet and containerd. Layered between Kubernetes and containerd, cri-containerd handles all CRI service requests from the Kubelet and uses containerd to manage containers and container images. Cri-containerd manages these service requests in part by forming containerd service requests while adding sufficient additional function to support the CRI requirements.



Compared with the current Docker CRI implementation (dockershim), cri-containerd eliminates an extra hop in the stack, making the stack more stable and efficient.

Architecture

Cri-containerd uses containerd to manage the full container lifecycle and all container images. As also shown below, cri-containerd manages pod networking via CNI (another CNCF project).



Let’s use an example to demonstrate how cri-containerd works for the case when Kubelet creates a single-container pod:
  1. Kubelet calls cri-containerd, via the CRI runtime service API, to create a pod;
  2. cri-containerd uses containerd to create and start a special pause container (the sandbox container) and put that container inside the pod’s cgroups and namespace (steps omitted for brevity);
  3. cri-containerd configures the pod’s network namespace using CNI;
  4. Kubelet subsequently calls cri-containerd, via the CRI image service API, to pull the application container image;
  5. cri-containerd further uses containerd to pull the image if the image is not present on the node;
  6. Kubelet then calls cri-containerd, via the CRI runtime service API, to create and start the application container inside the pod using the pulled container image;
  7. cri-containerd finally calls containerd to create the application container, put it inside the pod’s cgroups and namespace, then to start the pod’s new application container.
After these steps, a pod and its corresponding application container are created and running.
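To point the Kubelet on a node at this stack, it is started with the remote runtime flags; a minimal sketch is shown below, where the socket path is an assumption that depends on how cri-containerd is configured on the node:

kubelet --container-runtime=remote \
 --container-runtime-endpoint=/var/run/cri-containerd.sock \
 --image-service-endpoint=/var/run/cri-containerd.sock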

Status

Cri-containerd v1.0.0-alpha.0 was released on Sep. 25, 2017.

It is feature complete. All Kubernetes features are supported.

All CRI validation tests have passed. (CRI validation is a test framework for validating whether a CRI implementation meets all the requirements expected by Kubernetes.)

All regular node e2e tests have passed. (Node e2e is the Kubernetes test framework for testing node-level functionality, such as managing pods and mounting volumes.)

To learn more about the v1.0.0-alpha.0 release, see the project repository.

Try it Out


For a multi-node cluster installer and bring-up steps using Ansible and kubeadm, see this repo link.

For creating a cluster from scratch on Google Cloud, see Kubernetes the Hard Way.

For a custom installation from a release tarball, see this repo link.

For an installation with LinuxKit on a local VM, see this repo link.


Next Steps

We are focused on stability and usability improvements as our next steps.

  • Stability:
    • Set up a full set of Kubernetes integration tests in the Kubernetes test infrastructure on various OS distros, such as Ubuntu and COS (Container-Optimized OS).
    • Actively fix any test failures and other issues reported by users.

  • Usability:
    • Improve the user experience of crictl, a portable command line tool for all CRI container runtimes. The goal is to make it easy to use for debugging and development scenarios; a brief sketch of typical commands follows this list.
    • Integrate cri-containerd with kube-up.sh, to help users bring up a production quality Kubernetes cluster using cri-containerd and containerd.
    • Improve our documentation for users and admins alike.
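For a feel of what crictl offers, a few typical invocations are sketched below; the runtime endpoint path is an assumption that depends on your cri-containerd configuration:

crictl --runtime-endpoint /var/run/cri-containerd.sock pods    # list pod sandboxes
crictl --runtime-endpoint /var/run/cri-containerd.sock ps      # list containers
crictl --runtime-endpoint /var/run/cri-containerd.sock images  # list images on the node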

We plan to release our v1.0.0-beta.0 by the end of 2017.


Contribute

Cri-containerd is a Kubernetes incubator project located at https://github.com/kubernetes-incubator/cri-containerd. Any contributions in terms of ideas, issues, and/or fixes are welcome. The getting started guide for developers is a good place to start for contributors.

Community


Cri-containerd is developed and maintained by the Kubernetes SIG-Node community. We’d love to hear feedback from you. To join the community:

Wednesday, November 1, 2017

Kubernetes the Easy Way

Editor's note: Today's post is by Dan Garfield, VP of Marketing at Codefresh, on how to set up and easily deploy a Kubernetes cluster.

Kelsey Hightower wrote an invaluable guide for Kubernetes called Kubernetes the Hard Way. It’s an awesome resource for those looking to understand the ins and outs of Kubernetes—but what if you want to put Kubernetes on easy mode? That’s something we’ve been working on together with Google Cloud. In this guide, we’ll show you how to get a cluster up and running, as well as how to actually deploy your code to that cluster and run it.

This is Kubernetes the easy way. 

What We’ll Accomplish

  1. Set up a cluster
  2. Deploy an application to the cluster
  3. Automate deployment with rolling updates

Prerequisites

  • A containerized application
  • A Google Cloud Account or a Kubernetes cluster on another provider
    • Everything after Cluster creation is identical with all providers.
  • A free account on Codefresh
    • Codefresh is a service that handles Kubernetes deployment configuration and automation. 
We made Codefresh free for open-source projects and offer 200 builds/mo free for private projects, to make adopting Kubernetes as easy as possible. Deploy as much as you like on as many clusters as you like. 

Set Up a Cluster

1. Create an account at cloud.google.com and log in.

Note: If you’re using a Cluster outside of Google Cloud, you can skip this step.

Google Container Engine is Google Cloud’s managed Kubernetes service. In our testing, it’s both powerful and easy to use.

If you’re new to the platform, you can get a $500 credit at the end of this process.

2. Open the menu and scroll down to Container Engine. Then select Container Clusters.



3. Click Create cluster.

We’re done with step 1. In my experience it usually takes less than 5 minutes for a cluster to be created. 

Deploy an Application to Kubernetes

First, go to Codefresh and create an account using GitHub, Bitbucket, or GitLab. As mentioned previously, Codefresh is free for both open source and smaller private projects. We’ll use it to create the configuration YAML necessary to deploy our application to Kubernetes. Then we'll deploy our application and automate the process to happen every time we commit code changes. Here are the steps:
  1. Create a Codefresh account
  2. Connect to Google Cloud (or other cluster)
  3. Add Cluster
  4. Deploy static image
  5. Build and deploy an image
  6. Automate the process

Connect to Google Cloud

To connect your clusters in Google Container Engine, go to Account Settings > Integrations > Kubernetes and click Authenticate. This prompts you to log in with your Google credentials.

Once you log in, all of your clusters are available within Codefresh.


Add Cluster

To add your cluster, click the down arrow, then click Add Cluster and select the project and cluster name. You can now deploy images!

Optional: Use an Alternative Cluster

To connect a non-GKE cluster we’ll need to add a token and certificate to Codefresh. Go to Account Settings (bottom left) > Integrations > Kubernetes > Configure > Add Provider > Custom Providers. Expand the dropdown and click Add Cluster.



Follow the instructions on how to generate the needed information and click Save. Your cluster now appears under the Kubernetes tab. 

Deploy Static Image to Kubernetes

Now for the fun part! Codefresh provides an easily modifiable boilerplate that takes care of the heavy lifting of configuring Kubernetes for your application.

1. Click on the Kubernetes tab: this shows a list of namespaces.

Think of namespaces as acting a bit like VLANs on a Kubernetes cluster. Each namespace can contain all the services that need to talk to each other on a Kubernetes cluster. For now, we’ll just work off the default namespace (the easy way!).

2. Click Add Service and fill in the details.

You can use the demo application I mentioned earlier that has a Node.js frontend with a MongoDB.

Here’s the info we need to pass:

Cluster - This is the cluster we added earlier, our application will be deployed there.
Namespace - We’ll use default for our namespace but you can create and use a new one if you’d prefer. Namespaces are discrete units for grouping all the services associated with an application.
Service name - You can name the service whatever you like. Since we’re deploying Mongo, I’ll just name it mongo!
Expose port - We don’t need to expose the port outside of our cluster so we won’t check the box for now but we will specify a port where other containers can talk to this service. Mongo’s default port is ‘27017’.
Image - Mongo is a public image on Dockerhub, so I can reference it by name and tag, ‘mongo:latest’.
Internal Ports - This is the port the mongo application listens on, in this case it’s ‘27017’ again.

We can ignore the other options for now.

3. Scroll down and click Deploy.



Boom! You’ve just deployed this image to Kubernetes. You can see by clicking on the status that the service, deployment, replicas, and pods are all configured and running. If you click Edit > Advanced, you can see and edit all the raw YAML files associated with this application, or copy them and put them into your repository for use on any cluster. 
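For reference, the raw manifests behind a service like this look roughly like the sketch below; the names, labels, and API version are illustrative rather than the exact output Codefresh generates:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:latest
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
  - port: 27017
    targetPort: 27017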

Build and Deploy an Image

To get the rest of our demo application up and running we need to build and deploy the Node.js portion of the application. To do that we’ll need to add our repository to Codefresh.

1. Click on Repositories > Add Repository, then copy and paste the demochat repo url (or use your own repo).



We have the option to use a Dockerfile, or to use a template if we need help creating one. In this case, the demochat repo already has a Dockerfile, so we’ll select that. Click through the next few screens until the image builds.

Once the build is finished the image is automatically saved inside of the Codefresh docker registry. You can also add any other registry to your account and use that instead.

To deploy the image we’ll need
  • a pull secret
  • the image name and registry
  • the ports that will be used

Creating the Pull Secret

The pull secret is a token that the Kubernetes cluster can use to access a private Docker registry. To create one, we’ll need to generate the token and save it to Codefresh.

1. Click on User Settings (bottom left) and generate a new token.

2. Copy the token to your clipboard.



3. Go to Account Settings > Integrations > Docker Registry > Add Registry and select Codefresh Registry. Paste in your token and enter your username (entry is case sensitive). Your username must match your name displayed at the bottom left of the screen.

4. Test and save it.

We’ll now be able to create our secret later on when we deploy our image.
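For reference, outside of Codefresh an equivalent pull secret can be created directly with kubectl; in this sketch the secret name is arbitrary and the username, token, and email are placeholders:

kubectl create secret docker-registry codefresh-registry \
 --docker-server=r.cfcr.io \
 --docker-username=<codefresh-username> \
 --docker-password=<generated-token> \
 --docker-email=<email>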

Get the image name

1. Click on Images and open the image you just built. Under Comment you’ll see the image name starting with r.cfcr.io.



2. Copy the image name; we’ll need to paste it in later.

Deploy the private image to Kubernetes

We’re now ready to deploy the image we built.

1. Go to the Kubernetes page and, like we did with mongo, click Add Service and fill out the page. Make sure to select the same namespace you used to deploy mongo earlier. 



Now let’s expose the port so we can access this application. This provisions an IP address and automatically configures ingress.

2. Click Deploy: your application will be up and running within a few seconds! The IP address may take longer to provision depending on your cluster location.



From this view you can scale the replicas, see application status, and similar tasks.

3. Click on the IP address to view the running application.

At this point you should have your entire application up and running! Not so bad huh? Now to automate deployment!

Automate Deployment to Kubernetes

Every time we make a change to our application, we want to build a new image and deploy it to our cluster. We’ve already set up automated builds, but to automate deployment:

1. Click on Repositories (top left).

2. Click on the pipeline for the demochat repo (the gear icon).



3. It’s a good idea to run some tests before deploying. Under Build and Unit Test, add npm test for the unit test script.

4. Click Deploy Script and select Kubernetes (Beta). Enter the information for the service you’ve already deployed.



You can see the option to use a deployment file from your repo, or to use the deployment file that you just generated.

5. Click Save.

You’re done with deployment automation! Now whenever a change is made, the image will build, test, and deploy. 

Conclusions

We want to make it easy for every team, not just big enterprise teams, to adopt Kubernetes while preserving all of Kubernetes’ power and flexibility. At any point on the Kubernetes service screen you can switch to YAML to view all of the YAML files generated by the configuration you performed in this walkthrough. You can tweak the file contents, copy and paste them into local files, etc.

This walkthrough gives everyone a solid base to start with. When you’re ready, you can tweak the entities directly to specify the exact configuration you’d like.

We’d love your feedback! Please share with us on Twitter, or reach out directly.

Addendums

Do you have a video to walk me through this? You bet.

Does this work with Helm Charts? Yes! We’re currently piloting Helm Charts with a limited set of users. Ping us if you’d like to try it early.

Does this work with any Kubernetes cluster? It should work with any Kubernetes cluster and is tested with Kubernetes 1.5 and later.

Can I deploy Codefresh in my own data center? Sure, Codefresh is built on top of Kubernetes using Helm Charts. Codefresh cloud is free for open source and includes 200 builds/month; Codefresh on-prem is currently for enterprise users only.

Won’t the database be wiped every time we update?
Yes, in this case we skipped creating a persistent volume. It’s a bit more work to get the persistent volume configured; if you’d like help, feel free to reach out and we’re happy to assist!

Monday, October 30, 2017

Enforcing Network Policies in Kubernetes

Editor's note: this post is part of a series of in-depth articles on what's new in Kubernetes 1.8. Today’s post comes from Ahmet Alp Balkan, Software Engineer, Google.

Kubernetes now offers functionality to enforce rules about which pods can communicate with each other using network policies. This feature became stable in Kubernetes 1.7 and is ready to use with supported networking plugins. The Kubernetes 1.8 release has added better capabilities to this feature.

Network policy: What does it mean?

In a Kubernetes cluster configured with default settings, all pods can discover and communicate with each other without any restrictions. The new Kubernetes object type NetworkPolicy lets you allow and block traffic to pods.

If you’re running multiple applications in a Kubernetes cluster or sharing a cluster among multiple teams, it’s a security best practice to create firewalls that permit pods to talk to each other while blocking other network traffic. Network policy corresponds to the security groups concept from the virtual machine world.

How do I add Network Policy to my cluster?

Networking Policies are implemented by networking plugins. These plugins typically install an overlay network in your cluster to enforce the Network Policies configured. A number of networking plugins, including Calico, Romana and Weave Net, support using Network Policies.

Google Container Engine (GKE) also provides beta support for Network Policies using the Calico networking plugin when you create clusters with the following command:

gcloud beta container clusters create --enable-network-policy


How do I configure a Network Policy?

Once you install a networking plugin that implements Network Policies, you need to create a Kubernetes resource of type NetworkPolicy. This object describes two sets of label-based pod selector fields, matching:
  1. a set of pods the network policy applies to (required)
  2. a set of pods allowed to access each other (optional). If you omit this field, it matches no pods; therefore, no pods are allowed. If you specify an empty pod selector, it matches all pods; therefore, all pods are allowed.

Example: restricting traffic to a pod

The following example of a network policy blocks all in-cluster traffic to a set of web server pods, except the pods allowed by the policy configuration.


To achieve this setup, create a NetworkPolicy with the following manifest:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
 name: access-nginx
spec:
 podSelector:
   matchLabels:
     app: nginx
 ingress:
 - from:
   - podSelector:
       matchLabels:
         app: foo

Once you apply this configuration, only pods with label app: foo can talk to the pods with the label app: nginx. For a more detailed tutorial, see the Kubernetes documentation.
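A quick way to exercise the policy is sketched below; it assumes the nginx pods are reachable through a Service named nginx in the same namespace:

# Allowed: the pod carries the app=foo label matched by the policy
kubectl run test-allowed --rm -i -t --restart=Never \
 --labels="app=foo" --image=busybox \
 -- wget --spider --timeout=1 nginx

# Blocked: the labels do not match the allowed selector, so the request times out
kubectl run test-blocked --rm -i -t --restart=Never \
 --labels="app=other" --image=busybox \
 -- wget --spider --timeout=1 nginx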

Example: restricting traffic between all pods by default

If you specify the spec.podSelector field as empty, the network policy matches all pods in the namespace, blocking all traffic between pods by default. In this case, you must explicitly create network policies whitelisting all allowed communication between the pods.



You can enable a policy like this by applying the following manifest in your Kubernetes cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
 name: default-deny
spec:
 podSelector: {}

Other Network Policy features

In addition to the previous examples, you can make the Network Policy API enforce more complicated rules:

  • Egress network policies: Introduced in Kubernetes 1.8, you can restrict your workloads from establishing connections to resources outside specified IP ranges.
  • IP blocks support: In addition to using podSelector/namespaceSelector, you can specify IP ranges with CIDR blocks to allow/deny traffic in ingress or egress rules.
  • Cross-namespace policies: Using the ingress.namespaceSelector field, you can enforce Network Policies for particular namespaces or for all namespaces in the cluster. For example, you can create privileged/system namespaces that can communicate with pods even though the default policy is to block traffic.
  • Restricting traffic to port numbers: Using the ingress.ports field, you can specify port numbers for the policy to enforce. If you omit this field, the policy matches all ports by default. For example, you can use this to allow a monitoring pod to query only the monitoring port number of an application.
  • Multiple ingress rules on a single policy: Because the spec.ingress field is an array, you can use the same NetworkPolicy object to give access to different ports using different pod selectors. For example, a NetworkPolicy can have one ingress rule giving pods with the kind: monitoring label access to port 9000, and another ingress rule giving pods with the label app: foo access to port 80, without creating an additional NetworkPolicy resource. A sketch of such a policy follows.
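The following sketch combines these features; the label values and ports mirror the example in the last bullet and are illustrative:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx-ports
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          kind: monitoring
    ports:
    - protocol: TCP
      port: 9000
  - from:
    - podSelector:
        matchLabels:
          app: foo
    ports:
    - protocol: TCP
      port: 80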

Learn more

Friday, October 27, 2017

Using RBAC, Generally Available in Kubernetes v1.8

Editor's note: this post is part of a series of in-depth articles on what's new in Kubernetes 1.8. Today’s post comes from Eric Chiang, software engineer, CoreOS, and SIG-Auth co-lead.

Kubernetes 1.8 represents a significant milestone for the role-based access control (RBAC) authorizer, which was promoted to GA in this release. RBAC is a mechanism for controlling access to the Kubernetes API, and since its beta in 1.6, many Kubernetes clusters and provisioning strategies have enabled it by default.

Going forward, we expect to see RBAC become a fundamental building block for securing Kubernetes clusters. This post explores using RBAC to manage user and application access to the Kubernetes API.

Granting access to users


RBAC is configured using standard Kubernetes resources. Users can be bound to a set of roles (ClusterRoles and Roles) through bindings (ClusterRoleBindings and RoleBindings). Users start with no permissions and must explicitly be granted access by an administrator.

All Kubernetes clusters install a default set of ClusterRoles, representing common buckets users can be placed in. The “edit” role lets users perform basic actions like deploying pods; “view” lets a user observe non-sensitive resources; “admin” allows a user to administer a namespace; and “cluster-admin” grants access to administer a cluster.


$ kubectl get clusterroles
NAME            AGE
admin           40m
cluster-admin   40m
edit            40m
# ...

view            40m

ClusterRoleBindings grant a user, group, or service account a ClusterRole’s power across the entire cluster. Using kubectl, we can let a sample user “jane” perform basic actions in all namespaces by binding her to the “edit” ClusterRole:

$ kubectl create clusterrolebinding jane --clusterrole=edit --user=jane
$ kubectl get namespaces --as=jane
NAME          STATUS    AGE
default       Active    43m
kube-public   Active    43m
kube-system   Active    43m
$ kubectl auth can-i create deployments --namespace=dev --as=jane
yes

RoleBindings grant a ClusterRole’s power within a namespace, allowing administrators to manage a central list of ClusterRoles that are reused throughout the cluster. For example, as new resources are added to Kubernetes, the default ClusterRoles are updated to automatically grant the correct permissions to RoleBinding subjects within their namespace.

Next we’ll let the group “infra” modify resources in the “dev” namespace:

$ kubectl create rolebinding infra --clusterrole=edit --group=infra --namespace=dev
rolebinding "infra" created

Because we used a RoleBinding, these powers only apply within the RoleBinding’s namespace. In our case, a user in the “infra” group can view resources in the “dev” namespace but not in “prod”:

$ kubectl get deployments --as=dave --as-group=infra --namespace dev
No resources found.
$ kubectl get deployments --as=dave --as-group=infra --namespace prod
Error from server (Forbidden): deployments.extensions is forbidden: User "dave" cannot list deployments.extensions in the namespace "prod".

Creating custom roles

When the default ClusterRoles aren’t enough, it’s possible to create new roles that define a custom set of permissions. Since ClusterRoles are just regular API resources, they can be expressed as YAML or JSON manifests and applied using kubectl.

Each ClusterRole holds a list of permissions specified as “rules.” Rules are purely additive and allow specific HTTP verbs to be performed on a set of resources. For example, the following ClusterRole holds the permissions to perform any action on “deployments”, “configmaps”, or “secrets”, and to view any “pod”:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: deployer
rules:
- apiGroups: ["apps"]
 resources: ["deployments"]
 verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]

- apiGroups: [""] # "" indicates the core API group
 resources: ["configmaps", "secrets"]
 verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]

- apiGroups: [""] # "" indicates the core API group
 resources: ["pods"]
 verbs: ["get", "list", "watch"]

Verbs correspond to the HTTP verb of the request, while the resource and API groups refer to the resource being referenced. Consider the following Ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
 name: test-ingress
spec:
 backend:
   serviceName: testsvc
   servicePort: 80

To POST the resource, the user would need the following permissions:

rules:
- apiGroups: ["extensions"] # "apiVersion" without version
 resources: ["ingresses"]  # Plural of "kind"
 verbs: ["create"]         # "POST" maps to "create"

Roles for applications

When deploying containers that require access to the Kubernetes API, it’s good practice to ship an RBAC Role with your application manifests. Besides ensuring your app works on RBAC enabled clusters, this helps users audit what actions your app will perform on the cluster and consider their security implications.

A namespaced Role is usually more appropriate for an application, since apps are traditionally run inside a single namespace and the namespace's resources should be tied to the lifecycle of the app. However, Roles cannot grant access to non-namespaced resources (such as nodes) or across namespaces, so some apps may still require ClusterRoles.

The following Role allows a Prometheus instance to monitor and discover services, endpoints, and pods in the “dev” namespace:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: prometheus-role
 namespace: dev
rules:
- apiGroups: [""] # "" refers to the core API group
 resources: ["services", "endpoints", "pods"]
 verbs: ["get", "list", "watch"]

Containers running in a Kubernetes cluster receive service account credentials to talk to the Kubernetes API, and service accounts can be targeted by a RoleBinding. Pods normally run with the “default” service account, but it’s good practice to run each app with a unique service account so RoleBindings don’t unintentionally grant permissions to other apps.

To run a pod with a custom service account, create a ServiceAccount resource in the same namespace and reference it in the `serviceAccountName` field of the pod template:

apiVersion: apps/v1beta2 # Abbreviated, not a full manifest
kind: Deployment
metadata:
 name: prometheus-deployment
 namespace: dev
spec:
 replicas: 1
 template:
   spec:
     containers:
     - name: prometheus
       image: prom/prometheus:v1.8.0
       command: ["prometheus", "-config.file=/etc/prom/config.yml"]
   # Run this pod using the "prometheus-sa" service account.
   serviceAccountName: prometheus-sa
---
apiVersion: v1
kind: ServiceAccount
metadata:
 name: prometheus-sa
 namespace: dev
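To tie the pieces together, a RoleBinding like the following sketch (the binding name is arbitrary) grants the prometheus-role defined above to the prometheus-sa service account in the “dev” namespace:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prometheus-rolebinding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-role
subjects:
- kind: ServiceAccount
  name: prometheus-sa
  namespace: dev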

Get involved

Development of RBAC is a community effort organized through the Auth Special Interest Group, one of the many SIGs responsible for maintaining Kubernetes. A great way to get involved in the Kubernetes community is to join a SIG that aligns with your interests, provide feedback, and help with the roadmap.

About the author

Eric Chiang is a software engineer and technical lead of Kubernetes development at CoreOS, the creator of Tectonic, the enterprise-ready Kubernetes platform. Eric co-leads Kubernetes SIG Auth and maintains several open source projects and libraries on behalf of CoreOS.