Kubernetes Best Practices — Part 1

December 24, 2018

iauro Team



by Krunal Chaudhari, Team Lead, iauro Systems Pvt. Ltd.


Why Containers?

IT delivers thousands of applications to meet the needs of your business. Many have different requirements. They also use different languages, databases, and tools. To deploy, configure, manage, and maintain this complexity takes people, expertise, infrastructure and architecture. This translates to time and money.

Today, there is a better way to package applications and their necessary components with Linux containers. Containers help organizations become more consistent and agile. They abstract the underlying host operating system. As a result, applications can be packaged with all of their dependencies so your developers can choose the environments and tools that best suit their projects. And, operations teams can deliver applications anywhere in a consistent way.


The Need for Orchestration

It is easy to see the benefits when you deploy your first container-based app, but what happens when you are developing, deploying, and managing thousands of containers each day? Delivering and maintaining container-based applications at scale can be complicated. On top of this, when public cloud providers bill you for the services you use (CPU, storage), you need to make sure there are no idle machines. In some cases you also need to automatically spin up more machines when extra CPU and memory are needed, and spin them down when the load lightens.


Why Kubernetes?

Kubernetes is an open-source container orchestration system that addresses exactly these needs: it automates deploying, scaling, and managing containerized applications, packs workloads efficiently onto your machines so you are not paying for idle capacity, and can add or remove nodes as load changes.

  1. Small containers:

Docker provides a great platform for creating containers: specify a base image, add your changes, and you have a container image. Most default images are derived from Debian. While these base images are great for compatibility and easy onboarding, they can add hundreds of MB of overhead to your container. For example, simple Node.js and Go "hello world" apps come in at around 300 MB, while the application itself is probably only a few MB in size, so all that additional overhead is wasted space. There are two ways to overcome this:

1. Small Base Images

2. Builder Pattern

Small base images are an easy way to reduce container size. For example, building a Node.js application on the node:8-alpine image instead of the default node:8 image reduces the image size by roughly 10x.
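Switching is typically a one-line change in the Dockerfile. A minimal sketch (the file names and entry point are illustrative):

```dockerfile
# Alpine-based image is tens of MB; the default Debian-based
# node:8 image is several hundred MB.
FROM node:8-alpine

# Install dependencies, copy the app, and define the start command.
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["node", "index.js"]
```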

The other way is to use the builder pattern, which reduces container size drastically. For compiled languages, the compilation step often requires tools that are not needed to actually run the code, so these tools can be removed from the final container completely. Let's look at a Go application.

The following example takes the base golang:alpine image, creates a directory for the code, copies the source in, builds it, and finally starts the app. Because this image contains the compiler and other Go tools, it comes to around 320 MB.
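A sketch of such a single-stage Dockerfile (paths and file names are illustrative):

```dockerfile
# Base image includes the full Go toolchain (~300 MB of overhead).
FROM golang:alpine

# Create a directory for the code and copy the source in.
WORKDIR /app
COPY main.go .

# Build the source, then start the app.
RUN go build -o hello main.go
CMD ["./hello"]
```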


Using the builder pattern, we split the build into two stages to keep the final container small. The first FROM statement uses the golang:alpine image to build the code, with an AS clause to name the stage. The second FROM statement uses the base Alpine Linux image, which is only a few MB. We then copy the compiled binary from the first stage, so the final image contains none of the Go build tools. A Dockerfile like this is called a multi-stage Dockerfile.
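A sketch of the multi-stage version (stage and file names are illustrative):

```dockerfile
# Stage 1: build the binary using the full Go toolchain.
FROM golang:alpine AS build-env
WORKDIR /app
COPY main.go .
RUN go build -o hello main.go

# Stage 2: start from plain alpine (a few MB) and copy in
# only the compiled binary -- no compiler, no build tools.
FROM alpine
WORKDIR /app
COPY --from=build-env /app/hello .
CMD ["./hello"]
```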


The example above produces an image of about 11 MB, which speeds up building, pushing, and pulling images.

For example, say you have a three-node cluster and one of the nodes crashes. If you are using a Kubernetes engine, the system will automatically spin up a new node. That node is completely fresh, so all of your containers must be pulled before it can start work; if pulling takes too long, that is time your cluster isn't performing at full capacity. Minimizing pull times therefore becomes key: spinning up a small container is much faster than a large one, and using small, common base images for your containers significantly speeds up deployment times.

2. Resource Limits:
Kubernetes schedules a pod only if there are enough resources for its containers to run. If you schedule a memory- or CPU-hungry app on a node with limited resources, the node can run out of memory or CPU and applications will stop working. Kubernetes uses requests and limits to control resources such as memory and CPU.

Requests are the resources a container is guaranteed while running; limits ensure the container never goes above the specified value. Scheduling is therefore based on requests, not limits: if a pod is scheduled successfully, it is guaranteed the resources it requested. Whether a pod may temporarily use more than its request depends on whether the resource is compressible (CPU is, memory is not).

Let's go through an example:
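A sketch of a pod spec with requests and limits (the names, image, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: demo-app
    image: demo-app:1.0
    resources:
      requests:
        cpu: 500m        # guaranteed half a core
        memory: 256Mi    # guaranteed 256 MiB
      limits:
        cpu: 1000m       # never more than one core
        memory: 512Mi    # terminated if it exceeds 512 MiB
```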


CPU resources are defined in millicores. If you need one full core, specify the CPU value as 1000m; if you need half a core, specify 500m. If you request a CPU value larger than the core count of the biggest node in your cluster, your pod will never be scheduled.

Pods are guaranteed the CPU they request, and they may or may not get extra CPU time depending on the other jobs that are running, because CPU isolation is currently at the container level, not the pod level.

Memory resources are defined in bytes. Just like CPU, if you put a memory request value which is larger than the amount of memory on your nodes, the pod will never be scheduled. Unlike CPU resources, if a container goes past its memory limit, it will be terminated.

Key areas for setting resource requests and limits:

  1. ResourceQuota
  2. LimitRange

2.1 ResourceQuota:
A ResourceQuota object defines aggregate usage limits for all resources in a namespace. If a new pod (or other object) would exceed the resource quota, it is not created.
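A sketch of a namespace-wide quota (the namespace name and values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
  namespace: demo
spec:
  hard:
    requests.cpu: "4"       # total CPU all pods may request
    requests.memory: 8Gi    # total memory all pods may request
    limits.cpu: "8"         # total CPU limit across the namespace
    limits.memory: 16Gi     # total memory limit across the namespace
    pods: "10"              # maximum number of pods
```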


2.2 LimitRange:
A LimitRange object constrains the resources of individual pods in a namespace. Because limit ranges work at the container level, they can enforce defaults, minimums, and maximums for any container.
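A sketch of per-container defaults and bounds (the namespace name and values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: demo-limits
  namespace: demo
spec:
  limits:
  - type: Container
    default:            # limit applied when a container sets none
      cpu: 500m
      memory: 256Mi
    defaultRequest:     # request applied when a container sets none
      cpu: 250m
      memory: 128Mi
    max:                # no container may exceed these limits
      cpu: "1"
      memory: 1Gi
```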


The above is an expanded look at some Kubernetes best practices. As a management platform, Kubernetes makes your containers lightweight enough that you can literally ship them around as a deployable unit from development to staging and, eventually, production. The challenges you meet along the way can be all sorts of fun and interesting.

Click To Read Kubernetes Best Practices Part 2


