
General | Published on: Fri Feb 10 2023

Understanding Kubernetes

What is Kubernetes?

Before trying to understand Kubernetes, let’s get an idea of what a container is:

Generally, a container is a unit of executable software that packages application code together with its libraries and dependencies. A container does not need to include a guest OS; instead, it uses the features of the host OS, which makes containers small, fast, and portable, able to run almost anywhere.

Initially developed by Google, Kubernetes is an open-source orchestration system for containers. It is a portable, extensible platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Kubernetes manages containers regardless of whether they are Docker containers or built with some other container technology. This means it helps you manage applications that are composed of hundreds or possibly thousands of containers, and it helps you manage them across different environments, such as physical machines, virtual machines, cloud environments, or even hybrid deployments.
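"Declarative configuration" means you describe the desired state of your application in a manifest and Kubernetes works to make reality match it. As a minimal sketch (the name `web` and the `nginx` image are illustrative, not from the original article), a Deployment manifest might look like this:

```yaml
# Hypothetical Deployment manifest: names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3               # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:                 # template for the pods Kubernetes should create
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image works; Docker is just one way to build one
```

You never script *how* to start the three pods; you declare that three should exist, and Kubernetes continuously reconciles the cluster toward that state.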

Why do we need a Container Orchestration Tool: Kubernetes?

The rise of microservices drove increased use of container technologies. Containers offer the perfect host for small, independent applications, since they are a convenient way to bundle and run them. As a result of the microservice approach, applications are now often composed of hundreds or even thousands of containers. Managing that many containers across multiple environments with scripts and self-made tools becomes very complex and sometimes even impossible.

This need led to container orchestration technologies, which manage the containers that run the applications and help ensure there is no downtime.

What features do container orchestration tools offer? And Why Kubernetes?

As companies began adopting containers, often as part of modern, cloud-native architectures, the simplicity of the individual container began colliding with the complexity of managing hundreds or thousands of containers across a distributed system.

To cope with these challenges, container orchestration emerged as a way to manage large numbers of containers throughout their lifecycle, including:

·        Redundancy

·        Provisioning

·        Scaling and load balancing 

·        Health monitoring

·        Moving between physical hosts

·        Resource allocation

Although many container orchestration platforms (such as Apache Mesos, Nomad, and Docker Swarm) were created to address these challenges, Kubernetes quickly became the most popular container orchestration platform.

Kubernetes actually provides the following guarantees. The first is high availability: the application has no downtime, so it is always accessible to users. The second is scalability: the application can be scaled up when load increases and scaled back down when it decreases, so it keeps loading and responding quickly for users. The third is disaster recovery: if the infrastructure has problems, such as lost data or failed servers, the infrastructure can restore the data to the latest stable state, so that data loss is avoided and the containerized application can resume from that state after recovery.
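High availability in practice comes from running several replicas and letting Kubernetes restart containers that fail their health checks. As a hedged sketch (the `api` name, image, port, and `/healthz` path are all hypothetical), a Deployment might combine both:

```yaml
# Hypothetical manifest: replicas give availability, the probe gives self-healing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                    # several copies, so no single pod is a point of failure
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0     # hypothetical image
          livenessProbe:             # Kubernetes restarts the container if this check fails
            httpGet:
              path: /healthz         # assumed health endpoint of the app
              port: 8080
```

If one replica dies, the other two keep serving traffic while Kubernetes replaces the failed one.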

Kubernetes: Basic Architecture and Components

A Kubernetes cluster comprises at least one master node and a number of worker nodes connected to it. Each worker node runs a kubelet process; kubelet is a Kubernetes process that enables communication within the cluster and executes tasks on the node, such as running application containers. Each worker node hosts containers of different applications, so depending on how the workload is distributed, you will have a different number of containers running on each worker node. Worker nodes are where the actual work happens; this is where your applications run. So what is running on the master node?

In actuality, the master node runs several Kubernetes processes that are absolutely necessary for running and managing the cluster. One example is the API server, which itself runs as a container.

API Server – It can be considered the entry point to the Kubernetes cluster. All the different Kubernetes clients talk to the API server: the UI, if you are using the Kubernetes dashboard; the API, if you are using scripts and automation tools; and the command-line tool kubectl.

Controller Manager – Another process running on the master node, it keeps track of what is happening in the cluster, for example, whether something needs to be repaired or a container has died and needs to be restarted.

Scheduler – An intelligent process that decides on which node each container should be placed, based on the available server resources and the workload of each node.

etcd – Another key component of the whole cluster is the etcd key-value store, which holds the current state of the Kubernetes cluster. It contains all the configuration data and all the status data of each node and of each container on that node.
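The scheduler's placement decision is driven largely by the resource requests declared in a pod spec. As an illustrative config fragment (the `worker` name, image, and numbers are assumptions, not from the article), a pod might declare what it needs like this:

```yaml
# Hypothetical pod: the scheduler only places it on a node with this much free capacity.
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: example/worker:1.0   # hypothetical image
      resources:
        requests:
          cpu: "500m"             # half a CPU core
          memory: "256Mi"
```

Nodes without 500 millicores of CPU and 256 MiB of memory free are filtered out before the scheduler picks among the rest.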

The virtual network is a very important component of Kubernetes that enables the worker and master nodes to talk to each other. In simple terms, the virtual network turns all the nodes in the cluster into one powerful machine that has the sum of the resources of the individual nodes. One thing to note here is that the worker nodes carry the most load, because they run the applications; they are usually much bigger and have more resources, since they may be running hundreds of containers. In contrast, the master node runs just a handful of master processes.

The master node is much more important than any individual worker node, because if you lose access to the master node, you will no longer be able to access the cluster. That means you absolutely need a backup of your master at all times. In production environments you would therefore usually have at least two masters in your Kubernetes cluster, and in many cases more, so that if one master node goes down the cluster continues to function smoothly because other masters are available.

Now, let's look at some of Kubernetes' basic concepts, such as pods and containers.

A pod is the smallest unit that you, as a Kubernetes user, will configure and interact with. A pod is basically a wrapper around one or more containers. On each worker node you will have multiple pods, and inside a pod you can actually have multiple containers, but usually you have one pod per application. The only time you need more than one container inside a pod is when the main application needs some helper containers. So a database, for example, would be one pod, a message broker would be another pod, and a server would be yet another pod.
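The "main application plus helper container" case above is often called the sidecar pattern. As a hedged sketch (the names and images here are hypothetical, purely for illustration), such a pod would look like:

```yaml
# Hypothetical two-container pod: a main app with a log-collecting helper (sidecar).
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helper
spec:
  containers:
    - name: main-app                 # the main application container
      image: example/app:1.0         # hypothetical image
    - name: log-collector            # helper container supporting the main app
      image: example/log-agent:1.0   # hypothetical image
```

Both containers share the pod's network and can share volumes, which is exactly why tightly coupled helpers belong in the same pod while a database or message broker gets a pod of its own.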

What’s Next?

This is by no means everything about Kubernetes, but I hope it has given you a basic idea of what it is and how it is used. If you want to gain in-depth knowledge:

·        Learn about Nodes

·        Learn about Controllers

·        Learn about kube-scheduler

·        Read etcd’s official documentation