Last Updated on June 29, 2024 by KnownSense
Introduction
In this article, we will look at the major components of Kubernetes, both as a cluster and as a system for managing apps. We'll start with the infrastructure bits, like the control plane nodes and the workers. Then we'll move on to the bits of Kubernetes that we use to deploy and manage apps. We'll see how pods are the most fundamental and atomic unit of scheduling in Kubernetes, how to expose apps on the network with Kubernetes services, and how Kubernetes deployments let us do really cool things with our apps, like scaling, rolling updates, and version rollbacks. Finally, we'll touch on the Kubernetes API and the API server.
Big Picture View
So at the highest level, Kubernetes is an orchestrator of microservices apps. A microservices app is just jargon for an application that's made up of lots of small, independent services that work together to create a useful application.
What's this orchestrator buzzword? We start out with an app made up of multiple services. Each is packaged as a container, and each has a different job within the overall app: load balancers, web servers, logging, and so on. Kubernetes comes along a bit like the coach of a football team and organizes everything into a useful app, making sure each piece is on the right networks and ports and has the right credentials. What we end up with is a useful application made up of lots of small, specialized parts. What Kubernetes is doing here is called orchestration: it orchestrates all of these pieces to work together as a team. Kubernetes also reacts to real-time events. Let's say one of the app instances fails, and we drop from three replicas to just two. No sweat, it's the job of Kubernetes to notice this and spin up a new one, taking us back to three. At peak times, such as year-end reporting when demand surges, the system can dynamically scale to meet the increased load. Kubernetes automatically provisions additional containers for the necessary microservices, ensuring that the application can handle the higher demand efficiently.
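To make the scaling part a little more concrete, here is a minimal sketch of one common way to express autoscaling, a HorizontalPodAutoscaler. The target Deployment name report-api, the replica limits, and the CPU threshold are illustrative assumptions, not values from a real app.

```yaml
# Minimal HorizontalPodAutoscaler sketch (autoscaling/v2).
# The Deployment name "report-api" and the numbers below are hypothetical.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: report-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: report-api          # hypothetical Deployment to scale
  minReplicas: 3              # never drop below three replicas
  maxReplicas: 10             # cap for peak demand, e.g. year-end reporting
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

With something like this in place, the control plane watches the load and adjusts the number of replicas between the minimum and maximum for us.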
So, we start out with an app. We package it up and give it to the Kubernetes cluster. The cluster is made up of one or more control plane nodes and a bunch of workers. The control plane is the brains of the cluster; it does all the stuff like scheduling tasks, monitoring the cluster, and responding to events. The workers are where we run our user and business apps. That's our physical infrastructure: a cluster of control plane nodes and worker nodes. But at the start, we also said that we package the application and give it to the cluster. To do that, we take the app code and containerize it. Then we wrap that container in a pod. And if we want things like scaling and self-healing, we wrap that pod in a deployment. We define all of this in a Kubernetes YAML file, which is basically just a way to describe what the app should look like to Kubernetes: which container image to use, which ports, what networks, how many replicas, all of that stuff in a file. Then we give the file to Kubernetes, and Kubernetes makes it all happen. Let's start digging deeper.
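As a rough illustration of what such a YAML file can look like, here is a minimal Deployment sketch. The app name web, the nginx image, and the port are assumptions made purely for the example.

```yaml
# Minimal Deployment sketch: a pod template wrapped in a deployment.
# Names, image, and port are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # how many identical pods to keep running
  selector:
    matchLabels:
      app: web
  template:                    # the pod wrapped inside the deployment
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25        # which container image to use
          ports:
            - containerPort: 80    # which port the container listens on
```

Giving a file like this to the cluster is what we mean by "Kubernetes makes it all happen": the control plane keeps three copies of this pod running and replaces any that fail.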
Key Components and Concepts in Kubernetes
Let's look at the essential components and concepts of Kubernetes (K8s). For a more in-depth understanding, there is a dedicated page for each topic where you can read all the details.
- Control Plane Nodes: These nodes manage the Kubernetes cluster. They handle the orchestration of containers, monitor the state of the cluster, and manage the lifecycle of the containers.
- Worker Nodes: These nodes run the actual application workloads. They receive instructions from the control plane and execute the necessary tasks, such as running containers and handling networking.
- Pod: The smallest and simplest Kubernetes object. A pod represents a single instance of a running process in a cluster, and it can contain one or more containers that share resources such as storage and networking (see the sketch after this list).
- Deployments: A higher-level abstraction that manages a set of identical pods, ensuring the desired number of replicas are running. Deployments provide features like rolling updates and rollbacks.
- API Server: The central management entity that exposes the Kubernetes API. It processes REST operations, validates them, and updates the state of the cluster accordingly.
- API: The interface through which users, developers, and external tools interact with the Kubernetes control plane. It enables communication between various components within the cluster.
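For the pod item above, here is a minimal sketch of a standalone pod manifest, in contrast to the deployment-wrapped pod shown earlier. The name hello-pod and the image are hypothetical.

```yaml
# Minimal standalone Pod sketch; names and image are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.25        # single container running inside the pod
      ports:
        - containerPort: 80
```

In practice we rarely create bare pods like this; we usually let a deployment manage them so we get replicas, self-healing, rolling updates, and rollbacks for free.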
Conclusion
In this article, we explored the major components of Kubernetes and how it manages applications. Overall, Kubernetes acts as an orchestrator, efficiently managing microservices to ensure they work together seamlessly, handle real-time events, and dynamically scale to meet demand. For more detailed information, each key component and concept has a dedicated page.