
In this Kubeadm tutorial, we'll explore everything you need to know about Kubeadm commands and how to use them to create clusters & manage cluster components in Kubernetes (K8s).


You'll learn how to create your own Kubernetes cluster using kubeadm, master the art of managing cluster components, and discover kubeadm best practices.

Join us on this exciting journey into the world of Kubeadm, where you'll unlock the power to orchestrate containers with ease. Let's dive in and become Kubeadm experts together!

Before we dive deep into kubeadm, what exactly is self-hosted Kubernetes (K8s)?

What is Self-hosted Kubernetes?

Self-hosted Kubernetes, also known as a "bare metal" or "on-premises" Kubernetes deployment, refers to running Kubernetes directly on physical or virtual machines without relying on a managed Kubernetes service provided by a cloud provider (such as EKS by AWS).

Here are some pros and cons of self-hosted Kubernetes:

Pros of Self-hosted Kubernetes

1. Full Control and Customization

With self-hosted Kubernetes, you have complete control over the cluster configuration, networking, and infrastructure.

This allows you to customize and optimize the cluster according to your specific requirements and infrastructure capabilities.

2. Cost Efficiency

Self-hosting Kubernetes can be more cost-effective, especially for long-term deployments, compared to using managed Kubernetes services that often incur additional fees.

Self-hosting lets you leverage existing hardware resources and avoid the overhead costs of managed services, which can save a ton of money.

3. Security and Compliance

Self-hosting Kubernetes gives you direct control over security measures, such as network policies, encryption, access controls, and compliance requirements.

This allows you to implement the best Kubernetes security practices specific to your organization's needs and regulatory standards. (log4j, ring any bells?)

4. Resource Allocation

Self-hosted Kubernetes allows you to allocate resources exclusively to your cluster without sharing them with other tenants.

This enables better resource utilization and avoids potential performance issues that may arise from resource contention in a shared environment.

Cons of Self-hosted Kubernetes

1. Infrastructure Management

Self-hosting Kubernetes requires expertise in managing and maintaining the underlying infrastructure.

You are responsible for provisioning, configuring, and monitoring the servers, networking, storage, and other components of the cluster.

This can be time-consuming and require dedicated resources and additional hiring.

2. Scalability and Elasticity

Scaling self-hosted Kubernetes clusters can be more complex compared to managed services that offer automated scaling capabilities.

You need to plan and provision resources in advance to handle peak loads, and adding or removing nodes may require manual intervention.

3. Operational Complexity

Self-hosted Kubernetes introduces additional operational complexities, such as managing upgrades, patches, backups, and high availability.

These tasks require careful planning, testing, and coordination to minimize disruption to the cluster and its applications; getting them wrong can mean increased production downtime.

What is Kubeadm?

Kubeadm is a command-line tool that simplifies the process of setting up and managing a Kubernetes cluster, making it easier for technical community members to dive into the world of container orchestration.

Think of kubeadm as your trusty assistant who takes care of the nitty-gritty details of cluster setup, allowing you to focus on the bigger picture.

It automates complex tasks like configuring essential components, generating certificates, and ensuring your cluster follows best practices.

Also Read: What is Configuration as Code?

Kubeadm Examples

Let's take a look at some examples and commands to better understand how kubeadm works.

1. Initializing the Control Plane

Example Command #1: kubeadm init

Here's how to use the above command.

kubeadm init --pod-network-cidr=192.168.0.0/16

This command initializes the control plane on the master node. It generates necessary certificates, sets up the API server, etcd, and other vital components.

The --pod-network-cidr flag specifies the IP address range for the pod network in the cluster.
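
If you prefer to keep these settings in a file, kubeadm also accepts a configuration file via the --config flag. Here's a minimal sketch; the Kubernetes version and pod CIDR are example values you should adjust to your environment:

# Write a minimal kubeadm configuration file (v1beta3 schema; values are examples)
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
networking:
  podSubnet: 192.168.0.0/16
EOF

# Initialize the control plane from the file instead of individual flags
sudo kubeadm init --config kubeadm-config.yaml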

2. Joining Worker Nodes

Example Command #2: kubeadm join

kubeadm join <master-node-ip>:<master-node-port> --token <token> --discovery-token-ca-cert-hash <hash>

This command joins a worker node to the cluster.

You need to provide the IP and port of the master node, along with a token and discovery token CA certificate hash, which can be obtained during the control plane initialization.
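
If you no longer have the join command that kubeadm init printed, you can regenerate it, or recompute the CA certificate hash yourself on the control plane node. A sketch using standard tooling:

# Print a fresh, ready-to-run join command (includes a new token and the CA cert hash)
kubeadm token create --print-join-command

# Or derive the hash manually from the cluster CA; prefix the result with "sha256:"
# when passing it to --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'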

3. Resetting a Cluster

Command: kubeadm reset

kubeadm reset --force

This command resets a node, removing the components that kubeadm installed on it and returning it to its pre-kubeadm state.

The --force flag skips the confirmation prompt.

4. Upgrading Kubernetes Version

Command: kubeadm upgrade

Here's how to use the above kubeadm command.

kubeadm upgrade plan

This command helps you upgrade your Kubernetes version. It checks for available upgrades and provides a plan for upgrading the control plane components.

Also Read: A Complete List of 139 Kubectl Commands

Top 20 Most Common Kubeadm Commands

Here is a list of commonly used kubeadm commands, followed by a relevant example.

kubeadm init

kubeadm init --pod-network-cidr=192.168.0.0/16

In the above example, the command initializes the control plane on the master node, generating certificates and setting up essential components.

The --pod-network-cidr flag specifies the pod network IP address range.

kubeadm join

kubeadm join <master-node-ip>:<master-node-port> --token <token> --discovery-token-ca-cert-hash <hash>

The above command joins a worker node to the cluster by connecting to the specified master node using a token and discovery token CA certificate hash.

kubeadm reset

kubeadm reset --force

The above command resets a node, removing all installed Kubernetes components and returning it to its pre-kubeadm state.

The --force flag skips the confirmation prompt.

kubeadm upgrade

kubeadm upgrade plan

The above command provides an upgrade plan, checking for available upgrades for the control plane components.

kubeadm token

kubeadm token create

This kubeadm command manages authentication tokens used for joining nodes to the cluster. This example generates a new token.

kubeadm config

kubeadm config print init-defaults

The above command manages kubeadm configuration files. This example prints the default configuration for cluster initialization.

kubeadm version

kubeadm version

The above command prints the version of kubeadm.

kubeadm config images

kubeadm config images list

Use the above command to print a list of images required for the current Kubernetes version.

This command helps you determine the required container images for manual container image management.

Replace list with pull to pre-pull these images onto the node.
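
For example, to pre-pull the images for a specific release ahead of an install or upgrade (the version shown is only an example):

# Pre-pull all control plane images for the chosen release
kubeadm config images pull --kubernetes-version v1.28.0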

kubeadm token create

kubeadm token create --print-join-command

The above command generates a new token that can be used to join nodes to the cluster.

The --print-join-command flag displays the join command to be executed on worker nodes.

kubeadm token list

kubeadm token list

The above command lists the active tokens along with their associated creation time and expiration.

kubeadm token delete

kubeadm token delete <token_value>

Use the above command to delete a specific token, rendering it unusable for node joins.

kubeadm config migrate

kubeadm config migrate --old-config kubeadm.conf --new-config kubeadm.yaml

The above command migrates a configuration file from an old version to a new version, enabling smooth upgrades of the configuration.

kubeadm certs

kubeadm certs check-expiration
kubeadm certs certificate-key

The first command checks the expiration status of the certificates used by the control plane components and warns about any that are close to expiring.

The second command generates a secure random certificate key, which can be used with kubeadm's --certificate-key flag.
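
A typical use of that key, sketched with the same placeholder style as the earlier examples, is distributing control plane certificates when adding more control plane nodes:

# On the first control plane node: upload the certificates, encrypted with a generated key
sudo kubeadm init --upload-certs --pod-network-cidr=192.168.0.0/16

# On an additional control plane node: join as a control plane member and decrypt
# the uploaded certificates with that key
sudo kubeadm join <control_plane_IP>:6443 --token <token> \
  --discovery-token-ca-cert-hash <hash> \
  --control-plane --certificate-key <certificate_key>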

kubeadm init phase

kubeadm init phase kubelet-start --config config.yaml

The above command runs only the kubelet-start phase of kubeadm init: using the settings in the provided configuration file, it writes the kubelet configuration and environment file and (re)starts the kubelet.

kubeadm join phase

kubeadm join phase control-plane-prepare

The above command executes a specific phase during the joining of a worker node to the control plane.

This example runs the control-plane-prepare phase, which prepares the node to serve control plane components when it is joined as an additional control plane node.

kubeadm kubeconfig

kubeadm kubeconfig user --client-name=foo --config=bar

Use the above command to output a kubeconfig file for an additional user named foo using a kubeadm config file bar.
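
As a concrete sketch (the user name and file names here are purely illustrative), you might generate and save a kubeconfig for a new user like this:

# Create a kubeconfig for a hypothetical "developer" user from an existing kubeadm config file
sudo kubeadm kubeconfig user --client-name=developer --config=kubeadm-config.yaml > developer.kubeconfig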

kubeadm reset phase

kubeadm reset phase preflight

The above command executes a specific phase during the reset process of a node.

This example runs the preflight phase, which performs pre-reset checks before removing the Kubernetes components.

kubeadm upgrade plan

kubeadm upgrade plan

The above command displays an upgrade plan for the control plane components, showing available versions and any required actions for the upgrade.

kubeadm upgrade apply

kubeadm upgrade apply v1.22.2

Use the above command to apply a specific Kubernetes version to the control plane, upgrading its components to the specified version.

kubeadm upgrade node

kubeadm upgrade node

This command upgrades the kubelet and kube-proxy on a worker node to match the control plane's version.

Also Read: What are init containers in Kubernetes?

How to Create Kubernetes Cluster Using Kubeadm?

To create a Kubernetes cluster using kubeadm, you need to ensure that your environment meets the necessary prerequisites.

Here's a step-by-step guide on how to create a Kubernetes cluster using kubeadm, including the prerequisites, detailed instructions, examples, and commands.

Prerequisites

  1. Two or more machines running a supported Linux distribution (e.g., Ubuntu, CentOS, or Red Hat Enterprise Linux) with Docker installed.

  2. 2 GiB or more of RAM per machine is recommended; anything less leaves limited room for your software.

  3. Disable swap space on all machines (example commands follow this list).

  4. Set up a unique hostname, MAC address, and product_uuid for each machine.

  5. Ensure full network connectivity between all machines in the cluster; a public or private network is fine.

  6. Open necessary ports (e.g., 6443, 2379-2380, 10250, 10251, 10252) in your firewall.
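
As referenced in the prerequisites above, here is a sketch of the swap and firewall steps, assuming Ubuntu with ufw as the firewall (adapt to your distribution and tooling):

# Disable swap now and keep it disabled across reboots
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Open the control plane and kubelet ports (ufw assumed)
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd client and peer communication
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10251/tcp       # kube-scheduler
sudo ufw allow 10252/tcp       # kube-controller-manager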

Step 1: Install Docker and Kubernetes Tools

On all machines, install Docker using the Docker guide for your Linux distribution.

Install kubeadm, kubelet, and kubectl using the following commands on all machines:

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Step 2: Initialize the Kubernetes Control Plane

On the desired control plane node, initialize the cluster using kubeadm init:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
Note

Adjust the --pod-network-cidr flag if you plan to use a different pod network.

After the initialization is complete, the command will output a kubeadm join command with a token and hash.

Make sure to copy this command as it will be used to join worker nodes to the cluster later.

Step 3: Set Up Cluster Configuration for kubectl

On the control plane node, create the necessary directory and copy the kubeconfig file:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
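
Alternatively, if you are operating as the root user, you can point KUBECONFIG at the admin config directly. Either way, a quick sanity check is to list the system pods (some may stay Pending until the pod network add-on is installed in the next step):

# Root-user alternative to copying the admin kubeconfig
export KUBECONFIG=/etc/kubernetes/admin.conf

# Sanity check: the control plane pods should be listed
kubectl get pods -n kube-system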

Step 4: Deploy a Pod Network Addon

On the control plane node, deploy a pod network add-on.

For example, you can use Calico:

kubectl apply -f https://docs.projectcalico.org/v3.21/manifests/calico.yaml
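
The manifest URL is version-specific, so check the Calico documentation for the release you plan to run. Before moving on, you can watch the add-on pods come up:

# Wait until the Calico and CoreDNS pods report Running
kubectl get pods -n kube-system --watch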

Step 5: Join Worker Nodes in the Cluster

On each worker node, run the kubeadm join command that was generated during the control plane initialization (from Step 2).

This command typically looks like this:

sudo kubeadm join <control_plane_IP>:6443 --token <token> --discovery-token-ca-cert-hash <hash>

Step 6: Verify the Cluster

On the control plane node, check the status of the cluster using kubectl:

kubectl get nodes

Congratulations! You have successfully created a Kubernetes cluster using kubeadm. You can now deploy and manage your applications on the cluster.

How to Manage Cluster Components with kubeadm

To manage cluster components with kubeadm, you can use various commands to perform tasks such as upgrading the cluster, adding or removing nodes, and managing the control plane.

Here's a step-by-step guide on how to manage cluster components with kubeadm, including detailed commands with code comments.

Step 1: Upgrading the Cluster

Check the current version of Kubernetes on the control plane node:

kubectl version --short

Upgrade kubeadm, kubelet, and kubectl on all nodes to match the desired Kubernetes version:

# Upgrade kubeadm
sudo apt-get update
sudo apt-get install -y kubeadm=<desired_version>
# Upgrade kubelet and kubectl
sudo apt-get update
sudo apt-get install -y kubelet=<desired_version> kubectl=<desired_version>
# Hold the packages to prevent automatic upgrades
sudo apt-mark hold kubeadm kubelet kubectl
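
The <desired_version> placeholder must be an exact package version string. One way to see what your configured apt repository offers (assuming the repository set up earlier):

# List the kubeadm versions available from the configured apt repository
apt-cache madison kubeadm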

On the control plane node, initiate the upgrade process:

sudo kubeadm upgrade plan
sudo kubeadm upgrade apply <desired_version>

Upgrade the kubelet configuration on all nodes:

sudo kubeadm upgrade node

Verify the upgrade status:

kubectl get nodes
kubectl version --short

Step 2: Adding Worker Nodes

On the control plane node, generate a new kubeadm join command:

sudo kubeadm token create --print-join-command

On the worker node(s), run the kubeadm join command to join them to the cluster:

sudo kubeadm join <control_plane_IP>:6443 --token <token> --discovery-token-ca-cert-hash <hash>

Verify the status of the new worker nodes:

kubectl get nodes

Step 3: Removing Nodes

Drain the node you want to remove:

kubectl drain <node_name> --ignore-daemonsets

On the node being removed, reset the components that kubeadm installed:

sudo kubeadm reset

On the control plane node, delete the node from the cluster:

kubectl delete node <node_name>

Step 4: Managing Control Plane Components

Upgrade the control plane components on the control plane node:

sudo apt-get update
sudo apt-get install -y kubeadm=<desired_version> kubelet=<desired_version> kubectl=<desired_version>
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply <desired_version>
sudo systemctl restart kubelet

Verify the upgrade status:

kubectl get nodes
kubectl version --short

Drain the control plane node you want to remove:

kubectl drain <control_plane_node> --ignore-daemonsets

On the control plane node being removed, remove its control plane components using the following kubeadm command.

sudo kubeadm reset

On the control plane node, delete the control plane node from the cluster:

kubectl delete node <control_plane_node>
Note

Managing cluster components with kubeadm requires caution, as it can impact the stability and availability of the cluster.

Ensure you have a backup plan and follow the official Kubernetes documentation for detailed instructions specific to your setup and requirements.

Also Read: The Only Kubernetes Secrets Tutorial You'll Ever Read

Kubeadm Best Practices

When used for its intended purposes, Kubeadm is an excellent tool.

Here are three best practices to remember in order to use the proper tool for the job and get the most out of kubeadm.

Avoid kubeadm alone for production clusters that need autoscaling.

In general, kubeadm on its own is not a good fit for production clusters that require node and cluster autoscaling, because it provides no autoscaling capability.

This is because node autoscaling requires control over the underlying infrastructure and hardware, which kubeadm delegates to other tools.

Back up etcd on a regular basis.

Kubeadm does not set up a multi-member etcd cluster by default, so your cluster state lives in a single etcd instance. Make regular backups of etcd in case something goes wrong.
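
As a sketch, assuming the default kubeadm certificate paths and a locally installed etcdctl, a snapshot can be taken on the control plane node like this:

# Save an etcd snapshot using the kubeadm-generated certificates (paths and output file assumed)
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key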

Keep track of machines/nodes.

Kubeadm cannot power off machines that are not in use.

So, to optimize costs in a Kubernetes cluster built with kubeadm, you'll need an external solution to track worker nodes and their resource utilization.
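
A simple starting point, assuming you have deployed the metrics-server add-on, is the built-in resource view:

# Show CPU and memory usage per node (requires metrics-server)
kubectl top nodes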

