
Want to know what Kubernetes DaemonSets are, why you need them, how to use them, and what best practices to follow? This article covers all of these concepts and more.

Kubernetes DaemonSets - Examples & Best Practices

One of the most common problems a DevOps engineer faces today is system availability. Software downtime of just a few minutes can impact a business significantly.

Microservices architectures embrace high availability and make this problem easier to manage, aiming for minimal to no downtime.

This is where Kubernetes DaemonSet comes in.

What is DaemonSet in Kubernetes?

A DaemonSet in Kubernetes ensures that a copy of a pod runs on every node (or a selected subset of nodes) in the cluster.

It is used to manage background tasks that need to run on every node, such as logging agents, monitoring agents, or node-level storage and networking daemons.

Features of Kubernetes DaemonSets

Here are some of the features of DaemonSets.

  1. Node Deployment: A K8s DaemonSet deploys an instance of an application pod on every eligible node in the cluster.
  2. Self Healing: If a DaemonSet pod fails or is deleted, the DaemonSet controller recreates it on the affected node, keeping the cluster in line with the configuration you declared.
  3. Pod Placement: DaemonSets work with Kubernetes labels, node selectors, and taints and tolerations. These mechanisms dictate which pods can go to which nodes and are helpful for physically grouping applications.
  4. Updating Deployments: Kubernetes DaemonSets support rolling updates. The update process updates only a limited number of pods at a time, reducing the impact on the availability of the application instances running on the cluster.

Also Read: A Complete Kubeadm Tutorial

How Does Kubernetes DaemonSet Work?

Each Kubernetes component has a distinct purpose and serves a different need for the application.

The job of a K8s DaemonSet is to ensure a pod runs on every node of the cluster where we need an instance of the application.

DaemonSets are actively managed by the DaemonSet controller.

You can tell this controller which nodes an application pod may be scheduled on, using mechanisms such as node selectors and taints and tolerations.

Based on the specification you provide (usually in a YAML file), the controller compares the desired state with the current state of the cluster and reconciles the difference.

Whenever a node does not have a matching pod, the DaemonSet controller immediately creates one.

This automatic approach applies to both new and existing nodes. Pods created by a Kubernetes DaemonSet exist for the life of the node they run on.

Once a node is discarded, the pod in that node is automatically destroyed and garbage collected.

By default, a DaemonSet creates a pod on each node, so you can use a node selector or taints to limit the set of acceptable nodes if needed.

Kubernetes DaemonSet 101: How to Work with DaemonSet?

Now that we know what DaemonSets in Kubernetes are and how they work, let's go through the commands and steps you need to follow to work with them.

Let's start with: how do you create a DaemonSet in Kubernetes?

You define a YAML or JSON manifest file that describes the desired configuration.

The manifest should include the pod template, which specifies the container image, resource needs, and any additional DaemonSet specifications.

Example of Kubernetes DaemonSet Configuration

Let's look at an example of a DaemonSet configuration file.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sample-daemonset
spec:
  selector:
    matchLabels:
      app: sample-daemon
  template:
    metadata:
      labels:
        app: sample-daemon
    spec:
      containers:
      - name: sample-daemon-container
        image: sample-docker-image:tag
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "200m"
            memory: "256Mi"

Also Read: Differences between Docker Containers and Images

How to Deploy DaemonSet on a Kubernetes Cluster?

Save the above file under any name that is meaningful for your use case. In this example, we will save it as “sample-daemonset.yaml”.

Let's deploy this to the Kubernetes cluster. Write the following command.

kubectl apply -f sample-daemonset.yaml

The kubectl apply command also records the applied configuration as an annotation on the object, which makes it easier for the control plane and future apply operations to track and reconcile changes.

Running the above command will give an output confirming that the Kubernetes DaemonSet has been created.
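The confirmation looks similar to this.

daemonset.apps/sample-daemonset created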

To verify if it has properly run or not, run the following command.

kubectl get daemonsets -n <namespace>

This command will list all the DaemonSets deployed in the specified namespace.

The output of this command typically shows the DaemonSet name, the desired and current number of pods, how many are ready and up to date, the node selector in use, and the age of the DaemonSet.
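On a small cluster, the output might look roughly like the following (names and counts are illustrative).

NAME               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
sample-daemonset   3         3         3       3            3           <none>          2m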

You can also get all the DaemonSets across all namespaces in one go by appending the “--all-namespaces” flag to the same kubectl get command.
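For example:

kubectl get daemonsets --all-namespaces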

Also Read: The Only Guide to Kubectl Commands You'll Ever Need

How to Check Pods Created by Kubernetes DaemonSet?

To check the pods that the DaemonSet has created, write the following command.

kubectl get pods -l app=sample-daemon

This lists all pods that carry the label “app=sample-daemon”, which we set in the DaemonSet configuration file.
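Since a DaemonSet places one pod on every eligible node, you can also add the “-o wide” flag to see which node each pod landed on.

kubectl get pods -l app=sample-daemon -o wide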

How to Scale Pod Instances in Kubernetes DaemonSet?

Now that the DaemonSet is in place, you might wonder how to scale the number of pod instances.

Unlike a Deployment, a DaemonSet has no replica count you can set directly, so commands such as "kubectl scale" do not apply to it. The DaemonSet controller always runs exactly one pod on every node that matches the DaemonSet's scheduling constraints.

In other words, the number of pods scales with the cluster itself: every time you add a node that matches the DaemonSet's node selector (and tolerates its taints), a new instance is created on that node automatically.

To scale down, remove nodes from the cluster or narrow the set of eligible nodes, for example by tightening the node selector or by tainting nodes that the DaemonSet's pods do not tolerate.
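As a minimal sketch of narrowing the eligible nodes, you could label just the nodes you want the DaemonSet to run on and add a matching nodeSelector under spec.template.spec in its manifest; the label “role=logging” below is a hypothetical example.

kubectl label nodes node01 role=logging

Then, in the DaemonSet manifest:

      nodeSelector:
        role: logging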

How to Restart DaemonSets in Kubernetes?

Let's say you want to restart a DaemonSet to refresh the deployed pods; there is a command just for that, so you don't have to delete the K8s DaemonSet and recreate it.

Suppose you have just deployed a DaemonSet for a log-collection agent called Datadog. In this case, you may want to refresh this log-collection pod on every node.

Just one command can do the trick for you.

kubectl rollout restart daemonset datadog -n default

To check the status of this rollout initiated, run the following command.

kubectl rollout status ds/datadog -n default
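Once all pods have been replaced, the status command reports a success message along the lines of the following.

daemon set "datadog" successfully rolled out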

How to Stop DaemonSets in Kubernetes?

If you want to stop a DaemonSet, you delete it using the following command.

kubectl delete daemonset <daemonset-name>

Continuing our earlier example, let's delete the DaemonSet “sample-daemonset” that we created.

The command to delete that DaemonSet looks like this.

kubectl delete daemonset sample-daemonset
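If you only want to remove the DaemonSet object while leaving its pods running on the nodes, recent versions of kubectl support the “--cascade=orphan” flag.

kubectl delete daemonset sample-daemonset --cascade=orphan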

Use Cases of DaemonSets in Kubernetes

We can use Kubernetes DaemonSets for a variety of use cases. Let's look at the three most common ones.

1. Application Monitoring

Kubernetes DaemonSets can run log-collection agents that capture the behavior of applications on every node of a cluster.

You can use K8s DaemonSets to ensure that logs are captured on each node and analyzed further for better infrastructure and application monitoring and maintainability.
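As a minimal sketch of this pattern, a log-collection DaemonSet typically mounts the node's /var/log directory; the fluentd image and the names below are illustrative, so adapt them to your own log-collection agent.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: fluentd:latest
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log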

Not just logs: you can also use DaemonSets to deploy metrics agents such as the Prometheus node exporter or the Datadog agent on every node.

Also Read: How to Use Prometheus Operator in Kubernetes?

2. Networking and Security

DaemonSets in Kubernetes enable network-related service deployments on each node.

For example, node-level networking components such as kube-proxy, CNI agents, or per-node load-balancing proxies are commonly deployed as DaemonSets, which helps improve network connectivity and performance across the cluster.

DaemonSets can enforce security policies by deploying security agents or scanning tools on every node.

This ensures real-time monitoring for vulnerabilities, intrusions, or compliance violations across the cluster.

Also Read: Differences between Consul, Istio, & Linkerd

3. Edge Computing

When a Kubernetes cluster spans distributed edge nodes, DaemonSets can be used to deploy edge-specific agents on each of those nodes.

Also Read: Differences between K8s Cluster Autoscaler and AWS Karpenter

Kubernetes DaemonSets with Pods & Nodes

DaemonSets in Kubernetes make application management easier.

Let us look at a Kubernetes DaemonSet example to explain this better.

Consider a cluster with three nodes; let's call them node01, node02, and node03 for simplicity. You want to run two applications, AppA and AppB.

You want these applications to be physically isolated so that AppA runs only on node01 and node02, while AppB runs only on node03.

You also want to run a logging agent alongside AppA on each node that hosts it. In such a situation, you would use a DaemonSet in Kubernetes.

But if you create a plain DaemonSet, its pod will get created on node03 as well, and we certainly don't want that.

This is where the concept of taints and tolerations comes into the picture.

Taints and tolerations are a Kubernetes concept related to nodeSelectors. The difference is that a node selector only attracts pods to matching nodes, while a taint actively repels any pod that does not tolerate it, making taints the more restrictive mechanism.

A taint is a key, value, and effect that you apply to a node. A toleration is the matching entry you declare in the pod spec of your manifest, allowing the pod onto nodes with that taint.

This means that if you want the AppA logging DaemonSet to run on node01 and node02 only, you need to taint node01 and node02, add the matching toleration to the DaemonSet manifest, and point the DaemonSet at those nodes with a node selector.

Let's look at this with an example depicting the same.

First, label the desired nodes. A label is a key-value pair that you attach to a node so that you can classify different nodes for different applications.

kubectl label nodes node01 node02 special=true

Now apply the taint, which means that no pod without a matching toleration will be allowed to schedule onto these nodes.

This is how to apply a taint on a node.

kubectl taint nodes node01 node02 special=true:NoSchedule

Now we apply a manifest file that deploys the DaemonSet with a node selector and a toleration so that its pods run on node01 and node02 only.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  selector:
    matchLabels:
      app: my-daemon
  template:
    metadata:
      labels:
        app: my-daemon
    spec:
      nodeSelector:
        special: "true"
      tolerations:
      - key: special
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: my-daemon-container
        image: my-docker-image:tag
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "200m"
            memory: "256Mi"

Lastly, apply this file and you will see that the DaemonSet's pods are scheduled only onto node01 and node02, even as other nodes are added to the cluster.

Kubernetes DaemonSet Best Practices

Let's look at six best practices to follow for Kubernetes DaemonSets.

1. Proper Resource Allocation

Allocate appropriate CPU and memory requests and limits to your DaemonSet pods according to your application's needs.

This ensures optimal performance and avoids resource contention on the nodes.

2. Taints and Tolerations

Use taints and tolerations to enforce specific requirements or constraints on the nodes where DaemonSet pods can be scheduled.

This provides control over pod placement and helps achieve better node isolation and resource allocation.

3. Rolling Updates

Use rolling updates to ensure a smooth transition. This helps maintain fault tolerance and availability.

Control the pace of a rolling update with the rollingUpdate settings: maxUnavailable limits how many DaemonSet pods may be down at a time, and on newer Kubernetes versions maxSurge lets new pods start on a node before the old ones are removed.
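For instance, a minimal updateStrategy block in the DaemonSet spec could look like this (maxUnavailable: 1 is just an illustrative value).

  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1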

4. Monitoring and Logging

Implement monitoring and logging solutions to collect metrics, monitor pod health, capture logs, and support later analysis.

This helps ensure stability and performance throughout.

Also Read: Differences between SPLUNK and ELK Stack

5. Security Considerations

Use secure container images, apply appropriate security contexts, and enforce network policies.

6. Regular Maintenance and Updates

Keep your DaemonSets up to date with the latest patches and regularly review and update the container images and resource allocations.

Also Read: Kubectl Config Context Tutorial

Kubernetes DaemonSet Alternatives

Let's look at the top 4 Kubernetes DaemonSet Alternatives you can use.

1. Deployment with Node Selector

You can use a Deployment resource with a node selector to schedule pods on specific nodes.

This creates a physical and logical grouping of your applications in the cluster.
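A minimal sketch of this approach might look like the following; the node label “disktype=ssd” is hypothetical and assumed to already exist on the target nodes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      nodeSelector:
        disktype: ssd
      containers:
      - name: sample-app
        image: sample-docker-image:tag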

2. Kubernetes StatefulSet

If your application needs stable network identities and persistent storage, you can use a StatefulSet in Kubernetes.

This is typically used for stateful workloads such as databases that back an application's frontend and backend.

3. Job or CronJob

DaemonSets run long-lived agents continuously on every node. If your node-level task instead needs to run once or on a schedule, a Job or CronJob is often the better component.
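As a minimal sketch, a CronJob that runs a cleanup task every night at 2 AM could look like this; the name, schedule, and image are illustrative.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cleanup
            image: sample-docker-image:tag
          restartPolicy: OnFailure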

4. Custom Operators

You can build custom Kubernetes operators using frameworks like the Operator Framework.

This gives you the flexibility to manage custom resources configured for your specific use case.

Also Read: Top Docker Desktop & Docker Alternatives

Kubernetes Daemonset vs Deployment

Here is how DaemonSets and Deployments compare:

| Parameter | DaemonSet | Deployment |
| --- | --- | --- |
| Use cases | Ensures that a specific pod runs on every node in the cluster | Manages stateless applications and creates scalable, self-healing replicas of pods |
| Scheduling/distribution | Schedules one pod per node, so a pod runs on every eligible node in the cluster | Distributes pods across multiple nodes based on the desired replica count |
| Pods & nodes | Runs one pod per eligible node, so the pod count follows the number of nodes in the cluster | Allows scaling the number of replicas up or down to handle demand, and supports rolling updates |
| Node failure | The pod on a failed node is not moved elsewhere; each remaining node keeps its own pod, and a pod is created automatically on any replacement node | Automatically replaces the lost pod with a new replica on a healthy node |


Written by Priyansh Khodiyar

Priyansh is the founder of UnYAML and a software engineer with a passion for writing. He has good experience with writing and working around DevOps tools and technologies, APMs, Kubernetes APIs, etc and loves to share his knowledge with others.
