Note

In this Kubernetes guide, we will cover everything about Kubernetes logs and the kubectl logs command - how they work and how to fetch logs for containers, pods, nodes, and deployments, with examples.

Mastering Kubernetes Log Management using Kubectl Logs - Best Practices and Examples

Kubernetes provides a robust set of tools for managing containers, pods, and applications. Among these tools, kubectl is the command-line utility that serves as a primary interface for interacting with a Kubernetes cluster.

One essential aspect of managing Kubernetes applications is handling logs effectively.

In this guide, we'll explore the kubectl logs command, its variations, and practical examples to help you master Kubernetes log management.

Basics of Kubectl Logs

The kubectl logs command is your gateway to accessing container logs within pods running in a Kubernetes cluster.

It allows you to retrieve logs from containers, making it useful for debugging, monitoring, and troubleshooting.

Here's the basic syntax:

kubectl logs <pod-name>

Replace <pod-name> with the name of the pod for which you want to fetch logs. By default, this command retrieves the logs of the pod's only container; if the pod runs multiple containers, you will need to name one with the -c flag (covered below).
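
If the pod lives in a namespace other than the default one, pass the namespace with the -n (--namespace) flag. The pod and namespace names below are placeholders:

kubectl logs my-app-pod -n my-namespace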

Also Read: How to Setup & Use Prometheus Operator in Kubernetes?

How Does Kubectl Tail Logs Work?

The kubectl logs command, with or without the -f flag, leverages Kubernetes' API to access container logs. Here's a breakdown of how it works:

#1. When you run kubectl logs, it authenticates to the Kubernetes cluster using the credentials configured in the kubeconfig file on the machine where kubectl is installed.

Kubernetes' role-based access control (RBAC) ensures that you have the necessary permissions to access pod logs (a quick way to check this is shown at the end of this section).

#2. kubectl logs identifies the target pod by the name you provide as an argument.

#3. When you specify a container using the -c flag, kubectl communicates with the Kubernetes API to fetch logs from the specified container within the pod.

#4. The Kubernetes API server communicates with the Kubelet running on the node where the pod is scheduled. The Kubelet, in turn, accesses the container runtime (CRI-O, containerd, Docker, etc.) to fetch the logs.

#5. If you use the -f or --follow flag, kubectl keeps a streaming connection open to the API server, which in turn streams from the Kubelet. This connection allows real-time streaming of logs.

New log entries are continuously fetched and displayed on your terminal as they are written by the container.

#6. kubectl formats and displays the log entries on your terminal. With the --timestamps flag, each line is prefixed with its timestamp to help you understand when events occurred.

In essence, kubectl logs acts as a bridge between your local terminal and the Kubernetes cluster, facilitating the streaming of logs in a user-friendly manner.
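
As a quick check for step #1, you can ask the cluster whether your current credentials are allowed to read pod logs:

kubectl auth can-i get pods --subresource=log

If this prints "no", you will need a role that grants access to the pods/log resource before kubectl logs will work.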

Kubectl Get Logs from Pod - How to Check Kubectl Pod Logs?

When managing a Kubernetes cluster, checking pod logs is a common task for debugging and monitoring purposes.

The kubectl logs command, as previously discussed, is your primary tool for this. In this section, we'll focus on how to effectively check pod logs using kubectl.

Retrieving Pod Logs

To retrieve logs from a specific pod, you simply use the kubectl logs command followed by the name of the pod:

kubectl logs <pod-name>

Replace <pod-name> with the name of the pod for which you want to fetch logs.

This command provides you with the most recent log entries generated by the primary container within the specified pod.

Suppose you have a pod named my-app-pod. To retrieve logs from this pod, you would execute:

kubectl logs my-app-pod

This command displays the most recent log entries produced by the primary container within the my-app-pod pod.
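
If you also want to know when each entry was written, add the --timestamps flag:

kubectl logs my-app-pod --timestamps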

Also Read: How to Use Kubernetes Secrets?

Fetching Logs from Specific Containers within a Pod

In Kubernetes, a pod can host multiple containers.

When you want to retrieve logs from a specific container within a pod, you can use the -c or --container flag with kubectl logs.

Here's how it works:

kubectl logs <pod-name> -c <container-name>

Replace <container-name> with the name of the container from which you want to fetch logs.

This is particularly useful when dealing with pods hosting multiple containers, each responsible for different aspects of your application.

Imagine your pod, my-app-pod, has two containers: app-container and sidecar-container. To fetch logs from the sidecar-container, you'd run:

kubectl logs my-app-pod -c sidecar-container

This command fetches and displays the logs generated by the sidecar-container within the my-app-pod pod.
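
If you want the logs of every container in the pod at once rather than a single one, kubectl also supports the --all-containers flag:

kubectl logs my-app-pod --all-containers=true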

Also Read: A Complete Guide to Kubectl Commands

Fetching Earlier Logs

If you do not need the entire log history and only want a fixed number of the most recent entries, you can use the --tail flag to fetch a specific number of lines from the end of the log stream:

kubectl logs <pod-name> --tail=<line-count>

Replace <line-count> with the number of lines you want to retrieve from the end of the log stream.

Let's say you want to retrieve the last 50 log lines from the my-app-pod.

You would execute:

kubectl logs my-app-pod --tail=50

This command provides the most recent 50 log entries from the specified pod.
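
If you would rather filter by time than by line count, the --since and --since-time flags limit output to recent entries. The duration and timestamp below are only examples:

kubectl logs my-app-pod --since=1h

kubectl logs my-app-pod --since-time=2023-09-01T10:00:00Z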

Also Read: What are Kubernetes CRDs?

Streaming Logs in Real-Time

By default, kubectl logs retrieves and displays logs as a one-time operation.

However, you can use the -f or --follow flag to stream logs in real-time, similar to the tail -f command on a local system:

kubectl logs -f <pod-name>

This command continuously displays new log entries as they are generated by the container. It is particularly useful for monitoring applications and capturing live debugging information.

To stream logs in real-time from the my-app-pod, you would execute:

kubectl logs -f my-app-pod

This command keeps the connection open and allows you to view new log entries as they are written to the log stream.
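
Since following a pod otherwise starts by printing its full log history, it is common to combine -f with --tail so the stream begins with only the last few lines:

kubectl logs -f my-app-pod --tail=20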

Kubernetes Logging Best Practices

There are a lot of best practices for Kubernetes; here are some for logging.

Structure Your Logs Right

Structure your log messages using a standard format, like JSON. Structured logs are easier to analyze with log aggregation tools, making it simpler to extract valuable information from your logs.
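
As an illustration (the field names here are an example, not a required schema), a structured log entry might look like this:

{"timestamp": "2023-09-01T10:00:00Z", "level": "error", "service": "my-app", "message": "failed to connect to database"}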

NEVER Log Sensitive Data

Never log sensitive information like passwords. Instead, rely on environment variables or secret management tools like key vaults to handle sensitive data.

Centralized Logging System

Send logs from all pods and containers to a centralized logging system. Popular choices include the EFK stack (Elasticsearch, Fluentd, Kibana), Grafana Loki, or managed services like AWS CloudWatch, Google Cloud Logging, or Azure Monitor.

Use Log Rotation Policies

Configure log rotation policies to manage log file sizes. This prevents logs from consuming all the disk space on a node and leaves room for other important data.
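
If your nodes use a kubelet configuration file, container log rotation can be tuned there; a minimal sketch with illustrative values:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5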

Also Read: Top Monitoring Tools for Microservices

Availability of Resources

Ensure that your pods have adequate resources by setting resource requests and limits on every pod that goes into the cluster.

Without resource constraints, a single misbehaving workload can impact the performance of the whole cluster.
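
A minimal sketch of what this looks like in a container spec (the values are placeholders, not recommendations):

resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "256Mi"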

Metadata is a MUST

Include metadata in your log entries, such as timestamps, pod and container names, and namespace information. This contextual information is invaluable when diagnosing issues.
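
When pulling logs from several pods at once with a label selector, the --prefix flag stamps each line with the pod and container it came from, and --timestamps adds the time. The label below is just an example:

kubectl logs -l app=my-app --prefix --timestamps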

Include App-specific Metrics

Log not only errors but also important application-specific metrics and events. This data can help identify performance bottlenecks and track user behavior.

Also, define a log retention policy based on your organization's requirements and compliance regulations. Automatically delete or archive logs that are no longer needed.

Monitor Your Logs

Regularly monitor the health and performance of your logging infrastructure. This includes the logging components themselves, such as Fluentd, Fluent Bit, or Logstash, as well as the storage backend.

Get Your Alerting Right

Set up alerts based on log patterns and anomalies. Regularly test these alerts to ensure they trigger when needed.

For example, whenever a pod is consuming more resources than expected or keeps going into the Pending state, you should be notified early on so that you can resolve the issue before production is impacted.

By following these best practices, you can effectively manage and utilize logs in a Kubernetes environment, making it easier to diagnose issues, monitor application health, and ensure the security of your cluster.

Also Read: How to Keep Docker Container Running?

Kubectl Container Logs

In Kubernetes, a pod can host one or more containers, each responsible for specific tasks within an application. To effectively manage and troubleshoot these containers, you need to access their logs individually.

This is where the kubectl logs command shines.

Accessing Container Logs

To access logs from a specific container within a pod, you can use the kubectl logs command with the -c or --container flag followed by the container's name:

kubectl logs <pod-name> -c <container-name>

Tailoring Container Log Output

kubectl logs provides several options to tailor log output, making it more convenient for troubleshooting and monitoring purposes.

You can fetch earlier logs, stream logs in real time, or retrieve logs from previous containers if a pod has been restarted.

Also Read: When to Use Kubectl Rollout Restart?

Fetching Earlier Container Logs

If you only need the most recent entries from a specific container, you can use the --tail flag with kubectl logs:

kubectl logs <pod-name> -c <container-name> --tail=<line-count>
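
For example, to fetch the last 100 lines from the sidecar-container within my-app-pod:

kubectl logs my-app-pod -c sidecar-container --tail=100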

Streaming Container Logs in Real-Time

For real-time monitoring and debugging, you can use the -f or --follow flag with kubectl logs to stream logs from a specific container in real-time:

kubectl logs -f <pod-name> -c <container-name>

This command continuously displays new log entries as they are generated by the container, allowing you to monitor the container's behavior live.

To stream logs in real-time from the sidecar-container within the my-app-pod, you would execute:

kubectl logs -f my-app-pod -c sidecar-container

This command keeps the connection open and provides you with live updates of log entries from the specified container.

Fetching Logs from Previous Containers (Pod Restart)

In cases where a pod has been restarted, you can still access logs from the previous containers within the pod by using the -p or --previous flag with kubectl logs:

kubectl logs -p <pod-name> -c <container-name>

Suppose your my-app-pod was restarted, and you want to access logs from the previous sidecar-container. You would run:

kubectl logs -p my-app-pod -c sidecar-container

This command retrieves logs from the sidecar-container that was running before the pod restart.

Also Read: Kubernetes Pod vs. Node vs. Cluster

Kubectl Node Logs

In Kubernetes, understanding what's happening at the node level is crucial for maintaining a healthy cluster.

Node logs provide insights into node-specific activities, including resource allocation, system events, and container runtime behavior.

To access node logs, you can use kubectl, although it has some limitations compared to more specialized node-level monitoring tools.

Also Read: Guide to Kubernetes Liveness Probes

Accessing Node Logs

kubectl logs works against pods, not nodes, so you cannot point it directly at a node name. What you can do with kubectl is read the logs of node-level components that run as pods, such as kube-proxy or your CNI agent, which typically run as DaemonSets in the kube-system namespace.

kubectl get pods -n kube-system --field-selector spec.nodeName=<node-name>

This lists the system pods scheduled on that node; you can then fetch the logs of any of them with kubectl logs as usual.

Suppose you have a node named my-node. To list the system pods running on it, you would execute:

kubectl get pods -n kube-system --field-selector spec.nodeName=my-node

Keep in mind that the Kubelet and the container runtime usually run as system services rather than containers, so their logs live on the node itself (for example, via journalctl -u kubelet over SSH) and cannot be read with kubectl logs.

This approach provides insight into node-level activities through the system pods running there, but it does not cover the Kubelet, the container runtime, or the operating system.
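
On newer clusters (Kubernetes 1.27+ with the alpha NodeLogQuery feature gate enabled on the node), you may also be able to query node service logs through the API server proxy. Treat this as a sketch that depends on your cluster's configuration:

kubectl get --raw "/api/v1/nodes/my-node/proxy/logs/?query=kubelet"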

Limitations of Node-Level Logs with Kubectl

While kubectl can provide basic insights into node-level activities, it has some limitations when compared to dedicated node-level monitoring solutions.

Here are a few limitations to consider:

#1. Limited Component Coverage: kubectl can only read logs from components that run as pods. The Kubelet and the container runtime write to the node's journal or log files, which kubectl logs cannot reach.

#2. Lack of Context: Node-level logs lack the context of pod and container details. You won't see which pods or containers generated specific log entries.

#3. No Aggregation: kubectl does not aggregate logs from multiple nodes. You'll need to run the command on each node individually to access logs, which can be impractical in larger clusters.

Kubectl Deployment Logs

In Kubernetes, deployments are a common resource for managing the rollout and scaling of containerized applications.

Monitoring deployment logs is crucial for ensuring that updates are successful, tracking changes, and diagnosing issues.

In this section, we'll explore how to access deployment logs using kubectl.

The command to get the logs of a deployment is:

kubectl logs deployment/<name-of-deployment>

Suppose a deployment named blue-app is deployed. To access its logs, you would use:

kubectl logs deployment/blue-app

To get the real-time logs of this deployment, use the -f flag as in the other commands:

kubectl logs -f deployment/<name-of-deployment>
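
Note that kubectl logs deployment/<name-of-deployment> only reads from a single pod of the deployment (kubectl prints which pod it picked). To see logs from all of the deployment's pods at once, use a label selector that matches them; assuming the pods carry an app=blue-app label (adjust to your own labels):

kubectl logs -l app=blue-app --all-containers=true --prefix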

Also Read: How to Use Kubectl Delete Deployment?

Frequently Asked Questions

1. How do I check my Kubelet log?

To check the kubelet logs on a Kubernetes node, look at the systemd journal (journalctl -u kubelet) or, on some setups, the /var/log/kubelet.log file. You can view these logs using standard commands like journalctl, cat, less, or tail for troubleshooting and monitoring node-level activities and container-related events.

2. How do you get logs of all pods in Kubernetes?

You can retrieve logs from all pods in Kubernetes by using a label selector with kubectl logs. For example, to fetch logs from all pods labeled with "app=my-app", you would run kubectl logs -l app=my-app. This command would aggregate and display logs from all pods matching the specified label selector.


Written by Priyansh Khodiyar

Priyansh is the founder of UnYAML and a software engineer with a passion for writing. He has extensive experience writing about and working with DevOps tools, APMs, Kubernetes APIs, and more, and loves to share his knowledge with others.