In this tutorial, we will walk you through the steps on how to fix CreateContainerError & CreateContainerConfigError in Kubernetes.
Kubernetes is a powerful container orchestration system, but it can be tricky to troubleshoot when things go wrong. Two common errors that users encounter are CreateContainerError and CreateContainerConfigError.
These two errors generally occur while the container is transitioning from the pending state to the running state, and they can be caused by a variety of factors, such as incorrect Kubernetes image pull settings, network problems, or resource constraints.
We will start by discussing the causes of these errors, and then we will provide some troubleshooting tips and show how to fix these errors using the Kubernetes CLI.
Before we begin the debugging & troubleshooting guide, let's quickly look at what CreateContainerError & CreateContainerConfigError are and when they occur.
What is CreateContainerError?
CreateContainerError is a runtime error. It occurs when Kubernetes is unable to create or start a container.
This error generally occurs when the image provided in the configuration cannot be pulled, or because of resource constraints such as an insufficient amount of CPU or memory.
It can also mean that the container is unable to communicate with a resource the application depends on because of networking issues, or that there is a runtime issue in a control-plane component.
When this error occurs, you will typically observe event messages such as "Failed to pull the image", "Failed to create container", and "command not found".
Also Read: Most Used Container Runtimes
What is CreateContainerConfigError?
CreateContainerConfigError is the error that occurs when the configuration file (generally written in YAML) has issues with the values of its config parameters.
This error basically means that there is some invalid syntax or value in the file. When it occurs, you need to check the manifest file's syntax against the apiVersion and the kind specified in the file.
Examples of CreateContainerConfigError messages could be "Invalid image name," "Invalid resource requests," or "Container command is missing."
What Causes CreateContainerConfigError in Kubernetes?
When a container is about to start, Kubernetes validates its configuration; CreateContainerConfigError occurs when some of that configuration is missing during the validation phase.
For example, if you reference a ConfigMap in your deployment manifest but that ConfigMap is not present in the cluster, the container will not start and the pod status will show CreateContainerConfigError.
Whenever a configuration file is pushed into Kubernetes, a specific method called generateContainerConfig gets triggered.
It is used to retrieve the following information:
- The command and arguments provided
- Persistent volumes, if mentioned in the file
- Attached ConfigMaps
- Relevant secrets for the container
When any of the above information is missing or invalid, you will get a CreateContainerConfigError.
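For instance, here is a minimal sketch of a pod that hits this error because it references a ConfigMap (hypothetically named app-config) that was never created:

apiVersion: v1
kind: Pod
metadata:
  name: missing-configmap-pod
spec:
  containers:
  - name: my-container
    image: nginx:latest
    envFrom:
    - configMapRef:
        # This ConfigMap does not exist in the cluster, so the
        # pod status becomes CreateContainerConfigError
        name: app-config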
Let's consider all the possible scenarios where we can receive this error.
Scenario 1: A command is invalid in the config file.
You create your manifest file, apply it, and encounter CreateContainerConfigError.
How do you find the actual error?
The answer is simply to use the describe command on the pod:
kubectl describe pod <pod-name>
Replace <pod-name> with your pod's name. If you carefully observe the events and status of the pod, you will see that the error is caused by the invalid command.
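You can also pull the pod's events directly with a field selector (replace <pod-name> as before):

kubectl get events --field-selector involvedObject.name=<pod-name>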
Let's say that you have used the below manifest:
apiVersion: v1
kind: Pod
metadata:
  name: my-java-pod
spec:
  containers:
  - name: my-container
    image: javaapp:latest
    command: ["/java -war javaapp.war"]
Here you can see that the command is incorrect. To fix the CreateContainerConfigError, replace that line with a valid command so that your application comes up and runs.
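As a minimal sketch of the fix (assuming java is on the image's PATH and the WAR file sits at the container root), the command should be written in exec form, with one token per array element:

    # Exec form: each token is a separate array element
    command: ["java", "-jar", "/javaapp.war"]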
Scenario 2: Environment variable defined with an incorrect syntax
Let's consider the following example:
apiVersion: v1
kind: Pod
metadata:
  name: invalid-env-var-pod
spec:
  containers:
  - name: my-container
    image: nginx:latest
    env:
    - name: MY_ENV_VAR
      value: "invalid-env-var
Clearly, the value is not declared properly: there is no closing double quote on the environment variable's value.
This will result in the CreateContainerConfigError error as well.
So, to resolve this, you'll need to describe the pod as in the previous scenario, correct the syntax, and apply the file again using kubectl apply.
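Once corrected, the env block is simply (a minimal sketch):

    env:
    - name: MY_ENV_VAR
      value: "my-env-value"   # quotes properly closed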
Also Read: Everything You Need to Know to Work with Kubernetes Secrets
Scenario 3: Unsupported arguments or parameters in the manifest.
There is a standard syntax for declaring your pod definition file, with specific keywords.
Only when you follow this syntax will you be able to create the pod in the cluster successfully.
If you provide your own keywords in the YAML file, Kubernetes will throw the CreateContainerConfigError error.
You can easily identify this issue in the events part of the pod description. To mitigate this issue, go to the file and correct your syntax as per the standard.
Let's look at an example.
apiVersion: v1
kind: Pod
metadata:
  name: unsupported-parameter-pod
spec:
  containers:
  - name: my-container
    image: nginx:latest
    unsupportedParameter: true
There is no such parameter as unsupportedParameter, so this will result in the CreateContainerConfigError error. The remediation here is to remove that line and reapply the file.
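You can catch unknown fields before they ever reach a running pod. As a sketch, assuming the manifest is saved as pod.yaml:

# Server-side dry run: the API server validates the manifest without creating anything
kubectl apply -f pod.yaml --dry-run=server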
Scenario 4: Invalid number of requests/limits
Let's say the node on which you want to host your application in the cluster only has 1 CPU.
Now, consider the below pod manifest.
apiVersion: v1
kind: Pod
metadata:
  name: resource-limit-error-pod
spec:
  containers:
  - name: my-container
    image: nginx:latest
    resources:
      requests:
        cpu: "2"
The node only has 1 CPU, but the manifest requests 2, so no node has sufficient capacity to satisfy the request.
This will result in the CreateContainerConfigError error.
The fix here is to check the total amount of resources on your nodes and then change your file so the application requests an appropriate amount.
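To see how much CPU and memory a node can actually offer, describe it (replace <node-name> with a node from kubectl get nodes):

# Shows the node's capacity, allocatable resources, and current allocations
kubectl describe node <node-name>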
Also Read: 5 Ways to Use Kubectl Rollout Restart
What Causes CreateContainerError?
Kubernetes throws a CreateContainerError when there's a problem in the creation of the container, but unrelated to the configuration.
For instance, a referenced volume not being accessible, or a container name already being used.
CreateContainerError can also occur when a ConfigMap referenced in the manifest is present, but some of its values are incorrectly configured.
The error can likewise appear when you have specified your image incorrectly, when resources are insufficient, and so on.
Let's closely look at the CreateContainerError error with some example scenarios.
Scenario 1: Image/tag is not present.
This is one of the most common reasons for this error. When the image and its associated tag specified in the file are not present, it will cause the CreateContainerError error.
Let's look at the following file:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: my-container
    image: nginx:12e
If you look this image up on Docker Hub, you will find that the image nginx with the tag “12e” isn't available.
To resolve this issue, make sure that the tags are available in Docker Hub, consider their vulnerabilities, and then choose the tags accordingly.
Once you've done that, update the file and apply it to the Kubernetes cluster.
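One quick way to verify a tag before deploying is to query the registry from your machine (requires Docker locally; this checks the image's manifest without downloading its layers):

# Fails if the tag does not exist in the registry
docker manifest inspect nginx:12e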
Also Read: How to Manage Kubernetes Cluster using Kubeadm?
Scenario 2: Container settings ambiguity
This is the scenario you will least expect or give attention to.
The CreateContainerError can also occur if, in the Dockerfile, you have used CMD and ENTRYPOINT together incorrectly. This basically creates ambiguity in the container settings.
So, either use CMD or ENTRYPOINT on its own, and run the container locally once before deploying it to the cluster.
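If you do need both, a common pattern that avoids the ambiguity is to let ENTRYPOINT fix the executable and CMD supply only default, overridable arguments. A sketch (the argument shown is hypothetical):

# ENTRYPOINT fixes the executable; CMD provides default arguments
# that can be overridden at "docker run" time
ENTRYPOINT ["java", "-jar", "/yourapp.jar"]
CMD ["--spring.profiles.active=prod"]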
Container Creation Process in Kubernetes - An Overview
The concept of containers started because of the advantages they hold over monolithic architecture. Containerized applications are created by packaging only the essential libraries that an application requires to run correctly.
According to different runtime stacks, the containerization process differs.
For example, if you have worked on a Maven application, you would know that we use a file called pom.xml to get all the packages necessary for the application and put it into a single “JAR” or “WAR” file.
Using this JAR file with an appropriate command should be enough to deploy our application to production.
That's what the concept of containerization does.
It takes only the necessary libraries/packages and containerizes them on a lighter OS, which consumes comparatively less memory, CPU, and other resources.
To demonstrate the containerization process for a Maven application, consider a pom.xml file that connects to all the packages needed for the application.
Now, build the project from this pom.xml (for example, with mvn package) to create your JAR file.
Once it is created, we are all ready for the containerization process.
Create a Dockerfile at the root of your project where you have pom.xml and your target folder and add the following instructions to it.
- Decide on a base image: This step is all about the kind of container base that will act as an OS for your application. Consider a lighter OS to take the best advantage of containerization.
- Provide your instructions: In our example, we will copy our JAR file from the target folder onto the base image chosen.
- Provide the startup command: The last step is to provide the command in the Dockerfile that gets executed once the container starts running.
An example Dockerfile is shown below:
FROM openjdk:17
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} yourapp.jar
ENTRYPOINT ["java","-jar","/yourapp.jar"]
This is your container image; you can now use it to run your application anywhere and push it to any registry.
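To build and sanity-check the image locally before pushing (the tag yourapp is arbitrary):

# Build the image from the Dockerfile in the current directory
docker build -t yourapp:latest .

# Run it once locally to confirm the ENTRYPOINT works
docker run --rm yourapp:latest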
Also Read: How to Build Docker Image?
Common Debugging Methods of Container Errors in Kubernetes
Let's look at some debugging methods that can come in handy while working with containers.
1. Check the Events of K8s Pods
This will display all the events that occurred and show all their corresponding messages.
The command to get the event details of a K8s Pod is:
kubectl describe pod <pod-name>
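To see recent events across the whole namespace in chronological order:

# Sorted by creation time, so the latest events appear last
kubectl get events --sort-by=.metadata.creationTimestamp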
Also Read: Everything You Need to Know to Work with Kubernetes Namespaces
2. Check Kubelet Logs
You won't be able to check the application's (pod) logs, as the pod won't be in a running state.
So, you need to check the logs of the kubelet, which is responsible for taking care of pods. You will get additional information using the command:
journalctl -u kubelet
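You can narrow the output to recent entries and follow new ones as they arrive:

# Only the last 10 minutes of kubelet logs, then keep following
journalctl -u kubelet --since "10 minutes ago" -f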
3. Check the Manifest File
Container configuration errors occur when something in the configuration items is wrong. So, check whether the commands, arguments, and images are specified correctly.
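kubectl can also print the documented schema for any part of a manifest, which helps confirm that the keys you used are valid:

# Lists the valid fields for a container spec
kubectl explain pod.spec.containers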
Also Read: A Complete List of Kubernetes Commands
4. Check the Availability of Resources
Check if there are enough resources in the cluster environment and if you've allocated a good amount of resources.
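For a quick look at current consumption per node (kubectl top requires the metrics-server add-on to be installed in the cluster):

# Per-node CPU and memory usage
kubectl top nodes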
5. Check RBAC Rules
When cluster components are unable to access each other, errors related to networking or access control can occur.
So, ensure that all components that need to talk to each other are on connected networks and that all access controls are configured correctly.
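You can verify whether a given service account holds a permission without trial and error (the service account name here is hypothetical):

# Returns "yes" or "no" for the queried permission
kubectl auth can-i get configmaps --as=system:serviceaccount:default:my-app-sa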
6. Use Debugging Tools
You can modify the command section of your file to use debugging tools like **sleep** to keep the container running while you log in to the container and find out what is present within. This allows for easier debugging.
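Here is a minimal sketch of this trick: override the command so the container just sleeps, then open a shell inside it.

apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  containers:
  - name: my-container
    image: nginx:latest
    # Keep the container alive for an hour so we can exec into it
    command: ["sleep", "3600"]

Once the pod is running, log in and inspect it:

kubectl exec -it debug-pod -- sh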