In this tutorial on how to build a Docker image, we will cover everything from installing Docker and writing your first Dockerfile to running the docker build command, and much more.
Excited about Docker and containerization? You're in the right place!
In this easy-to-follow guide, we'll take you from zero to your first Docker image. Let's get started, shall we?
Docker Installation
Before we dive into this, let's make sure you've got Docker installed on your system, okay?
Depending on your OS, go to this link and install the appropriate version of Docker Engine.
Docker allows you to create, deploy, and run containers with ease. After you have installed Docker, check if Docker is installed successfully by running:
docker --version
If at any moment you feel stuck, run docker --help to get more information about all the types of commands and their usages.
Revisiting Docker
Docker, in a nutshell, is a free and open-source platform that lets you create, deploy, and run apps within small, sandboxed environments called containers.
Also Read: Docker Commands Cheat Sheet
Revisiting Containers
Containers are like little virtual machines, but they're more efficient: they share the operating system kernel of the host they run on, which makes them faster and more lightweight.
Here's a quick comparison of containerized applications versus applications running in virtual machines.
Now, let's break it down with an example to make it crystal clear. Imagine you're building a web application using Node.js, and it requires specific dependencies, libraries, and configurations to run correctly.
Traditionally, you would install these dependencies directly on your development machine, which can lead to conflicts with other projects or with the system itself when you try to run the same application on a friend's machine or on any other system.
With Docker, you can wrap up your Node.js app, all its dependencies, and settings into one neat and tidy container.
This container bundles up everything your app needs to run smoothly, independent of the underlying environment.
The beauty of Docker is that you can share this image with your team or deploy it to any other environment that has Docker installed, and it will run consistently without any dependency issues.
If Docker is not installed, or you run into some errors, follow this official troubleshooting guide.
Building Your First Docker Image
Alright, let's jump right in! We'll start by creating a simple Docker image that runs a basic Node.js web application.
To get started, create a new folder for your project. Then, go into that folder.
mkdir my-docker-project
cd my-docker-project
Also Read: Docker Image vs Containers
What is a Dockerfile and How to Write One?
Basically, a Dockerfile is like a blueprint written in a text file that's used by Docker. It's like a recipe that tells Docker how to build a Docker image, which you then run as a container. Think of a Docker container as a handy, self-contained package that has everything it needs to run an application.
The Dockerfile contains:
- A series of steps describing how to build the container image.
- Instructions that correspond to particular actions, like installing software, copying files, setting environment variables, and more.
- Docker then uses these instructions to create layers within the container image, which helps to optimize storage and speed up the build process.
Alright, so grab your favorite text editor and whip up a new file called Dockerfile (no file extension needed) in the project folder you just created.
The Dockerfile has a simple, easy-to-understand syntax. Each instruction represents a layer in the image.
Here's the basic structure of a Dockerfile:
# Use an existing base image
# FROM base_image:tag
# Use a Node.js base image
FROM node:14
# Set the working directory inside the container
WORKDIR /usr/src/app
# Copy package.json and package-lock.json to the container
COPY package*.json ./
# Install app dependencies
# If you are building your code for production
# RUN npm ci --only=production
RUN npm install
# Copy files from the host to the container's working directory
# COPY /path/to/source /path/in/container, or simply copy everything using COPY . .
COPY . .
# Expose the port your app listens on
# EXPOSE port_number
EXPOSE 80
# Define the command to run the application
# CMD ["command", "to", "start", "your", "app"]
CMD ["node", "app.js"]
Tip: use a .dockerignore file to prevent node_modules from being copied into your image.
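If you're following along with the Node.js example, a minimal .dockerignore (placed next to your Dockerfile) might look like this; adjust it to whatever your project actually contains:
node_modules
npm-debug.log
.git
.env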
You can use the sample node app and the Dockerfile here.
Also Read: Top 13 Docker Desktop & Docker Alternatives
Docker Build Command
Alright, now that we've got our Dockerfile set up, it's time to make the Docker image using the "docker build" command.
This command will go through all the steps in the Dockerfile and create the final image.
Open up your terminal or command prompt, go to the folder where the Dockerfile is, and type this command:
docker build -t my-docker-image .
If this looks complicated to you, let's break down the command:
- docker build: Initiates the build process.
- -t my-docker-image: Tags the image with the name "my-docker-image" (you can choose any name you prefer).
- . : Tells Docker where to find the Dockerfile and any other files it needs to build your image (the build context). This is usually the current directory where you're running the command.
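Two optional (and standard) docker build flags you may find handy; the file path below is just an example:
# Give the image an explicit version tag instead of the default "latest"
docker build -t my-docker-image:v1 .
# Use a Dockerfile that isn't named "Dockerfile" or doesn't sit in the build context root
docker build -f docker/Dockerfile.dev -t my-docker-image:dev .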
The build process will start, and you'll see each step being executed in the terminal. Once it's complete, you should have your Docker image ready!
To verify that your image has been successfully created, you can list all the Docker images on your system:
docker images
You can also run:
docker ps -a
These two commands are among the handiest, so it's worth knowing how they differ. Let's compare docker ps -a and docker images.
1. docker ps -a
If you want to see all the containers on your system, whether they're running or stopped, use this command. The -a flag tells Docker to show everything, including containers that have exited.
The columns in the output represent:
- CONTAINER ID: The unique identifier for each container.
- IMAGE: The image used to create the container.
- COMMAND: The command that the container runs when started.
- CREATED: The time when the container was created.
- STATUS: The current state of the container (e.g., running, exited).
- PORTS: The exposed ports on the container.
- NAMES: The human-readable name given to the container.
2. docker images
Want to know which Docker images you have on your system? This command shows you a list of all of them. These images are like blueprints for creating containers; when you start a container, it's created from one of these images.
The columns in the output represent:
- REPOSITORY: The name of the image's repository.
- TAG: The specific version or tag of the image.
- IMAGE ID: The unique identifier for the image.
- CREATED: The date the image was created.
- SIZE: The size of the image.
You should see "my-docker-image" listed among the images.
Next, let's run a container based on our newly built image:
docker run -d -p 8080:80 my-docker-image
In this docker run command:
- -d: Runs the container in the background (detached mode), so you don't have to keep an eye on it.
- -p 8080:80: Maps port 8080 on the host (your machine) to port 80 inside the container.
- my-docker-image: The name of the Docker image to use.
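To quickly confirm that the container is up and the app is responding, you can run a few standard commands (replace <container-id> with the ID shown by docker ps):
# List running containers; a container based on my-docker-image should appear here
docker ps
# Follow the container's logs
docker logs <container-id>
# Hit the app through the mapped port (optional; a browser works too)
curl http://localhost:8080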
So, now you can check out your running Node.js app. Just open a web browser and go to this address: http://localhost:8080.
When you get there, you'll see this message: "["Sunflower","Rose","Lily","Marigold","Orchids"]".
Well, congrats! You just built an image of your Node.js application, and it's running successfully in a containerized environment.
Optimize Your Docker Image
As you start building more complex applications, the size of your Docker images can quickly become a concern.
Even a simple Node.js app image like the one above can weigh in at around 883.82 MB. Bloated images not only consume more disk space but also take longer to transfer and deploy.
Here are a few tips to optimize your Docker image.
1. Use a Minimal Base Image
Pick a base image that contains only the libraries and tools your app needs to run.
For example, instead of using a generic Linux distribution image, opt for a slimmer image like alpine, which is known for its small size.
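For the Node.js example from earlier, that can be as simple as switching to the Alpine variant of the official node image:
# Alpine-based Node.js 14 image, much smaller than the default Debian-based one
FROM node:14-alpine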
2. Multi-Stage Builds
If your application requires build tools, using multi-stage builds will help.
This involves using separate build and runtime images, where the build image includes all the tools needed to compile your code, and the final runtime image contains only the compiled artifacts.
This approach helps to keep the final image small and lean.
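Here's a rough sketch of what a multi-stage Dockerfile could look like for a Node.js app with a build step. The npm run build script and the dist folder are assumptions; adapt them to your project:
# Stage 1: build the app with all dev dependencies and build tools
FROM node:14 AS builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Stage 2: lean runtime image with only what's needed to run the app
FROM node:14-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --only=production
# Copy only the compiled artifacts from the builder stage
COPY --from=builder /usr/src/app/dist ./dist
EXPOSE 80
CMD ["node", "dist/app.js"]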
Also Read: Docker Swarm vs Kubernetes
3. Minimize Layers
Minimizing layers reduces the number of intermediate container layers created during the build process, leading to smaller and more efficient Docker images.
Each RUN command in a Dockerfile creates a new layer in the container image. Intermediate layers can contain temporary files and other artifacts, which increases the image size.
Here's an example of consolidating multiple RUN commands into one:
RUN apt-get update
RUN apt-get install -y python3
RUN apt-get install -y curl
RUN curl -o app.tar.gz https://example.com/app.tar.gz
RUN tar -xzvf app.tar.gz
RUN rm app.tar.gz
After consolidation:
RUN apt-get update \
&& apt-get install -y python3 curl \
&& curl -o app.tar.gz https://example.com/app.tar.gz \
&& tar -xzvf app.tar.gz \
&& rm app.tar.gz
4. Utilize Docker-Slim
Docker-Slim is a tool that shrinks your image by launching a temporary container of your image and identifying the files that are actually used by your application in that container (through static + dynamic analysis).
Then, docker-slim creates a fresh single-layer image with just the files and directories that were actually used.
Docker-Slim is often the most effective strategy of those mentioned here, but it is not without restrictions!
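Usage is typically a single command against an image you have already built (check the docker-slim docs for your installed version, since the CLI has been renamed to slim in newer releases):
# Analyze my-docker-image and produce a slimmed copy (typically tagged my-docker-image.slim)
docker-slim build my-docker-image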
Also Read: Container Runtimes to Choose From
Host Your Image
Now that you have built your Docker image, you might want to share it with others or deploy it on different machines. There are several ways to host your Docker image.
1. Docker Hub Registry
Docker Hub is a public registry that allows you to store and share Docker images with the community. To push your image to Docker Hub, follow these steps:
- Log in to Docker Hub using the docker login command.
- Tag your image with your Docker Hub username and repository name:
docker tag my-docker-image your-docker-hub-username/my-repo-name:tag
- Push the image to Docker Hub, using this command:
docker push your-docker-hub-username/my-repo-name:tag
Now, your Docker image is available on Docker Hub for others to use and pull.
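Anyone with access to the repository can then pull the image with:
docker pull your-docker-hub-username/my-repo-name:tag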
2. Local Registry Server
If you want to host your Docker images privately within your organization or on your local network, you can set up a local Docker registry server.
The registry server acts as a central repository for your images.
To create a local registry, you can use the official Docker Registry image:
docker run -d -p 5000:5000 --name local-registry registry:latest
After setting up the local registry, you can tag and push your image to it similarly to how you did for Docker Hub:
docker tag my-docker-image localhost:5000/my-repo-name:tag
docker push localhost:5000/my-repo-name:tag
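Other machines can then pull from your registry by replacing localhost with the registry host's address (note that Docker expects TLS for remote registries, so a plain-HTTP registry has to be whitelisted as an insecure registry on each client):
docker pull <registry-host>:5000/my-repo-name:tag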
3. Save/Load Images
Another way to share your Docker image is by saving it as a tarball and then loading it on another machine.
To save your image, run the following command:
docker save -o my-docker-image.tar my-docker-image
And to load the image on another machine:
docker load -i my-docker-image.tar
Also Read: Docker Cleanup Tutorial
Dockerfile Best Practices
Writing an efficient and secure Dockerfile is crucial for building reliable Docker images.
Here are some Dockerfile best practices to keep in mind.
1. Use Official Base Images
Prefer using official base images from Docker Hub or other reputable sources. These images are regularly maintained, and vulnerabilities are promptly patched.
2. Leverage Layer Caching
Docker uses caching to speed up the build process. Place frequently changing instructions (e.g., copying source code) at the end of the Dockerfile to take advantage of caching for stable layers.
Make sure this reordering does not change the behavior of the Dockerfile itself.
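The Dockerfile from earlier already follows this pattern: the dependency manifests are copied and installed before the rest of the source code, so the npm install layer is rebuilt only when package*.json changes:
# These layers stay cached until package*.json changes
COPY package*.json ./
RUN npm install
# Source code changes often, so it is copied last
COPY . .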
3. Use .dockerignore
To make your Docker image smaller and faster to build, add a .dockerignore file to your project directory. This file tells Docker to skip copying certain files and folders into the build context when the image is being built.
4. Use Specific Tags
When pulling base images or dependencies, use specific version tags (e.g., nginx:1.19 instead of nginx:latest) to ensure consistency and avoid potential compatibility issues.
5. Avoid Running as Root
Whenever possible, run your application as a non-root user inside the container to reduce security risks. Use the USER instruction in your Dockerfile to specify a non-root user.
# Create a non-root user named "appuser"
RUN useradd -m appuser
# ...
# Change the ownership of the /app directory to "appuser"
RUN chown -R appuser:appuser /app
# Switch to the "appuser" for subsequent commands
USER appuser
Docker Build Troubleshooting - Some Common Errors & How to Fix Them
During the Docker build process, you might encounter errors that prevent the successful creation of your Docker image.
Let's cover some common issues you might face while executing Docker build and how to resolve them.
1. Cannot Find File Error
This error typically occurs when the COPY or ADD instruction references a file or directory that doesn't exist in the build context. Double-check the file paths and ensure they exist in the correct location relative to the build context.
2. Build Dependencies Missing
If your application requires specific build tools, make sure to include them in your Dockerfile, so they're available during the build process.
Remember to clean up unnecessary build tools after they have been used.
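One common pattern (sketched here for a Debian-based image; the build-essential package and the make step are just examples) is to install the tools, use them, and remove them within a single RUN instruction so they never persist in a committed layer:
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential \
    && make \
    && apt-get purge -y build-essential \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*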
3. Permission Errors
Permission errors might occur if your application attempts to write to a directory that has restricted access within the container. Make sure the directory has the required permissions, for example by adjusting ownership with chown in your Dockerfile or by running the process as a user that has the appropriate access.
4. Networking Issues
If your application requires internet access during the build process (e.g., fetching dependencies), ensure that your Docker daemon has access to the internet.
The Docker daemon is a crucial service that runs in the background on your host machine, facilitating the creation and management of containers. It is responsible for executing commands, managing images, and networking between containers.
Here are some key points to elaborate on the Docker daemon:
- Purpose: The primary purpose of the Docker daemon is to provide an interface between the Docker client and the underlying operating system. It listens for commands from the Docker client and executes them accordingly.
- Components: The Docker daemon comprises several components, each with a specific function:
  - Container Runtime: This component is responsible for creating, starting, stopping, and removing containers. It interacts with the kernel's cgroups and namespaces to provide resource isolation and security for containers.
  - Image Management: The daemon manages Docker images, including pulling images from registries, storing them locally, and creating new images.
  - Networking: The Docker daemon sets up a virtual network for containers, allowing them to communicate with each other and with the outside world. It handles port mappings and DNS resolution.
  - Security: The daemon implements various security features, such as user namespaces and SELinux, to ensure that containers are isolated from each other and the host machine.
- Communication: The Docker daemon communicates with the Docker client through a RESTful API. The client sends commands to the daemon using HTTP requests, and the daemon responds with the results.
- Importance: The Docker daemon is essential for running Docker containers. Without it, you wouldn't be able to create, manage, or interact with containers.
- Usage: To start the Docker daemon, you can run sudo service docker start (or sudo systemctl start docker on systemd-based systems). Once the daemon is running, you can use the Docker client to interact with it.
- Troubleshooting: If you encounter any issues with the Docker daemon, check the daemon's logs for more information.
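Where exactly the logs live depends on your OS and how Docker was installed; on a systemd-based Linux host, for example, you can usually view them with:
sudo journalctl -u docker.service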
Also Read: Kubernetes DaemonSet Tutorial
5. Incorrect Dockerfile Syntax
Review your Dockerfile carefully for any syntax errors or misplaced instructions. Double-check the order of instructions and ensure each line adheres to the correct syntax.
If you encounter an error, don't worry! Docker's error messages are usually helpful in diagnosing the issue. Troubleshoot the error, adjust your Dockerfile if necessary, and try building the image again.
Also Read: Differences between Docker and Podman
Conclusion
Congratulations! If you've made it this far, you've learned a great deal about Docker.
You've now learned how to build a Docker image step-by-step, write an optimized Dockerfile, host your image on Docker Hub or a local registry, and troubleshoot common build errors.
Next steps? Head over to the Docker documentation to see all the other things you can do. A good next topic would be writing a Docker Compose file or putting an Nginx server in front of your app in a container. That's all for today! Keep learning.
Happy containerizing! 🐳