Docker is one of the most widely known and used containerization platforms today. Docker enables software developers to build, run, update, and ship their applications with containers.
For years, developers have struggled to package their applications on rigid infrastructure, with services that are difficult to scale and inefficient to run. In addition, it's been hard to run software consistently across different platforms.
Docker offers a plethora of solutions to these problems. Every software developer or engineer looking to join a company that runs containerized workloads with Docker as part of the build stack needs to be familiar with containerization and Docker.
In the interview process, you'll likely be asked questions like "What is a Docker container's lifecycle?" and "What are Docker object labels?".
Being able to answer these questions clearly and articulately can be the difference between receiving a job offer and not.
This article discusses some of the most popular Docker interview questions and answers. These questions cover topics that every developer and engineer looking to manage or orchestrate containerized workloads should know.
The following are some of the most commonly asked Docker interview questions:
A container's lifecycle is the series of states a container goes through from creation to deletion. It includes five main phases: created, started (running), paused/unpaused, stopped, and deleted. Depending on the use case, some of these phases can be skipped. The commands for each phase are shown below.
# Create
docker create --name <container-name> <docker-image-name>
Alternatively, a container can be created and automatically started from a preexisting Docker image, stored locally or in a Docker registry, with the docker run command. Starting a Docker container from an image available locally or pulled remotely from a Docker registry can be done with the following command:
# Start (create and run)
docker run <image-name>
# OR
docker run <image-id>
# OR
docker run <repository-name>/<image-name>:<tag>
# Pause
docker pause <container-name>
# OR
docker pause <container-id>
# Unpause
docker unpause <container-name>
# OR
docker unpause <container-id>
# Stop
docker stop <container-name>
# OR
docker stop <container-id>
# Delete
docker rm <container-name or container-id>
# Delete forcefully
docker rm -f <container-name or container-id>
For more information on the different Docker commands used during each phase, you can review the Docker Reference documentation on Command-line interfaces (CLIs).
Virtualization is the isolation of a computer's physical hardware resources, simulating them as separate computing units (virtual machines). Each unit is abstracted from the physical hardware, and the abstraction happens at the hardware level.
Similarly, containerization is the isolation of a computer's operating system (OS), simulating it as a separate operating environment. Each container is abstracted from the host operating system it runs on, and the abstraction happens at the OS level.
Docker object labels are used to attach specific information (such as the type and version of Docker image used) to certain Docker objects, including images, containers, volumes, networks, local daemons, swarm nodes, and swarm services.
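As a quick illustration, here's a sketch of how labels might be attached and inspected; the label keys and values below are examples only, not part of the original answer:
# Attach labels in a Dockerfile (example keys/values)
LABEL version="1.0" maintainer="team@example.com"
# Attach a label when starting a container
docker run -d --label env=staging nginx:alpine
# Inspect the labels on a container
docker inspect --format '{{ json .Config.Labels }}' <container-name or container-id>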
A Docker image is a template that contains the application and all the dependencies required to run that application on Docker. A Docker container, in contrast, is a logical entity; more simply, it's a running instance of a Docker image.
A Docker image is a file used to execute code in a Docker container. Docker images act as a set of instructions to build a Docker container, like a template. Docker images also act as the starting point when using Docker. An image is comparable to a snapshot in virtual machine (VM) environments.
Docker Hub is a remote-based image repository that is used by Docker users to create, develop, store, and distribute Docker images with various individuals or teams.
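For example, a typical workflow for sharing an image on Docker Hub might look like the following sketch; <dockerhub-username> and my-app are placeholders:
# Authenticate against Docker Hub
docker login
# Pull an existing public image
docker pull nginx:alpine
# Tag a locally built image with your Docker Hub namespace
docker tag my-app:1.0 <dockerhub-username>/my-app:1.0
# Push the tagged image so others can pull it
docker push <dockerhub-username>/my-app:1.0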
The Docker architecture consists of three main components: the Docker client, the Docker host (which runs the Docker daemon), and the Docker registry.
Docker uses a client-server architecture where the Docker client can perform various interactions with the Docker daemon contained in the Docker host. A Docker client can communicate with a Docker daemon running on the same machine as the client or another Docker daemon running on a remote Docker host.
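To make the client-server split concrete, the Docker client can be pointed at a remote daemon; the host name below is a placeholder, and remote access must already be configured on the Docker host:
# Talk to the local daemon (default)
docker ps
# Talk to a remote daemon over SSH (placeholder host)
DOCKER_HOST=ssh://user@remote-host docker ps
# Equivalent one-off form using the -H flag
docker -H ssh://user@remote-host ps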
A Dockerfile is a file containing instructions on how a Docker image should be built. With a Dockerfile, developers can specify the different code, dependencies, and tools to be attached and built into a Docker image. Docker images are read-only and can be considered as snapshots, as they contain specific code and dependencies for a specific time, making them consistent to work with.
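As a sketch, a minimal Dockerfile for a hypothetical Node.js service (the file names and port are assumptions) might look like this:
# Minimal Dockerfile for a hypothetical Node.js app
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install
# Copy the rest of the application code
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
The image would then be built with docker build -t my-app:1.0 . (the tag is a placeholder).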
Docker volumes are used to persist data that is generated by a running container so that it can be easily accessed and managed. On Linux, you can find stored Docker volumes in the /var/lib/docker/volumes directory.
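A minimal sketch of creating and using a volume (the volume name and mount path are placeholders):
# Create a named volume
docker volume create app-data
# List and inspect volumes
docker volume ls
docker volume inspect app-data
# Mount the volume into a container at /var/lib/data
docker run -d -v app-data:/var/lib/data nginx:alpine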
Following are some of the most commonly used Docker commands:
docker --version: displays the version of Docker running on the host machine.
docker ps: lists all running Docker containers.
docker ps -a: lists all running and exited Docker containers.
docker exec: accesses a running container. It's commonly used in interactive mode with the following command:
docker exec -it <container-name or container-id> bash
docker run: creates and starts Docker containers from preexisting Docker images. The common usage for this command looks like this:
docker run -it -d <docker-image>
docker rm: deletes stopped Docker containers and is commonly used like this:
docker rm <container-name or container-id>
# OR
# Delete container forcefully
docker rm -f <container-name or container-id>
The docker create command can be used to create Docker containers from a Docker image. With the docker create command, the Docker container is created but not started. To start a created Docker container, simply use the docker start command:
# Create a container named nginx_base from the nginx:alpine image
docker create --name nginx_base -p 80:80 nginx:alpine
# Start the created container
docker start nginx_base
You can list all running containers with the docker ps command.
A Docker container can be started with the docker start <container-name or container-id> command and stopped with the docker stop <container-name or container-id> command. If you want to kill a container, you can use the docker kill <container-name or container-id> command.
A Docker container can be restarted automatically depending on its use case. Docker provides a list of restart policies that can be attached to the desired container. Some of these restart policies are as follows:
on-failure: This policy restarts a container only if it exits due to a failure (a non-zero exit code).
unless-stopped: This policy restarts a container automatically unless it is explicitly stopped; a manually stopped container stays stopped, even after the Docker daemon restarts.
always: This policy always restarts a container after it has stopped.
To attach a restart policy, you can run any of the following commands, depending on your needs:
# Run a new container in detached mode with a restart policy attached
docker run -d --restart <insert-policy-here> <image-name>
# Change the restart policy of an already running container
docker update --restart <insert-policy-here> <container-name or container-id>
# Change the restart policy of all running containers
docker update --restart <insert-policy-here> $(docker ps -q)
While both the COPY and ADD instructions serve similar purposes, COPY is preferred for standard Dockerfiles that only need to copy local files into the Docker image. COPY keeps the Dockerfile explicit when only a copy operation is required.
In contrast, the ADD instruction performs a similar operation to COPY but with more functionality. ADD is used when you need to download files from remote URLs or auto-extract tar archives into the destination directory of the Docker image.
According to the "Best practices for writing Dockerfiles", it's preferable to use curl or wget with the RUN instruction to download remote files and remove them when they're no longer needed. This is because using ADD can lead to unexpected files being added to the image's file system. It's best to use ADD only when absolutely necessary.
The COPY and ADD instructions can be used as shown here:
# Copy a local file or directory into the image
COPY <source-directory> <destination-directory>
# <source> can be a local file, a remote URL, or a local tar archive (auto-extracted)
ADD <source> <destination-directory>
# Preferred alternative to ADD for remote files: download and extract in a RUN instruction
COPY . /data
RUN curl -SL https://example.com/big.tar.xz \
    | tar -xJC /data
Docker provides some internal management commands for monitoring workloads, including the [docker stats](https://docs.docker.com/engine/reference/commandline/stats/) command, which displays a live stream of containers and their resource usage, and the [docker events](https://docs.docker.com/engine/reference/commandline/events/) command, which displays real-time events from the server.
Docker also provides integration with the open source monitoring system Prometheus. With this integration, Docker is set up as a target on Prometheus, and Prometheus is able to scrape metrics from the Docker daemon at regular intervals for further parsing and analysis.
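For example, the built-in commands can be used like this (the filter value is just an illustration):
# Live stream of CPU, memory, network, and I/O usage per container
docker stats
# Real-time daemon events, filtered to container start events
docker events --filter event=start
For the Prometheus integration, the Docker daemon exposes a metrics endpoint (configured via the metrics-addr setting in daemon.json) that Prometheus scrapes as a target; the exact configuration depends on your Docker version.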
A "state" in Docker refers to the condition of a container at any point in time. A container can be in any of these four states: run, paused, exit, or restart. If an application in Docker doesn’t read or store information about its state from one runtime to the next, it's considered stateless.
A stateful application retains the memory of its state each time it runs. When you should run a stateful or stateless application is largely dependent on the application's use case. For instance, applications that require persisting session data for specific operations will run as stateful applications. Applications like user interfaces that only allow GET operations can run as stateless applications since they don't involve persisting or querying existing data.
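As a rough illustration (the image names, volume name, and credentials below are examples, not part of the original answer), a stateful service typically mounts persistent storage while a stateless one does not:
# Stateful: a database container persisting its data in a named volume (placeholder password)
docker run -d --name db -e POSTGRES_PASSWORD=example -v pgdata:/var/lib/postgresql/data postgres:16
# Stateless: a web server that keeps nothing between runs
docker run -d --name web -p 8080:80 nginx:alpine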
The bridge network is the default network driver used in Docker. If you don't specify a driver when creating a network, Docker creates a bridge network.
New containers connect to the default bridge network unless the user specifies otherwise. User-defined bridge networks can also be created for containers; they're superior to the default bridge network because they provide better isolation.
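A quick sketch of creating and using a user-defined bridge network (the names are placeholders):
# Create a user-defined bridge network
docker network create my-bridge
# Attach a container to it at run time
docker run -d --name app --network my-bridge nginx:alpine
# Inspect the network and its connected containers
docker network inspect my-bridge
Containers on the same user-defined bridge can also reach each other by container name, which the default bridge doesn't provide.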
An ENTRYPOINT is an instruction used to set the executable that runs whenever the container starts. A Dockerfile should contain only one effective ENTRYPOINT; if multiple ENTRYPOINT instructions are listed, only the last one takes effect.
On the other hand, the CMD instruction provides the default command, or default arguments, that execute when the container is run.
The major difference between ENTRYPOINT and CMD is how they handle overrides: arguments passed to docker run replace CMD but not ENTRYPOINT (which can only be overridden explicitly with the --entrypoint flag). When both are used, CMD supplies default arguments to the ENTRYPOINT executable. If you pass arguments to docker run, the contents of CMD are overwritten while the ENTRYPOINT command still executes.
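The interaction is easiest to see with a small example; the ping-demo image tag is a placeholder:
# Dockerfile: ENTRYPOINT fixes the executable, CMD supplies a default argument
FROM alpine:3.19
ENTRYPOINT ["ping", "-c", "3"]
CMD ["localhost"]
# Build and run it
docker build -t ping-demo .
# Runs: ping -c 3 localhost (CMD used as the default argument)
docker run ping-demo
# Runs: ping -c 3 example.com (the argument overrides CMD, ENTRYPOINT still runs)
docker run ping-demo example.com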
There are three types of mounts that can be used in Docker: volumes, bind mounts, and tmpfs mounts.
Volumes are the most common mounts available for Docker containers and are fully managed by the Docker engine. Volumes can be created with Docker commands and can be shared among Docker containers. On Linux, a volume is stored within a directory on the host (/var/lib/docker/volumes/) that is managed by Docker and shouldn't be modified by non-Docker processes. Data in a volume isn't lost when the container is not running.
With bind mounts, a file or directory on the host machine is mounted into a Docker container by referencing its absolute path on the host. The container depends on the host machine's file system having that file or directory available at the specified path.
Tmpfs mounts are temporary mounts that hold data only while the container runs. Once the Docker container is stopped, the data held by the mount is lost. These mounts are stored in the host system's memory, not on its file system.
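The three mount types can be specified with the --mount flag; the source paths and names below are illustrative only:
# Volume mount, fully managed by Docker
docker run -d --mount type=volume,source=app-data,target=/data nginx:alpine
# Bind mount, referencing an absolute path on the host (assumed to exist)
docker run -d --mount type=bind,source=/home/user/config,target=/etc/app nginx:alpine
# tmpfs mount, held in the host's memory only
docker run -d --mount type=tmpfs,target=/cache nginx:alpine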
Data created in a container persists inside the container even after the container exits. The exception is deletion: if the container is deleted, its data is deleted with it.
You can back up a container's data, so it isn't lost when the container is deleted, by committing the container to an image and saving that image, which now serves as the backup, as a tar file.
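A minimal sketch of that backup flow (the container and file names are placeholders):
# Commit the container's current state to a new image
docker commit my-container my-container-backup:latest
# Save the backup image as a tar archive
docker save -o my-container-backup.tar my-container-backup:latest
# Later, restore the image from the archive
docker load -i my-container-backup.tar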
In this article, you learned about twenty commonly asked Docker interview questions. Before your interview, you should test out some of these commands and familiarize yourself with them. This will make your Docker knowledge more robust and in-depth.
If you're looking to take your knowledge of Docker to the next level, check out Exponent. Exponent is a learning platform that helps you prepare for tech interviews in product management, engineering, and more. Exponent can help you reach the next level of knowledge that you will need to land your dream job.