In today’s fast-paced software development world, consistency and efficiency in application deployment are crucial. Docker has revolutionized the way developers package, ship, and run applications. At the heart of Docker’s functionality lies the Dockerfile, a simple text file that automates the creation of Docker images.
In this blog, we will dive deep into Dockerfiles, understand their structure, commands, best practices, and advanced use cases.
What is a Dockerfile?
A Dockerfile is a plain text file containing a set of instructions used to build a Docker image. Each instruction defines a step in creating the image, such as selecting a base image (FROM), copying files (COPY), installing packages (RUN), or setting environment variables (ENV).
When you execute the docker build command, the lines in your Dockerfile are processed sequentially to assemble your image.
The basic Dockerfile syntax looks as follows:
# Comment
INSTRUCTION arguments
Instructions are keywords that apply a specific action to your image, such as copying a file from your working directory or executing a command within the image.
By convention, instructions are usually written in uppercase. This is not a requirement of the Dockerfile parser, but it makes it easier to see which lines contain a new instruction. You can spread arguments across multiple lines with the backslash operator:
RUN apt-get install \
curl \
wget
Lines starting with a # character are interpreted as comments. They’ll be ignored by the parser, so you can use them to document your Dockerfile.
Key Dockerfile instructions
Docker supports over 15 different Dockerfile instructions for adding content to your image and setting configuration parameters. Each plays a distinct role in shaping the container’s behavior, layering, and configuration.
- FROM: Sets the base image (e.g., FROM ubuntu:20.04)
- RUN: Executes shell commands during build time
- CMD: Provides the default command to run at container start
- ENTRYPOINT: Defines a fixed command, often combined with CMD for args
- COPY/ADD: Copies files into the image (ADD supports URLs and archives)
- ENV: Sets environment variables
- EXPOSE: Documents which ports the container will listen on
- WORKDIR: Sets the working directory for following instructions
- VOLUME: Declares mount points for persistent or shared data
- ARG: Defines build-time variables, usable during RUN
Here are some of the most common ones you’ll use, explained in more detail:
1. FROM
The FROM instruction usually appears as the first line in your Dockerfile. It specifies an existing image that serves as the base for your build. All following instructions build on the filesystem of the referenced image.
FROM ubuntu:20.04
2. COPY
The COPY instruction adds files and folders to your image’s filesystem. It copies files from your Docker host into the image. Containers created from the image include all copied files.
COPY main.js /app/main.js
The first argument specifies the source path on your host. The second argument defines the destination path inside the image. You can also copy files from another Docker image using the --from flag:
# Copies the path /usr/bin/composer from the composer:2 image
COPY --from=composer:2 /usr/bin/composer composer
3. ADD
The ADD instruction works like COPY but adds two extra behaviors: it can fetch remote file URLs, and it automatically extracts local tar archives into the destination path inside the container. Supported compression formats include gzip, bzip2, and xz. Note that files downloaded from URLs are not extracted.
ADD http://example.com/archive.tar /archive-content
Although ADD can simplify some tasks, using COPY is generally better. ADD’s automatic extraction of local archives can produce unexpected results when you only want to copy files.
4. RUN
The RUN instruction executes a command inside the image while building it. Each RUN creates a new image layer with the filesystem changes applied. You commonly use RUN to install and configure packages.
RUN apt-get update && apt-get install -y nodejs
5. ENV
Use the ENV instruction to set environment variables inside your containers. Specify the variable name and its value, separated by an equals sign.
ENV PATH=$PATH:/app/bin
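Variables set with ENV become defaults for every container created from the image, and they can be overridden at runtime. As a quick sketch (NODE_ENV is just an example name, not something the image above requires):
ENV NODE_ENV=production
At run time, docker run -e NODE_ENV=development demo-image:latest would override the default for that container.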
6. WORKDIR
The WORKDIR instruction sets the working directory for all subsequent instructions in the Dockerfile. It avoids repeatedly using absolute paths, improves readability, and reduces errors. If the directory does not exist, Docker creates it automatically.
WORKDIR /app
COPY main.js .
RUN node main.js
This sets /app as the working directory so that the COPY and RUN instructions operate relative to it. Using WORKDIR is recommended over hardcoding full paths in each command, for maintainability and clarity.
Example: Creating and using a Dockerfile
Ready to create a Docker image for your application? Here’s how to get started with a simple Dockerfile.
How to create a Dockerfile
First, create a new directory for your project. Copy the following code and save it as main.js:
const {v4: uuid} = require("uuid");
console.log(`Hello World`);
console.log(`Your ID is ${uuid()}`);
Use npm to add the uuid package to your project:
$ npm install uuid
Next, copy the following set of Docker instructions, then save them as Dockerfile in your working directory:
FROM node:16
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY main.js .
ENTRYPOINT ["node"]
CMD ["main.js"]Let’s unpack this Dockerfile line-by-line:
- FROM node:16 – The official Node.js image is selected as the base image. All the other statements apply on top of node:16.
- WORKDIR /app – The working directory is changed to /app. Subsequent statements which use relative paths, such as the COPY instructions immediately afterwards, will be resolved to /app inside the container.
- COPY package.json . – The next two lines copy in your package.json and package-lock.json files from your host’s working directory. The destination path is ., meaning they’re deposited into the in-container working directory with their original names.
- RUN npm install – This instruction executes npm install inside the container’s filesystem, fetching your project’s dependencies.
- COPY main.js . – Your app’s source code is added to the container. It happens after the RUN instruction because your code will usually change more frequently than your dependencies. This order of operations allows more optimal usage of Docker’s build cache.
- ENTRYPOINT ["node"] – The image’s entrypoint is set so the node binary is launched automatically when you create new containers with your image.
- CMD ["main.js"] – This instruction supplies arguments for the image’s entrypoint. In this example, it results in Node.js running your application’s code.
How to build a Dockerfile
Now you can use docker build to build an image from your Dockerfile. Run the following command in your terminal:
$ docker build -t demo-image:latest .
Wait while Docker builds your image. The sequence of instructions will be displayed in your terminal.
Docker build options
The docker build command takes the path to the build context as an argument. The build context defines which paths you can reference within your Dockerfile. Paths outside the build context will be invisible to Dockerfile instructions such as COPY. It’s most common to set the build context to ., as in this example, to refer to your working directory.
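For example, you could use a subdirectory as the build context instead (the ./app directory here is hypothetical):
$ docker build -t demo-image:latest ./app
Docker would then look for a Dockerfile inside ./app and resolve COPY and ADD paths relative to that directory.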
Docker automatically looks for instructions in your working directory’s Dockerfile, but you can reference a different Dockerfile with the -f flag:
$ docker build -f dockerfiles/app.dockerfile -t demo-image:latest .
The -t flag sets the tag which will be assigned to your image after the build completes. If you need to add multiple tags, you can repeat the flag:
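$ docker build -t demo-image:latest -t demo-image:1.0 .
Both tags will point at the same image ID after the build (demo-image:1.0 is an example tag).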
How to use a Dockerfile
With your image built, start a container to see your code execute:
$ docker run demo-image:latest
Hello World
Your ID is 606c3a30-e408-4c77-b631-a504559e14a5
The image’s filesystem has been populated with the Node.js runtime, your npm dependencies, and your source code, based on the instructions in your Dockerfile.
Dockerfile best practices
Writing a Dockerfile for your application is usually relatively simple, but there are some common gotchas to watch out for. Here are 10 best practices to maximize usability, performance, and security.
1. Don’t use latest for your base images
Using an image such as node:latest in your FROM instructions is risky because it can expose you to unexpected breaking changes. Most image authors repoint latest to each new major version as soon as it’s released. Rebuilding your image could silently select a different version, causing a broken build or malfunctioning container software.
Selecting a specific tag such as node:16 is safer because it’s more predictable. Only use latest when there’s no alternative available.
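As a sketch of the options, from loosest to tightest pinning (the minor-version tag assumes one is published for your base image, and the digest is a placeholder to replace with a real value):
# predictable major version
FROM node:16
# tighter pin to a specific minor release
FROM node:16.20
# exact, immutable pin by digest
FROM node@sha256:<digest>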
2. Only use trusted base images
Similarly, it’s important to choose trusted base images to protect yourself from backdoors and security issues. Your image includes the content of the image referenced by your FROM instruction; compromised base images could contain malware that runs inside your containers.
Where possible, try to use Docker Hub images that are marked as official or submitted by a verified publisher.
3. Use HEALTHCHECK to enable container health checks
Health checks tell Docker and administrators when your containers enter a failed state. Orchestrators such as Docker Swarm and Kubernetes can use this information to restart problematic containers automatically.
Enable health checks for your containers by adding a HEALTHCHECK instruction to your Dockerfile. It sets a command Docker will run inside the container to check whether it’s still healthy:
HEALTHCHECK --timeout=3s CMD curl -f http://localhost || exit 1
The healthiness of your containers is displayed when you run the docker ps command to list them:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
335889ed4698 demo-image:latest "node main.js" 2 hours ago Up 2 hours (healthy)
4. Set your ENTRYPOINT and CMD correctly
ENTRYPOINT and CMD are closely related instructions. ENTRYPOINT sets the process to run when a container starts, while CMD provides default arguments for that process. You can easily override CMD by setting a custom argument when you start containers with docker run.
In the example Dockerfile created above, ENTRYPOINT ["node"] and CMD ["main.js"] result in node main.js executing when the container is started with docker run demo-image:latest.
If you ran docker run demo-image:latest app.js, then Docker would call node app.js instead.
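Expressed as commands (app.js is hypothetical and would need to exist in the image; the last line shows that the entrypoint itself can also be replaced):
$ docker run demo-image:latest                    # runs node main.js
$ docker run demo-image:latest app.js             # runs node app.js
$ docker run --entrypoint bash demo-image:latest  # replaces the entrypoint entirely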
Read more about the differences between Docker ENTRYPOINT and CMD.
5. Don’t hardcode secrets into images
Dockerfiles should not contain any hardcoded secrets such as passwords and API keys. Values set in your Dockerfile apply to all containers using the image. Anyone with access to the image can inspect your secrets.
Set environment variables when individual containers start instead of providing defaults in your Dockerfile. This prevents accidental security breaches.
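For example, supply secrets when the container starts (API_KEY is a placeholder name):
$ docker run -e API_KEY="$API_KEY" demo-image:latest
For larger sets of variables, the --env-file flag reads them from a file that you keep out of the image and out of version control.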
6. Label your images for better organization
Teams with many different images often struggle to organize them all. You can set arbitrary metadata on your images using the Dockerfile LABEL instruction. This provides a convenient way to attach relevant information that’s specific to your project or application. By convention, labels are commonly set using reverse DNS syntax:
LABEL com.example.project=api
LABEL com.example.team=backend
Container management tools usually display image labels and let you filter to different values.
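For example, the Docker CLI can filter your image list by label:
$ docker images --filter "label=com.example.team=backend"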
7. Set a non-root user for your images
Docker defaults to running container processes as root. This is problematic because root in the container is the same as root on your host. A malicious process that escapes the container’s isolation could run arbitrary commands on your Docker host.
You can mitigate this risk by including the USER instruction in your Dockerfile. This sets the user and group that your container will run as. It’s good practice to assign a non-root user in all of your Dockerfiles:
# set the user
USER demo-app
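# note: the user must already exist in the image; on a Debian-based base
# image, a sketch for creating it first (names and IDs are examples) is:
# RUN groupadd --gid 1000 demo-group && useradd --uid 1000 --gid demo-group demo-app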
# set the user with a UID
USER 1000
# set the user and group
USER demo-app:demo-group
8. Use .dockerignore to prevent long build times
The build context is the set of paths that the docker build command has access to. Images are often built using your working directory as the build context via docker build ., but this can cause redundant files and directories to be included.
Paths that aren’t used by your Dockerfile or that will be recreated inside the container by other instructions should be removed from the build context to improve performance. This will save time when Docker copies the build context at the start of the build process.
Add a .dockerignore file to your working directory to exclude specific files and directories. The syntax is similar to .gitignore:
.env
.local-settings
node_modules/
9. Keep your images small
Docker images can become excessively large. This slows down build times and increases transfer costs when you move your images between registries.
Try to reduce the size of your images by installing only the minimum set of packages required for your software to function. It also helps to use compact base images when possible, such as Alpine Linux (around 5 MB), instead of larger distributions like Ubuntu (around 28 MB).
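One common way to achieve this is a multi-stage build: install and compile in a full-featured image, then copy only the results into a slim runtime image. Here’s a minimal sketch, assuming a project whose npm run build step outputs to dist/:
FROM node:16 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

FROM node:16-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/main.js"]
The final image is based on node:16-alpine and never contains your build tooling or source tree.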
10. Lint your Dockerfile and scan images for vulnerabilities
Dockerfiles can contain errors that break your build, cause unexpected behavior, or violate best practices. Use a linter such as Hadolint to check your Dockerfile for problems before you build.
Hadolint is easily run using its own Docker image:
$ docker run --rm -i hadolint/hadolint < Dockerfile
The results will be displayed in your terminal.
You should also scan built images for vulnerabilities. Container scanners such as Trivy can detect outdated packages and known CVEs inside your image’s filesystem. Running a scan before you deploy helps prevent exploitable containers from reaching production environments.
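For example, Trivy publishes its own Docker image; scanning a locally built image this way requires mounting the Docker socket:
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image demo-image:latest
Any known vulnerabilities in the image’s packages are listed in your terminal.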
Conclusion
The Dockerfile is the foundation of efficient containerization. It defines every step needed to build and configure Docker images, ensuring consistency and automation across development and production environments.
By mastering Dockerfile instructions — from FROM and RUN to WORKDIR and ENTRYPOINT — you can create optimized, secure, and reproducible containers tailored to your applications. Following best practices like using trusted base images, minimizing image size, adding health checks, and leveraging .dockerignore ensures your Docker builds are both fast and reliable.
In short, understanding how to write and manage Dockerfiles effectively is one of the most valuable skills for any developer or DevOps engineer working in modern cloud-native environments. Start small, experiment with simple Dockerfiles, and gradually refine your builds into production-grade, high-performance container images.


