Dockerizing an application refers to the process of producing a Docker image for the application that contains everything the application needs to function correctly (dependencies, environment variables, etc.), and then using the image to create a Docker container for the application.
A Docker image is a blueprint for a Docker container. It's like a snapshot of your application and its environment that can be replicated across multiple machines. In this guide, you will learn how to containerize your Node.js applications with Docker.
Prerequisites
To follow through with this tutorial, you need to have a recent version of Node.js and npm installed on your computer. You also need to ensure that Docker is installed on your machine.
The following versions were used while testing this tutorial:
- Docker version 20.10.14, build a224086 (docker -v).
- Node.js v16.14.2.
- npm v8.6.0.
Step 1 — Downloading the demo project
To demonstrate the steps involved in deploying Node.js applications with Docker, we will utilize a simple Node.js app that presents a random Chuck Norris joke in your browser. Go ahead and clone it to your machine through the command below:
git clone https://github.com/betterstack-community/chucknorris
Afterward, cd into the chucknorris directory and download the application's dependencies through the commands below:
cd chucknorris
npm install
You can start the application through the following command and go to http://localhost:3000 in your browser to see it in action.
npm run dev
You have now set up a working Node.js application that is ready to be deployed using Docker. In the next steps, we'll take a look at how you can build a Docker image for this app and run it inside a Docker container.
Step 2 — Creating a Dockerfile
A Dockerfile is a text document that contains instructions for assembling a Docker image, and these instructions are executed in the order in which they are written. The format of this file is shown below:
# Comment
COMMAND arguments
Any line that begins with a # is a comment (except parser directives), while other lines must contain a specific command followed by its arguments. Although command names are not case-sensitive, they are often written in uppercase to distinguish them from arguments.
The first (non-comment) line in the Dockerfile must indicate the parent image that should be used as the foundation for our custom image. Subsequent commands are executed on this parent image, and each successive instruction adds a new layer to this image before the final image is built and its ID is printed to the console.
Go ahead and create a Dockerfile for your application and open it in your text editor using the command below:
nano Dockerfile
Paste the following contents into the file:
# Use Node 16 alpine as parent image
FROM node:16-alpine
# Change the working directory on the Docker image to /app
WORKDIR /app
# Copy package.json and package-lock.json to the /app directory
COPY package.json package-lock.json ./
# Install dependencies
RUN npm install
# Copy the rest of project files into this image
COPY . .
# Expose application port
EXPOSE 3000
# Start the application
CMD npm start
Here's an explanation of what each line in the file indicates:
# Use Node 16 alpine as parent image
FROM node:16-alpine
The first instruction in a Dockerfile involves selecting a base image, which here is the official Node.js Alpine Linux image for v16.x. If you take a look at the Dockerfile for this image, you'll notice that it does all of the work of setting up a Node.js environment for you so that you don't need to spend time on such details when creating a Docker image for your Node.js app. All subsequent instructions in this file will be committed on top of our chosen base image.
# Change the working directory on the Docker image to /app
WORKDIR /app
The WORKDIR command defines the working directory of a Docker image for any RUN, CMD, ENTRYPOINT, COPY, or ADD instructions that follow it in the file. This directory will be created if it doesn't exist already.
# Copy package.json and package-lock.json to the /app directory
COPY package.json package-lock.json ./
This COPY command copies the package.json and package-lock.json files from the project directory on your machine to the filesystem of the container in the current working directory, which is /app as indicated by the previous WORKDIR instruction.
# Install dependencies
RUN npm install
At this point, the npm install command will be executed from the /app directory in the Docker image filesystem. Since it contains the package.json and package-lock.json files, it will use the information in both files to download all the dependencies from the npm registry. Copying these two files before the rest of the project also takes advantage of Docker's layer caching: as long as they remain unchanged, subsequent builds can reuse the cached result of npm install instead of reinstalling everything.
# Copy the rest of project files into this image
COPY . .
After installing the project's dependencies, the COPY command is used once again to copy the rest of the project files to the /app directory on the Docker image filesystem.
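Since COPY . . copies everything in the build context, including a local node_modules folder if one exists, a common companion is a .dockerignore file in the project root. The sketch below lists typical entries; adjust them to your project:

```
node_modules
npm-debug.log
.git
```
With this file in place, npm install inside the image remains the sole source of dependencies, and the build context sent to Docker stays small.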
# Expose application port
EXPOSE 3000
Afterward, the EXPOSE command informs Docker that our application will listen on port 3000 at runtime. You can use the TCP or UDP protocol here (as in 3000/tcp or 3000/udp), although TCP is the default if the protocol is unspecified.
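As an illustration, the fragment below documents both forms; the UDP port 4000 is a hypothetical example, not something this app uses:

```dockerfile
# TCP is the default, so these two lines are equivalent
EXPOSE 3000
EXPOSE 3000/tcp

# A hypothetical UDP port would be declared like this
EXPOSE 4000/udp
```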
# Start the application
CMD npm start
Finally, the CMD instruction is used to start the application by running the start script specified in the package.json. This command is executed when the container based on this Docker image is launched.
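Note that CMD npm start uses the shell form, which wraps the command in a shell process. Dockerfiles also support an exec form that runs the process directly, which lets the Node.js process receive termination signals such as SIGTERM without a shell in between. A possible alternative sketch for this project:

```dockerfile
# Exec form: runs node directly so it receives termination signals
CMD ["node", "server.js"]
```
Either form works for this tutorial; the exec form is mostly relevant when you care about graceful shutdown behavior.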
At this point, we have specified all the necessary instructions for building a Docker image for our project. In the next step, we will execute the instructions to build the image for the first time.
Step 3 — Building the Docker image
The docker build command is used to build a Docker image from a Dockerfile. Run the command below from your project root to build the Docker image for our project:
docker build . -t chucknorris
. . .
Successfully built cd4bdd2ae572
Successfully tagged chucknorris:latest
The command above builds a Docker image using the Dockerfile in the current directory. The -t flag is used to set the tag name for the new image so that it may be referenced later as chucknorris:latest.
You can now run the docker images command to view all the Docker images on your machine, or pass an image repository and tag to only display info about a specific image:
docker images chucknorris:latest
REPOSITORY TAG IMAGE ID CREATED SIZE
chucknorris latest cd4bdd2ae572 7 minutes ago 135MB
Step 4 — Running your Docker image as a container
In the previous section, we created a Docker image that contains our Node.js project. We can now run that image in a Docker container and test if our application is running correctly. Make sure to kill any running instances of your application before executing the command below:
docker run -p 3000:3000 chucknorris
You should observe the following output:
> chucknorris@1.0.0 start
> node server.js
chucknorris server started on port: 3000
When you run a Docker image in a container, it creates a typical operating system process that has its filesystem, networking, and process tree separate from the host machine. Although we used the EXPOSE command in step 2 to indicate that the application running inside the container will listen on port 3000, this command does not make the container's port accessible from the host. It only ensures that another Docker container running on the same host can access the application running on the specified port.
To make a container's port available to the host machine, you need to publish it through the --publish or -p flag, which lets you map a container's port to a corresponding host port. For example, in the previous command, port 3000 on the host is mapped to port 3000 in the container so that all requests made to http://localhost:3000 are forwarded to the application listening on port 3000 in the Docker container.
Try it out by opening http://localhost:3000 in your browser. You should observe the Chuck Norris application working as usual.
You should also observe new log entries in the terminal instance where you executed the docker run command:
. . .
GET / 200 940 - 858.659 ms
GET /css/style.css 304 - - 1.365 ms
GET /javascript/script.js 304 - - 0.675 ms
GET / 200 951 - 825.021 ms
GET /css/style.css 304 - - 0.704 ms
GET /javascript/script.js 304 - - 0.516 ms
GET / 200 994 - 1247.833 ms
GET /css/style.css 304 - - 1.191 ms
GET /javascript/script.js 304 - - 0.660 ms
. . .
Note that you can bind any host port you want, so you don't have to use port 3000 on the host machine. For example, let's bind port 8080 on the host instead:
docker run -p 8080:3000 chucknorris
> chucknorris@1.0.0 start
> node server.js
chucknorris server started on port: 3000
At this point, you'll be able to access the application on port 8080 instead of port 3000 as before.
You can also bind your Docker container to multiple host ports by specifying multiple -p arguments:
docker run -p 3000:3000 -p 8080:3000 chucknorris
This will cause the Chuck Norris application to become accessible on both port 8080 and port 3000 on the host machine.
Step 5 — Running your Docker container in detached mode
You'll notice that when we used the docker run commands in the previous section, our terminal instance was connected to the Docker container. It's not ideal for our web server process to be tied to a specific terminal instance, so we will run it in the background using the --detach or -d flag.
Make sure to kill any running instance of your Docker container with Ctrl-C before executing the command below:
docker run -d -p 3000:3000 chucknorris
5441f8b2532b111e9dea8aeb55563c24302df96710c411c699a612e794e89ee4
Docker will launch your container as before, but instead of connecting your terminal instance to the container, it will print the container ID and return you to your terminal prompt. You can use this container ID to access details about this container in subsequent commands.
Right now, you can use the docker ps command to view all the running containers on your machine:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5441f8b2532b chucknorris "docker-entrypoint.s…" 10 minutes ago Up 10 minutes 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp boring_gould
The ps command presents some information about the running containers on your machine. You can see the container ID, the Docker image running inside the container, the command used to start the container, when it was created, its current status, the ports exposed by the container, and the container name. Docker assigns a random name to the container by default, but we can change this by using the --name flag.
Stop your running container first by using the name shown in the NAMES column:
docker stop boring_gould
boring_gould
Afterward, use the rm command to delete it:
docker rm boring_gould
boring_gould
You can now start it again and provide a --name argument this time around:
docker run -d -p 3000:3000 --name chucknorris-server chucknorris
When you run docker ps once more, you'll notice that the NAMES column reflects your argument to the --name flag. Henceforth, you can identify your running container through the chucknorris-server name.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0bd97af5b2e chucknorris "docker-entrypoint.s…" 3 seconds ago Up 2 seconds 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp chucknorris-server
Step 6 — Viewing your container logs
When you run your Docker container in detached mode, you're no longer able to view your application logs in the console since the process is being run in the background. Docker provides the logs command to view the logs of a running container, and you can use it to monitor your Chuck Norris server as shown below:
docker logs chucknorris-server
> chucknorris@1.0.0 start
> node server.js
chucknorris server started on port: 3000
GET / 200 981 - 1763.992 ms
GET /css/style.css 304 - - 2.767 ms
GET /javascript/script.js 304 - - 0.855 ms
You can also continuously monitor container logs as they're written by including the -f or --follow flag:
docker logs chucknorris-server -f
If you want to learn more about how logging works in Docker containers and some best practices for collecting and storing log entries emitted by the applications running in such containers, check out our article on logging in Docker for a more comprehensive discussion of the topic.
Step 7 — Sharing Docker images with others
After building a Docker image for your application, you might want to transfer it to a different machine or share it with a colleague so that they can easily run the application without having to build it all over again with docker build. This also ensures that everyone's machine is running the exact same software without any variations, which helps avoid the "it works on my machine" problem.
There are two major ways to share a Docker image. The first one involves utilizing container registries from Docker, GitLab, Google Cloud, RedHat, and others. You can also set up a private container registry for your organization to easily share Docker images with teammates.
Once you've set up the registry you'd like to use, you can use the docker push command to send the Docker image to the registry, and docker pull to retrieve the image from the registry. Let's try it out by pushing our chucknorris image to the official Docker registry. You need to sign up for a free account first.
Once you're done with the signup procedure, find the Repositories entry in the top navigation, and click the blue Create Repository button.
Give your repository a name (such as chucknorris), and choose the visibility of the repo. Note that free accounts are limited to just one private repository.
Afterward, click the blue Create button.
Once your repo is created, execute the command below to log into Docker Hub on your server. Enter the username and password combo that you used to set up your Docker Hub account.
docker login
Once the login succeeds, you can now push your Docker image to your repository.
Before you use the docker push command, ensure that your image tag matches your repository namespace (<your_docker_hub_username>/<your_docker_hub_repo>) because the push command expects its argument to match this format. You can create a new tag on an image using the command below:
docker tag chucknorris <your_docker_hub_username>/chucknorris
After executing the command above, you can run docker images to view the changes:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ayoisaiah/chucknorris latest cd4bdd2ae572 43 hours ago 135MB
chucknorris latest cd4bdd2ae572 43 hours ago 135MB
node 16-alpine 59b389513e8a 13 days ago 111MB
You are now ready to push your Docker image to the remote repository. Enter the command below to push the chucknorris image to your repo:
docker push <your_docker_hub_username>/chucknorris
Using default tag: latest
The push refers to repository [docker.io/ayoisaiah/chucknorris]
f4a6dd0924eb: Pushed
d00ab9f1c441: Pushed
01e349f65d42: Pushed
e38d70150f2d: Pushed
9c8958a02c6e: Mounted from library/node
b5a53db2b893: Mounted from library/node
cdb4a052fad7: Mounted from library/node
4fc242d58285: Mounted from library/node
latest: digest: sha256:79b31c0e3bd66d8b000bbe9740cf10e8e04f1598fbf878f763ba55713800f2fd size: 1995
Once the upload is done, you can download the image on a different machine using the docker pull command. You need to make sure you're logged in first if the image of interest is in a private repository.
docker pull <your_docker_hub_username>/chucknorris
Another way to share Docker images with others without using a registry is by exporting them to a .tar archive as shown below:
docker save chucknorris > chucknorris.tar
You should observe a new chucknorris.tar archive in your current working directory that contains everything needed to recreate the image. You can now transfer this archive to another machine through any method you wish and run docker load on the target machine to import the archive's contents and add it to your list of local images:
docker load < chucknorris.tar
Loaded image: chucknorris:latest
Using a Docker registry to share images is probably best for frequent use, but converting an image to a tar archive can come in handy for long-term storage or for a quick transfer between local machines.
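Since image archives can be fairly large, one optional tweak is to compress them in transit. A sketch, assuming gzip is available on both machines:

```
docker save chucknorris | gzip > chucknorris.tar.gz

gunzip -c chucknorris.tar.gz | docker load
```
The first command compresses the archive while exporting it, and the second decompresses and loads it in one step on the target machine.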
Step 8 — Configuring a Docker CI/CD pipeline with GitHub Actions
Building and testing Docker images can get tedious really quickly if you do it often, but setting up a CI/CD pipeline for automating this process can help. In this step, you'll set up a GitHub Actions workflow for building the Docker images and pushing them to Docker Hub so that they're ready to deploy at any time. This section assumes that you have some basic familiarity with GitHub Actions.
Start by returning to the Docker Hub website to create a Personal Access Token that will allow you to access your Docker Hub account resources. Head over to the security settings page, and click the New Access Token button.
Give your token a description (such as chucknorrisci), and set its permissions to Read, Write, Delete. Click the Generate button once you're done.
Your access token will be displayed in the resulting dialog. Make sure you copy it and store it in a safe place as it will be displayed only once.
Now, go to the GitHub repository for your project and go to Settings → Secrets → Actions.
Click the New repository secret button and enter your Docker Hub Access Token with its Name field set to DOCKER_HUB_ACCESS_TOKEN.
You can also add a secret for your Docker Hub username under the DOCKER_HUB_USERNAME name.
Afterward, your Actions secrets page should list both DOCKER_HUB_ACCESS_TOKEN and DOCKER_HUB_USERNAME.
You are now ready to set up the GitHub Actions workflow for your repository.
Return to the command line, and make sure you're in the root of your project.
Create the .github/workflows directory in your project using the command below:
mkdir -p .github/workflows
Create a docker.yml file in the .github/workflows directory and open it in your text editor:
nano .github/workflows/docker.yml
Paste the following contents into the file:
name: Create Docker image

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKER_HUB_USERNAME }}/chucknorris:latest
This file configures GitHub Actions to run the workflow defined in the jobs section on every push to the main branch of the repository. The job runs on the latest available Ubuntu instance and performs the following steps:
- Check out the repository so that the workflow can access it.
- Log in to Docker Hub using the previously configured access token and username secrets.
- Use the Docker Buildx Action to create a builder instance using a BuildKit container.
- Build the Docker image and push it to Docker Hub.
Save the file and exit your editor, then stage, commit and push your changes to GitHub.
git add .github/workflows/docker.yml
git commit -m 'Add Docker image workflow'
git push origin main
Afterward, return to your GitHub repository and confirm that your workflow run was successful under the Actions tab.
You can also verify that the Docker Hub repository was updated by viewing the chucknorris repo you created earlier. It should show that it was updated recently.
At this point, you've set up a GitHub Actions pipeline that builds a Docker
image for you and uploads it to Docker Hub each time you push to the main
branch of your GitHub repository. You can tweak the workflow file to trigger the
Docker image job on pull requests, new tags or releases, or using any other
available triggers.
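For example, to also trigger the workflow when a version tag is pushed, the on section could be extended as shown below. The v* pattern is an assumption for illustration; match it to your own tagging scheme:

```yaml
on:
  push:
    branches:
      - main
    tags:
      - 'v*'
```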
Conclusion
In this tutorial, you learned how to prepare a Docker image for your Node.js application and how to deploy it using Docker containers. You also learned how to automate the process of building Docker images and pushing them to a registry so that you can quickly deploy your application at any time by pulling the image on your server and running it in a container.
We hope this article has helped you get started with utilizing Docker for Node.js applications. There's a lot more to learn and explore when it comes to Docker, so be sure to consult the official documentation to learn more about recommended practices.
Don't forget to grab the entire source code used in this tutorial on GitHub. Thanks for reading!