We all know that getting an application up and running on a different machine is no simple task. You have to work through a lot of setup, from configuring environment variables and installing dependencies to matching the runtime. And that is before you even get to automating the whole deployment process.
This is a major problem in software development, and several technologies have emerged to tackle these challenges of differing environments, deployment configurations, and automation. Docker is the most widely used and effective solution.
To learn the basics of Docker, check out my intro blog to Docker here.
What is Dockerization?
This is the process of packaging your application, together with all its dependencies and environment, into a container (a completely isolated running process).
Dockerizing an application involves identifying and specifying everything that your application needs to run in a Dockerfile and then building a Docker image from it. A Docker image is an environment that can be replicated and is guaranteed to run on other machines.
Dockerizing an application simply involves 3 steps:
- Preparing a Dockerfile.
- Building a Docker image.
- Running a Docker container from the image.
So let's dive into dockerizing a Node.js app
Setting Up a Demo Node.js App
To demonstrate the concepts in this article, we will use a demo Node.js app that provides an endpoint for fetching posts. The app uses the JSONPlaceholder fake API. You can clone the app to your computer using the following command:
git clone https://github.com/marville001/docker-nodejs-demo
Once you have downloaded the app, cd into the project folder and run npm install to install all the required dependencies. The app has one file, index.js, which should look as shown below.
const express = require("express");
const fetch = require("node-fetch");

const app = express();
app.use(express.json());

app.get("/posts", async (req, res, next) => {
  try {
    const URL = "https://jsonplaceholder.typicode.com/posts?_limit=5";
    const response = await fetch(URL);
    const data = await response.json();
    res.status(200).send(data);
  } catch (error) {
    res.status(500).json({
      message: "Failed To Get Posts",
      error: error.message,
    });
  }
});

const PORT = process.env.PORT || 9000;
app.listen(PORT, () => console.log(`App running on port ${PORT}`));
Run the command npm start to start the application, then go to http://localhost:9000/posts in your browser to view a list of 5 posts.
Creating a Dockerfile
There are many ways to use Docker, but the best way is through the creation of Dockerfiles. A Dockerfile essentially gives build instructions to Docker when you build a container image.
To get started, we need to specify which base image to pull from. We will use the official Node image, since it gives us what we need to run our application and has a small footprint. To be specific, we will use node:16-alpine.
Create a file called Dockerfile:
# Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package*.json .
RUN npm install
# Copy the source files into the image
COPY . .
EXPOSE 9000
CMD ["npm", "start"]
The Dockerfile consists of the following commands:
- FROM: tells Docker what base image to use as a starting point. We specify the official Node.js Alpine Linux image. Alpine Linux is used here due to its small size, which helps a lot when transporting images from one machine to another.
- WORKDIR: changes the active directory. In our case, it sets the working directory to /app. This directory will be created if it doesn't exist.
- COPY: copies files from the build context on your machine into the image.
- RUN: executes commands inside the container.
- EXPOSE: tells Docker which ports should be mapped outside the container. We expose port 9000, which the application will run on.
- CMD: defines the command to run when the container starts.
Building images should be fast, efficient, and reliable. Every command you execute results in a new layer that contains the changes compared to the previous layer. These layers will stack on top of each other, adding functionality incrementally.
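This layer caching is exactly why the Dockerfile above copies package*.json and runs npm install before copying the rest of the source. A sketch of the cache behavior (the comments are explanatory, not required syntax):

```
# These two layers are rebuilt only when package*.json changes,
# so npm install is skipped on ordinary code changes.
COPY package*.json .
RUN npm install

# This layer is rebuilt on any source change, but the cached
# npm install layers above are reused.
COPY . .
```

If we copied all the source first, every code change would invalidate the dependency layer and force a full npm install on each build.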
Create a file called .dockerignore:
.git
.gitignore
node_modules/
The .dockerignore file is similar to a .gitignore file and lets us safely exclude files or directories that shouldn't be included in the final Docker build.
Build the Docker Image
Now that the Dockerfile is complete, it's time to build the Docker image according to the instructions in the file. This is achieved through the docker build command. You need to pass in the directory where the Dockerfile exists and your preferred name for the image:
$ docker build -t demo-app .
You can run docker images to view some basic info about the created image:
Run the Docker Image in a Container
Use the docker run command to run your newly minted Docker image inside a container. Since the application has been built into the image, it has everything it needs and can be launched directly as an isolated process. Before you can access the application running inside the container, you must expose its port to the outside world through the --publish or -p flag. This lets you bind a port inside the container to a port on the host.
docker run -p 9000:9000 demo-app
The command above starts the demo-app image inside of a container and maps port 9000 inside the container to port 9000 outside the container. You can access the posts through http://localhost:9000/posts.
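In practice you will often want the container running in the background rather than tied to your terminal. A typical follow-up session looks like this (the container name is an arbitrary choice here):

```
# Run detached (-d) and give the container a name
$ docker run -d -p 9000:9000 --name demo-app-container demo-app

# List running containers, check the logs, then stop it
$ docker ps
$ docker logs demo-app-container
$ docker stop demo-app-container
```

The -d flag detaches the container from your shell, and docker logs lets you see the app's console output (including our "App running on port 9000" message) at any time.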
That's all from this article. Keep coding :)
Thank you