If you are a total beginner, start with this: Getting Started with Docker (Understanding virtualization & containers in the simplest way)
Containers let you package your app with everything it needs. But how do you build one from scratch? In this guide, we'll dockerize a Node.js app using a custom Dockerfile.
We'll build a Node.js app that fetches live data from GitHub and serves it via an Express server. Then we'll walk through every line of the Dockerfile that packages it.
Before you begin, make sure you have Docker installed and properly set up. You can follow the steps in this article.
What is a Dockerfile?
A Dockerfile is a plain text file that defines how your application will be built and run inside a Docker image. It includes everything the image needs: the base image to start from, any files to copy in, commands to run during setup, and how to start the app when the container runs.
You can think of it as a build script; each instruction adds a new layer that eventually forms the final image.
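Once you've built the image later in this guide, you can inspect those layers yourself. A quick look, assuming the image is tagged stars_generator as we'll do below:

docker history stars_generator

Each row in the output corresponds to one layer and shows the instruction that created it.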
The Node.js app we'll be dockerizing
We'll containerize a Node.js app built with Express. It fetches data from GitHub using Axios and serves it on the homepage. You’ll see how everything works step by step, from setting up the project to running it locally before we dockerize it.
Start by creating a folder named stars_generator and adding two files inside:
package.json
server.js
1. The "package.json" file
This file defines your app's metadata, including the dependencies it needs and how to start it. We'll use express and axios, and tell Node.js to run server.js when the app starts.
Create package.json with the following content:
{ "name": "docker-node-advanced", "version": "1.0.0", "scripts": { "start": "node server.js" }, "dependencies": { "express": "^4.18.2", "axios": "^1.4.0" } }
2. The "server.js" file
This is the main entry point of the app. It uses Express to serve a homepage, fetches GitHub repo data using Axios, and displays the star count in your browser.
Add this code to server.js:
const express = require('express');
const axios = require('axios');

const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', async (req, res) => {
  try {
    const response = await axios.get('https://api.github.com/repos/nodejs/node');
    res.send(`<h1>Node.js GitHub Stars</h1><p>This repo has ${response.data.stargazers_count} ⭐️ stars.</p>`);
  } catch (error) {
    res.status(500).send('Failed to fetch data from GitHub');
  }
});

app.listen(PORT, () => {
  console.log(`Server running on http://localhost:${PORT}`);
});
What each part of this file does:
- const express = require('express'); – loads the Express framework to handle HTTP requests.
- const axios = require('axios'); – loads Axios to make API calls.
- const app = express(); – creates a new Express app.
- const PORT = process.env.PORT || 3000; – sets the port from your environment or defaults to 3000.
- app.get('/', async (req, res) => { ... }) – defines a route for the root URL (/). When visited, it fetches data from GitHub (see the quick check after this list).
- const response = await axios.get(...) – sends a GET request to GitHub's API for the Node.js repo.
- res.send(...) – sends back HTML with the current star count.
- catch (...) – catches any errors and returns a 500 status code.
- app.listen(...) – starts the server and logs a message with the local URL.
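If you're curious what that GitHub endpoint returns, you can hit it directly from the terminal before running the app. A quick check (assumes curl is installed):

# Fetch the repo metadata and pick out the star count field
curl -s https://api.github.com/repos/nodejs/node | grep stargazers_count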
3. Run the application to test it
Before we containerize anything, test the app locally to make sure everything works as expected.
In your terminal:
npm install
node server.js
Then open your browser and go to:
http://localhost:3000
You should see a page showing the current star count for the nodejs/node repo.
If the app crashes or doesn’t respond correctly, fix that first before moving forward. It’s better to start with a working app before introducing Docker.
In the next step, we'll containerize the app.
Containerizing the application
We’re now going to containerize the Node.js app so that it can run consistently anywhere (your machine, someone else’s, or a production server) without needing to worry about system differences. The goal is to package the app and its environment into a reusable Docker image.
Step 1: Create a Dockerfile
To containerize the app, you’ll write a Dockerfile that gives Docker step-by-step instructions for building an image.
Create a file named Dockerfile in the root of your project folder (same level as server.js and package.json). Your folder structure should now look like this:
stars_generator/ ├── Dockerfile ├── package.json └── server.js
You can create it by running:
touch Dockerfile
Then, add this content to the Dockerfile:
# Start with Node.js 22 Alpine base image
FROM node:22-alpine

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and install dependencies
COPY package.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose the port your app runs on
EXPOSE 3000

# Start the app
CMD ["npm", "start"]
Let’s break it down line by line:
FROM node:22-alpine
This tells Docker to start from the official Node.js v22 Alpine image:
- Alpine keeps things lightweight (around 40MB).
- Node 22 is a stable version with long-term support (LTS).
- The image includes both Node.js and npm.
WORKDIR /usr/src/app
This sets the working directory to /usr/src/app inside the container. All following commands will be executed from this path. It helps keep things clean and consistent.
COPY package.json ./
RUN npm install

The COPY command copies your local package.json into the container. Then RUN npm install installs your dependencies inside the container.
We do this before copying the rest of the code so Docker can cache this step. That way, if your code changes but your dependencies don't, Docker won't re-run npm install.
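A common refinement (optional here): if you've committed the package-lock.json that npm install generates, copy it too and use npm ci, which does a clean, reproducible install from the lockfile:

# Copy the manifest and lockfile so the cache covers exact versions
COPY package.json package-lock.json ./
# npm ci installs exactly what the lockfile specifies
RUN npm ci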
COPY . .

This copies all the remaining files, including your server.js, into the container. In a larger project, this would also pull in files like app.js, routes/, controllers/, and so on.

EXPOSE 3000

It informs Docker (and anyone running this image) that the app listens on port 3000.
This doesn't publish the port; it's just documentation within the Docker image. You'll still need to map it using -p or --publish when running the container.
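For example, if port 3000 is already busy on your machine, you can map a different host port to the container's port 3000 (8080 here is an arbitrary choice):

docker run -p 8080:3000 stars_generator

The app would then be reachable at http://localhost:8080, even though the container still listens on 3000 internally.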
CMD ["npm", "start"]

It defines the default command Docker will run when a container is created from this image.
- This runs npm start, which typically starts your Node.js server.
- Make sure you've defined a start script in your package.json, like this:

"scripts": {
  "start": "node server.js"
}
Step 2: Add a ".dockerignore" file
This step prevents unnecessary files (like node_modules, .env, .git, and logs) from being copied into the image. It should come before building the image: you get smaller, faster builds, fewer security risks, and cleaner containers.
Create a .dockerignore file in your project root with the following contents:
node_modules
npm-debug.log
Dockerfile
.dockerignore
.git
*.md
.env
See the reasons why we added those to the .dockerignore file:
- node_modules: You'll run npm install inside the container instead, so you don't want to copy bulky local dependencies.
- npm-debug.log: Log files aren't needed inside the image.
- Dockerfile & .dockerignore: These aren't usually needed inside the image.
- .git: You don't need Git history or config inside a Docker image.
- *.md: Docs aren't needed unless they're part of the app logic.
- .env: For security reasons, secrets and environment variables should be passed in at runtime (see the sketch after this list).
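For instance, rather than baking a .env file into the image, you can supply values when starting the container. A minimal sketch, using the PORT variable that server.js already reads:

# Pass a single variable at runtime
docker run -p 4000:4000 -e PORT=4000 stars_generator

# Or load several from a local file that never enters the image
docker run -p 3000:3000 --env-file .env stars_generator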
Step 3: Build and run the container
Now that the Dockerfile and .dockerignore are ready, let's move on to the following:
Build the Docker image
We're building the Docker image to package our Node.js app, its dependencies, and environment into a single, reusable unit, so that it can run consistently anywhere, regardless of the host system.
In your terminal, from the root of your project, run:
docker build -t <name_of_your_app> .
Replace the placeholder <name_of_your_app> with the name of the app, like this:
docker build -t stars_generator .
See what each part of this command means:
- -t stars_generator: This tags the image with the name stars_generator, so you can later refer to it easily when running or managing containers (e.g., docker run stars_generator). It's like giving your image a label or shortcut.
- . (the dot): This tells Docker to look in the current directory (where your terminal is open and where the Dockerfile is located) for everything it needs to build the image, including the Dockerfile and source code.
If the command runs successfully, Docker processes the Dockerfile line by line, creating an image that can later be run as a container.
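Once the build finishes, you can confirm the image exists locally:

docker images stars_generator

This lists the repository name, tag, image ID, creation time, and size of the image you just built.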
Make sure your Docker daemon is running; otherwise the build command will fail with a "Cannot connect to the Docker daemon" error.
This error means Docker is installed, but the Docker daemon (background service) isn't running, so your system can't build images or manage containers.
To avoid or fix this error, make sure your Docker daemon is running. You can follow the steps in this article.
Or you can simply do the following to fix it:
1) If you're on macOS and using Docker Desktop:
- Open Docker Desktop and wait for it to fully start. You'll usually see the Docker icon in your menu bar with a green dot once it's ready.
- Then rerun the command.
2) If you're on Linux:
- You may need to run:
sudo systemctl start docker
And, to run Docker without sudo, add your user to the docker group (log out and back in for the change to take effect):
sudo usermod -aG docker $USER
3) If Docker Desktop isn't installed, install it from here
When you run the command, you should see build output confirming success.
This shows that Docker built your custom image from the Dockerfile without any errors.
What the build output means, line by line
Let's walk through the major steps you see in the terminal:
- [internal] load build definition from Dockerfile: Docker is reading your Dockerfile and preparing the build context.
- transferring dockerfile: 423B: It reads the Dockerfile (423 bytes in size).
- load metadata for docker.io/library/node:22-alpine: It pulls the Node.js v22 Alpine base image from Docker Hub.
- extracting sha256:...: Docker unpacks (extracts) layers of the Node.js base image.
- [2/5] WORKDIR /usr/src/app: Sets the working directory in the container to /usr/src/app.
- [3/5] COPY package.json ./: Copies only package.json so Docker can cache dependencies.
- [4/5] RUN npm install: Installs the dependencies listed in your package.json.
- [5/5] COPY . .: Copies the rest of your project files (like server.js).
- exporting to image: Your custom image is finalized and saved with the tag stars_generator.
In short: you now have a reusable Docker image called stars_generator that contains your Node.js app and its dependencies.
Run the container
Now that the image is built, your next step is to run the container. This step is important because it allows you to start and test your Node.js app inside an isolated Docker environment using the image you just built.
So, run this command:
docker run -p 3000:3000 stars_generator
What this command means:
- docker run: Tells Docker to start a new container.
- -p 3000:3000: Maps port 3000 on your machine (host) to port 3000 inside the container, so you can access your app at http://localhost:3000.
- stars_generator: The name of the image you built earlier.
In simple terms: you're launching your app in Docker and making it available at localhost:3000 on your browser.
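If you'd rather not keep your terminal attached to the container, you can run it in the background. A minimal sketch (the container name stars is an arbitrary choice):

# Start detached (-d) with a name so it's easy to reference later
docker run -d --name stars -p 3000:3000 stars_generator

# Follow its logs, then stop and remove it when you're done
docker logs -f stars
docker stop stars
docker rm stars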
This is what you should see when you run the command:
What this output means is that your Docker container successfully started and ran the Node.js app inside it. Specifically:
> docker-node-advanced@1.0.0 start
> node server.js

This shows that the container is executing the start script from your package.json, which runs server.js.
Then this line:
Server running on http://localhost:3000
...means your app is now live and listening on port 3000. Since you mapped that port to your local machine with -p 3000:3000, you can open your browser and visit:
http://localhost:3000
to test the app running inside Docker.
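You can also check it from the command line without opening a browser:

# Should print the HTML with the current star count
curl http://localhost:3000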
But wait, we're not done. You might be asking: "We could already run the Node.js app locally, so what was the point of these steps?" You'll find out in the next section.
Why we containerized the Node.js app
You might wonder: we could already run the app with node server.js, so why go through the trouble of Dockerizing it?
Let's see the reasons why:
1. Consistent environment everywhere
Docker lets you package your app with its runtime, dependencies, and system libraries into a single image. That means:
- It runs exactly the same on any system: Windows, macOS, or Linux.
- No more “works on my machine” issues.
2. Easy sharing and deployment
Once you have a container image:
- You can push it to Docker Hub (a public or private container registry).
- Anyone can pull and run it without needing to clone your code or install Node.js.
This is especially useful in:
- Teams with different dev machines
- CI/CD pipelines
- Deploying to production (on cloud or Kubernetes)
3. Safe isolation
The app runs inside a container, isolated from the rest of your system. No conflicts with global Node versions or ports. You can stop and delete containers without affecting anything on your machine.
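To see that isolation in practice, you can run two copies of the same image side by side on different host ports; they won't conflict with each other or with anything installed locally:

docker run -d --name stars_a -p 3000:3000 stars_generator
docker run -d --name stars_b -p 3001:3000 stars_generator

# Both containers show up independently
docker ps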
How to push your Docker image to Docker Hub
Since this is a small project and for personal learning, this step is optional. However, it's important to do it at least once to understand the workflow. See the reasons why:
- You'll see how teams share and deploy containerized apps.
- You'll be able to pull your image from any machine without copying your code.
- You'll get familiar with real-world Docker usage which is very useful for DevOps, backend or platform roles.
If you're using the free tier on Docker Hub, just be mindful not to push too many unused images. But for learning? It's definitely not a waste.
Now see the steps to push your Docker image to Docker Hub
Step 1: Create an account on Docker Hub
If you haven't already, go to the Docker Hub website, sign up and create a Docker ID (which is your username).
Step 2: Tag the image with your Docker Hub username
Let’s say your Docker Hub username is your_dockerhub_username. You’ll tag the image like this:
docker tag stars_generator <your_dockerhub_username>/stars_generator
This names your image so Docker knows where to push it. Make sure your terminal is in the project directory (where your Dockerfile is), and replace <your_dockerhub_username> with your actual Docker Hub username.
The tag command doesn’t depend on any specific files in the directory; it works because Docker knows the image you built (by name) and assigns it a new tag that includes your Docker Hub namespace.
Why tagging is important:
Docker needs to know where to push the image. stars_generator is your local image name, but <your_dockerhub_username>/stars_generator tells Docker to push it to your personal namespace on Docker Hub.
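By default this tags the image as latest. If you want versioned releases, you can add an explicit tag when tagging and pushing (v1.0.0 here is just an example):

docker tag stars_generator <your_dockerhub_username>/stars_generator:v1.0.0
docker push <your_dockerhub_username>/stars_generator:v1.0.0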
Step 3: Log in to Docker from your terminal
Before pushing the image to Docker Hub, you need to authenticate your Docker CLI with your Docker Hub account. Run:
docker login
You’ll be prompted to enter your Docker Hub username and password.
This step is necessary because Docker won’t let you push images to your account unless you’re logged in; it needs to confirm that you have permission to upload to the namespace you tagged the image with (e.g., <your_dockerhub_username>/stars_generator).
Note: If you're using Docker Desktop, you may already be signed in there, but it's still best to run docker login from the terminal to be sure your CLI is authenticated.
Once you run the docker login command in your terminal, Docker may prompt you to authenticate using a web-based login flow, especially if you're using Docker Desktop or haven’t logged in recently.
You’ll see something like this:
What this means:
- Docker generated a one-time device confirmation code (like AKBT-IEBG) that links your terminal session to your Docker account.
- It also gives you a link: https://login.docker.com/activate
- When you press ENTER, your browser will open that page.
- Once there, log in with your Docker Hub credentials and enter the code shown in your terminal.
This links your terminal session with your Docker Hub account, allowing you to push containers.
Once logged in successfully, you'll see "Login Succeeded" in your terminal. Your session will be authorized to push images under your username until the session expires or you log out.
Step 4: Push the image to Docker Hub
Once you’re logged in, it’s time to upload your Docker image to Docker Hub. You’ll use the docker push command:
docker push <your_dockerhub_username>/stars_generator
This tells Docker to push the image you previously tagged (<your_dockerhub_username>/stars_generator) to your public Docker Hub repository under your account.
You’ll see Docker upload the image layer by layer. It only pushes the layers that don’t already exist in your Docker Hub account, which saves time.
Once it’s done, your terminal should look something like this:
Now, visit your image directly at:
https://hub.docker.com/repository/docker/your-dockerhub-username/stars_generator/
Replace your-dockerhub-username with your actual Docker Hub username.
You’ll see a page like this:
What you’re looking at:
- Tag section: Shows which versions of your image are available (e.g., latest)
- Push and pull stats: Tells you how recently the image was pushed and how many times it’s been pulled
- Docker command prompt: Gives you the exact docker pull command anyone can use to download the image
- General settings: Lets you add a description, set categories, and control collaboration
This makes your image publicly accessible, meaning anyone can now pull it using:
docker pull <your_dockerhub_username>/stars_generator
Why this is important:
Pushing to Docker Hub is useful if you want to:
- Share your app with teammates
- Use your image in a CI/CD pipeline
- Deploy it to a cloud platform like AWS
- Archive a version of your containerized app
Step 5: Anyone can now run your app
Now that your image is on Docker Hub, anyone can pull and run it, no setup or dependencies required.
All they need to do is run these two commands:
docker pull <your_dockerhub_username>/stars_generator
docker run -p 3000:3000 <your_dockerhub_username>/stars_generator
Replace <your_dockerhub_username> with your Docker Hub username.
This will:
- Download the image from your Docker Hub repository
- Run the container and expose port 3000 on their local machine
Once it’s running, they can open:
http://localhost:3000
…and they’ll see your Node.js app in action, without needing to install Node, run npm install, or clone any repo.
So now your app is portable and self-contained. Anyone on any OS can run it the same way, whether on their laptop, a VM, or a production server.
When should you Dockerize and push your app?
You might have this question in mind:
“Should I finish building and testing my app before Dockerizing and pushing it to Docker Hub?”
In real-world projects, the answer depends on your team’s workflow, deployment goals, and stage of development. Let’s break it down.
1. Finish building and testing first, then Dockerize and push
This is the most common and practical approach in real-world scenarios, especially for production services.
You typically:
1. Build the core functionality of your app.
2. Test it locally (unit tests, integration tests, etc.).
3. Confirm that it works as expected outside of a container.
4. Then, write a Dockerfile and containerize it.
5. Test the container (does it build, does it start up, are all env vars respected?).
6. Only then do you push it to Docker Hub or your private registry.
For example:
Let's say you're building a payment microservice for an e-commerce system. You'd want to make sure the service can talk to your payment gateway, handle retries, fail gracefully, and return correct responses before packaging it in a container. Once it’s reliable, Dockerizing it ensures it behaves the same across staging and production.
2. Dockerize early during development
In some teams, especially those working on cloud-native platforms (like Kubernetes), you might Dockerize the app early, even while it's still being built. This is useful if:
- Your team is standardizing development across machines using Docker containers.
- You’re testing deployments continuously on a platform like AWS ECS or GKE.
- You're writing infrastructure-as-code (IaC) alongside the app.
For example:
A team building a GraphQL backend might scaffold the app and immediately create a Dockerfile so every developer can spin it up using docker-compose and work in a consistent environment. They may push early test builds to a registry like GitHub Container Registry, even before the app is "done."
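As a rough illustration of that workflow, here's a minimal docker-compose.yml sketch for this project (the service name web is an arbitrary choice; Compose builds the image from the Dockerfile in the current directory):

services:
  web:
    build: .             # build from ./Dockerfile
    ports:
      - "3000:3000"      # map host port 3000 to the container's port 3000
    environment:
      - PORT=3000        # matches the PORT fallback in server.js

With that file in place, docker compose up builds the image and starts the container in one step.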
So what’s the best approach?
If your goal is to publish a reliable, working app on Docker Hub, it’s better to:
- Build it.
- Test it.
- Then containerize it and push.
This keeps your registry clean and avoids wasting storage on incomplete or broken containers.
But if you're iterating fast, and container consistency is a priority, Dockerizing early can help, as long as you're comfortable handling container-related issues while still building the app.