Andreas Wittig

Posted on • Originally published at cloudonaut.io

How to dockerize your Node.js Express application for AWS Fargate?

My first project with Node.js - an asynchronous, event-driven JavaScript runtime designed to build scalable network applications - was building an online trading platform in 2013. Since then, Node.js has been one of my favorite technologies. In this blog post, I will show you how to dockerize your Node.js application based on Express - a fast, unopinionated, minimalist web framework - and run it on AWS Fargate. I like AWS Fargate because running containers in the cloud has never been easier.

This blog post is an excerpt from our book Rapid Docker on AWS.

Read on to learn how to build a Docker image for a Node.js application.

Building the Docker image

The Dockerfile is based on the official Node.js Docker image: node:10.16.2-stretch. Express serves the static files (the img and css folders) as well as the dynamic parts. The following details are required to understand the Dockerfile:

  • envsubst is used to generate the config file from environment variables
  • npm ci --only=production installs the dependencies declared in package.json (package-lock.json, to be more precise)
  • The Express application listens on port 8080
  • The Express application's entry point is server.js and can be started with node server.js
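For `npm ci --only=production` to work, package.json and package-lock.json must exist and be in sync. A minimal package.json for the server below might look like this (the name and version numbers are illustrative, not from the original project):

```json
{
  "name": "nodejs-express-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
```

Running `npm install` once locally generates the matching package-lock.json that `npm ci` requires.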

A simple server.js file follows. Yours is likely more complicated.

```javascript
const express = require('express');
const app = express();

app.use('/css', express.static('css'));
app.use('/img', express.static('img'));

app.get('/health-check', (req, res, next) => {
  res.sendStatus(200);
});

app.listen(8080, '0.0.0.0');
```

Customization

Most likely, your folder structure is different. Therefore, adapt the Copy config files and Copy Node.js files sections in the following Dockerfile to your needs.

```dockerfile
FROM node:10.16.2-stretch

WORKDIR /usr/src/app
ENV NODE_ENV production

# Install envsubst
RUN apt-get update && apt-get install -y gettext

COPY docker/custom-entrypoint /usr/local/bin/
RUN chmod u+x /usr/local/bin/custom-entrypoint
ENTRYPOINT ["custom-entrypoint"]

RUN mkdir /usr/src/app/config/

# Copy config files
COPY config/*.tmp /tmp/config/

# Install Node.js dependencies
COPY package*.json /usr/src/app/
RUN npm ci --only=production

# Copy Node.js files
COPY css /usr/src/app/css
COPY img /usr/src/app/img
COPY views /usr/src/app/views
COPY server.js /usr/src/app/

# Expose port 8080 and start Node.js server
EXPOSE 8080
CMD ["node", "server.js"]
```

The custom entrypoint is used to generate the config file from environment variables with envsubst.

```bash
#!/bin/bash
set -e

echo "generating configuration files"
FILES=/tmp/config/*
for f in $FILES
do
  c=$(basename $f .tmp)
  echo "... $c"
  envsubst < $f > /usr/src/app/config/${c}
done

echo "starting $@"
exec "$@"
```
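To make the substitution concrete: a hypothetical template config/database.json.tmp (the file name is an assumption; use whatever templates your app needs) could contain:

```json
{
  "host": "${DATABASE_HOST}",
  "name": "${DATABASE_NAME}"
}
```

With DATABASE_HOST=mysql and DATABASE_NAME=app set in the container's environment, envsubst writes /usr/src/app/config/database.json with those placeholders replaced by the actual values.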

Next, you will learn how to test your containers and application locally.

Testing locally

Use Docker Compose to run your application locally. The following docker-compose.yml file configures Docker Compose and starts two containers: Node.js and a MySQL database.

```yaml
version: '3'
services:
  nodejs:
    build:
      context: '..'
      dockerfile: 'docker/Dockerfile'
    ports:
      - '8080:8080'
    depends_on:
      - mysql
    environment:
      DATABASE_HOST: mysql
      DATABASE_NAME: app
      DATABASE_USER: app
      DATABASE_PASSWORD: secret
  mysql:
    image: 'mysql:5.6'
    command: '--default-authentication-plugin=mysql_native_password'
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: app
      MYSQL_USER: app
      MYSQL_PASSWORD: secret
```

The following command starts the application:

```shell
docker-compose -f docker/docker-compose.yml up --build
```

Magically, Docker Compose will spin up two containers: Node.js and MySQL. Point your browser to http://localhost:8080 to check that your web application is up and running. The log files of all containers will show up in your terminal, which simplifies debugging a lot.

After you have verified that your application is working correctly, cancel the running docker-compose process by pressing CTRL + C, and tear down the containers:

```shell
docker-compose -f docker/docker-compose.yml down
```

Deploying on AWS

You are now ready to deploy your application on AWS.

(1) Build Docker image:

```shell
docker build -t nodejs-express:latest -f docker/Dockerfile .
```

(2) Create ECR repository:

```shell
aws ecr create-repository --repository-name nodejs-express \
  --query 'repository.repositoryUri' --output text
```

(3) Login to Docker registry (ECR):

```shell
$(aws ecr get-login --no-include-email)
```

(4) Tag Docker image:

```shell
docker tag nodejs-express:latest \
  111111111111.dkr.ecr.eu-west-1.amazonaws.com/nodejs-express:latest
```

(5) Push Docker image:

```shell
docker push \
  111111111111.dkr.ecr.eu-west-1.amazonaws.com/nodejs-express:latest
```

There is only one step missing: you need to spin up the cloud infrastructure.

  1. Use our Free Templates for AWS CloudFormation.
  2. Use our cfn-modules.
  3. Use the blueprint from our book Rapid Docker on AWS.

Top comments (1)

Jesus Mendoza

Hey Andreas. Thanks for the post!
Do you know how to manage .env files safely when deploying to ECS?

I deployed my application and passed --env MONGO_URI=asdasd, but it's not working. I also defined MONGO_URI as an environment variable in the Task Definition, and that's not working either. The only solution I've found is to create an .env file and commit it with my code, but I'm afraid I'll reveal my AWS keys.