Container technologies are becoming a cornerstone of development and deployment in many software houses – including where I have my day job. Lately I’ve been creating a small web app with lots of vulnerabilities to use for security awareness training for developers (giving them target practice for typical web vulnerabilities). So I started thinking about the infrastructure: packing up the application in one or more containers – what are the security pitfalls? The plan was to look at that but as it turned out, I struggled for some time just to get the thing running in a Docker container.
First of all, the app consists of three architectural components:
- A MongoDB database. During prototyping I used a cloud version at mlab.com. That has worked flawlessly.
- A Vue 2.0 based frontend (could be anything, none of the built-in vulnerabilities are Vue specific)
- An Express backend primarily working as an API to reach the MongoDB (and a little sorting and such)
So, to start packing things up, I took the Express backend and set out to put it in a container image to run with Docker. In theory, the container game should work like this:
- Create your container image based on a verified image you can download from a repository, such as Docker Hub. For Node applications, the typical recommendation – found in everything from Stack Overflow answers to personal blogs and even official documentation pages for various projects – is to start with a Node image from Docker Hub.
- Run your Docker image using the command docker run -p hostPort:containerPort myimage
- You should be good to go, and can access the running Node.js app at localhost:hostPort (a concrete example follows right after this list)
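For example, with placeholder image name and ports (just an illustration, not the actual project values):

```
# Build an image from the Dockerfile in the current directory and tag it
docker build -t myimage .
# Map port 9000 on the host to port 9000 inside the container
docker run -p 9000:9000 myimage
```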
So, when we try this, it seems to run smoothly… until it doesn’t. The build crashes – what gives?

Building on top of Alpine, a minimal Linux distribution popular for containers because it keeps image sizes small, we try to install some OS-specific build tools required to install the npm package libxmljs. This package is a wrapper around the libxml2 C library (part of the GNOME project). Because it binds to native code, it needs to compile those bindings locally for the platform it runs on, which requires a C compiler and Python 2.7. On Alpine, packages are installed with the apk package manager. The packages are obviously available, so why does it fail?
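Before getting to the answer: on the Alpine route, installing that toolchain would typically look something like the sketch below. This assumes the mhart/alpine-node:8 base image used initially; the --virtual group is just a common Alpine pattern that lets the build tools be removed again once npm install has compiled the bindings:

```
FROM mhart/alpine-node:8
WORKDIR /app
# make, gcc/g++ and Python 2.7 are what node-gyp needs to compile libxmljs' C bindings
RUN apk add --no-cache --virtual .build-deps make gcc g++ python
# ...COPY and npm install steps as in the full Dockerfile below...
# afterwards the toolchain could be dropped again with: RUN apk del .build-deps
```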
Normally, building a NodeJS application for production involves putting the package.json file on the production environment and running npm install. The actual JavaScript dependencies (stored in the node_modules folder) are not transferred; they are fetched from their sources during the install. When a module needs to hook into platform-specific resources, this is reflected in the contents of the local module folder after the first installation. So if you copy your node_modules folder over to the container, things can fail. In my case they did: the app was developed on a Windows 10 computer, and we were now trying to run it on Alpine Linux in the container. The image was built with the local dev files copied to the app directory of the container image, and I had not told Docker what not to copy. Here’s the Dockerfile:
EDIT: use the official node:8 image, not Alpine, as Alpine does not play well with glibc dependencies (such as libxml2).
# EDIT: originally FROM mhart/alpine-node:8
FROM node:8
WORKDIR /app
COPY . .
# Fixing dependencies for node-gyp / libxmljs
# EDIT: originally RUN apk add --no-cache make gcc g++ python
RUN apt-get update && apt-get install -y make gcc g++ python
RUN npm install --production
EXPOSE 9000
CMD ["node", "index.js"]
After adding the --no-cache option to the apk command, the libraries installed fine. But running the container still led to a crash.

After a few cups of coffee I found the culprit: I had copied the node_modules folder from my Windows working folder. Not a good idea. So, adding a .dockerignore file before building the image fixed it. That file includes this:
node_modules
backlog.log
The backlog file is just a debug log. After doing this, and building again: Success!
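With the .dockerignore file in place, the image can be rebuilt; node_modules and backlog.log are now left out of the build context:

```
docker build -t my_image_name .
```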
Now running the image with
docker run -p 9000:9000 -d my_image_name
gives us a running container with the container’s exposed port 9000 mapped to port 9000 on the host. I can check this in my browser by going to localhost:9000.
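A quick way to verify this from the command line (the exact endpoint depends on the API’s routes, so the path here is just an assumption):

```
# Confirm the container is running and the port mapping is in place
docker ps
# Poke the API on the mapped port
curl -i http://localhost:9000/
```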
OK, so we’re up and running with the API. The next tasks will be to set up separate containers for the frontend and possibly for the database server, and to set up proper networking between them (a rough sketch follows below). Then we can look at how many configuration mistakes we have made, perhaps close a few, and be ready to start attacking the application (which is the whole purpose of this small project).
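As a rough sketch of where this is heading, the three components could be wired together with Docker Compose. The service names, image versions, paths and the connection string below are assumptions for illustration, not the project’s actual configuration:

```
version: "3"
services:
  mongo:
    image: mongo:3.6
    volumes:
      - mongo-data:/data/db
  api:
    build: ./backend            # the Express app with the Dockerfile above
    ports:
      - "9000:9000"
    environment:
      # hypothetical connection string; during prototyping the app used mlab.com instead
      - MONGO_URL=mongodb://mongo:27017/vulnapp
    depends_on:
      - mongo
  frontend:
    build: ./frontend           # the Vue app, e.g. built and served by nginx
    ports:
      - "8080:80"

volumes:
  mongo-data:
```

With a setup like this, the backend reaches the database on the internal Compose network using the hostname mongo, and only the frontend and API ports are published to the host.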