Using Docker as a basic Node development environment
Docker: the whys
During my time at Weezevent, we were using Docker environments to manage our API servers. Last week, the new project I've been telling you about was a great opportunity for me to dust off the Docker knowledge I had acquired and to create my own Docker environment. Now, for those who have only heard of it: what is Docker? Docker, according to its Wikipedia article, automates the deployment of applications inside software containers.
Let me give you a use case: you have a web server project that you built in NodeJS, with a MySQL database and another InfluxDB database to collect user data. You first build and configure it on your primary machine for development purposes, so it works great on your own machine. Once you have a working project, you decide to upload it to a remote server to make it publicly accessible. But that means you have to install and configure a MySQL server and an InfluxDB database on that server too, which can sometimes take 5 minutes, but can easily take an hour if you run into an unexpected problem… And imagine: after all that configuration, your app still won't run, and after a long debugging session you realize it's because your server was running an old version of NodeJS that doesn't have some of the APIs you call!
This situation can get even worse if you involve more developers in the equation: suppose one of your co-workers has an old, legacy version of MySQL that's not compatible with the queries you wrote, but that he can't update because it is a dependency of one of his own projects…
Using Docker as a back-end developer easily allows you to contain development environments: with a few Docker configuration files, you can declare what software you need and give it a few configuration scripts that will execute every time the Docker image builds, so everyone gets the same environment, one that won't conflict with their own personal setup.
- It makes your software easier to set up and more convenient to develop for. It will also encourage people to contribute to your open source projects, since they won't have to follow a long list of setup instructions before they can start developing!
- It also allows you to minimize the configuration files and scripts in your app. I'll show you how Docker Compose allowed me to simply wire up the different components (database, web server…) and keep a single set of database parameters, so you don't have to write your own config files before deploying!
- And, as a last argument, using Docker is super easy. Even your fellow developers who don’t know anything about it will just have to follow a few commands to get started.
Setting up Docker for Node
Docker revolves around the concept of images: every image is a more-or-less empty Unix environment that contains a piece of software. Since we're talking about containing… You guessed it: we're going to create our own image to contain our application. But we are going to do so by using other images as a basis.
First step: setting up your own Dockerfile. A Dockerfile is a command file that specifies what steps Docker must take to build a working image for your app. Let's see what it looks like in practice:
```dockerfile
FROM node:4-onbuild

RUN mkdir /code
ADD . /code
WORKDIR /code

RUN npm install
RUN npm install -g supervisor
```
What does our Dockerfile do? The 1st line says "we're going to create our image by importing the `node:4-onbuild` image". Docker is connected to an image marketplace, the Docker Hub, where most software developers publish their own Docker images. These are usually minimal systems with only what you need. You can actually publish your own images on the Docker Hub, if you build a system that you want to share with other developers or admins. Since I imported a Node image here, you can check out the Node image page: it lists all the different Node versions you can download as an image. The reason we're using a Node image is that this way, we know for sure that Node and npm are preinstalled. Plus, we know exactly which version will be installed: version 4!
The 2nd and 3rd lines say "create a directory to store my code, and put all my code in this directory". We'll see later how we can change that from "copying my code" to "linking my folder to my container".
The 4th, 5th and 6th lines are pretty self-explanatory for any Node developer out there. We'll use `supervisor` to manage our Node process once it's ready to go, so we install it globally on our system too.
Once you're done writing your Dockerfile, you can build it using `docker build`, with a CLI parameter to give it a shiny new name. Ta-da, you've built your first Docker image, and you're now containing your code within its own isolated environment! Once built, you can run your code with `docker run`.
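The whole cycle boils down to two commands. A minimal sketch, assuming you call your image `my-node-app` and your app listens on port 5000 (both names are placeholders; use your own):

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it (-t) with a name of your choice
docker build -t my-node-app .

# Run a container from that image, publishing the container's
# port 5000 on the host
docker run -p 5000:5000 my-node-app
```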
Wiring images: Docker Compose
That’s cool, but I’m going to need PostgreSQL/MySQL/MariaDB/MongoDB/… for my app. You haven’t talked about this yet!
Indeed! If you've been following along, your first thought might be to add the dependencies in the Dockerfile, by making the container run `apt-get install mysql-server`. But that's kind of heavy: you'll also have to run `apt-get update`, plus you'll have to configure your whole database within the Dockerfile… Although that works, Docker actually has a built-in tool to separate software into different containers: Docker Compose. Using a `docker-compose.yml` configuration file, you're going to be able to tell Docker "Okay, now that I've built my image, I want it to interact with this PostgreSQL image, and this Redis image". Your config for these other images will be stored in your Docker Compose configuration. If you want your PostgreSQL image to name the database in a certain way, as configured in your NodeJS files, you just pass it on as an environment variable in your `docker-compose.yml` file.
This is what my `docker-compose.yml` file looked like for a NodeJS/PostgreSQL/Redis project:
```yaml
version: '2'
services:
  web:
    build: .
    command: supervisor index.js
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    links:
      - redis
      - db
  redis:
    image: redis
  db:
    image: postgres:latest
    environment:
      - POSTGRES_DB=mynewappdatabase
```
This `docker-compose.yml` configuration file, version 2 as specified, tells the Docker engine to build and run the specified Docker images: the one from the folder we're in, which we've called `web`, and the other images we import from the Hub, `redis` and `postgres:latest`. It then executes the command `supervisor index.js` once done. It also tells the Docker container to expose port 5000, which is the one I'm using for development purposes. We're also mounting all our files as a volume, with `volumes:`, so if you modify your code, the change is instantly reflected in your container, and if you modify a file within your container, it is applied to your file system.
A sidenote on supervisor: Supervisor is an npm package that launches your Node app for you and manages it. Say your Node app crashes: Supervisor restarts it. You modify your Node app's sources? Supervisor kills and restarts the app for you. It's really useful, both in development and in production, and acts as a `python manage.py runserver` for the Django maniac that I am. Plus, it's super easy to use: just install it with `npm install -g supervisor`, and then launch your Node apps with `supervisor` instead of `node`.
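Concretely, the switch is just a matter of replacing the launch command; a sketch, assuming your entry point is `index.js`:

```shell
# Without supervisor: the process simply dies if the app crashes
node index.js

# With supervisor: the app is restarted on crashes,
# and on every change to its source files
supervisor index.js
```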
The interesting part comes with the `links` section. It lets the Docker engine know that our web app wants to interact with the Postgres and Redis images we're importing, so Docker puts all the containers on the same network, and `web` can interact with `db` (which contains the Postgres image). How does it actually work, inside the Node app? It's actually really simple.
```javascript
var postgres = require('pg-promise')()

// Connection to PG database
var db = postgres('postgres://postgres:@db:5432/mynewappdatabase')
```
From my JS code, the database is accessible under the `db` hostname! As a bonus: since I defined the default database name in my `docker-compose.yml` file, I even know the name of the database I want to connect to. Now, to launch everything, you just have to enter `docker-compose up`, which automatically rebuilds the images if necessary, and launches the specified commands!
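A few `docker-compose` subcommands cover most of the daily workflow; a quick sketch, using the `web` service name from my file (exact flags may depend on your Compose version):

```shell
# Build the images if needed and start every service in the foreground
docker-compose up

# Force a rebuild, e.g. after editing the Dockerfile
docker-compose up --build

# Follow the logs of a single service
docker-compose logs web

# Stop and remove the containers and their network
docker-compose down
```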
To connect to your server:
- If you're on a Linux machine or on the latest Docker betas, Docker doesn't use a VM to run the images: just connect to `localhost`.
- If you're using the Docker Toolbox for macOS or Windows, Docker uses a VirtualBox VM to run your images. You have to look up this machine's IP to connect to it: run `docker-machine ip [machine_name]` to get a machine's IP, given that you can get the list of your machines with `docker-machine ls`. Edit: Docker has now released a native version for macOS and Windows, available here. It uses the OS's native virtualization tools to emulate the Linux machine. If you use native Docker, you can use `localhost` just like Linux users.
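With the Toolbox, finding the VM's address looks like this; a sketch, assuming the machine created by the installer kept its usual name, `default`:

```shell
# List the Docker machines known to your setup
docker-machine ls

# Print the IP of a given machine
docker-machine ip default
```

Your app is then reachable at that IP, on the port your container publishes (5000 in the Compose file above).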
More about Docker
Docker is a fast-evolving tool with a huge community revolving around it, so there are a lot of new features coming up to help you as a developer.
- One great place to start exploring is the Docker Hub, where you can find all sorts of images already created by admins, developers and corporations.
- Once you're comfortable with the way Docker works, you can read more about the new features that are coming to Docker: one of the latest additions is Docker Swarm, a tool to run a cluster of machines running the same image, allowing for example better load balancing. The official documentation is a great resource, as well as this blog post if you read French.
- Keep in mind, however, that part of the sysadmin/devops community isn't comfortable with using Docker in production, whether for security reasons or for resource consumption reasons. Before you do, you should read articles advocating for and against it to make up your own mind. You can read this article or this one (in French) for more info on the troubles of running Docker in production.