This lab introduces you to Docker and how it can be used to automate the MEAN stack installation. When you finish, you will be able to:
- Create a custom Dockerfile.
- Build and run a custom image from your own Dockerfile, and link it to another running container.
NOTE: If you don’t have experience with Linux Academy lab servers, complete the Introduction to Linux Academy course before you continue this lab. The course explains how to create a server and work with it.
- Ubuntu 14
- Docker 1.12.5
Using Linux Academy lab servers, we need to create a single instance running Ubuntu 14. After it is created, we log in to the instance to start working.
Docker has native installers for both Mac and Windows, each of which installs Docker through an executable. In this case, we are going to install Docker inside an Ubuntu instance, so we need to install it manually.
As Docker's documentation page explains, Docker requires a 64-bit Linux installation running version 3.10 or higher of the Linux kernel. To check both aspects, we can run the uname command with the -a flag:
uname -a
You will get something like the following:
Linux franverona2.mylabserver.com 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
The kernel version in this case is 3.13.0-106-generic, and x86_64 is telling us that we are running a 64-bit Linux distribution.
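Both prerequisites can be verified with a short shell snippet. This is a sketch: the kernel and architecture values below are copied from the sample output above; on a live server you would substitute `$(uname -r)` and `$(uname -m)`.

```shell
# Check kernel >= 3.10 and a 64-bit architecture.
# Sample values from the output above; on a live server use:
#   kernel="$(uname -r)"; arch="$(uname -m)"
kernel="3.13.0-106-generic"
arch="x86_64"

major=$(echo "$kernel" | cut -d. -f1)
minor=$(echo "$kernel" | cut -d. -f2)

if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }; then
  kernel_ok=yes
else
  kernel_ok=no
fi

[ "$arch" = "x86_64" ] && arch_ok=yes || arch_ok=no
echo "kernel_ok=$kernel_ok arch_ok=$arch_ok"
```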
Now that we are sure that we can install Docker, we need to set apt to use packages from the Docker repository:
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
After this, we have to add a new GPG key by downloading the key from a keyserver:
sudo apt-key adv \
--keyserver hkp://ha.pool.sks-keyservers.net:80 \
--recv-keys 58118E89F3A912897C070ADBF76221572C52609D
And finally, we will add the repository to our sources:
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
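A side note on why this command pipes through sudo tee instead of using a plain > redirect: the redirect would be performed by our unprivileged shell before sudo ever runs, and would fail on a root-owned path like /etc/apt/sources.list.d. With tee, the write happens inside the privileged process. The sketch below demonstrates the pattern on a throwaway file in /tmp:

```shell
# tee writes its stdin to the given file (and normally echoes it back too).
# Demonstrated on a temp file; the lab command targets /etc/apt/sources.list.d.
line="deb https://apt.dockerproject.org/repo ubuntu-trusty main"
echo "$line" | tee /tmp/docker.list.example > /dev/null
cat /tmp/docker.list.example
```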
Now if we execute sudo apt-get update, our system will pull packages information from Docker repository too.
Docker also needs the aufs storage driver to work properly. On Ubuntu 14, this driver is provided by the linux-image-extra-* kernel packages. Using the uname command, we can determine our kernel release in order to install the matching version:
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
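The $(uname -r) in that command is ordinary shell command substitution: apt receives a package name with the running kernel release already spliced in. A minimal sketch, using the sample kernel release from earlier:

```shell
# $(uname -r) expands to the running kernel release, so apt installs the
# linux-image-extra package matching this exact kernel.
kernel="3.13.0-106-generic"   # stand-in for $(uname -r)
package="linux-image-extra-${kernel}"
echo "$package"
```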
All is set, so we can now install Docker:
sudo apt-get update
sudo apt-get install docker-engine
Docker works as a service, so we have to initialize its daemon:
sudo service docker start
Docker should now be installed and running. To check if everything is installed correctly, we can run a container using an example image called hello-world:
sudo docker run hello-world
This command downloads a test image and runs it in a container. The container simply prints a message and exits. If all works properly, you should see a message like this:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker Hub account:
https://hub.docker.com
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
Docker binds to a Unix socket instead of a TCP port, so we need to be root in order to execute Docker commands. To avoid using sudo every time, we can add our user to the Docker group. In this case, we are going to add our current user, but we can specify a different user if necessary:
sudo usermod -aG docker $(whoami)
Now if we log out and log in into the system, our user will belong to the Docker group. We will be able to execute Docker commands without sudo.
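We can confirm the change after re-login with id -nG, which prints the group names of the current user. A sketch with a simulated group list (on the lab server, drop the sample variable and run id -nG directly):

```shell
# Simulated `id -nG` output; on the lab server run: id -nG
groups_output="user adm cdrom sudo docker"

if echo "$groups_output" | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active: docker commands work without sudo"
else
  echo "not in docker group yet: log out and back in"
fi
```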
Setting up Docker: Dockerfile
We can create custom images using a configuration file called Dockerfile. This file will include all required commands to create a Docker image with a custom configuration. Once the image is created, we can run as many containers as we need using it.
For our MEAN installation, we are going to create two different images: web server (Node.js and code) and database (MongoDB).
The MongoDB image could be created with a Dockerfile too, but we are going to use the official one published by MongoDB on Docker Hub. To use this image, we pull it from the Docker Hub registry using the pull command:
docker pull mongo
This will download the MongoDB image, but it won’t start it. We will see how to run this image later, but we are keeping this on hold for now. Let’s see how to create a web server image with Node.js.
To create our own web image, we are going to write a custom Dockerfile. This Dockerfile will be based on Ubuntu, and it will install Node.js and all necessary dependencies.
Docker images (even those based on a distribution) ship with only the minimum packages required to run. In addition to the packages for the MEAN stack, we also need to install some others, like gcc and make. It's also good practice to clean temporary files and folders after installing packages, to keep our Docker image as small as possible.
Let’s start creating our Dockerfile. To create a file, we can use touch Dockerfile, then use your preferred text editor to start editing the file.
First of all, we have to specify our base image. In this case, our Docker image will be based on Ubuntu 14.04. We also specify our name as the maintainer of the image and a description (this has no effect on execution, but it's usually a good idea to add relevant metadata):
# We are going to use Ubuntu 14.04
FROM ubuntu:14.04
MAINTAINER Fran Verona
LABEL Description="Dockerfile for MEAN stack"
We are going to expose some ports for our image. For MEAN stack, we need ports 3000 (Node.js) and 27017 (MongoDB), but you can expose as many ports as you need (we are also going to expose port 35729 for LiveReload because we are going to use it in our example later on). To expose these ports, we add the following to our Dockerfile:
# We need to expose ports for MongoDB (27017), Node.js (3000) and LiveReload (35729)
EXPOSE 3000 27017 35729
We are now going to install a few essential packages and Git (a prerequisite for the MEAN stack). The essential packages are:
- sudo: to allow privileged command execution.
- curl, gcc, make, build-essential: needed to install Node.js.
- git: for version control.
If we focus on *keeping Docker images as small as possible*, we should also remove cached packages, package lists, and temporary files. We can do all of this with the following:
# Install prerequisites and essential packages
RUN apt-get -q update && apt-get install -y -qq \
sudo curl git gcc make build-essential \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Now that we have all of the prerequisites and essential packages prepared, we are ready to install Node.js. NodeSource provides an official setup script that adds the correct repository to our sources list to ensure a correct installation.
We fetch this script with curl and pipe it straight into bash. We then install Node.js using apt, and remove cached packages, package lists, and temporary files as before.
# Install Node.js
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash - \
&& apt-get install -y -q nodejs \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Node.js also installs npm as its package manager. We need it to install some MEAN dependencies, such as bower and gulp. As usual, we clean npm's cache afterwards:
# Install bower and gulp globally
RUN npm install --quiet -g bower gulp \
&& npm cache clean
At this point, we could already build an image from this Dockerfile:
docker build -t mean .
This will build a Docker image with Git and Node.js installed, but we can go even further and create an image with our MEAN project already deployed and ready to run. Let’s take a look at an example!
Example: Mean.io on Docker
To illustrate how easy it can be to create a custom image for a MEAN project, we are going to extend our recent Dockerfile to create an image for the Mean.io project. Mean.io is a MEAN boilerplate and is really easy to use. Keep the Dockerfile open, because everything that follows is appended to the end of that same file.
Taking a step back to view the big picture of what the image still needs (remember that Node.js is already installed), we can enumerate our next steps:
- Create the app directory.
- Copy app code into this directory. This copy can be a simple git clone from a repository.
- Install server dependencies using npm and client dependencies using bower.
- Start the server.
So let’s start by creating a directory for our app. Docker has a special instruction called WORKDIR that sets a working directory in case we want to execute multiple commands in the same folder. This helps a lot because we don’t need to change directories manually; instead, we set a working directory once, and all of the following commands run inside it.
To create the directory, we use the Unix mkdir command with the -p flag to make parent directories as needed, then set it as the working directory using Docker's WORKDIR instruction:
# Create the app directory and set it as the working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
With a fresh working directory, we can copy the code of our app. We are going to clone the official repository:
# Clone Mean.io repository
RUN git clone https://github.com/linnovate/mean.git /usr/src/app
Now that we have our repo, we need to install server dependencies with npm and client dependencies with bower. For npm, we use the --quiet flag for less verbose output, and we also clean npm's cache:
# Install server dependencies using npm
RUN npm install --quiet \
&& npm cache clean
For bower, we use the --config.interactive=false flag to run bower in non-interactive mode. By default, bower may require you to answer some questions about the command itself (for example, it will ask whether bower may collect usage statistics). Setting this parameter to false ensures these questions are answered automatically.
The bower install command will also fail with an ESUDO error, because bower refuses to run as a privileged user and Docker builds run as root by default. Passing the --allow-root flag lets bower install the packages anyway:
# Install client dependencies using bower
RUN bower install --config.interactive=false --quiet --allow-root
Finally, we start the server using the CMD instruction:
# Start the server
CMD ["gulp"]
This is similar to executing the gulp command in a terminal.
Now we are ready to start our server, but let’s review the contents of our Dockerfile:
- Our Docker image uses Ubuntu 14.04
- We exposed some ports: MongoDB (27017), Node.js (3000) and LiveReload (35729).
- We did some installations, starting with essential packages such as sudo and make, then Node.js, and finally bower and gulp.
- We set a working directory where we cloned our project repository.
- We installed server dependencies with npm and client dependencies with bower.
- Lastly, we start the server by running a command (gulp in this case).
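Putting it all together, the Dockerfile assembled over the previous sections looks roughly like the sketch below. The package list and the NodeSource URL are the ones used in this lab; adjust them for your own project.

```dockerfile
# Sketch of the full Dockerfile built up in this lab
FROM ubuntu:14.04
MAINTAINER Fran Verona
LABEL Description="Dockerfile for MEAN stack"

# MongoDB (27017), Node.js (3000) and LiveReload (35729)
EXPOSE 3000 27017 35729

# Essential packages, with cleanup to keep the image small
RUN apt-get -q update && apt-get install -y -qq \
    sudo curl git gcc make build-essential \
    && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Node.js via the NodeSource setup script
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash - \
    && apt-get install -y -q nodejs \
    && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Global client-side tooling
RUN npm install --quiet -g bower gulp && npm cache clean

# App code and dependencies
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN git clone https://github.com/linnovate/mean.git /usr/src/app
RUN npm install --quiet && npm cache clean
RUN bower install --config.interactive=false --quiet --allow-root

# Start the server
CMD ["gulp"]
```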
Running our containers
As we said earlier, we are going to use two images: one for MongoDB (created by the official team and pulled from Docker Hub), and one built by us from a custom Dockerfile.
To start a container using the MongoDB image, we use the run command with a few parameters:
docker run -p 27017:27017 -d --name db mongo
With this command, we start a container using the MongoDB image (named mongo on Docker Hub). We pass a few parameters to the run command:
- MongoDB uses port 27017 by default, so we have to expose it on our container. To expose a port, we use the -p parameter.
- Unless instructed otherwise, Docker will attach the process of running a container to our current terminal. In this case, we don’t need that, so we are going to run it as a background process using -d parameter. Feel free to not use it if you prefer to see the MongoDB output in real time.
- Docker assigns random names to containers when we start them, but we can set our own name using the --name parameter.
In conclusion, this command will start a container called db using the image mongo, expose port 27017, and run it in the background with the -d parameter.
Now, if we do a docker ps command, we will see something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
825603e713ce mongo "/entrypoint.sh mo..." 1 minute ago Up 1 minute 0.0.0.0:27017->27017/tcp db
We now have a MongoDB container ready to be used. Let’s create a new container for a web server with Mean.io using our Dockerfile.
For the MongoDB container, we didn't need to build anything because the image was hosted on Docker Hub, but in this case we have to build our custom image using the docker build command. If we are not in the folder where the Dockerfile is located, we should move to it (path/to/dockerfile is just an example; use your own):
cd path/to/dockerfile
Then we are ready to build the image. To keep things as organized as possible, we tag our image as mean:
docker build -t mean .
This process will take a while because our image will download an Ubuntu image (remember that our Docker image is based on Ubuntu 14.04), then install some packages and dependencies.
When the build is finished, we are ready to run the image:
docker run -p 3000:3000 -p 35729:35729 --name mean --link db:db mean
We are using a new parameter called --link, with which we specify that this container will be linked to the db container (our MongoDB container was named db using the --name parameter earlier).
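Under the hood, --link adds a db entry to the mean container's /etc/hosts, so the app can reach MongoDB by hostname rather than by a hard-coded IP. A sketch of the resulting connection string (mean-dev is assumed here as the database name the Mean.io boilerplate uses in development; adjust for your own config):

```shell
# Inside the linked container, the hostname "db" resolves to the MongoDB
# container, so the app's connection string is simply:
db_host="db"        # container name given via --name db
db_port=27017       # MongoDB default port, published with -p 27017:27017
echo "mongodb://${db_host}:${db_port}/mean-dev"
```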
Now if we open the browser and type our server IP plus the Node.js default port (for example, 18.104.22.168:3000), we will see the Mean.io example project running. To get our server IP, go to the Linux Academy site and click on Servers; the address appears in the PUBLIC IP column for our server.
In this lab, we learned how to use Docker to create containers for the MEAN stack. We also created our own image using the Mean.io boilerplate as an example, and linked it to a MongoDB container pulled from Docker Hub.
Docker has extensive documentation, and we can use advanced features such as networking to build more complex infrastructures. Keep exploring Docker and you will discover how powerful it can be.