A taste of Docker and hopes for the future

Michael Kane
In Musings, Codegent College
20th November 2014

If you've stuck your head into the world of developers during the last year or so, you've probably heard about Docker. Docker is a clever set of components for building and managing Linux containers (LXC). It holds the same promise as LXC for application portability and platform consistency over the entire cycle of development, testing, staging and production deployments. And it also has a nice API and great container management tools, which make it fun to use.

The parts

The most basic components of the Docker system are:

  • Docker host: A service running on a machine responsible for all the dirty work of managing containers on that machine.
  • Docker client: An interface to control a Docker host.
  • Docker registry: A storage service for Docker images – think GitHub.
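
You can see the client/host split for yourself: docker version reports the version of both the client and the daemon (host) it's talking to, while docker info summarises the containers and images on the host.

docker version
docker info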

Start playing

With any new tool, you want to hold it in your hand and press buttons or hit something to see what it's all about. Fortunately, in the world of computers, we get to do that without worrying about hurting any sentient beings (if you're worried about hurting your computer, run a throw-away virtual machine). Assuming you've installed Docker, you can

docker run --rm -it fedora:latest bash

and you should be at a bash shell in a dockerized Fedora. The first time you run such a command, Docker will need to download the fedora image from the Docker Hub registry – which may take a while. But from then on, firing up a new container from that image will be very quick.
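
That speed comes from the image being cached locally after the first pull; you can list the cached images with:

docker images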

Fancy seeing what Haskell is like?

docker run --rm -it haskell:latest ghci

Your favourite search engine will surface loads of great resources to help you get started with Docker.

It's worth understanding right away that Docker runs a process in the context of a container (e.g. bash / ghci in the above examples). We used the -it flags to tell Docker that we wanted to interact with that process through the terminal. When the process stops, the container stops (hence exiting bash stopped the container, which was then deleted thanks to the --rm flag). So if your docker run command terminates immediately, that's because the main process running in the container has terminated (or daemonized itself, leaving nothing in the foreground for Docker to watch).
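
To see this lifecycle in action, run a container whose main process is a timed sleep. The -d flag detaches it, and the container lives exactly as long as the process does:

docker run -d fedora:latest sleep 60
docker ps    # listed until the sleep exits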

Developing with Docker

We've seen how easy it is to run containers from public images on the Docker Hub registry, but you'll soon want to build your own images to dockerize your apps. To do this, we create a Dockerfile. A Dockerfile specifies how to construct an image in a step-by-step manner, and if you have one for dockerizing your app, it's also a great little reference to help new devs quickly see the dependencies required to get your app running. Here's an example I've used to develop a Laravel app served via php-fpm + nginx:

FROM ubuntu:14.04
MAINTAINER codegent

# I can't live without this alias
RUN echo "alias ll='ls -al'" >> /etc/bash.bashrc
RUN apt-get update

# Install supervisor to help us run multiple processes
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y supervisor

# Install nginx
RUN echo "deb http://ppa.launchpad.net/nginx/stable/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/nginx-stable.list
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C300EE8C
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y nginx

# Install our php dependencies
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y php5-fpm
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y php5-cli
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y php5-mcrypt
RUN php5enmod mcrypt
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y php5-gd
RUN php5enmod gd
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y php5-curl
RUN php5enmod curl
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y php5-mysql
RUN php5enmod mysql
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y php5-xdebug
RUN php5enmod xdebug

# Install composer
RUN php -r "readfile('https://getcomposer.org/installer');" | php
RUN mv composer.phar /usr/local/bin/composer

# Configure nginx
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
COPY nginx/php.shared /etc/nginx/shared/
COPY nginx/laravel.shared /etc/nginx/shared/
COPY nginx/nginx.conf /etc/nginx/sites-enabled/default

# Configure supervisor
COPY supervisor.conf /etc/supervisor/conf.d/
RUN touch /etc/nginx/fastcgi_copy_env
COPY copy-env-to-fpm.py /usr/bin/

# Copy our application code
COPY . /code
WORKDIR /code

# Install composer dependencies
RUN composer install

# Expose nginx
EXPOSE 80

# Run supervisor in the foreground
CMD ["/usr/bin/supervisord"]
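
The supervisor.conf copied in above isn't shown in this post, but a minimal sketch (an assumption, not the actual file) would keep supervisord in the foreground and have it run both nginx and php5-fpm:

[supervisord]
nodaemon=true

[program:nginx]
command=/usr/sbin/nginx

[program:php5-fpm]
command=/usr/sbin/php5-fpm --nodaemonize

(nginx stays in the foreground here because of the "daemon off;" line we appended to nginx.conf.)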

From this we can build the image and run our app (binding it to port 8000 on our local machine).

docker build -t codegent/app .
docker run --rm -it -p 127.0.0.1:8000:80 codegent/app
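
If all went well, the app should now answer on that port:

curl -I http://127.0.0.1:8000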

That is great for giving us a nicely packaged app, but during development we need to be able to edit files and we don't want to have to rebuild the image every time we do. Instead, we can mount a folder from our local machine in the container.

docker run --rm -it -p 127.0.0.1:8000:80 -v "$PWD:/code" codegent/app

We can also run other processes in the context of our container, e.g. if we want a PHP REPL (give me IPython any day):

docker run --rm -it -v "$PWD:/code" codegent/app php artisan tinker

The commands may seem a bit unwieldy at first, but you get a feel for them eventually.
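
One way to tame them is a little shell function that bakes in the common flags (the name app here is just an illustration):

app() { docker run --rm -it -v "$PWD:/code" codegent/app "$@"; }

With that in your shell profile, the REPL example above shrinks to app php artisan tinker.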

Deploying with Docker

When our code is ready we can just do a docker build again, tagging it with our release version, and push it up to the Docker Hub registry (or a private registry) ready to be pulled down and run on our testing/staging/production servers. And those lucky servers don't need to know anything about PHP.

docker build -t codegent/app:1.0.0 .
docker push codegent/app:1.0.0

Magic. Now we can just use a process manager like supervisor or maybe systemd to keep our containers running in production, or let Docker run them daemonized.
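
Letting Docker handle the daemonizing might look something like this (a sketch; the --restart=always policy needs Docker 1.2 or later):

docker pull codegent/app:1.0.0
docker run -d --restart=always -p 80:80 --name app codegent/app:1.0.0

Unfortunately, in the real world it's not quite that simple.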

Orchestration

Most services we build are composed of various component applications. They also require some, and sometimes many, supporting services – typically a relational DB, Redis and a search engine. All these parts need to be able to work together across multiple machines. That's orchestration - and it's not easy.

You need to be able to fire up all the individual services that combine to provide your service and allow them to communicate in various ways. And you don't really want to have to worry about the individual machines that are running them. There is a lot of interest in this area at the moment and the major players are getting involved. Google has Kubernetes, Apache has Mesos, Amazon has recently announced the EC2 Container Service and CoreOS is taking off. Promisingly, the good folks behind Docker are working on libswarm to get some consistency in this area.

Ideally we could specify a service topology at a fairly high level (like a Dockerfile, this would be a great reference doc) and be able to simulate it on our development machines or deploy it to a production cluster without having to worry about the fiddly details of container linking. (At the moment, fig is a great tool for assisting with running and linking multiple containers in development).
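
For a taste of that today, a fig.yml for the app above might read something like this (a sketch assuming a MySQL backing service; the service names are illustrative):

web:
  build: .
  ports:
    - "127.0.0.1:8000:80"
  volumes:
    - .:/code
  links:
    - db
db:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=secret

A single fig up then starts and links the whole stack.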

So...

There's a bright and shiny future for Docker's use in the software lifecycle. And the present is pretty good too. Thinking about how you might dockerize your service forces you to consider separation of concerns, which is no bad thing. Once you have Dockerfiles around, they're great for getting new developers working quickly and also make a handy reference. And the ecosystem is growing rapidly. There is a learning curve and there are pain points, but it feels like the community is keen to see Docker succeed at making everyone's lives easier. I know I am!