Speeding up CPAN module contributions using the Docker language stack images

Using the Docker perl language stack image for speeding up contributions to CPAN modules.

Docker Inc. just released our first set of programming language images on the Docker Hub. They cover c/c++ (gcc), clojure, go (golang), hy (hylang), java, node, perl, php, python, rails, and ruby.

As I need to do some work on API testing when I come back from holidays, I thought I’d look at the Net::Docker CPAN module – and of course, there is no Perl on my Boot2Docker image, so it’s a perfect opportunity to see what I should do.

After forking and cloning the Git repository, I created the following initial Dockerfile:


FROM perl:5.20
MAINTAINER Sven Dowideit SvenDowideit@home.org.au

COPY . /docker-perl
WORKDIR /docker-perl

RUN cpanm --installdeps .
RUN perl Build.PL
RUN ./Build build
RUN ./Build test

It fails to build during the ‘test’ step:


$ docker build -t docker-perl .

... snip ...

Step 6 : RUN ./Build test
---> Running in 367afe04c77e
Can't open socket var/run/docker.sock: No such file or directory at /usr/local/lib/perl5/site_perl/5.20.0/LWP/Protocol/http/SocketUnixAlt.pm line 27. at t/docker-api.t line 9.

Tests were run but no plan was declared and done_testing() was not seen.

Looks like your test exited with 255 just after 1.

t/docker-api.t ....
Dubious, test returned 255 (wstat 65280, 0xff00)
All 1 subtests passed
Can't locate IO/String.pm in @INC (you may need to install the IO::String module) (@INC contains: /docker-perl/blib/arch /docker-perl/blib/lib /usr/local/lib/perl5/site_perl/5.20.0/x86_64-linux /usr/local/lib/perl5/site_perl/5.20.0 /usr/local/lib/perl5/5.20.0/x86_64-linux /usr/local/lib/perl5/5.20.0 .) at t/docker-start.t line 3.
BEGIN failed--compilation aborted at t/docker-start.t line 3.
t/docker-start.t ..
Dubious, test returned 2 (wstat 512, 0x200)
No subtests run

Test Summary Report

t/docker-api.t (Wstat: 65280 Tests: 1 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
t/docker-start.t (Wstat: 512 Tests: 0 Failed: 0)
Non-zero exit status: 2
Parse errors: No plan found in TAP output
Files=2, Tests=1, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.21 cusr 0.03 csys = 0.26 CPU)
Result: FAIL
2014/09/26 16:08:19 The command [/bin/sh -c ./Build test] returned a non-zero code: 1

I’m going to have to give this Dockerfile a DOCKER_HOST setting (incorrectly using http://, pointing at one of my insecure plain-text tcp servers :), and add IO::String and JSON::XS to the cpanfile.

Unfortunately, because cpanm --installdeps . uses the files in the build context, this way can’t use the build cache – so it’s slow. It’s worth duplicating the contents of the cpanfile as individual cpanm RUN instructions before the COPY instruction, for speed.

So the working Dockerfile looks like:


FROM perl:5.20
MAINTAINER Sven Dowideit SvenDowideit@home.org.au

RUN cpanm Module::Build::Tiny
RUN cpanm Moo
#', '1.002000';
RUN cpanm JSON
RUN cpanm JSON::XS
RUN cpanm LWP::UserAgent
RUN cpanm LWP::Protocol::http::SocketUnixAlt
RUN cpanm URI
RUN cpanm AnyEvent
RUN cpanm AnyEvent::HTTP
RUN cpanm IO::String

COPY . /docker-perl
WORKDIR /docker-perl

RUN cpanm --installdeps .
RUN perl Build.PL
RUN ./Build build

The next line is a terrible cheat:

ENV DOCKER_HOST http://10.10.10.4:2375

RUN ./Build test
RUN ./Build install

CMD ["docker.pl", "ps"]

and then docker build -t docker-perl . results in:


bash-3.2$ docker build -t docker-perl .
Sending build context to Docker daemon 138.8 kB
Sending build context to Docker daemon
Step 0 : FROM perl:5.20
---> 4d4674548e76
Step 1 : MAINTAINER Sven Dowideit SvenDowideit@home.org.au
---> Using cache
---> 4ad0946e76aa
Step 2 : RUN cpanm Module::Build::Tiny
---> Using cache
---> f1b94d36a51c
Step 3 : RUN cpanm Moo
---> Using cache
---> 98de8c3a19a8
Step 4 : RUN cpanm JSON
---> Using cache
---> 73debd4ee367
Step 5 : RUN cpanm JSON::XS
---> Using cache
---> 89378a425f0b
Step 6 : RUN cpanm LWP::UserAgent
---> Using cache
---> 252fe329cf22
Step 7 : RUN cpanm LWP::Protocol::http::SocketUnixAlt
---> Using cache
---> a77d289faf19
Step 8 : RUN cpanm URI
---> Using cache
---> 6804b418778d
Step 9 : RUN cpanm AnyEvent
---> Using cache
---> c595f66bcf73
Step 10 : RUN cpanm AnyEvent::HTTP
---> Using cache
---> 31b25b2da3c4
Step 11 : RUN cpanm IO::String
---> Using cache
---> e54cd3d01988
Step 12 : COPY . /docker-perl
---> 4d4801209a79
Removing intermediate container c42897136186
Step 13 : WORKDIR /docker-perl
---> Running in 36575a59e465
---> 7042c67cf1b7
Removing intermediate container 36575a59e465
Step 14 : RUN cpanm --installdeps .
---> Running in c1b5cbb75c4a
--> Working on .
Configuring Net-Docker-0.002005 ... OK
<== Installed dependencies for .. Finishing.
---> 071f9caca472
Removing intermediate container c1b5cbb75c4a
Step 15 : RUN perl Build.PL
---> Running in fae9bbce142f
Creating new 'Build' script for 'Net-Docker' version '0.002005'
---> 2800182bd0ff
Removing intermediate container fae9bbce142f
Step 16 : RUN ./Build build
---> Running in a98cb6c7a808
cp lib/Net/Docker.pm blib/lib/Net/Docker.pm
cp script/docker.pl blib/script/docker.pl
---> f5ba5be85f9d
Removing intermediate container a98cb6c7a808
Step 17 : ENV DOCKER_HOST http://10.10.10.4:2375
---> Running in 1e8b3273974c
---> fffb42d69011
Removing intermediate container 1e8b3273974c
Step 18 : RUN ./Build test
---> Running in 3baacccbf17e
t/docker-api.t .... ok
t/docker-start.t .. ok
All tests successful.
Files=2, Tests=41, 5 wallclock secs ( 0.02 usr 0.02 sys + 0.26 cusr 0.06 csys = 0.36 CPU)
Result: PASS
---> f5d371cdc1fa
Removing intermediate container 3baacccbf17e
Step 19 : RUN ./Build install
---> Running in 60cd90714e02
Installing /usr/local/lib/perl5/site_perl/5.20.0/Net/Docker.pm
Installing /usr/local/bin/docker.pl
---> 62c6368a2fb0
Removing intermediate container 60cd90714e02
Step 20 : CMD ["docker.pl", "ps"]
---> Running in cb5ade11e146
---> 94984ed5756d
Removing intermediate container cb5ade11e146
Successfully built 94984ed5756d

So now I can use it:


bash-3.2$ docker run --rm -it docker-perl
ID IMAGE COMMAND CREATED STATUS PORTS
e619112eae2f 10.10.10.2:5001/sve bash 1411104597 Up 7 days ARRAY(0x2b84a48)
363ec1c45841 10.10.10.2:5001/sve bash 1411104470 Up 7 days ARRAY(0x29bae20)

You can also run the container with bash – docker run --rm -it docker-perl bash – so you can do some more testing, or try out more complex examples.

In this case, the ./Build test step probably needs to happen in the docker run phase, as it needs access to a working Docker daemon – this will hold true for any module that talks to external resources.
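
One rough way to do that – treat this as a sketch rather than the module’s intended workflow – is to drop the RUN ./Build test line (and the DOCKER_HOST cheat) from the Dockerfile, and run the suite against a real daemon at run time:

# build without running the tests, then test against my throwaway daemon at run time
docker build -t docker-perl .
docker run --rm -e DOCKER_HOST=http://10.10.10.4:2375 docker-perl ./Build test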

I’ve made a pull request for the tiny changes to get me this far. Perhaps Dockerfiles like this could be a gateway into the world of contributing quick fixes for open source libraries.

Docker, containers and simplicity.

Docker Containers emulate Operating Systems, allowing you to build, manage and run applications and services. And you copy around your application, data and configurations.

I’ve now been working for Docker Inc. for 2 months. My primary role is Enterprise Support Engineer: I’m one of the guys that your company can turn to when the going gets tough, for training, or just generally to ask questions.

In these months, I’ve been working on Boot2Docker (OSX, Windows installers), our Documentation, and generally helping users come to terms with the broad spectrum of effects that Docker has on developing, managing and thinking about software components.

I’m still trying to work out ways to explain what Docker does – this is March’s version:

Virtual machines emulate complete computers, so you setup, maintain and run a complete Operating System, and copy around complete monolithic filesystem images.
Docker Containers emulate Operating Systems, allowing you to build, manage and run applications and services. And you copy around your application, data and configurations.

This might not quite feel right, given that images are built ‘FROM’ a base image – but one thought I have is that, since that base image (and most often some local modifications) is likely to be common to your entire infrastructure, that layer will be shared by all your containers. Chances are, you didn’t build it either – Tianon did :).

Solomon keeps reminding me that Dockerfiles are like Makefiles – and in the back of my mind, I think of our application image layers as packages, thin wrappers around applications that are then orchestrated together to produce your service. The base image you choose is only there to support that, and over time I’m sure we’ll simplify those much more.

Boot2Docker dom0 and more docker orchestration magic.

I’ve modified boot2docker to:

Auto-start an image named ‘dom0:latest’. This image then orchestrates the remainder of the system.
This personal dom0 image starts sshd and the containers I want this system to auto-run.
I also set up a `home-volume` container, which I -volumes-from mount into all my development containers.

All kicked off by the sd-card boot2docker.

So, some concrete examples for my previous Boot2Docker rules post.

I’m modifying boot2docker to

  1. if present, auto-start an image named ‘dom0:latest’. This image then orchestrates the remainder of the system.
  2. my personal dom0 image starts sshd and the containers I want this system to auto-run.
  3. Set up a `home-volume` container, which I -volumes-from mount into all my development containers (see the sketch below).
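
A minimal sketch of that home-volume setup, using the single-dash flags of the day (the path and the dev-image name are placeholders for my local choices):

# a data-only container that owns /home/sven
docker run -name home-volume -v /home/sven busybox true
# development containers then share it
docker run -t -i -volumes-from home-volume -name dev1 dev-image bash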

When I do some development, testing or production, it happens in containers; the base OS is pristine, and can be trivially updated (at the moment, I’m booting from USB flash and SD card).

Similarly, the dom0 container is also a bare busybox container, cloned from the filesystem of the boot2docker image itself. I’m not ready for my end goal of doing this to my notebook and desktop – but then, this setup is only a few days old :).

This setup uses my ‘detect existing /var/lib/docker on HD’ pull request, and the dom0-rootfs, dom0-base and dom0 images, and then, from there, an initial dev image.

Two customisations I’ve made to boot2docker are persisted on the HD: /var/lib/boot2docker/etc/hostname is set to something useful to me, and the optional /var/lib/boot2docker/bootlocal.sh script starts the dom0 container at boot.
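
As a rough idea of what that bootlocal.sh might contain (the exact run flags are my local setup, so treat this as a sketch):

#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh - start the dom0 orchestration container at boot
docker run -d -privileged -v /var/run/docker.sock:/var/run/docker.sock -name dom0 dom0:latest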

When I need a set of containers started, I can create a tiny orchestration container that can talk to the docker daemon and thus start more containers, controlling how they interact with each other and the outside world.
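
For example, the start script baked into such an orchestration container could be as simple as the following sketch (the image names are placeholders; it talks to the host daemon via the socket handed to it above):

#!/bin/sh
# run inside the dom0/orchestration container, against the host's docker daemon
docker run -d -name sshd my/sshd
docker run -d -name dev -volumes-from home-volume my/dev-image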

 

Boot2Docker for orchestration.

Imagine that you have a PXE / USB / boot setup that boots in seconds, and as its last step, starts a privileged Docker container from an image labeled ‘dom0:latest’.

Most importantly, each of these servers would then ‘docker pull’ and ‘docker run’ the services and applications it provides – without modifying the base OS, or ‘dom0’.

It also gives the server farm admin the opportunity to make a customized ‘dom0:latest’ image containing the farm’s configuration, thus orchestrating the inner container configurations further.

 

I’m using boot2docker on my ‘docker-server’ (an old X61 ThinkPad), and using the inbuilt hard drive to store the docker images that are run on it. I’m totally avoiding modifying the base OS, and doing all the work in containers – enabling very fast run and teardown for development and testing.

Now if only ‘turnbull-net’ wasn’t totally useless for pulling images (gee, thanks Malcolm Turnbull, 1MB ADSL with constant dropouts in the middle of Brisbane is sooooooo conducive to working).

 

Docker container network portability

Rather than hardcoding network links between a service consumer and provider, Docker encourages service portability.

e.g., instead of two containers talking directly to each other:

(consumer) --> (redis)

requiring you to restart the consumer to attach it to a different redis service, you can add ambassador containers:

(consumer) --> (redis-ambassador) --> (redis)

or

(consumer) --> (redis-ambassador) ---network---> (redis-ambassador) --> (redis)

When you need to rewire your consumer to talk to a different redis server, you can just restart the redis-ambassador container that the consumer is connected to.

This pattern also allows you to transparently move the redis server to a different docker host from the consumer.

Using the svendowideit/ambassador container, the link wiring is controlled entirely from the docker run parameters.
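
For instance, re-pointing a consumer at a different redis host only means recreating the ambassador it is linked to – a sketch reusing the flags from the two-host example below, where 192.168.1.99 is a placeholder for the new redis host:

# replace the consumer-side ambassador with one aimed at the new redis host
docker stop redis_ambassador
docker rm redis_ambassador
docker run -d -name redis_ambassador -expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.99:6379 svendowideit/ambassador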

Two-host example

Start actual redis server on one Docker host

big-server $ docker run -d -name redis crosbymichael/redis

Then add an ambassador linked to the redis server, mapping a port to the outside world

big-server $ docker run -d -link redis:redis -name redis_ambassador -p 6379:6379 svendowideit/ambassador

On the other host, you can set up another ambassador, setting environment variables for each remote port we want to proxy to the big-server

client-server $ docker run -d -name redis_ambassador -expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador

Then on the client-server host, you can use a redis client container to talk to the remote redis server, just by linking to the local redis ambassador.

client-server $ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379> ping
PONG

How it works

The following example shows what the svendowideit/ambassador container does automatically (with a tiny amount of sed).

On the docker host (192.168.1.52) that redis will run on:

# start actual redis server
$ docker run -d -name redis crosbymichael/redis

# get a redis-cli container for connection testing
$ docker pull relateiq/redis-cli

# test the redis server by talking to it directly
$ docker run -t -i -rm -link redis:redis relateiq/redis-cli
redis 172.17.0.136:6379> ping
PONG
^D

# add redis ambassador
$ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 busybox sh

In the redis_ambassador container, you can see the linked redis container’s env:

$ env
REDIS_PORT=tcp://172.17.0.136:6379
REDIS_PORT_6379_TCP_ADDR=172.17.0.136
REDIS_NAME=/redis_ambassador/redis
HOSTNAME=19d7adf4705e
REDIS_PORT_6379_TCP_PORT=6379
HOME=/
REDIS_PORT_6379_TCP_PROTO=tcp
container=lxc
REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/

This environment is used by the ambassador socat script to expose redis to the world (via the -p 6379:6379 port mapping).

$ docker rm redis_ambassador
$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 docker-ut sh

$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379

Then ping the redis server via the ambassador.
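
One way to do that is to link another redis-cli container to the ambassador, just like earlier (a sketch – you should get a PONG back):

docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli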

Now go to a different server:

$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i  -expose 6379 -name redis_ambassador docker-ut sh

$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379

Then get the redis-cli image so we can talk over the ambassador bridge:

$ docker pull relateiq/redis-cli
$ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379> ping
PONG

The svendowideit/ambassador Dockerfile

The svendowideit/ambassador image is a small busybox image with socat built in. When you start the container, it uses a small sed script to parse out the (possibly multiple) link environment variables to set up the port forwarding. On the remote host, you need to set the variable using the -e command line option.

-expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379 will forward the local port 1234 to the remote IP and port – in this case 192.168.1.52:6379.
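
So that example setting gets turned into roughly this socat invocation inside the container:

socat TCP4-LISTEN:1234,fork,reuseaddr TCP4:192.168.1.52:6379 &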

#
#
# first you need to build the docker-ut image using ./contrib/mkimage-unittest.sh
# then
#   docker build -t SvenDowideit/ambassador .
#   docker tag SvenDowideit/ambassador ambassador
# then to run it (on the host that has the real backend on it)
#   docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 ambassador
# on the remote host, you can set up another ambassador
#    docker run -t -i -name redis_ambassador -expose 6379 sh

FROM    docker-ut
MAINTAINER      SvenDowideit@home.org.au

CMD     env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'  | sh && top

(this is pull request https://github.com/dotcloud/docker/pull/3038, so it will eventually find its way into the Docker documentation)

Docker 0.7 is here – welcome RPM distros (and anyone else that lacks AUFS)

The Docker project has continued its mostly-monthly releases with the long anticipated 0.7 release, this time making the storage backend pluggable, so fedora/redhat based users can use it without building a custom kernel.

I’m curious to see the performance differences between the 3 storage backends we have now – but I need to assimilate the wonders of Linking containers for ad hoc scaling first.

Try it out – I’m even more convinced that Docker containers have an interesting future 🙂

easy install of Docker.io on Debian

UPDATE: as of Docker 0.6.5, the Ubuntu deb package also installs on Debian. You still need to enable IPv4 forwarding as below – then restart the docker daemon.
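
Roughly, that packaged path looks like this (a sketch – the service name may differ depending on your init system):

# enable IPv4 forwarding, then restart the packaged docker daemon
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
sudo service docker restart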

I’ve been doing some work on Docker – learning golang, Docker internals, and just some of the command line options that I didn’t know I needed to know about.

Because I was in a hurry, I threw an old unused disk into one of my old laptops and installed Ubuntu. That was enough for me to learn that I wanted to know a lot more about Docker.

So, I’m back to using the loaner T530 with my 128GB SSD in it – it’s been running Debian since the day I got the SSD, over 2 years ago.

It turns out that on Debian testing (with the 3.10-3-amd64 kernel), it’s incredibly easy to run Docker:

sudo apt-get install lxc wget bsdtar curl golang git aufs-tools mercurial iptables
wget --output-document=docker https://get.docker.io/builds/Linux/x86_64/docker-latest
chmod +x docker
sudo su -
#enable IPv4 forwarding
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
# set up and mount the cgroup mountpoint
echo 'none /sys/fs/cgroup cgroup defaults 0 0' | sudo tee -a /etc/fstab
mount /sys/fs/cgroup
#OK, you might need to reboot if it fails to mount?
./docker -d &

Done.
From there, you can run the docker CLI like normal (except that it’s not in your path yet).
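
For example (assuming, as with the ./docker -d step above, that the binary is in the current directory of that root shell):

./docker ps
./docker run -i -t ubuntu /bin/bash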

I’m going to pull over the apt pinning installation documentation I wrote for Publican the other week and re-write it (and test it) for installing Docker on Debian Stable, and we’ll all be much happier.