Docker container network portability

Rather than hardcoding network links between a service consumer and provider, Docker encourages service portability.

e.g., instead of two containers talking directly to each other:

(consumer) --> (redis)

requiring you to restart the consumer to attach it to a different redis service, you can add ambassador containers:

(consumer) --> (redis-ambassador) --> (redis)

or

(consumer) --> (redis-ambassador) ---network---> (redis-ambassador) --> (redis)

When you need to rewire your consumer to talk to a different redis server, you can just restart the redis-ambassador container that the consumer is connected to.

This pattern also allows you to transparently move the redis server to a different docker host from the consumer.

Using the svendowideit/ambassador container, the link wiring is controlled entirely from the docker run parameters.

Two-host example

Start the actual redis server on one Docker host:

big-server $ docker run -d -name redis crosbymichael/redis

Then add an ambassador linked to the redis server, mapping a port to the outside world:

big-server $ docker run -d -link redis:redis -name redis_ambassador -p 6379:6379 svendowideit/ambassador

On the other host, you can set up another ambassador, setting environment variables for each remote port you want to proxy to the big-server:

client-server $ docker run -d -name redis_ambassador -expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador

Then on the client-server host, you can use a redis client container to talk to the remote redis server, just by linking to the local redis ambassador.

client-server $ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379> ping
PONG

How it works

The following example shows what the svendowideit/ambassador container does automatically (with a tiny amount of sed).

On the docker host (192.168.1.52) that redis will run on:

# start actual redis server
$ docker run -d -name redis crosbymichael/redis

# get a redis-cli container for connection testing
$ docker pull relateiq/redis-cli

# test the redis server by talking to it directly
$ docker run -t -i -rm -link redis:redis relateiq/redis-cli
redis 172.17.0.136:6379> ping
PONG
^D

# add redis ambassador
$ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 busybox sh

In the redis_ambassador container, you can see the linked redis container’s environment variables:

$ env
REDIS_PORT=tcp://172.17.0.136:6379
REDIS_PORT_6379_TCP_ADDR=172.17.0.136
REDIS_NAME=/redis_ambassador/redis
HOSTNAME=19d7adf4705e
REDIS_PORT_6379_TCP_PORT=6379
HOME=/
REDIS_PORT_6379_TCP_PROTO=tcp
container=lxc
REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/

This environment is used by the ambassador’s socat script to expose redis to the world (via the -p 6379:6379 port mapping):

$ docker rm redis_ambassador
$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 docker-ut sh

$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379

Then ping the redis server via the ambassador.

Now go to a different server:

$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i  -expose 6379 -name redis_ambassador docker-ut sh

$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379

and get the redis-cli image so we can talk over the ambassador bridge:

$ docker pull relateiq/redis-cli
$ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379> ping
PONG

The svendowideit/ambassador Dockerfile

The svendowideit/ambassador image is a small busybox image with socat built in. When you start the container, it uses a small sed script to parse out the (possibly multiple) link environment variables and set up the port forwarding. On the remote host, you need to set the variables using the -e command-line option.

Passing -expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379 will forward local port 1234 to the remote IP and port – in this case, 192.168.1.52:6379.
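
You can see the rewrite that the ambassador performs by feeding a sample link variable through the same sed expression (copied from the Dockerfile CMD below – the variable value here is just the one from the example above):

$ echo 'REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379' | \
    sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'
socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379 &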

#
#
# first you need to build the docker-ut image using ./contrib/mkimage-unittest.sh
# then
#   docker build -t SvenDowideit/ambassador .
#   docker tag SvenDowideit/ambassador ambassador
# then to run it (on the host that has the real backend on it)
#   docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 ambassador
# on the remote host, you can set up another ambassador
#    docker run -t -i -name redis_ambassador -expose 6379 sh

FROM    docker-ut
MAINTAINER      SvenDowideit@home.org.au

CMD     env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'  | sh && top

(this is pull request https://github.com/dotcloud/docker/pull/3038, so it will eventually find its way into the Docker documentation)

Docker 0.7 is here – welcome RPM distros (and anyone else that lacks AUFS)

The Docker project has continued its mostly-monthly releases with the long-anticipated 0.7 release, this time making the storage backend pluggable, so Fedora/Red Hat based users can use it without building a custom kernel.

I’m curious to see the performance differences between the three storage backends we have now – but I need to assimilate the wonders of linking containers for ad-hoc scaling first.

Try it out – I’m even more convinced that Docker containers have an interesting future 🙂

I wonder if Docker can replace Puppet.

I’m curious to see how hard it would be to push out Docker versioned configuration changesets over ssh to ‘anywhere’, with some kind of idempotency via system ‘tags’.

I’ve finally spent a little time playing with Docker, and to be honest, the really simple “here’s a list of commands that get run to set up the image” approach feels awesome.

To test it out, I wrote the simplest steps I could think of to create a working foswiki installation into a Dockerfile:

FROM ubuntu
MAINTAINER    Sven Dowideit <svendowideit@home.org.au>

RUN echo deb http://fosiki.com/Foswiki_debian/ stable main contrib > /etc/apt/sources.list.d/fosiki.list
RUN echo deb http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse >> /etc/apt/sources.list
RUN gpg --keyserver the.earth.li --recv-keys 379393E0AAEE96F6
RUN apt-key add //.gnupg/pubring.gpg
RUN apt-get update
RUN apt-get install -y foswiki

#create the tmp dir
RUN mkdir /var/lib/foswiki/working/tmp
RUN chmod 777 /var/lib/foswiki/working/tmp
#TODO: randomise the admin pwd.. (a possible approach is sketched just after this Dockerfile)
RUN htpasswd -cb /var/lib/foswiki/data/.htpasswd admin admin
RUN mv /etc/foswiki/LocalSite.cfg /etc/foswiki/LocalSite.cfg.orig
RUN grep --invert-match {Password} /etc/foswiki/LocalSite.cfg.orig > /etc/foswiki/LocalSite.cfg
RUN chown www-data:www-data /etc/foswiki/LocalSite.cfg

RUN bash -c 'echo "/usr/sbin/apachectl start" >> /.bashrc'
RUN bash -c 'echo "echo foswiki configure admin user password is admin" >> /.bashrc'

EXPOSE 80
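
The TODO above could be handled with something like the following – a sketch only, untested and not part of this image: generate a random password at build time, set it with htpasswd, and stash it somewhere the admin can retrieve it from the built image.

# sketch: randomise the admin password at build time (untested assumption, not in the image)
RUN bash -c 'PW=$(head -c 9 /dev/urandom | base64) && \
    htpasswd -b /var/lib/foswiki/data/.htpasswd admin "$PW" && \
    echo "$PW" > /root/admin-password.txt'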

and then I can create the image with a simple:

docker build -t svendowideit/ubuntu-foswiki .

and run that image by calling:

docker run -t -i -p 8888:80 svendowideit/ubuntu-foswiki /bin/bash

Which (assuming that port 8888 is unused on my host computer) means I can do some testing by pointing my web client to http://localhost:8888/foswiki
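
A quick smoke test from the host (assuming curl is installed – depending on the apache setup you may see a redirect rather than a 200):

$ curl -sI http://localhost:8888/foswiki | head -n 1
HTTP/1.1 200 OK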

When I exit the bash shell (which allows me to debug what is happening), everything is shut down, and all changes are lost. If I make changes, I can commit them, but at this point I prefer to make a new Dockerfile.

The interesting thing is that Docker seems to create an image for every command, so if I add some RUN lines, or make changes, it doesn’t need to re-do steps that it has done before… which sounds to me just like Rex, Puppet, Ansible etc., but more reusable.
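
For example (output abridged, and the step numbers and layer IDs here are made up), appending a RUN line and rebuilding reuses the cache for every step that hasn’t changed:

$ echo 'RUN apt-get install -y vim' >> Dockerfile
$ docker build -t svendowideit/ubuntu-foswiki .
...
Step 8 : RUN apt-get install -y foswiki
 ---> Using cache
 ---> 6bbb4d4e40da
...
Step 18 : RUN apt-get install -y vim
 ---> Running in 3f4e8c21aa10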

And so, I’m curious to see how hard it would be to push out Docker versioned configuration changesets over ssh to ‘anywhere’, with some kind of idempotency via system ‘tags’.

PS: the docker image is available from https://index.docker.io/u/svendowideit/ubuntu-foswiki/, and uses my Debian packages, so you should install new extensions using apt-get install.

Dual screen Chromebox as a remote terminal to SaaS Virtual Machines.

My work desktop runs almost nothing: all my applications are served by an in-house ‘cloud’ of servers and virtual servers that live downstairs.

The GUI applications – email, irc, skype, development environments – all get persistently and transparently pushed to whichever ‘display’ I’m using: on the sofa I use my X61 tablet, at my desk I was using a dual-screen Mac mini that I detested, and now I’m beginning to set up a ChromeBox Series 5 to do the same thing.

(yes, in developer mode, and with the root file system made read-write)

There are some developer type setup-tweaks I’ve had to make – most notably to edit the /etc/X11/xorg.conf to increase the Virtual desktop size to accommodate the second screen.

Section "Screen"
    Identifier "DefaultScreen"
    Monitor    "DefaultMonitor"
    Device     "DefaultDevice"
    #ADDED by Sven for three headed chromeos
    SubSection "Display"
        Virtual 6000 2000
    EndSubSection
EndSection

and then I have a simple script that uses xrandr, and then ssh, to X-forward my 4 main xpra sessions to it.

chronos@localhost ~ $ more setup.sh
#!/bin/sh
#http://cr-48.wikispaces.com/Disable+Power+Management
sudo initctl stop powerm
xrandr --output HDMI2 --right-of HDMI1 --rotate left
ssh -Y sven@quiet ./attach_dev.sh

where attach_dev.sh looks like:

sven@quiet:~$ more attach_dev.sh 
#!/bin/sh

xpra attach :10 &
xpra attach :11 &
xpra attach :12 &
xpra attach :13 &

xfwm4

Yup, I run a second X11 window manager to allow me to re-position the applications that are X-forwarded.

Using xfwm4 means that I can roll up and down the chromeos browser windows – which are separate from the other X apps – and I can move the mouse to the other screen via a tiny hole in the chromeos window manager: there’s a gap down where the chromeos toolbar is.

This is really after only a few hours of playing, so I’m sure there are many improvements that can be made.

Foswiki 1.1.5 released – rpms, debs and usbstick ready


George has been leading the charge to a major bug-fixing release of foswiki – we’ve resolved over 120 issues and worked hard to improve security, dealing with some interesting cross-site scripting issues found by ‘SonyStyles’, and then pushing on to harden the registration process to deal with spammers.

Foswiki’s password system can now migrate your users’ password store to more modern hashing methods – the default that shipped with TWiki can thus move from crypt to md5-apache.

Four days after the release, the installation and maintenance options for 1.1.5 have improved too:

  1. my yum package repository (extensions too)
  2. my debian package repository (extensions too)
  3. my Foswiki on a USB stick for Windows
  4. Oliver’s VirtualMachine

More Apache conf magic, this time for foswiki


Last month, I needed to diagnose two issues with a foswiki installation.

The first was the constant issue of pinpointing performance problems; the second was session persistence not persisting.

Both of these needed some form of logging to track when and to whom they were happening, so I figured the easiest thing to do was to use Apache to log what I needed.

Performance monitoring

Apache can log ‘The time taken to serve the request, in microseconds.’, and it can log HTTP response header values. So I added a little code to the foswiki installation to output a HiRes timer of how long it took to render the request, and set up my log as:

#add a 'performance' log
LogFormat  "%h %l %{SCRIPT_URI}e%q %u %t %>s %Ts (%DuS) foswiki: %{X-Foswiki-Monitor-Rendertime}o " performance
CustomLog logs/performance_log performance

Using this log, we can compare configuration changes and loads against both Perl execution times and (it seems) some measure of communication time.
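
A single entry in this log looks something like the following (a fabricated, illustrative line – the final field is the render-time header added by the patch described below):

192.0.2.7 - http://wiki.example.com/foswiki/bin/view/Main/WebHome - [12/Mar/2012:10:21:33 +0100] 200 1s (1043210uS) foswiki: 0.923100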

Session Cookie logging

In this foswiki’s case, there was a mix of http/https, IPv4/IPv6, client SSL certificates and hotfixed RewriteRules that I was suspicious of. Given that it worked for my connections more often than not, I wondered if there were conflicts of session cookies between SSL and non-SSL, or something more insidious.

So I started logging session cookies (GUIDs):

#add a 'session cookies and strikeone' log
LogFormat  "%h %{HTTP_HOST}e %>s \"%r\" %{pid}P \"%{SSL_CLIENT_S_DN_CN}e\" %{FOSWIKISID}C %{SFOSWIKISID}C %{FOSWIKISTRIKEONE}C " session
CustomLog logs/session_log session
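
Each entry records the client, vhost, status, request line, Apache child pid, client-certificate CN, and the session and strikeone cookie GUIDs – something like this fabricated line; seeing the same FOSWIKISID arrive on both the http and https vhosts would confirm a cookie conflict:

192.0.2.7 wiki.example.com 200 "GET /foswiki/bin/view/Main/WebHome HTTP/1.1" 2841 "Jane Example" 0123456789abcdef0123456789abcdef - -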

In both cases, these log files let me pinpoint what the problem was not – and then have the inspiration that fixed the worst of it.

X-Foswiki-Monitor-renderTime patch

I’ll either add this to foswiki 1.2.0, or make a plugin for it, but if you want to see how long things take to render, apply this patch:

NOTE: you will need to install the Time::HiRes CPAN library
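
For example, via the cpan client (your distribution’s package, such as perl-Time-HiRes on RPM systems or libtime-hires-perl on Debian, also works):

$ sudo cpan Time::HiRes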

diff --git a/core/lib/Foswiki.pm b/core/lib/Foswiki.pm
index 4771f71..d26bd80 100644
--- a/core/lib/Foswiki.pm
+++ b/core/lib/Foswiki.pm
@@ -838,6 +838,9 @@ BOGUS
         }
     }

+    $this->{response}->pushHeader( 'X-Foswiki-Monitor-renderTime',
+        $this->{request}->getTime() );
+        
     $this->generateHTTPHeaders( $pageType, $contentType, $text, $cachedPage );

     # SMELL: null operation. the http headers are written out
diff --git a/core/lib/Foswiki/Request.pm b/core/lib/Foswiki/Request.pm
index 2ce2e15..a06af69 100644
--- a/core/lib/Foswiki/Request.pm
+++ b/core/lib/Foswiki/Request.pm
@@ -36,6 +36,14 @@ use Assert;
 use Error    ();
 use IO::File ();
 use CGI::Util qw(rearrange);
+use Time::HiRes ();
+
+sub getTime {
+    my $this = shift;
+    my $endTime = [Time::HiRes::gettimeofday];
+    my $timeDiff = Time::HiRes::tv_interval( $this->{start_time}, $endTime );
+    return $timeDiff;
+}

 =begin TML

@@ -69,6 +77,7 @@ sub new {
         remote_user    => undef,
         secure         => 0,
         server_port    => undef,
+        start_time     => [Time::HiRes::gettimeofday],
         uploads        => {},
         uri            => '',
     };


CentOS yum install foswiki and Debian apt-get install foswiki

That’s right: on Red Hat Enterprise and CentOS, it’s now just as easy to install foswiki and its ~300 plugins as it is on Debian.

This means that you can now manage your Enterprise Foswiki using the same package management tools as the rest of the operating system.

For example, I just installed a demo system with:

yum install foswiki-jhotdrawplugin foswiki-ldapcontrib foswiki-newuserplugin foswiki-glueplugin foswiki-ldapngplugin foswiki-calendarplugin foswiki-edittableplugin foswiki-interwikiplugin foswiki-renderlistplugin foswiki-smiliesplugin foswiki-tableplugin foswiki-directedgraphplugin

and when yum finished, I browsed to http://server/foswiki/ and it was up and running.

These packages are built by a script that downloads the latest packages from http://foswiki.org/Extensions, generates an EPM manifest, and then builds rpm packages – every night. I have not yet tested them with Red Hat Enterprise 6 or Fedora.

To try it out, you’ll need to add the EPEL repository, and then this one to your yum config:

su
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
cd /etc/yum.repos.d/
wget http://fosiki.com/Foswiki_rpms/foswiki.repo

and then run

yum makecache

To see what foswiki extensions are available, run:

yum search foswiki

To install foswiki, and some plugins:

yum install foswiki foswiki-workflowplugin foswiki-jscalendarcontrib foswiki-ldapcontrib

then browse to http://servername/foswiki/bin/configure to enable the plugins and configure settings.

The last few months of foswiki

It seems that I’ve been busy with family things, and so have forgotten to blog.

Before we left for Zurich in August, I delivered a foswiki that was an amalgam of TWiki, MediaWiki and SharePoint wiki topics.

SharePoint was the most surprising – technically, it’s got so much potential, but so little support for end users. It has federated search, data types, and views, but pretty much all of it needs to be written by someone as a compiled component and installed on the server.

It seems to me there’s an opportunity for someone to build a compatibility layer allowing users to write applications as they can in TWiki and Foswiki.

After getting settled in, I was persuaded to start work on foswiki store2 for foswiki 2.0 – bringing together all of the learning and performance work from my Database and MongoDB backends. It’s happening in my GitHub repository at the moment, as it’s going to take a month or two before it passes all the tests.

And last week, I was distracted by Ward Cunningham’s Federated Wiki – we’ll see how I get myself back on the foswiki track – all while looking after the two girls (just turned 2.5) while we’re in Zurich.

The foswiki General Assembly and FoswikiCamp will probably be at CERN, on the weekend of November 19 – hope to see everyone there!

fastest foswiki (and TWiki) ever – MongoDB for foswiki milestone 4

I realised today that I’ve not written up a progress post for foswiki on MongoDB for a bit – and so did a few benchmarks again.

The benchmarks given (at http://foswiki.org/Development/MongoDBPlugin ) are for a structured query on a DataForm based web containing 25,000 topics, and are run on a desktop system running a 1.8GHz core2duo with 2G RAM.

When the foswiki on MongoDB project started, this query would take 5.4 seconds to provide the HTML to the client (pure CGI); now it takes 0.7 seconds (with mod_fcgid).

That’s a speed-up of over 7 times.

Many other large web queries, like a WebIndex on a large web, couldn’t even complete before, and now run in a usable fashion.
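
To give a flavour of why: a structured query that used to mean loading and parsing every topic file in Perl can instead be answered by the database itself. Hypothetically – the database, collection and field names here are illustrative guesses, not the plugin’s actual schema:

# count the DataForm topics whose LastName field is 'Smith' (schema is a guess)
$ mongo largeweb --eval 'printjson(db.current.find({"FIELD.LastName.value": "Smith"}).count())'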

This milestone, we’re separating each web out into its own database, and I’ll be adding the topic revision information to the database too – that way it won’t matter if you have 10,000 webs or 1,000,000: the speed should be essentially constant (so long as you have the server resources to match your loads).

If NoSQL isn’t suitable and you would like to see a similar back-end developed using an SQL engine, contact me – WikiRing and fosiki are looking for interested companies with foswiki (and TWiki) scaling issues. Without real-life testing, examples and stakeholders, it’s extremely difficult to find the many corner cases that our complex engine can allow.