Another talk I gave at Linux.conf.au was about making slim containers (youtube) – ones that contain only the barest essentials needed to run an application.
And I thought I’d do it from source, as most “Built from source” images also contain the tools used to build the software.
1. Make the Docker base image you’re going to use to build the software
In January 2015, the main base images and their sizes looked like:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
scratch latest 511136ea3c5a 19 months ago 0 B
busybox latest 4986bf8c1536 10 days ago 2.433 MB
debian 7.7 479215127fa7 10 days ago 85.1 MB
ubuntu 15.04 b12dbb6f7084 10 days ago 117.2 MB
centos centos7 acc1b23376ec 10 days ago 224 MB
fedora 21 834629358fe2 10 days ago 250.2 MB
crux 3.1 7a73a3cc03b3 10 days ago 313.5 MB
I’ll pick Debian, as I know it, and it has the fewest restrictions on what contents you’re permitted to redistribute (and because bootstrapping busybox would be an amazing talk on its own).
Because I’m experimenting, I’m starting by seeing how small I can make a new Debian base image – starting with:
FROM debian:7.7
RUN rm -r /usr/share/doc /usr/share/doc-base \
    /usr/share/man /usr/share/locale /usr/share/zoneinfo
CMD ["/bin/sh"]
Then make a new single-layer (squashed) image by running docker export and docker import.
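A minimal sketch of that round trip – the image and container names here are placeholders I’ve picked for illustration, not the exact ones from the talk:

docker build -t local/debian-slim .
docker run --name slim-tmp local/debian-slim true
docker export slim-tmp | docker import - our/debian:jessie
docker rm slim-tmp

Exporting the stopped container and importing the resulting tarball collapses everything into a single layer: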
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
debian 7.7 479215127fa7 10 days ago 85.1 MB
our/debian:jessie latest cba1d00c3dc0 1 seconds ago 46.6 MB
Ok, not quite half, but you get the idea.
It’s well worth continuing this exercise, using things like dpkg --get-selections to remove anything else you won’t need.
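For example, something along these lines – which packages are actually safe to purge depends on what your image still needs, so treat the last line as a placeholder rather than a recipe:

dpkg --get-selections | grep -v deinstall
dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -rn | head
apt-get purge -y --auto-remove <packages you decide you don't need>

The first two commands show what’s installed and which packages are largest; the purge then removes your chosen packages along with their now-unneeded dependencies.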
Importantly, once you’ve made your smaller base image, you should use it consistently for ALL the containers you use. This means that whenever there are important security fixes, that base image will be downloadable as quickly as possible – and all your related images can be restarted quickly.
This also means that you do NOT want to squish your images down to one or two layers, but rather into a logical set of layers that matches your deployment update risks – a common root base, then layers for shared infrastructure, and lastly the application and customisation layers.
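As a sketch of that layering (the image names and packages here are invented for illustration):

# Dockerfile for a shared infrastructure layer, rebuilt when its packages change
FROM our/debian:jessie
RUN apt-get update && apt-get install -yq libpcre3 libssl1.0.0 \
    && rm -rf /var/lib/apt/lists/*

# Dockerfile for an application layer, rebuilt on every deploy
FROM our/infra
COPY myapp /opt/myapp
CMD ["/opt/myapp/bin/myapp"]

When the common base gets a security fix, each host only has to pull that shared layer once, and all the images built on top of it can be restarted quickly.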
2. Build static binaries – or not
Building a static binary of your application (in typical Go style) makes some things simpler – but in the end, I’m not really convinced it makes a useful difference. But in my talk, I did it anyway.
Make a Dockerfile that installs all the tools needed, builds nginx, and then outputs a tar file that is a new build context for another Docker image (and contains the libraries ldd tells us we need).
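Something along these lines would do it – the nginx version, configure flags, and the ldd trick here are my reconstruction of the idea, not the exact Dockerfile.build-static-nginx from the talk:

FROM debian:7.7
RUN apt-get update && apt-get install -yq build-essential curl \
    libpcre3-dev zlib1g-dev libssl-dev
RUN curl -sSL http://nginx.org/download/nginx-1.7.9.tar.gz | tar xz -C /usr/src
RUN cd /usr/src/nginx-1.7.9 \
    && ./configure --prefix=/opt/nginx --with-http_ssl_module \
    && make && make install
# bundle the install prefix, the dynamic loader, and the shared libraries ldd reports
RUN tar cf /opt/nginx.tar /opt/nginx \
    /lib64/ld-linux-x86-64.so.2 /lib/x86_64-linux-gnu/ld-2.13.so \
    $(ldd /opt/nginx/sbin/nginx | awk '/=> \// {print $3}')

Then drive it like this: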
cat Dockerfile.build-static-nginx | docker build -t build-nginx.static -
docker run --rm build-nginx.static cat /opt/nginx.tar > nginx.tar
cat nginx.tar | docker import - micronginx
docker run --rm -it -p 80:80 micronginx /opt/nginx/sbin/nginx -g "daemon off;"
nginx: [emerg] getpwnam("nobody") failed (2: No such file or directory)
oh. I need more than just libraries?
3. Use inotify to find out what files nginx actually needs!
Use the same image, but start it with bash – use that to install and run inotify, and then use docker exec (shown after the trace below) to start nginx:
docker run --rm -it build-nginx.static bash
# apt-get install -yq inotify-tools iwatch
# inotifywait -rm /etc /lib /usr/lib /var
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnss_files-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnss_nis-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE ld-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libc-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnsl-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnss_compat-2.13.so
/etc/ OPEN passwd
/etc/ OPEN group
/etc/ ACCESS passwd
/etc/ ACCESS group
/etc/ CLOSE_NOWRITE,CLOSE group
/etc/ CLOSE_NOWRITE,CLOSE passwd
/etc/ OPEN localtime
/etc/ ACCESS localtime
/etc/ CLOSE_NOWRITE,CLOSE localtime
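The file accesses above appear once nginx is started from a second terminal with docker exec – something like this, where the container ID is whatever docker ps reports:

docker ps
docker exec <container-id> /opt/nginx/sbin/nginx -g "daemon off;"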
Perhaps it shouldn’t be too surprising that nginx expects to rifle through your user password files when it starts 🙁 – the default build runs its worker processes as the user nobody, so that getpwnam("nobody") call goes through NSS and ends up loading the libnss libraries and reading /etc/passwd and /etc/group.
4. Generate a new minimal Dockerfile and tar-file build context, and pass that to a new `docker build`
The trick is that the build container Dockerfile can generate the minimal Dockerfile and tar context, which can then be used to build a new minimal Docker image.
The excerpt from the Dockerfile that does it looks like:
# Add a Dockerfile to the tar file
RUN echo "FROM busybox" > /Dockerfile \
    && echo "ADD * /" >> /Dockerfile \
    && echo "EXPOSE 80 443" >> /Dockerfile \
    && echo 'CMD ["/opt/nginx/sbin/nginx", "-g", "daemon off;"]' >> /Dockerfile
RUN tar cf /opt/nginx.tar \
    /Dockerfile \
    /opt/nginx \
    /etc/passwd /etc/group /etc/localtime /etc/nsswitch.conf /etc/ld.so.cache \
    /lib/x86_64-linux-gnu
This tar file can then be passed on to a new docker build:
cat nginx.tar | docker build -t nginxbusybox -
Result
Comparing the sizes: our build container is about 1.4 GB, the official nginx image about 100 MB, and our minimal nginx container 21 MB to 24 MB – depending on whether we add busybox to it or not:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
micronginx latest 52ec332b65fc 53 seconds ago 21.13 MB
nginxbusybox latest 80a526b043fd About a minute ago 23.56 MB
build-nginx.static latest 4ecdd6aabaee About a minute ago 1.392 GB
nginx latest 1822529acbbf 8 days ago 91.75 MB
It’s interesting to remember how heavily we rely on “I know this, it’s a UNIX system” – application services can have all sorts of hidden assumptions that won’t be revealed until you put them into more constrained environments.
In the same way that we don’t ship the VM / filesystem of our build server, you should not be shipping the container you’re building from source.
This analysis doesn’t try to restrict nginx to only opening certain network ports, devices, or IPC mechanisms – so there’s more to be done…
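A first step in that direction would be to run the minimal image with most Linux capabilities dropped – the flags below are only an illustration of the idea, not something from the talk, and nginx may well turn out to need more capabilities than these:

docker run --rm -p 80:80 \
    --cap-drop ALL --cap-add NET_BIND_SERVICE --cap-add SETUID --cap-add SETGID \
    micronginx /opt/nginx/sbin/nginx -g "daemon off;"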