Personal computing is stuck in the ’90s

I’m amazed at how much work we’ve done to improve the building, updating, maintaining, and scaling of massive service-oriented computing.

But obviously, I’m still frustrated.

I have around 5 different display+keyboard interfaces that I use daily, but it still seems to matter which one I’m using – which means at least once a day, I have to think about which system I was working on when I tried something out (or started a change request on).

I don’t want to go back to client-server, where the data, code, execution etc all run on “the big server” (though I have one) – I’m not always connected to a fat enough pipe, and interactive video things then become weird – I’ve tried running a Zoom session (or video) on a remote system – it’s ok, until something happens.

I’m also not satisfied with just relying on each app’s own sync.

Really, why can’t I just have my data and programs be wherever makes sense, and not think about it at all? (Pretty much the opposite requirement from Kubernetes – which is fundamentally about making sure that data and programs run within spec, inside a specific cluster boundary.)

So what does this look like?

First up, I need a secure identity for each computer, each data-set, each program, for me, and for anyone else I want to share those things with.

Then, I need a secure network system, in which I can use those identities to ensure data-sets can only be accessed by the computers, people, and programs I consider safe for them.

Interactive programs need to be mobile – if it’s an interactive workload, then chances are, I want the UI to run on whatever screen I just moved to – and the data should come too.

Batch programs need to be able to run – preferably without impacting my interactive computer – but without me needing to notice that they’ve been offloaded elsewhere.

Nathan and I were working on one end of this problem – leveraging the Solana blockchain for the identity component, and as a persistent and auditable store of what nodes were connected together (and then what application ports might be shared).

BUT – we need more – very specifically, a network filesystem that presents a virtual filesystem which auto-caches the data needed by any node.

And, we need a way to differentiate interactive workloads (browser, vscode, video and audio editing) from true batch jobs (code compilation, data analysis, automation), and for our shell (GUI or TUI or workflow) to automatically choose the right scheduler to decide where and how to run it.

>>> OK, the above paragraph makes me think that webUI was right – and what might not be working for me is how all my browser use is represented as a single program with multiple windows and huge numbers of tabs. <<<< MAYBE, everything is a batch job – some just desire one of their data sources (keyboard/mouse/webcam/microphone/speaker) to have a specific locality to the user.

Bryan and I were talking, and riffing on a similar idea wrt the install/boot cycle for computers – if we have a global network filesystem, we should be able to turn on a computer, have it choose from a list of OS/UI types, stream that OS from the network, and then, as it works out where it is and who is using it, pull down the interactive (and batch) data and programs it needs. The disk is then only used to speed up future uses of those things – output data is written primarily to the network filesystem.

I _THINK_ this means that an internet distributed job submission system like Bacalhau really needs an interactive shell that runs on a virtual overlay filesystem that _looks_ familiar. Good luck with the hard bits – is docker run -p 80:80 nginx a batch job with a file based output? And how is it differentiable from docker run run-weather-prediction ? Do we need a crowdsourced library of npm run build vs npm run dev – and oh, those aren’t standard commands, they’re codebase-specific aliases set in each package.json :(.

Funniest thing is – the user-facing endpoint is still the same as the idea we were tossing around back in the 1990’s (Hi! to the peeps that came to our place to talk about a ‘portable disk+cpu’ that could be plugged into a luggable, a desktop, or a network 🙂 ) – and we still don’t have that – every computer (phone, IoT device) is an island in a sea of variable quality networking.

I guess I’m going to continue working on components of the dream 🙂

2021: the year to move from Docker to Kubernetes

Yup, I’m one of those people that delayed moving my developers, testing, CI/CD and production off Docker for as long as I could. And now we’ve gotten all the value we could from Docker, Docker Swarm and Docker Compose, and it’s time to move on. I’m pretty sure the journey will take most of the year, but Kubernetes has come a long way from the time we first heard of it, sitting in the Docker offices in San Francisco. After working with Darren and the Rancher team for a year, I’m obviously going to start with k3s, add TimescaleDB, and then layer our apps over the top.
sven@x1carbon:~/src$ curl -sfL https://get.k3s.io | sh -
[INFO] Finding release for channel stable
[INFO] Using v1.19.5+k3s2 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
sven@x1carbon:~/src$ sudo k3s kubectl get node
NAME STATUS ROLES AGE VERSION
x1carbon Ready master 11m v1.19.5+k3s2
yup, talk about “it just works” – next time, I need to remember the --write-kubeconfig-mode "0644" option 🙂
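Something like this should do it next time (an untested sketch – the k3s installer passes everything after sh -s - through to the server):

curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode "0644"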
sven@x1carbon:~$ curl --proto '=https' --tlsv1.2 -sSLf https://tsdb.co/install-tobs-sh |sh

Downloading tobs_0.1.3_Linux_x86_64...
Download complete!

Validating checksum...
Checksum valid.

tobs 0.1.3 was successfully installed


Add the tobs CLI to your system binaries with:

sudo cp /home/sven/.tobs/bin/tobs /usr/local/bin

Alternatively, add tobs to your path in the current session with: export PATH=$PATH:/home/sven/.tobs/bin

After starting your Kubernetes cluster, run

tobs install

sven@x1carbon:~$ sudo cp /home/sven/.tobs/bin/tobs /usr/local/bin
sven@x1carbon:~$ tobs install
Adding Timescale Helm Repository
Error: could not install The Observability Stack: exec: "helm": executable file not found in $PATH
Ah, yes, k3s does lots of things, but it doesn’t give you the helm CLI… So after finding a GH issue on k3s about the lack of helm CLI support, potter off to the latest release tag of helm – horrible UX: download using Firefox, then…
sven@x1carbon:~/Downloads$ tar xvf helm-v3.4.2-linux-amd64.tar.gz 
linux-amd64/
linux-amd64/helm
linux-amd64/README.md
linux-amd64/LICENSE
sven@x1carbon:~/Downloads$ cp linux-amd64/helm /usr/local/bin/
sven@x1carbon:~/Downloads$ chmod 755 /usr/local/bin/h
chmod: cannot access '/usr/local/bin/h': No such file or directory
sven@x1carbon:~/Downloads$ chmod 755 /usr/local/bin/helm
sven@x1carbon:~/Downloads$ helm
The Kubernetes package manager

------------8<------------

sven@x1carbon:~/Downloads$ tobs install
Adding Timescale Helm Repository
"timescale" has been added to your repositories
Fetching updates from repository
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "timescale" chart repository
Update Complete. ⎈Happy Helming!⎈
Installing The Observability Stack
Error: could not install The Observability Stack: exit status 1
Output: Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
oh, yeah, k3s – so I need:
sven@x1carbon:~/Downloads$ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml 

sven@x1carbon:~/Downloads$ tobs install
Adding Timescale Helm Repository
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/rancher/k3s/k3s.yaml
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/rancher/k3s/k3s.yaml
"timescale" already exists with the same configuration, skipping
Fetching updates from repository
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/rancher/k3s/k3s.yaml
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/rancher/k3s/k3s.yaml
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "timescale" chart repository
Update Complete. ⎈Happy Helming!⎈
Installing The Observability Stack
Waiting for pods to initialize...
2020/12/28 11:25:35 stat /home/sven/.kube/config: no such file or directory
Looks like tobs makes a bad assumption too –
sven@x1carbon:~/Downloads$ tobs grafana change-password 'something'
2020/12/28 11:26:43 stat /home/sven/.kube/config: no such file or directory
yup. tobs is running tho – so it’s ‘just’ because I’ve not created the ~/.kube/config file 🙂
sven@x1carbon:~/Downloads$ helm list
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/rancher/k3s/k3s.yaml
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/rancher/k3s/k3s.yaml
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
tobs default 1 2020-12-28 11:25:24.993759053 +1000 AEST deployed tobs-0.1.3 0.1.3
Creating an empty ~/.kube/config isn’t quite enough – but there’s a PR for this issue. Build it, and use it – it works…
sven@x1carbon:~/Downloads$ tobs grafana change-password 'something'
Updating secret...
Changing password...
t=2020-12-28T01:45:10+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=postgres
t=2020-12-28T01:45:10+0000 lvl=info msg="Starting DB migrations" logger=migrator

Admin password changed successfully ✔

sven@x1carbon:~/Downloads$ tobs grafana port-forward
Listening to pod tobs-grafana-786cf49767-8f4r4 from port 8080
Forwarding from 127.0.0.1:8080 -> 3000
Forwarding from [::1]:8080 -> 3000
at which point, I stopped writing, made a “Ruuvi-tag to Prometheus metrics” program, and have been watching the temperatures of our new fridge, outside and inside. It’s clearly time to redo all this using Terraform.
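(For the record, the less hacky fix for tools that hard-code ~/.kube/config – a sketch, assuming a single-node k3s:)

mkdir -p ~/.kube
sudo k3s kubectl config view --raw > ~/.kube/config
chmod 600 ~/.kube/config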

elktail: commandline tailing of Elasticsearch with Docker

I’ve been working on a system that uses Elasticsearch on Docker Swarm, and today, I really wanted to grep some log files.
The closest thing I found was elktail – see http://knes1.github.io/blog/2016/2016-03-06-elktail-command-line-tool-for-tailing-and-querying-ELK-logs.html

Of course, I needed it in a container, so I could attach it to the Swarm stack’s network…

So I forked, merged in all the other forks I found quickly, and then set up svendowideit/elktail as an autobuild image on Docker Hub.

So now, I have a Bash alias:

alias logs='docker run --rm -it --net elasticsearch_esnetwork svendowideit/elktail --url http://elasticsearch:9200 -f "%log" -i "*"'

and can quickly see what’s up with the system by running:

logs | grep sub-system

Working locally with Docker while on the road

I just got back from a conference trip to Europe, DockerCon in Copenhagen, All-Systems-Go in Berlin, and then Open Source Summit in Prague.

While I was there, I needed to continue development of some major refactoring of RancherOS for my LinuxKit talk – which meant I needed to be able to “docker pull” from both builds and temporary testing VMs, over horrible hotel wifi and overloaded conference wifi – not great.

Because the bandwidth from Australia isn’t wonderful, I had already modified the builds and test tooling to use a local Docker registry mirror – so all I really needed to do was set up the same caching infrastructure I have on my home office network, but on a roaming notebook… and yup, it’s pretty straightforward:

sven@y260:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc4f1ff259f7 b8efb18f159b "nginx -g 'daemon ..." 2 weeks ago Up 13 days 0.0.0.0:80->80/tcp nginx
458c808f9218 3ebefe7c539b "/entrypoint.sh /e..." 2 weeks ago Up 13 days 0.0.0.0:5555->5000/tcp mirror
e09411b42128 3ebefe7c539b "/entrypoint.sh /e..." 2 weeks ago Up 13 days 0.0.0.0:5000->5000/tcp registry
5d572c8e8658 46fc18186a54 "/bin/sh -c 'chmod..." 2 weeks ago Up 9 days 0.0.0.0:3142->3142/tcp apt-cacher

These containers were started using docker run:


#apt-cacher-ng
docker run -d --network host --restart=always --name apt-cacher -v /var/cache/apt-cacher-ng:/var/cache/apt-cacher-ng svendowideit/apt-cacher-ng
#a docker registry
docker run -d --network host --name registry --restart always registry
#a hub registry mirror
docker run -d --name mirror --network host -v /var/lib/registry-mirror:/registry -e STORAGE_PATH=/registry -e STANDALONE=false -e MIRROR_SOURCE=https://registry-1.docker.io -e MIRROR_SOURCE_INDEX=https://index.docker.io registry
docker exec mirror sh -c "echo 'proxy:' >> /etc/docker/registry/config.yml"
docker exec mirror sh -c "echo ' remoteurl: https://registry-1.docker.io' >> /etc/docker/registry/config.yml"
docker exec mirror sh -c "sed -i~ 's/addr: :5000/addr: :5555/g' /etc/docker/registry/config.yml"
docker restart mirror

The docker daemon then uses the local registry and the local registry mirror on “localhost” ports 5000 & 5555 – and so long as your VMs have a valid IP route to your host, they can use them too.
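(To be precise, the daemon only uses the mirror once you tell it to – either with --registry-mirror on the daemon command line, or in /etc/docker/daemon.json, something like:)

{
  "registry-mirrors": ["http://localhost:5555"]
}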

I have the following in my .bashrc, and these environment vars are used by the build and test tooling to pass on to the VMs (ok, so this works because I have VMWare Workstation on my Linux box):


export APTPROXY=http://$(ip a | grep "inet " | grep global | grep vmnet1 | sed 's/^ *//' | cut -d " " -f 2 | sed 's/\/.*//'):3142
export ENGINE_REGISTRY_MIRROR=http://$(ip a | grep "inet " | grep global | grep vmnet1 | sed 's/^ *//' | cut -d " " -f 2 | sed 's/\/.*//'):5555
export RANCHER_REPO=http://$(ip a | grep "inet " | grep global | grep vmnet1 | sed 's/^ *//' | cut -d " " -f 2 | sed 's/\/.*//')/
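(A tidier way to grab that address – same idea, assuming the VMWare NAT interface really is called vmnet1:)

export VMNET1_IP=$(ip -4 addr show vmnet1 | awk '/inet /{sub("/.*","",$2); print $2}')
export APTPROXY=http://$VMNET1_IP:3142
export ENGINE_REGISTRY_MIRROR=http://$VMNET1_IP:5555
export RANCHER_REPO=http://$VMNET1_IP/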

Today’s container: JSON resume cli

Today’s task ended up being to update my resume. Well, it turns out it was time to create a new one – so I turned to JSON Resume CLI – which converts the machine-readable info I maintain into something that doesn’t look like the Word document I’ve been carrying around since I graduated in 1995.

Happily, node.js works well using alpine, so my Dockerfile (see the GH repo for more) looks like:


FROM alpine

EXPOSE 4000
WORKDIR /data
ENTRYPOINT ["resume"]

# install node.js (which brings npm), the resume-cli, and patch it to listen on all interfaces, not just localhost
RUN apk add --no-cache nodejs \
&& npm install -g resume-cli \
&& sed -i~ "s/localhost/0.0.0.0/g" /usr/lib/node_modules/resume-cli/index.js /usr/lib/node_modules/resume-cli/lib/serve.js

Pretty simple – but it means that I don’t have to install node.js on whatever computers I’m using – today I was using 3 different computers.

The most important thing to note is the ENTRYPOINT ["resume"] – it means that you can alias a “docker run” command that will work as though the program was installed on your host:


$ alias resume='docker run --rm -it -v $(pwd):/data/ -p 4000:4000 svendowideit/jsonresume'
$ resume --help
Checking NPM for latest version...
Your resume-cli software is up-to-date.

Usage: resume [command] [options]

Commands:

init Initialize a resume.json file
register Register an account at https://registry.jsonresume.org
login Stores a user session.
settings Change theme, change password, delete account.
test Schema validation test your resume.json
export [fileName] Export locally to .html or .pdf. Supply a --format flag and argument to specify export format.
publish Publish your resume to https://registry.jsonresume.org
serve Serve resume at http://0.0.0.0:4000/

Options:

-h, --help output usage information
-V, --version output the version number
-t, --theme Specify theme for export or publish (modern, traditional, crisp)
-F, --force Used by publish - bypasses schema testing.
-f, --format Used by export.
-r, --resume Used by serve (default: resume.json)
-p, --port Used by serve (default: 4000)
-s, --silent Used by serve to tell it if open browser auto or not.
-d, --dir Used by serve to indicate a public directory path.

I guess I can pop back up my todo stack to the Android Studio container I started yesterday 🙂

Looking for a new challenge

After two and a half years, my contract with Docker Inc has finished.

It’s been a blast – I’ve never worked in a startup before, and I was hired early enough to have a very broad scope of work – including supporting users, working as a maintainer and OSS contributor, and leading the development of Boot2Docker (now replaced by Docker for Mac and Windows) – a micro Linux distribution with OSX and Windows installers that allows users on those platforms to use Docker.

It’s been amazing seeing the inside of an incredibly dynamic, game-changing project that has succeeded in growing for the three years it’s been around – TWiki was successful around 2000, but never managed to convert in the way that Docker has.

I’ve spent the last week cleaning up my email and git repositories, and getting started on the non-computer projects that have been languishing for the last 3 years – and started playing with the ESP8266 I have – like I said to Nathan the other day, having a relaxing time writing C++ 🙂

Until I find the next big project, startup, or workplace:

I’m available for short term Docker consultations and training courses. I’m in Brisbane – but I’m happy to talk to you about flying out to where you are.

Using Docker to quickly and safely reproduce issues

I had a problem following an installation the other day, and eventually we tracked it down.

This week, I was curious to see if things were fixed, but I had already installed the tool on my computer.

So I ran up a Docker container:

$ docker run --rm -it --name test debian bash
root@14e9e953d708:/# apt-get update && apt-get install -yq curl sudo vim-tiny
Get:1 http://security.debian.org jessie/updates InRelease [63.1 kB]
Get:2 http://security.debian.org jessie/updates/main amd64 Packages [182 kB]
Ign http://httpredir.debian.org jessie InRelease
Get:3 http://httpredir.debian.org jessie-updates InRelease [135 kB]
Get:4 http://httpredir.debian.org jessie Release.gpg [2373 B]
Get:5 http://httpredir.debian.org jessie Release [148 kB]
Get:6 http://httpredir.debian.org jessie-updates/main amd64 Packages [3653 B]
Get:7 http://httpredir.debian.org jessie/main amd64 Packages [9035 kB]
Fetched 9569 kB in 16s (574 kB/s)
Reading package lists... Done
Reading package lists...
Building dependency tree...
The following extra packages will be installed:
ca-certificates krb5-locales libcurl3 libffi6 libgmp10 libgnutls-deb0-28 libgssapi-krb5-2 libhogweed2
libidn11 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libldap-2.4-2 libnettle4 libp11-kit0
librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.0.0 libtasn1-6 openssl
vim-common
Suggested packages:
gnutls-bin krb5-doc krb5-user libsasl2-modules-otp libsasl2-modules-ldap libsasl2-modules-sql
libsasl2-modules-gssapi-mit libsasl2-modules-gssapi-heimdal indent
The following NEW packages will be installed:
ca-certificates curl krb5-locales libcurl3 libffi6 libgmp10 libgnutls-deb0-28 libgssapi-krb5-2
libhogweed2 libidn11 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libldap-2.4-2 libnettle4
libp11-kit0 librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.0.0 libtasn1-6
openssl sudo vim-common vim-tiny
0 upgraded, 28 newly installed, 0 to remove and 1 not upgraded.
Need to get 9322 kB of archives.
After this operation, 19.7 MB of additional disk space will be used.
Get:1 http://security.debian.org/ jessie/updates/main libsasl2-modules-db amd64 2.1.26.dfsg1-13+deb8u1 [67.1 kB]
Get:2 http://security.debian.org/ jessie/updates/main libsasl2-2 amd64 2.1.26.dfsg1-13+deb8u1 [105 kB]
Get:3 http://security.debian.org/ jessie/updates/main libldap-2.4-2 amd64 2.4.40+dfsg-1+deb8u1 [218 kB]
...
root@14e9e953d708:/# adduser --ingroup sudo sven
Adding user `sven' ...
Adding new user `sven' (1000) with group `sudo' ...
Creating home directory `/home/sven' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for sven
Enter the new value, or press ENTER for the default
Full Name []: sven
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n]
root@14e9e953d708:/#

and then in another terminal:


$ docker exec -it -u sven test bash
sven@14e9e953d708:/$ sudo env

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

[sudo] password for sven:
HOSTNAME=14e9e953d708
TERM=xterm
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:.tar=01;31:.tgz=01;31:.arc=01;31:.arj=01;31:.taz=01;31:.lha=01;31:.lz4=01;31:.lzh=01;31:.lzma=01;31:.tlz=01;31:.txz=01;31:.tzo=01;31:.t7z=01;31:.zip=01;31:.z=01;31:.Z=01;31:.dz=01;31:.gz=01;31:.lrz=01;31:.lz=01;31:.lzo=01;31:.xz=01;31:.bz2=01;31:.bz=01;31:.tbz=01;31:.tbz2=01;31:.tz=01;31:.deb=01;31:.rpm=01;31:.jar=01;31:.war=01;31:.ear=01;31:.sar=01;31:.rar=01;31:.alz=01;31:.ace=01;31:.zoo=01;31:.cpio=01;31:.7z=01;31:.rz=01;31:.cab=01;31:.jpg=01;35:.jpeg=01;35:.gif=01;35:.bmp=01;35:.pbm=01;35:.pgm=01;35:.ppm=01;35:.tga=01;35:.xbm=01;35:.xpm=01;35:.tif=01;35:.tiff=01;35:.png=01;35:.svg=01;35:.svgz=01;35:.mng=01;35:.pcx=01;35:.mov=01;35:.mpg=01;35:.mpeg=01;35:.m2v=01;35:.mkv=01;35:.webm=01;35:.ogm=01;35:.mp4=01;35:.m4v=01;35:.mp4v=01;35:.vob=01;35:.qt=01;35:.nuv=01;35:.wmv=01;35:.asf=01;35:.rm=01;35:.rmvb=01;35:.flc=01;35:.avi=01;35:.fli=01;35:.flv=01;35:.gl=01;35:.dl=01;35:.xcf=01;35:.xwd=01;35:.yuv=01;35:.cgm=01;35:.emf=01;35:.axv=01;35:.anx=01;35:.ogv=01;35:.ogx=01;35:.aac=00;36:.au=00;36:.flac=00;36:.m4a=00;36:.mid=00;36:.midi=00;36:.mka=00;36:.mp3=00;36:.mpc=00;36:.ogg=00;36:.ra=00;36:.wav=00;36:.axa=00;36:.oga=00;36:.spx=00;36:.xspf=00;36:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHELL=/bin/bash
MAIL=/var/mail/root
LOGNAME=root
USER=root
USERNAME=root
HOME=/root
SUDO_COMMAND=/usr/bin/env
SUDO_USER=sven
SUDO_UID=1000
SUDO_GID=27
sven@14e9e953d708:/$

(ok, so the thing I was testing was something else)

The point is, without using a lot of time, disk space, or effort, I created a Debian environment, set it up the way I needed, and then could run my test as the user I needed.

If this was more than a once-off, I’d do the setup in a Dockerfile, and make the command I’m testing that Dockerfile’s ENTRYPOINT – making it possible to run a suite of tests using docker build -t test . && docker run --rm test
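Something like this – the-command-under-test is obviously a placeholder for whatever I’m poking at:

FROM debian:jessie
RUN apt-get update && apt-get install -yq curl sudo vim-tiny
# recreate the test user non-interactively
RUN adduser --disabled-password --gecos "sven" --ingroup sudo sven
USER sven
ENTRYPOINT ["the-command-under-test"]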

Docker on Windows Server Preview TP3 with wifi

Doesn’t work. Especially if, like me, you have a docking-station USB 3 ethernet, an on-board ethernet, use wifi on many different access points, and use your mobile phone for network connectivity.

The Docker daemon is started by running net start docker, which runs C:\ProgramData\docker\runDockerDaemon.cmd.

In that script, you’ll see a “virtual switch” (docker daemon -D -b "Virtual Switch") is used for networking – and that (at least in my case) appears to be bound to the ethernet I had when I installed.

Same pain point as trying to use Hyper-V VMs for roaming development.

Uninstalling Hyper-V leaves us in an interesting place:

Sending build context to Docker daemon 2.048 kB
Step 0 : FROM windowsservercore
 ---> 0d53944cb84d
Step 1 : RUN @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))"
 ---> Running in ad8fb58ba732
HCSShim::CreateComputeSystem - Win32 API call returned error r1=3224830464 err=A virtual switch with the given name was not found. id=ad8fb58ba732880aaace7b4e3288212aa9493083848cf0324de310520b523d21 configuration={"SystemType":"Container","Name":"ad8fb58ba732880aaace7b4e3288212aa9493083848cf0324de310520b523d21","Owner":"docker","IsDummy":false,"VolumePath":"\\\\?\\Volume{63828c05-49f4-11e5-89c2-005056c00008}","Devices":[{"DeviceType":"Network","Connection":{"NetworkName":"Virtual Switch","EnableNat":false,"Nat":{"Name":"ContainerNAT","PortBindings":null}},"Settings":null}],"IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\ad8fb58ba732880aaace7b4e3288212aa9493083848cf0324de310520b523d21","Layers":[{"ID":"f0d4aaa3-c43d-59c1-8ad0-44e6b3381efc","Path":"C:\\ProgramData\\Microsoft\\Windows\\Images\\CN=Microsoft_WindowsServerCore_10.0.10514.0"}]}

Looks like the virtual switch made for containers was removed at some point (might have been when I installed Hyper-V, I’m not sure).

Running Get-VMSwitch returns nothing.

So I installed VMWare Workstation and made a Boot2Docker VM with both NAT and private networking – both vmware based virtual networks continue to work when moving between wifi and ethernet.

So let’s see if we can make one in powershell, using the VMWare NAT adaptor (see http://blogs.technet.com/b/heyscriptingguy/archive/2013/10/09/use-powershell-to-create-virtual-switches.aspx)

PS C:\Users\sven\src\WindowsDocker> Get-NetAdapter

Name InterfaceDescription ifIndex Status MacAddress LinkSpeed
---- -------------------- ------- ------ ---------- ---------
VMware Network Adapte…8 VMware Virtual Ethernet Adapter for … 28 Up 00-50-56-C0-00-08 100 Mbps
VMware Network Adapte…1 VMware Virtual Ethernet Adapter for … 27 Up 00-50-56-C0-00-01 100 Mbps
Wi-Fi Intel(R) Dual Band Wireless-AC 7260 4 Disabled 5C-51-4F-BA-12-6F 0 bps
Ethernet Intel(R) Ethernet Connection I218-LM 3 Up 28-D2-44-4D-B6-64 1 Gbps

VMWare helpfully provides a Virtual Network editor, so I can see that Get-NetAdapter -Name "VMware Network Adapter VMnet8" is the NAT one. I’m not sure if creating a Hyper-V External vswitch will make exclusive use of the adaptor, but if so, we can always create another 🙂

PS C:\Users\sven\src\WindowsDocker> New-VMSwitch  -Name "VMwareNat" -NetAdapterName "VMware Network Adapter VMnet8" -AllowManagementOS $true -Notes "Use VMnet8 to create a roamable Docker daemon network"

Name      SwitchType NetAdapterInterfaceDescription
----      ---------- ------------------------------
VMwareNat External   VMware Virtual Ethernet Adapter for VMnet8

Now to edit runDockerDaemon.cmd, and restart the Docker daemon.
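That is, change the -b argument in the script to the new switch name, then bounce the service:

docker daemon -D -b "VMwareNat"

net stop docker
net start docker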

FAIL. The docker containers still have no network. At this point, I’m not sure if I’ve totally broken my Windows Docker networking – hopefully some more playing later will turn up something.

Playing some more, there seems to be a new switch type, NAT – see https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/windows-server-container-tools/Install-ContainerHost/Install-ContainerHost.ps1

So re-running the command they use when installing gets us something new to try:

PS C:\Users\sven\src\WindowsDocker> new-vmswitch -Name nat -SwitchType NAT -NatSubnetAddress "172.16.0.0/12"

Name SwitchType NetAdapterInterfaceDescription
---- ---------- ------------------------------
nat  NAT


PS C:\Users\sven\src\WindowsDocker> Get-VMSwitch

Name      SwitchType NetAdapterInterfaceDescription
----      ---------- ------------------------------
VMwareNat External   VMware Virtual Ethernet Adapter for VMnet8
nat       NAT

It works when the ethernet is plugged in, but not on wifi.

yup – bleeding edge dev 🙂

Docker on Windows Server 2016 tech preview 3

‘docker run --rm -it vim’ almost works running in a native Windows Container

First thing is to install Windows Server 2016 – I started in a VM, but I’m rapidly thinking I might try it on my notebook – Windows 10 is getting old already 🙂

Then go to https://msdn.microsoft.com/virtualization/windowscontainers/quick_start/inplace_setup. Note that the powershell script will download another 3GB.


And now – you can run docker info from either cmd.exe, or powershell.

There’s only a limited set of images you can download from Microsoft – docker search seems to always reply with the same set:

PS C:\Users\Administrator> docker search anything
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
microsoft/iis Internet Information Services (IIS) instal... 1 [OK] [OK]
microsoft/dnx-clr .NET Execution Environment (DNX) installed... 1 [OK] [OK]
microsoft/ruby Ruby installed in a Windows Server Contain... 1 [OK]
microsoft/rubyonrails Ruby on Rails installed in a Windows Serve... 1 [OK]
microsoft/python Python installed in a Windows Server Conta... 1 [OK]
microsoft/go Go Programming Language installed in a Win... 1 [OK]
microsoft/mongodb MongoDB installed in a Windows Server Cont... 1 [OK]
microsoft/redis Redis installed in a Windows Server Contai... 1 [OK]
microsoft/sqlite SQLite installed in a Windows Server Conta... 1 [OK]

I downloaded two, and this shows they’re re-using the windowsservercore image as their common base image:

PS C:\Users\Administrator> docker images -a
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
microsoft/go latest 33cac80f92ea 2 days ago 10.09 GB
<none> <none> 8daec63ffb52 2 days ago 9.75 GB
<none> <none> fbab9eccc1e7 2 days ago 9.697 GB
microsoft/dnx-clr latest 156a0b59c5a8 2 days ago 9.712 GB
<none> <none> 28473be483a9 2 days ago 9.707 GB
<none> <none> 56b7e372f76a 2 days ago 9.697 GB
windowsservercore 10.0.10514.0 0d53944cb84d 6 days ago 9.697 GB
windowsservercore latest 0d53944cb84d 6 days ago 9.697 GB

PS C:\Users\Administrator> docker history microsoft/dnx-clr
IMAGE CREATED CREATED BY SIZE COMMENT
156a0b59c5a8 2 days ago cmd /S /C setx PATH "%PATH%;C:\dnx-clr-win-x6 5.558 MB
28473be483a9 2 days ago cmd /S /C REM (nop) ADD dir:729777dc7e07ff03f 9.962 MB
56b7e372f76a 2 days ago cmd /S /C REM (nop) LABEL Description=.NET Ex 41.41 kB
0d53944cb84d 6 days ago 9.697 GB
PS C:\Users\Administrator> docker history microsoft/go
IMAGE CREATED CREATED BY SIZE COMMENT
33cac80f92ea 2 days ago cmd /S /C C:\build\install.cmd 335 MB
8daec63ffb52 2 days ago cmd /S /C REM (nop) ADD dir:898a4194b45d1cc66 53.7 MB
fbab9eccc1e7 2 days ago cmd /S /C REM (nop) LABEL Description=GO Prog 41.41 kB
0d53944cb84d 6 days ago 9.697 GB

And so the fun begins.

PS C:\Users\Administrator> docker run --rm -it windowsservercore cmd

gives you a containerized shell.

Let’s try to build an image that has the chocolatey installer:

FROM windowsservercore

RUN @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))"

CMD powershell

and then use that image to install… vim

FROM chocolatey

RUN choco install -y vim

It works!

 docker run --rm -it vim cmd

and then run

C:\Program Files (x86)\vim\vim74\vim.exe

It’s not currently usable – I suspect because the ANSI terminal driver is really, really new code – but BOOM!

I haven’t worked out how to get the Dockerfile CMD or ENTRYPOINT to work with paths that have spaces – it doesn’t seem to support the array form yet…
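For the record, this is the array form I expected to work (it may well do in later builds – backslashes escaped JSON-style):

ENTRYPOINT ["C:\\Program Files (x86)\\vim\\vim74\\vim.exe"]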

I’m going to keep playing, and put the Dockerfiles into https://github.com/SvenDowideit/WindowsDocker

Don’t forget to read the documentation at https://msdn.microsoft.com/en-us/virtualization/windowscontainers/containers_welcome

Slim application containers (using Docker)

Another talk I gave at Linux.conf.au was about making slim containers (youtube) – ones that contain only the barest essentials needed to run an application.

In the same way that we don’t ship the VM / filesystem of our build server, you should not be shipping the container you’re building from source.

And I thought I’d do it from source, as most “Built from source” images also contain the tools used to build the software.

1. Make the Docker base image you’re going to use to build the software

In January 2015, the main base images and their sizes looked like:

scratch             latest              511136ea3c5a        19 months ago       0 B
busybox             latest              4986bf8c1536        10 days ago         2.433 MB
debian              7.7                 479215127fa7        10 days ago         85.1 MB
ubuntu              15.04               b12dbb6f7084        10 days ago         117.2 MB
centos              centos7             acc1b23376ec        10 days ago         224 MB
fedora              21                  834629358fe2        10 days ago         250.2 MB
crux                3.1                 7a73a3cc03b3        10 days ago         313.5 MB

I’ll pick Debian, as I know it, and it has the fewest restrictions on what contents you’re permitted to redistribute (and because bootstrapping busybox would be an amazing talk on its own).

Because I’m experimenting, I’m starting by seeing how small I can make a new Debian base image –  starting with:

FROM debian:7.7

RUN rm -r /usr/share/doc /usr/share/doc-base \
          /usr/share/man /usr/share/locale /usr/share/zoneinfo

CMD ["/bin/sh"]

Then make a new single-layer (squashed) image by running docker export and docker import:
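(Roughly like this – a sketch, assuming the Dockerfile above was built as our/debian-trimmed:)

docker build -t our/debian-trimmed .
docker create --name trimmed our/debian-trimmed
# export the container filesystem, and re-import it as a single-layer image
docker export trimmed | docker import - our/debian:jessie
docker rm trimmed
# note: docker import throws away image metadata (CMD, EXPOSE etc)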

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
debian              7.7                 479215127fa7        10 days ago         85.1 MB
our/debian:jessie   latest              cba1d00c3dc0        1 seconds ago       46.6 MB

Ok, not quite half, but you get the idea.

It’s well worth continuing this exercise, using things like dpkg --get-selections to remove anything else you won’t need.
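(Along these lines – the package names here are just examples of things a base image rarely needs:)

dpkg --get-selections | awk '{print $1}' | sort > installed.txt
apt-get purge -y --auto-remove ed whiptail tasksel tasksel-data
apt-get clean && rm -rf /var/lib/apt/lists/*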

Importantly, once you’ve made your smaller base image, you should use it consistently for ALL the containers you use. This means that whenever there are important security fixes, that base image will be downloadable as quickly as possible –  and all your related images can be restarted quickly.

This also means that you do NOT want to squish your images to one or two layers, but rather into some logical set of layers that match your deployment update risks –  a common root base, and then layers based on common infrastructure, and lastly application and customisation layers.

2. Build static binaries –  or not

Building a static binary of your application (in typical Go style) makes some things simpler –  but in the end, I’m not really convinced it makes a useful difference.

But in my talk, I did it anyway.

Make a Dockerfile that installs all the tools needed, builds nginx, and then outputs a tar file that is a new build context for another Docker image (and contains the libraries ldd tells us we need):

cat Dockerfile.build-static-nginx | docker build -t build-nginx.static -
docker run --rm build-nginx.static cat /opt/nginx.tar > nginx.tar
cat nginx.tar | docker import - micronginx
docker run --rm -it -p 80:80 micronginx /opt/nginx/sbin/nginx -g "daemon off;"
nginx: [emerg] getpwnam("nobody") failed (2: No such file or directory)

oh. I need more than just libraries?

3. Use inotify to find out what files nginx actually needs!

Use the same image, but start it with Bash –  use that to install and run inotify, and then use docker exec to start nginx:

docker run --rm -it build-nginx.static bash
# apt-get install -yq inotify-tools iwatch
# inotifywait -rm /etc /lib /usr/lib /var
Setting up watches.  Beware: since -r was given, this may take a while!
Watches established.
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnss_files-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnss_nis-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE ld-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libc-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnsl-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnss_compat-2.13.so
/etc/ OPEN passwd
/etc/ OPEN group
/etc/ ACCESS passwd
/etc/ ACCESS group
/etc/ CLOSE_NOWRITE,CLOSE group
/etc/ CLOSE_NOWRITE,CLOSE passwd
/etc/ OPEN localtime
/etc/ ACCESS localtime
/etc/ CLOSE_NOWRITE,CLOSE localtime

Perhaps it shouldn’t be too surprising that nginx expects to rifle through your user password files when it starts 🙁

4. Generate a new minimal Dockerfile and tar file Docker build context, and pass that to a new `docker build`

The trick is that the build container Dockerfile can generate the minimal Dockerfile and tar context, which can then be used to build a new minimal Docker image.

The excerpt from the Dockerfile that does it looks like:


# Add a Dockerfile to the tar file
RUN echo "FROM busybox" > /Dockerfile \
    && echo "ADD * /" >> /Dockerfile \
    && echo "EXPOSE 80 443" >> /Dockerfile \
    && echo 'CMD ["/opt/nginx/sbin/nginx", "-g", "daemon off;"]' >> /Dockerfile

RUN tar cf /opt/nginx.tar \
           /Dockerfile \
           /opt/nginx \
           /etc/passwd /etc/group /etc/localtime /etc/nsswitch.conf /etc/ld.so.cache \
           /lib/x86_64-linux-gnu

This tar file can then be passed on using

cat nginx.tar | docker build -t busyboxnginx -

Result

Comparing the sizes, our build container is about 1.4GB, the official nginx image about 100MB, and our minimal nginx container 21MB to 24MB – depending on whether we add busybox to it or not:

REPOSITORY          TAG            IMAGE ID            CREATED              VIRTUAL SIZE
micronginx          latest         52ec332b65fc        53 seconds ago       21.13 MB
nginxbusybox        latest         80a526b043fd        About a minute ago   23.56 MB
build-nginx.static  latest         4ecdd6aabaee        About a minute ago   1.392 GB
nginx               latest         1822529acbbf        8 days ago           91.75 MB

It’s interesting to remember that we rely heavily on “I know this, it’s a UNIX system” – application services can have all sorts of hidden assumptions that won’t be revealed without putting them into more constrained environments.

In the same way that we don’t ship the VM / filesystem of our build server, you should not be shipping the container you’re building from source.

This analysis doesn’t try to restrict nginx to only opening certain network ports, devices, or IPC mechanisms – so there’s more to be done…
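(Newer Docker releases do give you a few knobs to start with – a sketch, not a complete lockdown:)

docker run --rm -p 80:80 \
    --read-only --tmpfs /opt/nginx/logs \
    --cap-drop ALL --cap-add NET_BIND_SERVICE \
    micronginx /opt/nginx/sbin/nginx -g "daemon off;"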