Category Archives: Containers

Containing side-channel service monitoring

In a recent post, I produced a simple script that can be used to poll a set of ports and propagate their status, with a fail-safe (-safe, not -proof, i.e., it fails to safety) option to override that status (with an HTTP 503), allowing investigation and safe shutdown of critical services.

As a follow-up, but with no particular significance, this post is an excuse to prepare a Docker container that runs the service. On a Raspberry Pi.

My approach for this is to start with a base image and build a Python container, then use that to prepare a uwsgi container, which can then be used to launch a simple Flask service.

I use the Hypriot distribution for the Pi and they provide a good Dockerfile to get things started.

# Taken from https://github.com/hypriot/rpi-python/blob/master/Dockerfile
# Pull base image
FROM resin/rpi-raspbian:wheezy
MAINTAINER Cleverer Thanme <hypriot-user@hypriot.com>

# Install dependencies
RUN apt-get update && apt-get install -y \
 python \
 python-dev \
 python-pip \
 python-virtualenv \
 --no-install-recommends && \
 rm -rf /var/lib/apt/lists/*

# Define working directory
WORKDIR /data

# Define default command
CMD ["bash"]

Then we create the container image with,

docker build --rm=true --tag=localhost/my-user:hypriot-python .

Then we follow that with the uwsgi image Dockerfile,

# Based on local python base image 
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python
MAINTAINER Inspired Byothers <my-address@domain.com>

# Install dependencies
RUN apt-get update && \
 apt-get install -y build-essential python-dev && \
 pip install \
 Flask \
 uwsgi \
 requests && \
 apt-get purge -y --auto-remove build-essential python-dev

# Define working directory
WORKDIR /data

# Define default command
CMD ["bash"]

With the image coming from,

docker build --rm=true --tag=localhost/my-user:hypriot-python-wsgi .

And the service container is described by,

# Based on local python uwsgi image with Flask pip installed
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python-wsgi
MAINTAINER Thisone Onme <my-user@domain.com>

# Install application files
ADD side-channel-monitor.py /var/www/uwsgi/flask/side-channel-monitor/side-channel-monitor.py

# Make a port externally available
EXPOSE 7071

# Define working directory
WORKDIR /var/www/uwsgi/flask/side-channel-monitor

# Define default command
CMD ["uwsgi", "--wsgi-file", "/var/www/uwsgi/flask/side-channel-monitor/side-channel.py", "--callable", "app", "--processes", "4", "--threads", "2", "--uid", "nobody", "--gid", "nogroup", "--http", "0.0.0.0:7071", "--logto", "/var/log/side-channel-monitor.log"]

And the image appears after,

docker build --rm=true --tag=localhost/my-user:python-wsgi-side-channel .
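Before deploying, a quick sanity check that all three images are in place (the repository/tag naming is just my own convention):

docker images localhost/my-user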

And we can then deploy a container using,

docker run --rm -p 9880:7071 -d --name side-channel-monitor localhost/my-user:python-wsgi-side-channel

This maps the exposed container port (7071) to host port 9880.

And then check the connection with,

curl -v http://localhost:9880/side-channel-monitor/healthcheck

(This obviously requires a backend service for the monitor to connect to).
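Because uwsgi is logging to a file inside the container (the --logto option above) rather than to stdout, docker logs won't show much; a quick way to keep an eye on the service is to exec into the running container:

docker ps
docker exec side-channel-monitor tail -n 20 /var/log/side-channel-monitor.log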


Arch image for r-pi from scratch

I’ve decided to brave it and create an install image for Arch Linux to run on an r-pi. And to do it without direct access to the SD card, by creating the filesystems on loopback devices which I will then dd to an image file that I will copy to the Mac and then burn to the SD card.

No idea if it will work.

http://archlinuxarm.org/platforms/armv6/raspberry-pi describes creating two filesystems, one 100MB formatted as VFAT and a 15.9GB ext4 filesystem.

First things first: create the files to be used as the filesystems. This needs a wee bit of arithmetic to convert 15.9GB to KB.

16GB is 17179869184B and we need to subtract 100MB, which can be worked out with the commands,

$ echo '100 * 1024 * 1024'|bc
104857600
$ echo '17179869184 - 104857600' | bc
17075011584
$ echo '17075011584 / 1024' | bc
16674816

This gives 16674816KB for the 15.9GB disk file. Then we can use the truncate command to create the files for the disk images (whatever happened to the mkfile command?).

# truncate -s 100M rpi-arch-boot.img
# truncate -s 16674816K rpi-arch-ext4.img

Then we create the loopback devices on which we create actual filesystems,

# losetup -f rpi-arch-boot.img
# mkfs.vfat /dev/loop0
# losetup -f rpi-arch-ext4.img
# mkfs.ext4 /dev/loop1
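Note that losetup -f simply grabs the first free loop device, so it's worth double-checking which file ended up on which device before going any further:

# losetup -a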

Then back to the Arch instructions,

# mkdir root boot
# mount /dev/loop0 boot
# mount /dev/loop1 root

Then grab the Arch image files and unpack them on the filesystems as instructed,

# wget http://archlinuxarm.org/os/ArchLinuxARM-rpi-latest.tar.gz
# bsdtar -xpf ArchLinuxARM-rpi-latest.tar.gz -C root
# mv root/boot/* boot

Then we unmount the filesystems and try to figure a way of getting them on to the SD card with a Mac.

# umount /dev/loop0
# umount /dev/loop1

I suspect some nifty footwork with dd coming up to create a 16GB image file that we copy to the Mac and dd to the SD card. More to follow…
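For what it's worth, here's a rough and entirely untested sketch of the sort of thing I have in mind: create an empty image file, lay down a partition table whose partitions line up with the two filesystem files, then dd each file in at the matching offset. The image name and the 1MiB alignment gap are my own choices, and the result comes out slightly over 16GB, so it may need trimming to fit a real card.

# truncate -s 16385M rpi-arch-sd.img
# parted -s rpi-arch-sd.img mklabel msdos
# parted -s rpi-arch-sd.img mkpart primary fat32 1MiB 101MiB
# parted -s rpi-arch-sd.img mkpart primary ext4 101MiB 100%
# dd if=rpi-arch-boot.img of=rpi-arch-sd.img bs=1M seek=1 conv=notrunc
# dd if=rpi-arch-ext4.img of=rpi-arch-sd.img bs=1M seek=101 conv=notrunc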

References

I used https://samindaw.wordpress.com/2012/03/21/mounting-a-file-as-a-file-system-in-linux/ for some advice on the command to use files as loopback devices even though I created the files differently and the actual losetup commands in the article don’t work. I like to mention the pages I found useful in whatever way.

Farewell r-pi Docker

Looks like I’ll probably have to abandon my adventures with Docker on the r-pi.

Any attempt to RUN a command (say, apt-get update) in the Dockerfile gives the following error:

The command '/bin/sh -c 'apt-get update'' returned a non-zero code: 139

This equates to a segmentation fault, although the command runs fine from a standard shell.

I tried running the docker daemon in debug mode but nothing untoward is reported.

But a quick glance at /var/log/daemon.log shows the following:

systemd[1]: Failed to set cpu.cfs_period_us on /system.slice/var-lib-docker-overlay-...-merged.mount: Permission denied
systemd[1]: Failed to set cpu.cfs_quota_us on /system.slice/var-lib-docker-overlay-...-merged.mount: Permission denied

Our friends at Google then point us at https://github.com/opencontainers/runc/issues/57 and pretty much everything checks out except for /sys/fs/cgroup/cpu/cpu.cfs_quota_us. Any potential issue for Debian/Jessie should have been fixed a while back.
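The checking amounted to something along these lines: confirming the cgroup controllers are mounted and that the cpu.cfs_* files are present and readable.

mount | grep cgroup
cat /proc/cgroups
ls -l /sys/fs/cgroup/cpu/cpu.cfs_quota_us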

Oddly, everything including and under /sys/fs/cgroup has a timestamp of 1 Jan 1970 suggesting that something’s not quite right.

This is now happening on two different SD cards on different r-pi’s with different versions of Hypriot; it’s not a Docker issue. Attempting to apply the latest updates gives further breakage. Time to say goodbye and head back to x86.

Just when I thought it was safe to go down to the Dock(er)

The very innocuous Dockerfile entry

FROM armv7/armhf-ubuntu
RUN apt-get update

is throwing the following error:

The command '/bin/sh -c apt-get update' returned a non-zero code: 139

Now, this is on a completely unadulterated Docker 1.7 install and converting it to,

CMD [ "/bin/dash", "-c", "apt-get", "update" ]

works just fine, but there's no way I'm doing this for multi-line commands installing lots of packages. Changing the shell (from dash) to bash makes no difference; it appears that 'sh -c' wants the command and all its arguments as one string, but that isn't what it's getting.
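For completeness, the exec form of RUN would avoid the '/bin/sh -c' wrapper altogether, but spelling out every argument this way for a long package list is exactly the tedium I'm trying to avoid (the package name below is just a placeholder):

FROM armv7/armhf-ubuntu
# exec form: no /bin/sh -c wrapper, arguments are passed directly
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "some-package"]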

So, rather than just getting on with doing the tasks I actually want, I have to chase down some stupid setting or version error.

Maybe I should have switched to Unikernels.

Docker or Unikernel?

After some recent reading, I’m torn between getting back into building Docker images – I lost all my previous builds when the SD card on my r-pi got corrupted and had to be rebuilt from scratch – or the latest flavour of the month, unikernels.

Well, I was having fun with Docker and my new job will start looking at the tech at some point and it’d be a shame not to press ahead with it.

So, I’ll use https://hub.docker.com/r/armv7/armhf-ubuntu/ as the new base and look at deploying a Rails application as per my original intention.

https://github.com/umiddelb/armhf/wiki/Installing,-running,-using-docker-on-armhf-(ARMv7)-devices is the best guide for getting this working.

Connected to my first running container. Yay!

After building a r-pi MySQL Docker image, I needed to figure out how to connect to it. This turns out to be a simple matter of running the following command,

# docker run -p 33060:3306 --rm -t -i mysql-jur/mysql:5.5.43 bash
root@89594fa99e7e:/# service mysql start
[ ok ] Starting MySQL database server: mysqld . . . . . . ..
[info] Checking for tables which need an upgrade, are corrupt or were not closed cleanly..
root@89594fa99e7e:/# mysql -u root -p
Enter password: 
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 43
Server version: 5.5.43-0+deb7u1 (Debian)
...
mysql> create database test;

And from this point I can create databases in the normal way.

Adding ‘-p 33060:3306’ to the command proxies the database connection to the docker server’s docker0 NIC although the port is only available on IPv6 and the iptables rules are for a different address to that of the NIC. More to investigate.
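The plan for digging into that is roughly: ask Docker what it thinks it has published, point the client at the mapped port (given the IPv6 observation, ::1 rather than 127.0.0.1 may be needed), and look at the NAT rules Docker wrote. Something like the following, assuming this is the only running container:

# docker port $(docker ps -q) 3306
# mysql -h ::1 -P 33060 -u root -p
# iptables -t nat -L DOCKER -n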

Next on the list is getting the database created automatically in the container so that we can reference it from a WordPress image. Pleased that I’m making progress.
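With the official mysql image, creating a database at first start is just an environment variable; whether my home-grown mysql-jur image honours the same convention is one of the things to check (the database name here is just an example):

# docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=wordpress -d mysql:5.5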


First experiments with Docker

To try and ease the rapid deployment of WordPress sites I am thinking of using Docker images for MySQL and WordPress along with a deployment tool to pull customisations from Github.

I’ve got a few Raspberry Pis lying around at home and a wee bit of spare time to investigate this kind of setup.

First off, find a Linux image to burn to the SD card. The links below include a couple, but 1GB isn’t enough space to install additional packages.

Be warned, however, that I’m trying to prepare the SD cards on a Mac and I’m not sure it’s going to be much use for resizing the Linux disk partition.

Download the Debian image and burn it to a 16GB SD card,

# sudo -s
# dd if=Downloads/hypriot-rpi-20150301-140537.img of=/dev/disk1 bs=1m

Then we need to expand the root filesystem. I was going to come back to this as it can’t be done on a Mac (see resources), but it turns out to happen automatically when the Pi is first booted.

Then boot the Pi; I would advise setting a static IP address and enabling remote SSH so that the HDMI output isn’t needed. Reboot.
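On a Debian-based image like Hypriot, that means the usual ifupdown configuration; something along these lines in /etc/network/interfaces, with the addresses adjusted for your own network:

auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1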

Then SSH to the Pi from your workstation and grab the Dockerfiles for MySQL and WordPress.

# docker run --name WordPress -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.5

Then repeat for WordPress,

# docker run --name wordpress --link some-mysql:mysql -p 8080:80 -d wordpress

Okay, so the above commands aren’t exactly correct, but you get the gist and this is really a reminder of the links I need in order to get back to this later on. (I should probably be tracking this via an Asana task.)
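For the record, a corrected pair, following the linked-container naming that the official wordpress image examples use (the link alias needs to be ‘mysql’), would be something like:

# docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.5
# docker run --name some-wordpress --link some-mysql:mysql -p 8080:80 -d wordpress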

Resources: