Tag Archives: docker

Containing side-channel service monitoring

In a recent post, I produced a simple script that can be used to poll a set of ports and propagate their status, with a fail-safe (-safe, not -proof, i.e., fails to safety) option to override that status (with an HTTP 503), allowing investigation and safe shutdown of critical services.

As a follow-up, but with no particular significance, this post is an excuse to prepare a Docker container that runs the service. On a Raspberry Pi.

My approach for this is to start with a base image and build a Python container, then use that to prepare a uwsgi container, which can then be used to launch a simple Flask service.

I use the Hypriot distribution for the Pi and they provide a good Dockerfile to get things started.

# Taken from https://github.com/hypriot/rpi-python/blob/master/Dockerfile
# Pull base image
FROM resin/rpi-raspbian:wheezy
MAINTAINER Cleverer Thanme <hypriot-user@hypriot.com>

# Install dependencies
RUN apt-get update && apt-get install -y \
 python \
 python-dev \
 python-pip \
 python-virtualenv \
 --no-install-recommends && \
 rm -rf /var/lib/apt/lists/*

# Define working directory
WORKDIR /data

# Define default command
CMD ["bash"]

Then we create the container image with,

docker build --rm=true --tag=localhost/my-user:hypriot-python .

Then we follow that with the wsgi image Dockerfile,

# Based on local python base image 
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python
MAINTAINER Inspired Byothers <my-address@domain.com>

# Install dependencies
RUN apt-get update && \
 apt-get install -y build-essential python-dev && \
 pip install \
 Flask \
 uwsgi \
 requests && \
 apt-get purge -y --auto-remove build-essential python-dev

# Define working directory
WORKDIR /data

# Define default command
CMD ["bash"]

With the image coming from,

docker build --rm=true --tag=localhost/my-user:hypriot-python-wsgi .

And the service container is described by,

# Based on local python uwsgi image with Flask pip installed
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python-wsgi
MAINTAINER Thisone Onme <my-user@domain.com>

# Install application files
ADD side-channel-monitor.py /var/www/uwsgi/flask/side-channel-monitor/side-channel-monitor.py

# Make a port externally available
EXPOSE 7071

# Define working directory
WORKDIR /var/www/uwsgi/flask/side-channel-monitor

# Define default command
CMD ["uwsgi", "--wsgi-file", "/var/www/uwsgi/flask/side-channel-monitor/side-channel.py", "--callable", "app", "--processes", "4", "--threads", "2", "--uid", "nobody", "--gid", "nogroup", "--http", "0.0.0.0:7071", "--logto", "/var/log/side-channel-monitor.log"]

And the image appears after,

docker build --rm=true --tag=localhost/my-user:python-wsgi-side-channel .

And we can then deploy a container using,

docker run --rm -p 9880:7071 -d --name side-channel-monitor localhost/my-user:python-wsgi-side-channel

This maps the exposed container port (7071) to host port 9880.
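The mapping can be double-checked with,

docker port side-channel-monitor

which should report 7071/tcp -> 0.0.0.0:9880.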

And then check the connection with,

curl -v http://localhost:9880/side-channel-monitor/healthcheck

(This obviously requires a backend service for the monitor to connect to).
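(If no real backend is to hand while testing, a throwaway TCP listener can stand in for one; the port here is a placeholder for whatever the monitor polls, and the flags assume a traditional netcat - the OpenBSD variant would be nc -lk 8080.)

# Hypothetical stand-in backend: keep a dummy listener alive on a polled port
while true; do nc -l -p 8080 < /dev/null; done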

References

Although linked to in the post,

https://github.com/hypriot/rpi-python/blob/master/Dockerfile


Farewell r-pi Docker

Looks like I’ll probably have to abandon my adventures with Docker on the r-pi.

Any attempt to RUN a command (say, apt-get update) in the Dockerfile produces the following error:

The command '/bin/sh -c 'apt-get update'' returned a non-zero code: 139

This equates to a segmentation fault, although the command runs fine from a standard shell.

I tried running the docker daemon in debug mode but nothing untoward was reported.

But a quick glance at /var/log/daemon.log shows the following:

systemd[1]: Failed to set cpu.cfs_period_us on /system.slice/var-lib-docker-overlay-...-merged.mount: Permission denied
systemd[1]: Failed to set cpu.cfs_quota_us on /system.slice/var-lib-docker-overlay-...-merged.mount: Permission denied

Our friends at Google then point us at https://github.com/opencontainers/runc/issues/57 and pretty much everything checks out except for /sys/fs/cgroup/cpu/cpu.cfs_quota_us. Any potential issue for Debian/Jessie should have been fixed a while back.
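For reference, the spot checks along the lines of that issue look like,

cat /sys/fs/cgroup/cpu/cpu.cfs_period_us   # a healthy host reports 100000
cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us    # a healthy host reports -1 (no quota)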

Oddly, everything in and under /sys/fs/cgroup has a timestamp of 1 Jan 1970, suggesting that something’s not quite right.

This is now happening on two different SD cards on different r-pi’s with different versions of Hypriot; it’s not a Docker issue. Attempting to apply the latest updates gives further breakage. Time to say goodbye and head back to x86.

Docker or Unikernel?

After some recent reading, I’m torn between getting back into building Docker images – I lost all my previous builds when the SD card on my r-pi got corrupted and everything had to be rebuilt from scratch – or the latest flavour of the month, unikernels.

Well, I was having fun with Docker and my new job will start looking at the tech at some point and it’d be a shame not to press ahead with it.

So, I’ll use https://hub.docker.com/r/armv7/armhf-ubuntu/ as the new base and look at deploying a Rails application as per my original intention.

https://github.com/umiddelb/armhf/wiki/Installing,-running,-using-docker-on-armhf-(ARMv7)-devices is the best guide for getting this working.

Giving up on Rails on Docker

Having gone through the process of preparing a Rails image from a Ruby build, the image has ballooned out to over 1.2GB even after removing the compiler and associated packages.

I even tested running a bundle update to identify the gems that need native compilation: mysql2, bcrypt, therubyracer.

And then… And then, after downloading the application, running the bundle update borked because bcrypt 3.1.9 was needed as a dependency instead of the 3.1.10 that was installed. Given the time it takes to compile gems, it’s not reasonable or sensible to try and stay on top of deploying Rails applications to a container.
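(In principle the image could pre-bake the exact version the application’s Gemfile.lock asks for,

gem install --no-rdoc --no-ri bcrypt -v 3.1.9

but chasing lockfiles by hand is exactly the treadmill that makes this unsustainable.)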

Okay, so we’ll try to do a WordPress deployment instead.

https://docs.docker.com/compose/wordpress/

Docker image building – a time-consuming process

I have to use the following construct to ensure that the Ruby environment is set up for building gems, and it’s a recommended practice for Docker,

RUN /bin/bash -c "source /usr/local/rvm/scripts/rvm \
 && gem update --system --no-rdoc --no-ri \
 && gem update --no-rdoc --no-ri \
 && gem install --no-rdoc --no-ri bundler \
 && gem install --no-rdoc --no-ri libv8 \
 && gem install --no-rdoc --no-ri mysql2"

One implication of this is that it is an atomic operation: when you discover that the libmysqlclient-dev package is missing and the build needs to be run again, there’s no cache to fall back on. Ruby takes 5 hours to build on an r-pi; libv8 takes 2. This is not quick-turnaround stuff for a background build, although it will really improve Rails container deployment times. Always a tradeoff.
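One mitigation, sketched on the assumption that the rvm setup above stays in place, is to trade image size for cache: split the slow steps into separate RUN instructions so that a late failure can reuse the earlier cached layers,

# Each RUN is cached as its own layer; discovering a missing OS package
# no longer throws away the hours-long gem compilations.
RUN apt-get update && apt-get install -y libmysqlclient-dev
RUN /bin/bash -c "source /usr/local/rvm/scripts/rvm && gem install --no-rdoc --no-ri libv8"
RUN /bin/bash -c "source /usr/local/rvm/scripts/rvm && gem install --no-rdoc --no-ri mysql2"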

Docker and Ruby

Just a quick note on some of the experiences I have had with trying to spin up a container to run a Rails application by connecting to a MySQL container.

Some observations,

  • the MySQL container needs to allow blanket access to a container user to create databases. Admittedly, this can be constrained to the private 172.17 network and should not be remotely exploitable,
  • Ruby images are a real pain,
  • C++ compilation of gems on an r-pi is very, very slow - I know - but it does hint at the conflict between image size and speed of deployment,
  • I’m not convinced that the people who write tutorials have actually tried them in practice.

MySQL container access

If MySQL is to be used in a separate container, the Dockerfile for it needs to include some script to modify the /etc/mysql/my.cnf (or wherever) file to change the default bind address from 127.0.0.1 to the IP of the running container. That means naming the service,

ENV SERVICE mysql

and including a script to do the magic and update my.cnf,


RUN mkdir -p /admin/scripts 
COPY scripts/mysql_start.sh /admin/scripts/
RUN chmod 744 /admin/scripts/mysql_start.sh

The script itself looks like,

#!/bin/sh
#
# Container entrypoint script to start the database service
# and create a user that may be used in other containers to
# create new databases.
#
# rubynuby/june 2015
#

# Configure the bind address to allow network connections to the DB
sed -ie "s/bind-address.*/bind-address\t= `hostname -I`/" /etc/mysql/my.cnf

# Start the service and create the user
/usr/sbin/service ${SERVICE} start

echo "GRANT CREATE ON *.* TO '${DBUSER}'@'%' IDENTIFIED BY '${DBPASS};'" | mysql
 -u root --password="${ROOTPW}" mysql

exit 0

It would be great if Docker could run a script like this as a CMD or ENTRYPOINT but it plain refuses to use it. MySQL in Docker is hard.
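A guess at why, based on how Docker treats PID 1: the script starts mysqld in the background via service and then exits, at which point PID 1 is gone and Docker stops the container. A hedged sketch of an alternative ending for the script (the mysqld_safe path is a Debian assumption), used with ENTRYPOINT ["/admin/scripts/mysql_start.sh"] in the Dockerfile,

# After the GRANT above: stop the backgrounded instance, then hand
# PID 1 over to mysqld in the foreground so the container stays up
/usr/sbin/service ${SERVICE} stop
exec /usr/bin/mysqld_safe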

Ruby Docker images

The prevailing wisdom with Docker - and one I agree with - is to keep the images small and simple to avoid the obvious problems but, as would surprise no-one, Ruby doesn't play ball.

Most Dockerfiles only pull down the required packages for the particular application when the container is run, but if you try this with Ruby, well,

  • you need to grab rvm and install it,
  • ruby has to be downloaded from the network and compiled - on an r-pi this takes many hours, an overnight build - and you end up with all the compiler packages lying around,
  • then there are native-build gems like mysql2 and libv8 (very slow to compile) which require a lot of OS baggage,
  • should I take the core ruby image and commit one that includes libv8 and mysql2, so that I can speed up deployment and reduce image size by removing the compilers? (A sketch of that follows below.)
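The commit route from that last bullet might look like this, where the ruby-base image name is a placeholder for whatever local Ruby image is in play,

# Compile the slow native gems once in a throwaway container...
docker run --name ruby-native localhost/my-user:ruby-base \
 /bin/bash -c "source /usr/local/rvm/scripts/rvm && gem install --no-rdoc --no-ri libv8 mysql2"
# ...then freeze the result as a reusable image
docker commit ruby-native localhost/my-user:ruby-native-gems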

Ruby in Docker is hard.

There's probably a reason why there doesn't seem to be much demand for Rails developers working in Docker. These are not natural technology partners.

Container tutorials

Most of the walkthroughs I have come across use PostgreSQL - a mighty fine RDBMS - which doesn't need to explicitly permission network access (bind address) and doesn't appear to be fussy about passwords or invalid database URL formats. It's harder with MySQL in practice.

Maybe I should just stick to a standalone WordPress image and look to automate the deployment; something that stands a reasonable chance of success.