Monthly Archives: November 2015

Using process to support the team

One of the early joys of my new job is the use of Jira to manage the change control process.

This matters because Jira is quite unobtrusive and fits in naturally with the development process.

Compare this against a previous employer, whose process was as bloated as could possibly be imagined (there was a category for a P3 emergency), where even low-impact changes had to wait at least 2 days, and where teams were expected to change how they go about their work to support the process: the process was more important than getting stuff done.

And the managers wonder why the teams hated it.



Gramofile reborn

Was intending to spend the day having fun with Docker on my Arch desktop – the r-pi version is still on hold until I can get a 16GB image file onto the SD card; 24 hours is too long when trying to copy the file over the wireless network – but I got distracted trying to sort out some old clutter from early this century (seriously).

I found a copy of a program I used many moons ago to process some of the digital recordings I had done of some of my old vinyl (prior to processing using Audacity).

Gramofile is a curses-based program that attempts to split a recording into separate tracks by looking for blocks of silence; it does a reasonable job and its estimates can easily be tweaked.
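For the curious, the splitting idea can be sketched in a few lines of C. This is my own rough illustration, not Gramofile's actual code; find_splits and its parameters are invented for the example:

```c
/* A rough illustration of the idea (not Gramofile's actual code):
 * scan 16-bit samples and report any run of near-silence longer than
 * min_len samples as a candidate track boundary. The threshold and
 * run length are the tunable "estimates". */
static int find_splits(const short *samples, long n,
                       short threshold, long min_len,
                       long *splits, int max_splits)
{
    int found = 0;
    long run_start = -1;              /* start of the current silent run */

    for (long i = 0; i < n; i++) {
        short a = samples[i] < 0 ? (short)-samples[i] : samples[i];
        if (a < threshold) {
            if (run_start < 0)
                run_start = i;        /* silence begins */
        } else {
            if (run_start >= 0 && i - run_start >= min_len && found < max_splits)
                splits[found++] = run_start + (i - run_start) / 2;
            run_start = -1;           /* silence ends */
        }
    }
    return found;  /* a trailing silent run is ignored: the side ends anyway */
}
```

Tweaking the estimates then amounts to adjusting the amplitude threshold and the minimum run length.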

A quick recompile and it was starting up but was having problems recognising the WAV files I have. I knew the problem was Gramofile’s because an old program I started years ago to convert WAV files to ZX Spectrum tzx format (yes, I still have plenty of speccy tapes) was able to identify the relevant header records.

So, I have spent all day hacking some test programs to get a reasonable header processor going for WAV files. The main problem, I suspect, is that the code is 32-bit and some of the buffer manipulation looked a wee bit odd. But because the header record sizes are fixed I decided to do an explicit byte-for-byte copy from the buffer to the header struct,

 memcpy ( &wavhd.main_chunk, hd_buf, 4 );
 memcpy ( &wavhd.length, hd_buf+4, 4 );
 memcpy ( &wavhd.chunk_type, hd_buf+8, 4 );
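Fleshed out, the same idea looks something like the sketch below. The field names mirror the wavhd struct above and the layout follows the standard 12-byte RIFF preamble, but check_riff is a hypothetical helper for illustration, not Gramofile's code (and copying length raw assumes a little-endian host, as WAV files store it little-endian):

```c
#include <string.h>

/* The 12-byte RIFF preamble at the start of every WAV file. Copying
 * each field explicitly sidesteps any struct padding or alignment
 * the compiler might introduce. */
struct riff_preamble {
    char     main_chunk[4];   /* "RIFF" */
    unsigned length;          /* file length - 8, stored little-endian */
    char     chunk_type[4];   /* "WAVE" */
};

/* Hypothetical helper: fill the struct from a raw header buffer and
 * report whether the magic values identify a WAV file. */
static int check_riff(const unsigned char *hd_buf, struct riff_preamble *hd)
{
    memcpy(hd->main_chunk, hd_buf,     4);
    memcpy(&hd->length,    hd_buf + 4, 4);
    memcpy(hd->chunk_type, hd_buf + 8, 4);
    return memcmp(hd->main_chunk, "RIFF", 4) == 0 &&
           memcmp(hd->chunk_type, "WAVE", 4) == 0;
}
```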

Another quick recompile and we’re in business big-time! A run through the first side of Psychocandy picked up 5 out of 7 tracks, just missing a couple of short silence sections between a couple of tracks; the missing starts and ends can easily be added to the .tracks file; just remember to adjust the ‘Number_of_tracks’ setting!

Now, I just wish there was a quick way to generate the CD text data when ripping the tracks to a CD, as this is a right pain with the burning tools, but all in all a good day’s work and I have a fair few album recordings to catch up on.

The reference material I found was really useful in helping me make sure that the correct fields and their sizes were being used in the WAV header.

Building the SD card image for Arch

For this part of the process, and as a follow-up to my previous post, I will be using a beautifully clear and concise post I found.

I have two filesystem images that I want to combine into a single 16G image file. The above post mentions kpartx and I made the mistake of thinking that it was a KDE version of parted (!) except that it doesn’t work with the -av options.

A wee bit of digging hinted at kpartx being part of the multipath-tools package, but pacman couldn’t find it.

A quick word with one of my kids (who is more of an Arch whizz than I) suggested that the problem was that multipath-tools is in the AUR (Arch User Repository) and can be installed using the command,

$ yaourt -S multipath-tools

Note: run this as an unprivileged user. So we continue with the process.

# truncate --size 16G r-pi-16gb.img
# fdisk r-pi-16gb.img

Within fdisk I create two partitions: 100M and 15.9G.

# kpartx -av r-pi-16gb.img 
add map loop2p1 (254:0): 0 204800 linear /dev/loop2 2048
add map loop2p2 (254:1): 0 33347584 linear /dev/loop2 206848
# dd if=rpi-arch-root.img of=/dev/mapper/loop2p1 bs=1M
100+0 records in
100+0 records out
# dd if=rpi-arch-ext4.img of=/dev/mapper/loop2p2 bs=1M
dd: error writing ‘/dev/mapper/loop2p2’: No space left on device
16284+0 records in
16283+0 records out
17073963008 bytes (17 GB) copied, 509.39 s, 33.5 MB/s

Not what I was expecting. I have checked my sums and they all seem to match up: the ext4 image is 15.9 GiB (16284 M) and the combined size should match the 16G image file.

A closer check of fdisk does seem to highlight a discrepancy:

  • r-pi-16gb.img2 has 33347584 sectors for 15.9G
  • rpi-arch-ext4.img: 15.9 GiB, 17075011584 bytes with 33349632 sectors
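A wee C scratch-pad (the figures are taken from the fdisk and kpartx output above) shows where the missing space goes; part2_shortfall_sectors is just my own checking function, not part of any tool:

```c
/* Check the sums: the partition table costs an alignment gap before
 * the first partition, so partition 2 ends up smaller than an image
 * sized as "16G minus 100M". */
static long long part2_shortfall_sectors(void)
{
    const long long sector = 512;
    long long img   = 16LL * 1024 * 1024 * 1024 / sector; /* 33554432 sectors in the 16G file   */
    long long start = 2048;                               /* offset of partition 1 (from kpartx) */
    long long part1 = 204800;                             /* the 100M boot partition             */
    long long part2 = img - start - part1;                /* 33347584, matching fdisk            */
    long long ext4  = 17075011584LL / sector;             /* 33349632, the ext4 image            */
    return ext4 - part2;                                  /* 2048 sectors, i.e. exactly 1MiB     */
}
```

The ext4 image turns out to be exactly 2048 sectors bigger than the space fdisk leaves for partition 2, which is precisely the 2048-sector gap before the first partition.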

So, perhaps the problem here is because there is an offset at the start, but extending the 16G disk image won’t work because that is then bigger than the size of the SD card. Back to the drawing board and out with the calculator.

Arch image for r-pi from scratch

I’ve decided to brave it and create an install image for Arch Linux to run on an r-pi. And to do it without direct access to the SD card, by creating the filesystems on loopback devices which I will then dd to an image file that I will copy to the Mac and then burn to the SD card.

No idea if it will work. The guide describes creating two filesystems, one 100MB formatted as VFAT and a 15.9GB ext4 filesystem.

First things first: create the files to be used as the filesystems. This needs a wee bit of arithmetic to convert 15.9GB to KB.

16GB is 17179869184B and we need to subtract 100M, which can be found with the commands,

$ echo '100 * 1024 * 1024' | bc
$ echo '17179869184 - 104857600' | bc
$ echo '17075011584 / 1024' | bc

This gives 16674816KB as the 15.9GB disk file. Then we can use the truncate command to create the files for the disk images (whatever happened to the mkfile command).

# truncate -s 100M rpi-arch-root.img
# truncate -s 16674816K rpi-arch-ext4.img

Then we create the loopback devices on which we create actual filesystems,

# losetup -f rpi-arch-root.img
# mkfs.vfat /dev/loop0
# losetup -f rpi-arch-ext4.img
# mkfs.ext4 /dev/loop1

Then back to the Arch instructions,

# mkdir root boot
# mount /dev/loop0 boot
# mount /dev/loop1 root

Then grab the Arch image files and unpack them on the filesystems as instructed,

# wget
# bsdtar -xpf ArchLinuxARM-rpi-latest.tar.gz -C root
# mv root/boot/* boot

Then we unmount the filesystems and try to figure a way of getting them on to the SD card with a Mac.

# umount /dev/loop0
# umount /dev/loop1

I suspect some nifty footwork with dd coming up to create a 16GB image file that we copy to the Mac and dd to the SD card. More to follow…


I used another post for some advice on the command to use files as loopback devices, even though I created the files differently and the actual losetup commands in the article don’t work. I like to mention the pages I found useful in whatever way.

Farewell r-pi Docker

Looks like I’ll probably have to abandon my adventures with Docker on the r-pi.

Any attempt to RUN a command (say, apt-get update) in the Dockerfile gives the following error:

The command '/bin/sh -c 'apt-get update'' returned a non-zero code: 139

This equates to a segmentation fault (139 is 128 plus signal 11, SIGSEGV), although the command runs fine from a standard shell.

I tried running the docker daemon in debug mode but nothing untoward is reported.

But a quick glance at /var/log/daemon.log shows the following:

systemd[1]: Failed to set cpu.cfs_period_us on /system.slice/var-lib-docker-overlay-...-merged.mount: Permission denied
systemd[1]: Failed to set cpu.cfs_quota_us on /system.slice/var-lib-docker-overlay-...-merged.mount: Permission denied

Our friends at Google then point us at a relevant page, and pretty much everything checks out except for /sys/fs/cgroup/cpu/cpu.cfs_quota_us. Any potential issue for Debian/Jessie should have been fixed a while back.

Oddly, everything including and under /sys/fs/cgroup has a timestamp of 1 Jan 1970 suggesting that something’s not quite right.

This is now happening on two different SD cards on different r-pi’s with different versions of Hypriot; it’s not a Docker issue. Attempting to apply the latest updates gives further breakage. Time to say goodbye and head back to x86.