Monthly Archives: November 2015

Using process to support the team

One of the early joys of my new job is the use of Jira to manage the change control process.

This matters because Jira is quite unobtrusive and fits in naturally with the development process.

Compare this against a previous employer with as bloated a process as could possibly be imagined (there was a category for a P3 emergency), where even low-impact changes had to wait at least two days, and where teams were expected to change how they work to support the process: the process was more important than getting stuff done.

And the managers wonder why the teams hated it.


Gramofile reborn

Was intending to spend the day having fun with Docker on my Arch desktop – the r-pi version is still on hold until I can get a 16GB image file onto the SD card; 24 hours is too long when trying to copy the file over the wireless network – but I got distracted trying to sort out some old clutter from early this century (seriously).

I found a copy of a program I used many moons ago to process some of the digital recordings I had done of some of my old vinyl (prior to processing using Audacity).

Gramofile is a curses-based program that attempts to split a recording into separate tracks by looking for blocks of silence; it does a reasonable job and its estimates can easily be tweaked.

A quick recompile and it was starting up, but it was having problems recognising the WAV files I have. I knew it was Gramofile’s problem because I was able to use an old program I started years ago to convert WAV files to ZX Spectrum tzx format (yes, I still have plenty of Speccy tapes), and it was able to identify the relevant header records.

So, I have spent all day hacking some test programs to get a reasonable header processor going for WAV files. The main problem, I suspect, is that the code is 32-bit and some of the buffer manipulation looked a wee bit odd. But because the header record sizes are fixed, I decided to do an explicit byte-for-byte copy from the buffer into the header struct,

...
 memcpy ( &wavhd.main_chunk, hd_buf, 4 );
 memcpy ( &wavhd.length, hd_buf+4, 4 );
 memcpy ( &wavhd.chunk_type, hd_buf+8, 4 );
 ...
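Not part of the original C, but the same fixed-layout parse can be sketched in Python with the struct module. The field names here are illustrative rather than Gramofile’s, and it assumes the canonical 44-byte PCM header described in the WaveFormat reference below – real files can carry extra chunks between ‘fmt ’ and ‘data’, which this sketch ignores:

```python
import struct

def parse_wav_header(buf: bytes) -> dict:
    """Parse the canonical 44-byte PCM WAV header, field by field.

    Little-endian throughout, matching the fixed sizes used in the
    byte-for-byte memcpy approach above. Field names are illustrative.
    """
    if len(buf) < 44:
        raise ValueError("buffer too short for a canonical WAV header")
    fields = struct.unpack("<4sI4s4sIHHIIHH4sI", buf[:44])
    hdr = dict(zip(
        ("main_chunk", "length", "chunk_type",          # RIFF / size / WAVE
         "sub_chunk", "sc_len", "format", "channels",   # 'fmt ' block
         "sample_rate", "byte_rate", "block_align", "bits_per_sample",
         "data_chunk", "data_length"),                  # 'data' block
        fields))
    if hdr["main_chunk"] != b"RIFF" or hdr["chunk_type"] != b"WAVE":
        raise ValueError("not a RIFF/WAVE file")
    return hdr
```

Using explicit little-endian unpacking sidesteps the struct-padding and word-size assumptions that were making the 32-bit C code misbehave.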

Another quick recompile and we’re in business big-time! A run through the first side of Psychocandy picked up 5 out of 7 tracks, just missing a couple of short silence sections between tracks; the missing starts and ends can easily be added to the .tracks file – just remember to adjust the ‘Number_of_tracks’ setting!

Now, I just wish there was a quick way to generate the CD text data when burning the tracks to a CD, as this is a right pain with the burning tools, but all in all a good day’s work and I have a fair few album recordings to catch up on.

References

http://soundfile.sapp.org/doc/WaveFormat/ was really useful in helping me make sure that the correct fields and their sizes were being used in the WAV header.

Building the SD card image for Arch

For this part of the process, and as a follow-up to my previous post, I will be using the beautifully clear and concise post at http://serverfault.com/questions/281628/combine-partitions-to-one-disk-image.

I have two filesystem images that I want to combine into a single 16G image file. The above post mentions kpartx, and I made the mistake of thinking that it was a KDE version of parted (!) – except that that doesn’t work with the -av options.

A wee bit of digging hinted at kpartx being part of the multipath-tools package, but pacman couldn’t find it.

A quick word with one of my kids (who is more of an Arch whizz than I am) suggested that the problem was that multipath-tools is in the AUR (Arch User Repository) and can be installed using the command,

$ yaourt -S multipath-tools

Note: run this as an unprivileged user. So we continue with the process.

# truncate --size 16G r-pi-16gb.img
# fdisk r-pi-16gb.img

With two partitions: 100M and 15.9 G.

# kpartx -av r-pi-16gb.img 
add map loop2p1 (254:0): 0 204800 linear /dev/loop2 2048
add map loop2p2 (254:1): 0 33347584 linear /dev/loop2 206848
# dd if=rpi-arch-root.img of=/dev/mapper/loop2p1 bs=1M
100+0 records in
100+0 records out
# dd if=rpi-arch-ext4.img of=/dev/mapper/loop2p2 bs=1M
dd: error writing ‘/dev/mapper/loop2p2’: No space left on device
16284+0 records in
16283+0 records out
17073963008 bytes (17 GB) copied, 509.39 s, 33.5 MB/s

Not what I was expecting. I have checked my sums and they all seem to match up: the ext4 image is 15.9 GiB (16284 M) and the combined size should match the 16G image file.

A closer check of fdisk does seem to highlight a discrepancy:

  • r-pi-16gb.img2 has 33347584 sectors for 15.9G
  • rpi-arch-ext4.img: 15.9 GiB, 17075011584 bytes with 33349632 sectors

So, perhaps the problem here is the offset at the start of the disk image, but extending the 16G image won’t work because it would then be bigger than the SD card. Back to the drawing board and out with the calculator.
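The mismatch above can be checked with a few lines of Python – this isn’t from the original post, just a back-of-the-envelope check assuming 512-byte sectors and the numbers from the fdisk/kpartx output:

```python
SECTOR = 512  # bytes per sector

part1_start = 2048        # first partition offset, from the kpartx output
part1_sectors = 204800    # 100 MiB boot partition
part2_sectors = 33347584  # sectors left for the ext4 partition

# The whole 16G image, in sectors:
image_sectors = 16 * 2**30 // SECTOR
assert image_sectors == 33554432

# The partition table accounts for every sector of the image...
assert part1_start + part1_sectors + part2_sectors == image_sectors

# ...so an ext4 filesystem image sized as 16 GiB minus 100 MiB,
# with no allowance for the offset before the first partition,
# is exactly that offset too big to fit:
ext4_image_sectors = 17075011584 // SECTOR
assert ext4_image_sectors == 33349632
assert ext4_image_sectors - part2_sectors == part1_start  # 2048 sectors = 1 MiB
```

Which suggests sizing the ext4 image 2048 sectors (1MiB) smaller, so it fits behind the partition offset – though at this stage that is only a guess.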

Arch image for r-pi from scratch

I’ve decided to brave it and create an install image for Arch Linux to run on an r-pi. And to do it without direct access to the SD card, by creating the filesystems on loopback devices which I will then dd to an image file that I will copy to the Mac and then burn to the SD card.

No idea if it will work.

http://archlinuxarm.org/platforms/armv6/raspberry-pi describes creating two filesystems, one 100MB formatted as VFAT and a 15.9GB ext4 filesystem.

First things first: create the files to be used as the filesystems. This needs a wee bit of arithmetic to convert 15.9GB to KB.

16GB is 17179869184B and we need to subtract 100M, which can be found with the commands,

$ echo '100 * 1024 * 1024'|bc
104857600
$ echo '17179869184 - 104857600' | bc
17075011584
$ echo '17075011584 / 1024' | bc
16674816

This gives 16674816KB as the size of the 15.9GB disk file. Then we can use the truncate command to create the files for the disk images (whatever happened to the mkfile command?).
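The same sums, sanity-checked in Python (binary units throughout, so the ‘GB’ above really means GiB):

```python
GIB = 2**30
MIB = 2**20

total_bytes = 16 * GIB          # the 16GB SD card image
boot_bytes = 100 * MIB          # the VFAT boot partition
assert boot_bytes == 104857600

ext4_bytes = total_bytes - boot_bytes
assert ext4_bytes == 17075011584

# the KB figure passed to truncate for the ext4 image
assert ext4_bytes // 1024 == 16674816
```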

# truncate -s 100M rpi-arch-root.img
# truncate -s 16674816K rpi-arch-ext4.img

Then we create the loopback devices on which we create actual filesystems,

# losetup -f rpi-arch-root.img
# mkfs.vfat /dev/loop0
# losetup -f rpi-arch-ext4.img
# mkfs.ext4 /dev/loop1

Then back to the Arch instructions

# mkdir root boot
# mount /dev/loop0 boot
# mount /dev/loop1 root

Then grab the Arch image files and unpack them on the filesystems as instructed,

# wget http://archlinuxarm.org/os/ArchLinuxARM-rpi-latest.tar.gz
# bsdtar -xpf ArchLinuxARM-rpi-latest.tar.gz -C root
# mv root/boot/* boot

Then we unmount the filesystems and try to figure a way of getting them on to the SD card with a Mac.

# umount /dev/loop0
# umount /dev/loop1

I suspect some nifty footwork with dd coming up to create a 16GB image file that we copy to the Mac and dd to the SD card. More to follow…

References

I used https://samindaw.wordpress.com/2012/03/21/mounting-a-file-as-a-file-system-in-linux/ for some advice on using files as loopback devices, even though I created the files differently and the actual losetup commands in the article don’t work. I like to mention the pages I found useful, in whatever way I ended up using them.

Farewell r-pi Docker

Looks like I’ll probably have to abandon my adventures with Docker on the r-pi.

Any attempt to run a RUN command (say, apt-get update) in the Dockerfile gives the following error:

The command '/bin/sh -c 'apt-get update'' returned a non-zero code: 139

This equates to a segmentation fault, although the command runs fine from a standard shell.

I tried running the docker daemon in debug mode but nothing untoward is reported.

But a quick glance at /var/log/daemon.log shows the following:

systemd[1]: Failed to set cpu.cfs_period_us on /system.slice/var-lib-docker-overlay-...-merged.mount: Permission denied
systemd[1]: Failed to set cpu.cfs_quota_us on /system.slice/var-lib-docker-overlay-...-merged.mount: Permission denied

Our friends at Google then point us at https://github.com/opencontainers/runc/issues/57 and pretty much everything checks out except for /sys/fs/cgroup/cpu/cpu.cfs_quota_us. Any potential issue for Debian/Jessie should have been fixed a while back.

Oddly, everything in and under /sys/fs/cgroup has a timestamp of 1 Jan 1970, suggesting that something’s not quite right.

This is now happening on two different SD cards on different r-pis with different versions of Hypriot, so it’s not a Docker issue. Attempting to apply the latest updates gives further breakage. Time to say goodbye and head back to x86.

Just when I thought it was safe to go down to the Dock(er)

The very innocuous Dockerfile entry

FROM armv7/armhf-ubuntu
RUN apt-get update

is throwing the following error:

The command '/bin/sh -c apt-get update' returned a non-zero code: 139

Now, this is on a completely unadulterated Docker 1.7 install, and converting it to,

CMD [ "/bin/dash", "-c", "apt-get", "update" ]

works just fine, but there’s no way I’m doing this for multi-line commands installing lots of packages. Changing the shell (from dash) to bash makes no difference; it appears that ‘sh -c’ wants the command and all its arguments as one string, but that isn’t what it’s getting.

So, rather than just getting on with doing the tasks I actually want, I have to chase down some stupid setting or version error.

Maybe I should have switched to Unikernels.

Docker or Unikernel?

After some recent reading, I’m torn between getting back into building Docker images – I lost all my previous builds when the SD card on my r-pi got corrupted and everything had to be built from scratch – or the latest flavour of the month, unikernels.

Well, I was having fun with Docker and my new job will start looking at the tech at some point and it’d be a shame not to press ahead with it.

So, I’ll use https://hub.docker.com/r/armv7/armhf-ubuntu/ as the new base and look at deploying a Rails application as per my original intention.

https://github.com/umiddelb/armhf/wiki/Installing,-running,-using-docker-on-armhf-(ARMv7)-devices is the best guide for getting this working.

More Desktop Linux Loathing

I have found an old 80GB ipod classic that has about 31GB of orphaned tracks on it that can be copied to the local disk.

I’m using the guide at https://wiki.archlinux.org/index.php/IPod.

But when it comes to synching the music back to the ipod, things get nasty. So far I have tried Banshee (crashes with content sync); Rhythmbox (possibly the worst-designed interface of any application – sorry, but it’s awful); Amarok (really confusing interface, and I can’t get past an empty ‘transcode’ dropdown list before initialising the ipod, and it’s hard to tell whether it’s actually doing anything); and gtkpod (the less said the better – it can’t detect the ipod); the floola and yamipod packages don’t exist.

Of the bunch, Banshee is the only application that actually makes an attempt to sync content to the device, but it hits just too many errors before the segfault.

Even with all these problems, at least the device is actually detected; my Android phone is a complete blank in Antergos despite lots of MTP shenanigans. Given the number of options listed on the help pages, it seems fairly obvious that this is all wing-and-a-prayer stuff.

I’m not saying that all this should be easy, but it shouldn’t be this difficult.

Update: by one means or another, gtkpod managed to copy 74 songs to the ipod. And having made it writeable, Amarok has a sync option to just copy the files rather than transcode them (which is what does for Banshee); all looking quite promising. But I really wish I knew what it is that I have done to get to this point: it’s all well and good Amarok grabbing the lyrics to songs, but I do wish it had a progress bar telling me how far through copying the 3100 songs it has got…