Monthly Archives: June 2015

logstash on fedora

https://community.ulyaoth.net/threads/how-to-install-logstash-kibana-on-fedora-using-rsyslog-as-shipper.11/

But refer to https://community.ulyaoth.net/threads/ulyaoth-repositories.3/ when trying to find the package to install for F20.

Contrary to previous comments, this process works very nicely and I really appreciate the work that’s gone into the nginx package to use the Ubuntu vhost layout.

The only variation for me was to use port 5000 for the server,

# firewall-cmd --zone=public --add-port=5000/udp
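Worth noting (a standard firewalld idiom, not from the original post): --add-port on its own only changes the runtime configuration and is lost on the next reload or reboot. To persist it,

```shell
# Record the rule in the permanent configuration, then reload so the
# running firewall picks it up too
firewall-cmd --permanent --zone=public --add-port=5000/udp
firewall-cmd --reload
```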

logstash-forwarder on r-pi

Have just discovered that the digitalocean instructions for logstash-forwarder don’t work on an r-pi: alas, there’s no pre-built ARM package. No matter, very good workaround instructions are available at:

https://github.com/jruby/jruby/issues/1561#issuecomment-67953147

I love it when people go to all the effort and work out all these details for people like me to just walk up and grab it. Thanks.

As an alternative, http://michaelblouin.ca/blog/2015/06/08/build-run-logstash-forwarder-rasperry-pi/, describes a perfect way to get the forwarder installed on an r-pi. The beauty here is that we get an installable package for distribution on other boxes.

Am very aware that I’m feeding off the great work done by others and not actually contributing an awful lot myself.

$ dpkg -i logstash-forwarder_0.4.0_armhf.deb
Selecting previously unselected package logstash-forwarder.
(Reading database ... 64610 files and directories currently installed.)
Unpacking logstash-forwarder (from logstash-forwarder_0.4.0_armhf.deb) ...
Setting up logstash-forwarder (0.4.0) ...
update-rc.d: using dependency based boot sequencing
Logs for logstash-forwarder will be in /var/log/logstash-forwarder/

Linux love and hate

Straight to the point: Linux printing and CUPS has always sucked and still sucks. Every time I consider printing from a Linux desktop I have to ask: have they not fixed that mess yet?

When I send a job to a printer set to use the full size of the page, I don’t expect the print to consist of the page scrunched up as small as possible in the top left corner. Trying to add a printer using CUPS is as horrible now as it ever has been. I really don’t care about all the different possible protocols and whatnot: on Windows and a Mac, printing just works.

In a similar vein, I had fun and games trying to scan a document on a network printer. Now, granted, this is quite a pain on Windows and Mac too, requiring a Photoshop install to acquire the scan; Gimp on Linux is a non-starter (even with an xsane package). I found a whole bunch of packages (xsane, sane-backends, sane-backends-drivers-scanners) and a reference to /etc/sane.d/epson2.conf, but running xsane was still throwing device errors. Doing the basic stuff is so difficult it’s no wonder desktop Linux will never catch on.

I believe the command ought to be,

xsane epson2:net:192.168.1.109

but nowhere could I find an example of the actual command used to connect to the scanner. Too much guessing until something works, so you don’t know exactly what is required for a working solution and are not likely to be able to repeat it seamlessly in the future. At least the scanned document actually matched the original.

It’s depressing to see that these fundamental desktop operations are as unpleasant now as they were 15 years ago, and they aren’t looking like they’ll improve anytime in the next 15.
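To make the setup repeatable rather than guessed at each run, the scanner address can be pinned in the backend config the error messages pointed at. The epson2 backend reads /etc/sane.d/epson2.conf, which accepts a net line (IP from above; run as root):

```shell
# Tell the epson2 backend where the network scanner lives, once,
# instead of passing the device string to xsane every time
echo "net 192.168.1.109" >> /etc/sane.d/epson2.conf

# scanimage -L should then list the device without guesswork
scanimage -L
```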

Enabling webcam on Fedora 20

Decided to try and be brave with grabbing a photo to upload to the address book.

dmesg was reporting the following error:

[ 12.101118] uvcvideo: Found UVC 1.00 device <unnamed> (05ca:1839)
[ 12.101567] uvcvideo: UVC non compliance - GET_DEF(PROBE) not supported. Enabling workaround.
[ 12.101942] uvcvideo: Failed to query (129) UVC probe control : -32 (exp. 26).
[ 12.101945] uvcvideo: Failed to initialize the device (-5).

A quick G-search for ‘uvcvideo sony’ turned up https://lists.ubuntu.com/archives/kernel-bugs/2010-July/128364.html, and after installing libusb-devel.i686 and glib-devel.i686 (I’m on a 32-bit laptop), running

r5u87x-loader --reload

did the trick. Installed cheese to grab the image from the webcam.

Note, however, that the following would probably have been a bit simpler,

# yum search uvcvideo
Loaded plugins: langpacks
============================ N/S matched: uvcvideo =============================
libwebcam.i686 : A library for user-space configuration of the uvcvideo driver

Giving up on rails on Docker

Having gone through the process of preparing a Rails image from a ruby build, the result has ballooned out to over 1.2GB even after removing the compiler and associated packages.

I even tested running a bundle update to identify the gems that need native compilation: mysql2, bcrypt, therubyracer.

And then… And then, after downloading the application, running the bundle update borked because bcrypt 3.1.9 was needed as a dependency instead of the 3.1.10 installed. Given the time it takes to compile gems, it’s not reasonable or sensible to try and stay on top of deploying Rails applications to a container.
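If you do go this route anyway, pinning the natively-compiled gems in the Gemfile (standard Bundler syntax; the version comes from the error above) at least makes the slow compiles happen once, at image-build time, rather than again at deploy time:

```ruby
# Gemfile fragment: pin the native-extension gems so the versions baked
# into the image and the versions the app resolves agree
gem 'bcrypt', '3.1.9'
gem 'mysql2'
gem 'therubyracer'
```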

Okay, so we’ll try to do a WordPress deployment instead.

https://docs.docker.com/compose/wordpress/

Docker image building – a time consuming process

I have to use the following construct to ensure that the Ruby environment is set up for building gems, and it’s a recommended practice for Docker,

RUN /bin/bash -c "source /usr/local/rvm/scripts/rvm \
 && gem update --system --no-rdoc --no-ri \
 && gem update --no-rdoc --no-ri \
 && gem install --no-rdoc --no-ri bundler \
 && gem install --no-rdoc --no-ri libv8 \
 && gem install --no-rdoc --no-ri mysql2"

One implication of this is that it is an atomic operation: when you discover that the libmysqlclient-dev package is missing and the build needs to be run again, there’s no cache to fall back on. Ruby takes 5 hours to build on an r-pi; libv8 is 2 hours. This is not quick-turnaround stuff for a background build, although it will really improve Rails container deployment times. Always a tradeoff.
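One way to soften the all-or-nothing rebuild (a general Docker layer-caching idiom, not something from the original build) is to put the OS packages in their own RUN step before the gem step, so a failure in the gems doesn’t invalidate the package layer:

```dockerfile
# Cached in its own layer: a failure in the later gem step won't
# force these packages to be downloaded and installed again
RUN apt-get update && apt-get install -y libmysqlclient-dev

# The slow, fragile part comes after, so the cached layer above
# survives a retry
RUN /bin/bash -c "source /usr/local/rvm/scripts/rvm \
 && gem install --no-rdoc --no-ri mysql2"
```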

Enabling rvm in a Dockerfile – The perils of bash vs. sh

The Ruby installer says to run ‘source /usr/local/rvm/scripts/rvm’ to enable the rvm environment, but with Docker this gives an error indicating that ‘source’ is a bash builtin. Using the sh equivalent to run a script, you also get an error,

Step 4 : RUN . /usr/local/rvm/bin/rvm
 ---> Running in 534f12b27222
/bin/sh: 7: /usr/local/rvm/bin/rvm: Syntax error: "(" unexpected (expecting "fi")

This is because the RUN operation uses /bin/sh to execute tasks, while the rvm script relies on bash syntax. The fix is to invoke bash explicitly:

RUN /bin/bash -c "source /usr/local/rvm/scripts/rvm \
 && gem update --system --no-rdoc --no-ri \
 && gem update --no-rdoc --no-ri \
 && gem install --no-rdoc --no-ri bundler \
 && gem install --no-rdoc --no-ri libv8 \
 && gem install --no-rdoc --no-ri mysql2"

The whole operation has to be run inside the quotes; you can’t do the source as one RUN and the gem commands as individual RUNs, or you’ll get a ‘gem: command not found’ error.
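The ‘command not found’ behaviour can be seen outside Docker: each RUN is roughly a fresh `/bin/sh -c <command>`, so nothing set in one step survives into the next. A simulation (plain sh, not Docker itself):

```shell
# Two "RUN steps": each is a brand-new shell process, so a variable set
# in the first is simply absent in the second
sh -c 'source_env=loaded; echo "step 1: env=$source_env"'
sh -c 'echo "step 2: env=$source_env"'
```

The same applies to functions and PATH changes made by sourcing the rvm script, which is why the source and the gem commands have to share one RUN.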

Docker and Ruby

Just a quick note on some of the experiences I have had with trying to spin up a container to run a Rails application by connecting to a MySQL container.

Some observations,

  • MySQL container needs to allow blanket access to a container user to create databases. Admittedly, this can be constrained to the private 172.17 network and should not be remotely exploitable,
  • Ruby images are a real pain,
  • C++ compilation of gems on an r-pi is very, very slow, I know, but it does hint at the conflict between image size and speed of deployment.
  • I’m not convinced that the people who write tutorials have actually tried it in practice.

MySQL container access

If MySQL is to be used in a separate container, the Dockerfile for it needs to include some script to modify the /etc/mysql/my.cnf (or wherever) file to change the default bind address from 127.0.0.1 to the IP of the running container,

ENV SERVICE mysql

and to include a script to do the magic and update my.cnf


RUN mkdir -p /admin/scripts 
COPY scripts/mysql_start.sh /admin/scripts/
RUN chmod 744 /admin/scripts/mysql_start.sh

The script itself looks like,

#!/bin/sh
#
# Container entrypoint script to start the database service
# and create a user that may be used in other containers to
# create new databases.
#
# rubynuby/june 2015
#

# Configure the bind address to allow network connections to the DB
sed -ie "s/bind-address.*/bind-address\t= `hostname -I`/" /etc/mysql/my.cnf

# Start the service and create the user
/usr/sbin/service ${SERVICE} start

echo "GRANT CREATE ON *.* TO '${DBUSER}'@'%' IDENTIFIED BY '${DBPASS}';" | mysql \
 -u root --password="${ROOTPW}" mysql

exit 0
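The sed substitution can be sanity-checked outside the container on a scratch file (GNU sed assumed for the \t in the replacement; 172.17.0.5 stands in for whatever `hostname -I` reports inside the container):

```shell
# Build a scratch my.cnf with the stock loopback bind address
cat > /tmp/my.cnf <<'EOF'
[mysqld]
bind-address            = 127.0.0.1
EOF

# Same substitution as the entrypoint script, with a fixed IP
sed -i -e "s/bind-address.*/bind-address\t= 172.17.0.5/" /tmp/my.cnf

# The line should now carry the container address, not 127.0.0.1
grep bind-address /tmp/my.cnf
```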

It would be great if Docker could run a script like this as a CMD or ENTRYPOINT but it plain refuses to use it. MySQL in Docker is hard.
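For what it’s worth, one likely culprit: a container exits when its PID 1 exits, and this script ends with exit 0 right after `service start`, so even if the ENTRYPOINT did fire, the container would stop immediately. The usual shape (a sketch I haven’t verified end-to-end here) is the exec form plus a foreground daemon:

```dockerfile
# Exec-form entrypoint; the script must be executable and start with a
# shebang. Inside the script, the final `exit 0` would become
# `exec mysqld_safe` so the daemon runs in the foreground as PID 1
# instead of the script finishing and taking the container with it.
ENTRYPOINT ["/admin/scripts/mysql_start.sh"]
```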

Ruby Docker images

The prevailing wisdom with Docker - and one I agree with - is to keep the images small and simple to avoid the obvious problems, but, as would surprise no-one, Ruby doesn't play ball.

Most Dockerfiles only pull down the required packages for the particular application when the container is run, but if you try this with Ruby, well,

  • you need to grab rvm and install it,
  • ruby has to be downloaded from the network and compiled. On an r-pi this takes many hours (an overnight build), and you have all the compiler packages lying around afterwards.
  • Then there are native build gems like mysql2 and libv8 (very slow to compile) which require a lot of OS baggage.
  • Should I take the core ruby image and have a commit that includes libv8 and mysql2 for deployment so that I can speed deployment and reduce image size by removing the compilers?
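The last bullet could be sketched with docker commit (the image and container names here are hypothetical, and I haven't timed this end to end):

```shell
# Bake the slow native gems into a reusable base image once, so later
# application builds start from it and never need the compiler stage
docker run --name gem-build my-ruby-base /bin/bash -c \
    "source /usr/local/rvm/scripts/rvm && gem install --no-rdoc --no-ri libv8 mysql2"
docker commit gem-build my-ruby-gems
docker rm gem-build
```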

Ruby in Docker is hard.

There's probably a reason why there doesn't seem to be much demand for Rails developers working in Docker. These are not natural technology partners.

Container tutorials

Most of the walkthroughs I have come across use PostgreSQL - a mighty fine RDBMS - which doesn't need network access to be explicitly permissioned (bind address) and doesn't appear to be fussy about passwords or invalid database URL formats. It's harder with MySQL in practice.

Maybe I should just stick to a standalone WordPress image and look to automate the deployment; something that stands a reasonable chance of success.