Category Archives: Automation

Automating builds and testing with Jenkins

Now, I’m more than well aware that I’m on a well-trodden path here, but this is more as a reminder of some basic setup for future use.

With previous Rails projects I have undertaken, I have tried to incorporate the testing after a significant amount of the code has been written, making a serious test scenario unfeasible.

With my latest project, however, I’m taking the time to include testing from the get-go. So, with the first few models created I have taken the time – about 50% of the project time thus far – to get a set of successful model and controller tests; there may be a separate posting to cover this.

With testing in place and with GitHub being used as the SCM, I’m in a position to automate the build and test process. I’m thinking of deploying the application to a personal AWS account and building a pipeline to build and deploy after commits.

Anyway, using my Arch Linux desktop machine as the testing server, I installed Jenkins and started the service. I started working from the getting started guide at https://jenkins.io/ but didn’t fancy getting to grips with Groovy and the pipeline.

I found a simple guide for configuring Jenkins to build and deploy a Rails project and decided to see if I could get a tangible result.

The Jenkins installation creates a user account with a home directory, /var/lib/jenkins, and to work with a Freestyle project we need a .bashrc to set up the environment for the build script.

COMMISSIONS_USER="mysql-user"
COMMISSIONS_PSWD="mysql-password"
COMMISSIONS_HOST="localhost"
COMMISSIONS_DB="test_database"
TEST_SECRET_KEY_BASE="... random.characters.for.secret.key..."
MYSQL_SOCK="/path/to/mysqld.sock"
export COMMISSIONS_USER COMMISSIONS_PSWD COMMISSIONS_HOST COMMISSIONS_DB TEST_SECRET_KEY_BASE MYSQL_SOCK
echo "Exported Rails and MySQL environment for $LOGNAME"

These are to match the environment references in the database and secrets YAML files from the repository; the last line is helpful for checking the console output.
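For reference, the test stanza of config/database.yml would pick up these settings with ERB references along the following lines (the adapter and key names here are assumptions to match the variables above, not taken from the original project):

```yaml
# Hypothetical config/database.yml test stanza matching the exported variables
test:
  adapter: mysql2
  database: <%= ENV["COMMISSIONS_DB"] %>
  username: <%= ENV["COMMISSIONS_USER"] %>
  password: <%= ENV["COMMISSIONS_PSWD"] %>
  host: <%= ENV["COMMISSIONS_HOST"] %>
  socket: <%= ENV["MYSQL_SOCK"] %>
```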

The build script then becomes the following,

#!/bin/bash

export RAILS_ENV=test
. $HOME/.bashrc

cd . # Force RVM to load the correct Ruby version

sudo /usr/bin/bundle install
/usr/bin/bundle exec rails db:create db:schema:load db:migrate
/usr/bin/bundle exec rails test

Now, because I typically use rvm for the ruby install for my account, the gems required for the application aren’t generally available to other users. This means that the bundle command will require privilege escalation to install the gems, so the jenkins account needs an entry or two in /etc/sudoers (these can be removed after the first bundle has completed, but will be required again for each new gem).

jenkins ALL=NOPASSWD:/usr/bin/bundle
jenkins ALL=NOPASSWD:/usr/bin/gem
jenkins ALL=NOPASSWD:/usr/bin/rake

And with all that in place we can ask Jenkins to build now and get a clean run. With a daily schedule for builds we can start on the development cycle, making sure we continue to test as we go.
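For the daily schedule, the job’s “Build periodically” trigger takes a cron-style spec; something like the following works (the H token lets Jenkins hash the job to a slot in the given range rather than starting every job at the same moment):

```
# Build once a day, at a hashed minute during an hour between 02:00 and 05:59
H H(2-5) * * *
```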

References

http://nithinbekal.com/posts/jenkins-rails/ – a basic guide (for Ubuntu) adapted for my setup


Containing side-channel service monitoring

In a recent post, I produced a simple script that can be used to poll a set of ports and propagate their status with a fail-safe (-safe, not -proof, i.e., fails to safety) option to override that status (with an HTTP 503) allowing investigation and safe shutdown of critical services.

As a follow-up, but with no particular significance, this post is an excuse to prepare a Docker container that runs the service. On a Raspberry-Pi.

My approach for this is to start with a base image and build a Python container, then use that to prepare a uwsgi container, which can then be used to launch a simple Flask service.
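As a reminder of the shape uwsgi expects at the end of this chain (a module exposing a callable named app), here is a minimal stdlib-only WSGI sketch of the monitor’s healthcheck. The original service uses Flask, and the backend port list here is purely illustrative:

```python
# Minimal WSGI sketch of a side-channel healthcheck endpoint.
# The real side-channel-monitor.py uses Flask; this stdlib-only version just
# shows the shape uwsgi's --callable option expects.
import socket

# Hypothetical backend ports to poll; adjust to match your services.
BACKEND_PORTS = [("localhost", 3306)]

def ports_ok():
    """Return True only if every backend port accepts a TCP connection."""
    for host, port in BACKEND_PORTS:
        try:
            with socket.create_connection((host, port), timeout=1):
                pass
        except OSError:
            return False
    return True

def app(environ, start_response):
    # Fail to safety: anything other than a healthy check returns 503.
    if environ.get("PATH_INFO", "") == "/side-channel-monitor/healthcheck" and ports_ok():
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"OK\n"]
    start_response("503 Service Unavailable", [("Content-Type", "text/plain")])
    return [b"UNAVAILABLE\n"]
```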

I use the Hypriot distribution for the Pi and they provide a good Dockerfile to get things started.

# Taken from https://github.com/hypriot/rpi-python/blob/master/Dockerfile
# Pull base image
FROM resin/rpi-raspbian:wheezy
MAINTAINER Cleverer Thanme <hypriot-user@hypriot.com>

# Install dependencies
RUN apt-get update && apt-get install -y \
 python \
 python-dev \
 python-pip \
 python-virtualenv \
 --no-install-recommends && \
 rm -rf /var/lib/apt/lists/*

# Define working directory
WORKDIR /data

# Define default command
CMD ["bash"]

Then we create the container image with,

docker build --rm=true --tag=localhost/my-user:hypriot-python .

Then we follow that with the wsgi image Dockerfile,

# Based on local python base image 
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python
MAINTAINER Inspired Byothers <my-address@domain.com>

# Install dependencies
RUN apt-get update && \
 apt-get install -y build-essential python-dev && \
 pip install \
 Flask \
 uwsgi \
 requests && \
 apt-get purge -y --auto-remove build-essential python-dev

# Define working directory
WORKDIR /data

# Define default command
CMD ["bash"]

With the image coming from,

docker build --rm=true --tag=localhost/my-user:hypriot-python-wsgi .

And the service container is described by,

# Based on local python uwsgi image with Flask pip installed
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python-wsgi
MAINTAINER Thisone Onme <my-user@domin.com>

# Install application files
ADD side-channel-monitor.py /var/www/uwsgi/flask/side-channel-monitor/side-channel-monitor.py

# Make a port externally available
EXPOSE 7071

# Define working directory
WORKDIR /var/www/uwsgi/flask/side-channel-monitor

# Define default command
CMD ["uwsgi", "--wsgi-file", "/var/www/uwsgi/flask/side-channel-monitor/side-channel-monitor.py", "--callable", "app", "--processes", "4", "--threads", "2", "--uid", "nobody", "--gid", "nogroup", "--http", "0.0.0.0:7071", "--logto", "/var/log/side-channel-monitor.log"]

And the image appears after,

docker build --rm=true --tag=localhost/my-user:python-wsgi-side-channel .

And we can then deploy a container using,

docker run --rm -p 9880:7071 -d --name side-channel-monitor localhost/my-user:python-wsgi-side-channel

This maps the exposed container port (7071) to host port 9880.

And then check the connection with,

curl -v http://localhost:9880/side-channel-monitor/healthcheck

(This obviously requires a backend service for the monitor to connect to.)


Initial upload of local code to repo on github

Another quick note to self ‘cos this is something that I can never remember.

After creating a local repo with the project being developed, we need to create the repo at https://github.com/slugbucket and then run the following commands on the local workstation.

$ git remote add origin https://github.com/slugbucket/automation-demo-packages.git
$ git push -u origin master

Enter the github username and password when prompted.

If the repository on github was created with an initial README.md file, it will first be necessary to do a merge with the following command,

$ git pull https://github.com/slugbucket/automation-demo-packages.git

Then the push commands above will work fine.

Puppet manifests without site.pp

A simple diagram showing a possible flow using a Puppet ENC (External Node Classifier) that removes the need for a site.pp file.

[Diagram: Puppet external node classifier flow — a simple flow for a Puppet ENC that avoids the need for a site.pp manifest]

This obviously assumes that your whole environment can be described using modules and classes and that these can all be referenced via Hiera. The ENC can be any type of program; I might post a sample piece of Ruby or Python later.
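To give the flavour of what such a program looks like, here is a minimal ENC sketch in Python. Puppet calls the classifier with the node name as an argument and expects YAML on stdout; the node-to-class mapping here is a made-up static table, where a real ENC would query a CMDB or other data source:

```python
#!/usr/bin/env python3
# Minimal Puppet ENC sketch: given a node name, print YAML naming its classes.
import sys

# Hypothetical static mapping; a real ENC would look this up in a CMDB, LDAP, etc.
NODE_CLASSES = {
    "web01.example.com": ["roles::webserver"],
    "db01.example.com": ["roles::database"],
}

def classify(node):
    """Return the YAML document Puppet expects from an ENC for this node."""
    classes = NODE_CLASSES.get(node, ["roles::base"])
    lines = ["---", "classes:"]
    lines += ["  - %s" % c for c in classes]
    lines.append("environment: production")
    return "\n".join(lines)

if __name__ == "__main__":
    node = sys.argv[1] if len(sys.argv) > 1 else "unknown"
    print(classify(node))
```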

Don’t automate a moving target

When looking to automate or refactor operational processes, or even to build a new process, it is tempting to assume that there’s a linear path from idea to completion.

But immature services or processes will also need to be tweaked and refined, or rebuilt from scratch. It is next to impossible for other team members working on process automation to complete their work effectively in such a changing environment.

Designing the automation steps for any given process or service requires that the target be a stable platform. If it can’t be made stable for whatever reason then wait until it is because any automation work will most likely have to be done again.

Puppet is not automation

I seem to be getting an inkling that many companies consider that, because they are using (or are planning to use) a tool like Puppet, they are doing automation (and by extension DevOps).

If only life were that simple. I’m preparing another, more detailed post on what I believe is missing from that belief: DevOps is not about the tooling but about the culture within an organisation that enables collaboration.