Category Archives: Automation

Restricted access to EB instance

Almost embarrassed to admit that I spent much of the day trying to figure out why my attempts at applying a custom nginx configuration to block access to the editable content on my test site at http://xword-hints.eu-west-1.elasticbeanstalk.com/ were failing: the default Python instance actually runs Apache httpd!

There was certainly enough evidence in the logs, and once I figured out how to SSH to the instance it didn't take long to confirm.

For reference, SSHing to the instance requires that the EC2 key pair be applied to the environment through the Security settings; it’s likely that this can also be done via the CLI.
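If so, I'd expect it to be something like this (untested; the namespace and option name come from the EB option settings documentation, and the environment name here is assumed):

$ aws elasticbeanstalk update-environment --environment-name xword-hints \
    --option-settings Namespace=aws:autoscaling:launchconfiguration,OptionName=EC2KeyName,Value=my-key-pair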

Checking the EC2 control panel for the instances will give the hostname to use for SSH login; just change the path to the SSH key that has been uploaded to AWS.

$ ssh -i ~/.ssh/private-key ec2-user@ec2-pub-ip-addr-ess.eu-west-1.compute.amazonaws.com

There are a couple of ways of applying the custom configuration needed to restrict access to the editable resources, but the method I settled on was adding the content to .ebextensions/options.config as an entry in a section called files:

option_settings:
  aws:elasticbeanstalk:application:environment:
    SECRET_KEY: ChangeMe
  aws:elasticbeanstalk:container:python:
    WSGIPath: crossword_hints.py

files:
  /etc/httpd/conf.d/xword-hints-deny.conf:
    mode: 0644
    content: |
      <LocationMatch "/(crossword-solutions|crossword-setters|setter-types|solution-types)/[0-9]+/(edit|delete)">
        Require all denied
      </LocationMatch>
      <LocationMatch "/(crossword-solutions|crossword-setters|setter-types|solution-types)/new">
        Require all denied
      </LocationMatch>
      ErrorDocument 403 /static/403-xword-hints.html

It’s important to ensure that the indentation is correct for the file definition and content; the following deployment error will be thrown if not:

Service:AmazonCloudFormation, Message:[/Resources/AWSEBAutoScalingGroup/Metadata/AWS::CloudFormation::Init/prebuild_0_crossword_hints/files//etc/httpd/conf.d/xword-hints-deny.conf] 'null' values are not allowed in templates

The application needs to include the 403 document, 403-xword-hints.html, because the web server will pass the request for the custom error page to it as a normal HTTP request.

With all this in place, the application is reasonably safe to leave running on the internet with any attempt to create, edit or delete content yielding a permissions error.
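A quick external check of the rules (the record ID here is made up; any matching URL should draw the 403 page):

$ curl -I http://xword-hints.eu-west-1.elasticbeanstalk.com/crossword-solutions/1/edit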

And the updates are still applied by a Jenkins job pulling branch code from GitHub.


AWS ElasticBeanstalk custom environment variables

As a holiday project I’ve been looking into using Jenkins to deploy code updates from GitHub into an Amazon AWS ElasticBeanstalk instance[1] as an early attempt at some sort of continuous delivery.

One of the features of the Flask application is that it tries to get the SECRET_KEY from an environment variable (although the code for a failsafe value doesn’t work: FIXME). The intention is that the web server environment provides the key at runtime so that different values can be used in each environment.
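The fix should be something along these lines (a sketch, not the actual application code; the fallback value is a placeholder):

import os
from flask import Flask

app = Flask(__name__)
# Use the environment value when present, otherwise fall back to a
# throwaway key that is only good enough for local development.
app.config['SECRET_KEY'] = os.environ.get('SECRET_KEY', 'dev-only-not-secret')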

Now, this AWS page describes the format of the options to apply custom environment settings to an application (the name of the actual file doesn't matter so long as it has a .config extension and is found in the .ebextensions directory in the uploaded code):

option_settings:
  aws:elasticbeanstalk:application:environment:
    SECRET_KEY: ChangeMe
  aws:elasticbeanstalk:container:python:
    WSGIPath: crossword_hints.py

Setting the WSGIPath variable means that I can continue to use the original application source file rather than change to the default application.py.

This file can safely be kept in the GitHub repo with the ChangeMe placeholder in place; a simple shell build step in Jenkins swaps in a real value prior to the code upload, thus (using | as the sed delimiter, since the base64 output can contain / characters):

SECRET_KEY=`openssl rand -base64 12`; sed -i -e "s|ChangeMe|${SECRET_KEY}|" .ebextensions/options.config

Jenkins has a great AWS EB deploy plugin that uses stored credentials to manage the source bundling, upload and deployment of the application; it's kinda strange seeing the AWS console page spring into life in response to the Jenkins job running. To save having to include the build shell step, I'm thinking of creating my own version of the plugin that allows the inclusion of custom variables.

[1] – As a development instance the application will be mostly terminated (and offline) because AWS is a very expensive way of running a bit of demo code.


Automating builds and testing with Jenkins

Now, I’m more than well aware that I’m on a well-trodden path here, but this is more as a reminder of some basic setup for future use.

With previous Rails projects, I have tried to incorporate testing only after a significant amount of the code was already written, by which point a serious test suite was unfeasible.

With my latest project, however, I’m taking the time to include testing from the get-go. So, with the first few models created I have taken the time – about 50% of the project time thus far – to get a set of successful model and controller tests; there may be a separate posting to cover this.

With testing in place and with GitHub being used as the SCM, I’m in a position to automate the build and test process. I’m thinking of deploying the application to a personal AWS account and building a pipeline to build and deploy after commits.

Anyway, using my Arch Linux desktop machine as the testing server, I installed Jenkins and started the service. I started working from the getting started guide at https://jenkins.io/ but didn’t fancy getting to grips with Groovy and the pipeline.

I found a simple guide for configuring Jenkins to build and deploy a Rails project and decided to see if I could get a tangible result.

The Jenkins installation creates a user account with a home directory, /var/lib/jenkins, and to work with a Freestyle project we need a .bashrc to set up the environment for the build script.

COMMISSIONS_USER="mysql-user"
COMMISSIONS_PSWD="mysql-password"
COMMISSIONS_HOST="localhost"
COMMISSIONS_DB="test_database"
TEST_SECRET_KEY_BASE="... random.characters.for.secret.key..."
MYSQL_SOCK="/path/to/mysqld.sock"
export COMMISSIONS_USER COMMISSIONS_PSWD COMMISSIONS_HOST COMMISSIONS_DB TEST_SECRET_KEY_BASE MYSQL_SOCK
echo "Exported Rails and MySQL environment for $LOGNAME"

These are to match the environment references in the database and secrets YAML files from the repository; the last line is helpful for checking the console output.
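For reference, the matching entries in the YAML files look something like this (a sketch; the key names are assumed to line up with the variables above):

# config/database.yml
test:
  adapter: mysql2
  database: <%= ENV['COMMISSIONS_DB'] %>
  username: <%= ENV['COMMISSIONS_USER'] %>
  password: <%= ENV['COMMISSIONS_PSWD'] %>
  host: <%= ENV['COMMISSIONS_HOST'] %>
  socket: <%= ENV['MYSQL_SOCK'] %>

# config/secrets.yml
test:
  secret_key_base: <%= ENV['TEST_SECRET_KEY_BASE'] %>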

The build script then becomes the following,

#!/bin/bash

export RAILS_ENV=test
. $HOME/.bashrc

cd . # Force RVM to load the correct Ruby version

/usr/bin/bundle install
/usr/bin/bundle exec rails db:create db:schema:load db:migrate
/usr/bin/bundle exec rails test

Now, because I typically use rvm for the ruby install for my account, the gems required for the application aren't generally available. This means that the bundle command will require privilege escalation to install the gems, so the jenkins account needs an entry or two in /etc/sudoers (these can be removed after the first bundle has completed, but will be required again for each new gem).

jenkins ALL=NOPASSWD:/usr/bin/bundle
jenkins ALL=NOPASSWD:/usr/bin/gem
jenkins ALL=NOPASSWD:/usr/bin/rake

And with all that in place we can ask Jenkins to Build Now and get a clean run; with a daily schedule for builds we can start on the development cycle, making sure we continue to test as we go.
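The daily schedule is just a cron-style entry under the job's Build Triggers (Build periodically); Jenkins' H token spreads the load across the hour, so something like the following builds once each morning:

H 6 * * *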

References

http://nithinbekal.com/posts/jenkins-rails/ – a basic guide (for Ubuntu) adapted for my setup

Containing side-channel service monitoring

In a recent post, I produced a simple script that can be used to poll a set of ports and propagate their status, with a fail-safe (-safe, not -proof, i.e., fails to safety) option to override that status (with an HTTP 503), allowing investigation and safe shutdown of critical services.

As a follow-up, but with no particular significance, this post is an excuse to prepare a Docker container that runs the service. On a Raspberry Pi.

My approach for this is to start with a base image and build a Python container, then use that to prepare a uwsgi container, which can then be used to launch a simple Flask service.
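For context, the service being containerised is along these lines (a minimal sketch only; the real side-channel-monitor.py is in the earlier post, and the route matches the healthcheck URL used below):

from flask import Flask

app = Flask(__name__)

@app.route('/side-channel-monitor/healthcheck')
def healthcheck():
    # The real script polls a set of backend ports and returns a 503
    # when the override is set or the backends are down.
    return 'OK\n', 200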

I use the Hypriot distribution for the Pi and they provide a good Dockerfile to get things started.

# Taken from https://github.com/hypriot/rpi-python/blob/master/Dockerfile
# Pull base image
FROM resin/rpi-raspbian:wheezy
MAINTAINER Cleverer Thanme <hypriot-user@hypriot.com>

# Install dependencies
RUN apt-get update && apt-get install -y \
 python \
 python-dev \
 python-pip \
 python-virtualenv \
 --no-install-recommends && \
 rm -rf /var/lib/apt/lists/*

# Define working directory
WORKDIR /data

# Define default command
CMD ["bash"]

Then we create the container image with,

docker build --rm=true --tag=localhost/my-user:hypriot-python .

Then we follow that with the wsgi image Dockerfile,

# Based on local python base image 
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python
MAINTAINER Inspired Byothers <my-address@domain.com>

# Install dependencies
RUN apt-get update && \
 apt-get install -y build-essential python-dev && \
 pip install \
 Flask \
 uwsgi \
 requests && \
 apt-get purge -y --auto-remove build-essential python-dev

# Define working directory
WORKDIR /data

# Define default command
CMD ["bash"]

With the image coming from,

docker build --rm=true --tag=localhost/my-user:hypriot-python-wsgi .

And the service container is described by,

# Based on local python uwsgi image with Flask pip installed
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python-wsgi
MAINTAINER Thisone Onme <my-user@domain.com>

# Install application files
ADD side-channel-monitor.py /var/www/uwsgi/flask/side-channel-monitor/side-channel-monitor.py

# Make a port externally available
EXPOSE 7071

# Define working directory
WORKDIR /var/www/uwsgi/flask/side-channel-monitor

# Define default command
CMD ["uwsgi", "--wsgi-file", "/var/www/uwsgi/flask/side-channel-monitor/side-channel.py", "--callable", "app", "--processes", "4", "--threads", "2", "--uid", "nobody", "--gid", "nogroup", "--http", "0.0.0.0:7071", "--logto", "/var/log/side-channel-monitor.log"]

And the image appears after,

docker build --rm=true --tag=localhost/my-user:python-wsgi-side-channel .

And we can then deploy a container using,

docker run --rm -p 9880:7071 -d --name side-channel-monitor localhost/my-user:python-wsgi-side-channel

This maps the exposed container port (7071) to host port 9880.

And then check the connection with,

curl -v http://localhost:9880/side-channel-monitor/healthcheck

(This obviously requires a backend service for the monitor to connect to).
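In the absence of a real backend, a throwaway TCP listener is enough for the port poll to succeed (netcat option syntax varies between variants; this is the traditional form):

$ while true; do nc -l -p 8080 < /dev/null; done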


Initial upload of local code to repo on github

Another quick note to self ‘cos this is something that I can never remember.

After creating a local repo with the project being developed, we need to create the repo at https://github.com/slugbucket and then run the following commands on the local workstation.

$ git remote add origin https://github.com/slugbucket/automation-demo-packages.git
$ git push -u origin master

Enter the github username and password when prompted.
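To confirm the remote is wired up correctly:

$ git remote -v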

If the repository on github was created with an initial README.md file, it will first be necessary to do a merge with the following command,

$ git pull https://github.com/slugbucket/automation-demo-packages.git

Then the push commands above will work fine.

Puppet manifests without site.pp

A simple diagram showing a possible flow using a Puppet ENC (External Node Classifier) to remove the need for a site.pp file.

[Diagram: simple flow of a Puppet ENC avoiding the need for a site.pp manifest]

This obviously assumes that your whole environment can be described using modules and classes and that they can all be referenced via Hiera. The ENC can be any type of program; I might post a fuller sample in Ruby or Python later, but a minimal sketch follows.
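Puppet calls the ENC with the node's certname as its only argument and expects YAML on stdout naming the classes (and, optionally, parameters and an environment). A minimal Python sketch (assuming PyYAML is installed; the class names are placeholders):

#!/usr/bin/env python
import sys
import yaml

node = sys.argv[1]

# A real ENC would look the node up in Hiera, a CMDB or a database;
# here every node gets the same placeholder classification.
output = {
    'environment': 'production',
    'classes': ['base', 'ntp'],
}

print(yaml.safe_dump(output, default_flow_style=False))

This gets wired in with the node_terminus = exec and external_nodes settings in puppet.conf.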

Don’t automate a moving target

When looking to automate or refactor operational processes, or even to build a new process, it is tempting to assume that there's a linear path from idea to completion.

But immature services or processes will also need to be tweaked, refined or rebuilt from scratch as they bed in, and it is next to impossible for other team members working on process automation to complete their work effectively against such a changing target.

Designing the automation steps for any given process or service requires that the target be a stable platform. If it can't be made stable for whatever reason, then wait until it can be, because any automation work done in the meantime will most likely have to be done again.

Puppet is not automation

I get the inkling that many companies believe that because they are using (or are planning to use) a tool like Puppet, they are doing automation (and, by extension, DevOps).

If only life were that simple. I'm preparing another, more detailed post on what I believe is missing from that view: DevOps is not about the tooling but about the culture within an organisation that enables collaboration.