Category Archives: Python

Limited time for Django?

Having decided that Rails development had run its course and that the future was all things Python and Django, it looks like I’ll be reining back on the web development front and sticking to simple stuff with Flask and uwsgi.

I started rewriting an application with Django following the standard tutorials and made some progress, even getting as far as a form that can submit new records.

But, staying true to my original goal of ensuring that testing is a fundamental part of the project, I tried the simplest possible scenario,

from django.test import TestCase
from django.utils import timezone
from .models import Commission, Enquiry

# Create your tests here.
class CommissionMethodTests(TestCase):

    def basic_test(self):
        a = 1
        assert a == 1

Nothing could be simpler,

$ python manage.py test commissions
Creating test database for alias 'default'...
System check identified no issues (0 silenced).

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK
Destroying test database for alias 'default'...

When something this simple doesn’t work and gives no error or output of any kind, there’s not a great deal of point in trying to continue.

I did find another Django tutorial that suggested using ‘self.assert’, but that just gave an error; I’m not a fan of failing at the first fence.
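For reference, the form those tutorials seem to be aiming at would look something like this (a minimal sketch using the stock django.test.TestCase assertions; the unittest runner only collects methods whose names begin with test),

from django.test import TestCase

class CommissionMethodTests(TestCase):

    def test_basic(self):
        # method names must start with test for the runner to discover them
        a = 1
        self.assertEqual(a, 1)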

So, while I still like writing Python and will continue with Flask and keeping things simple, it’s time to abandon Django and look to see if WordPress plugins are up to the task.

Containing side-channel service monitoring

In a recent post, I produced a simple script that can be used to poll a set of ports and propagate their status, with a fail-safe (-safe, not -proof, i.e. it fails to safety) option to override that status (with an HTTP 503), allowing investigation and safe shutdown of critical services.

As a follow-up, but with no particular significance, this post is an excuse to prepare a Docker container that runs the service. On a Raspberry Pi.

My approach for this is to start with a base image and build a Python container, then use that to prepare a uwsgi container, which can then be used to launch a simple Flask service.

I use the Hypriot distribution for the Pi and they provide a good Dockerfile to get things started.

# Taken from https://github.com/hypriot/rpi-python/blob/master/Dockerfile
# Pull base image
FROM resin/rpi-raspbian:wheezy
MAINTAINER Cleverer Thanme <hypriot-user@hypriot.com>

# Install dependencies
RUN apt-get update && apt-get install -y \
 python \
 python-dev \
 python-pip \
 python-virtualenv \
 --no-install-recommends && \
 rm -rf /var/lib/apt/lists/*

# Define working directory
WORKDIR /data

# Define default command
CMD ["bash"]

Then we create the container image with,

docker build --rm=true --tag=localhost/my-user:hypriot-python .

Then we follow that with the wsgi image Dockerfile,

# Based on local python base image 
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python
MAINTAINER Inspired Byothers <my-address@domain.com>

# Install dependencies
RUN apt-get update && \
 apt-get install -y build-essential python-dev && \
 pip install \
 Flask \
 uwsgi \
 requests && \
 apt-get purge --auto-remove build-essential python-dev

# Define working directory
WORKDIR /data

# Define default command
CMD ["bash"]

With the image coming from,

docker build --rm=true --tag=localhost/my-user:hypriot-python-wsgi .

And the service container is described by,

# Based on local python uwsgi image with Flask pip installed
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python-wsgi
MAINTAINER Thisone Onme <my-user@domain.com>

# Install application files
ADD side-channel-monitor.py /var/www/uwsgi/flask/side-channel-monitor/side-channel-monitor.py

# Make a port externally available
EXPOSE 7071

# Define working directory
WORKDIR /var/www/uwsgi/flask/side-channel-monitor

# Define default command
CMD ["uwsgi", "--wsgi-file", "/var/www/uwsgi/flask/side-channel-monitor/side-channel.py", "--callable", "app", "--processes", "4", "--threads", "2", "--uid", "nobody", "--gid", "nogroup", "--http", "0.0.0.0:7071", "--logto", "/var/log/side-channel-monitor.log"]

And the image appears after,

docker build --rm=true --tag=localhost/my-user:python-wsgi-side-channel .

And we can then deploy a container using,

docker run --rm -p 9880:7071 -d --name side-channel-monitor localhost/my-user:python-wsgi-side-channel

This maps the exposed container port (7071) to host port 9880.

And then check the connection with,

curl -v http://localhost:9880/side-channel-monitor/healthcheck

(This obviously requires a backend service for the monitor to connect to).
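For local testing, a throwaway stub that just answers the isalive polls will do; something along these lines (hypothetical, with the port and paths assumed from the monitor script),

#!/usr/bin/env python
#
from flask import Flask

app = Flask(__name__)

# Answer every /<node>/isalive poll with a plain 200 OK
@app.route('/<node>/isalive', methods=["GET"])
def isalive(node):
    return "OK"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=7070)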

References

Although linked to in the post,

Hypriot rpi-python Dockerfile: https://github.com/hypriot/rpi-python/blob/master/Dockerfile

Pythonic balancer control

In my day job I work with a lot of services that sit behind a load balancer providing high availability across multiple backend hosts.

Now, inevitably there are times when these services or their hosting servers require maintenance. And sometimes we want to do some investigation and troubleshooting against a running service, but without it taking any live traffic; it simply isn’t practical to involve the network team every time an interface needs disabling gracefully. The usual approach is to consider using a server-side firewall to block inbound port access, but this can be clumsy and can actually impact live traffic, albeit briefly.

One solution I like is to use an intermediate monitor (or watchdog) service that provides a healthcheck URL for the load balancer, say, http://192.168.1.100/my-service/healthcheck, where the returned status is derived from the application ports being monitored.

Now, we want this to be as lightweight as possible, so we can choose something like Python’s Flask and uwsgi (or Ruby Sinatra) to provide a simple service listener like,

#!/usr/bin/env python
#
from flask import Flask, abort, request, Response, redirect
import os
import requests

app = Flask(__name__)

@app.route('/my-service/healthcheck', methods=["GET"])
def heartbeat():
    resp = Response(response = "OK", status = 200, content_type = "text/plain")

    # Now check for the node statuses
    nodes = ( "inbound", "outbound", "stats" )
    for node in nodes:
        req = requests.get("http://localhost:7070/" + node + "/isalive")
        if(req.status_code != 200):
            resp.status = "FAILED"
            resp.status_code = req.status_code

    return(resp)

if __name__ == "__main__":
    app.run()

And obviously I have skipped the setup with pip, virtualenv and the like, but that’s routine enough.

The beauty of this kind of approach is that, with a few extra lines before the service port polling, we can spoof an outage and allow the load balancer to complete any existing client connections (which the firewall approach would prevent) while marking the node out of action,

    # Check for the maintenance file and signal graceful failure
    if(os.path.isfile("maintenance")):
        resp.status = "Under maintenance. Remove maintenance file when complete"
        resp.status_code = 503

Now, by simply touching a file called maintenance in the directory where the application is run from, the next poll from the load balancer will register the failure, and we can test this with cURL,

$ curl -v http://localhost:7070/my-service/healthcheck
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 7070 (#0)
> GET /my-service/healthcheck HTTP/1.1
> Host: localhost:7070
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 503 SERVICE UNAVAILABLE
< Content-Type: text/plain
< Content-Length: 2

Remove the file and the traffic will flow again. Remote control of the load balancer without stopping any services, persistent across reboots, and allowing us time and space to investigate as we please.

Flask over https and local root CA

I recently had a week’s PTO and decided to spend the time getting to grips with setting up a private root CA so that I could try getting a Python Flask application to use client certificates (signed by the server) for authorization to request upload resources.

It was quite a struggle, more because of the root CA than anything else, but since the aim was to gain an understanding of how this kind of setup works, how to structure the requests, and how to do the work in Python (with a Ruby version to follow, no doubt), I am happy with the progress so far.

I will post a couple of pages describing the work I did with references and how I got past some of the major sticking points.
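In the meantime, the gist of the server side boils down to something like the sketch below (my assumptions: rootCA.pem is the private root CA certificate, server.crt and server.key are the server pair signed by it, and /upload stands in for the upload resource),

#!/usr/bin/env python
#
import ssl
from flask import Flask

app = Flask(__name__)

@app.route('/upload', methods=["POST"])
def upload():
    # Only reached when the client presented a certificate signed by the root CA
    return "OK"

if __name__ == "__main__":
    # Hypothetical file names: the server pair signed by the private root CA
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    ctx.load_cert_chain("server.crt", "server.key")
    ctx.load_verify_locations("rootCA.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients without a signed certificate
    app.run(port=8443, ssl_context=ctx)

A quick check from the command line is then a matter of pointing cURL at it with --cacert, --cert and --key for the client side of the exchange.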