Category Archives: uwsgi

Dynamic (in-memory) zip file creation

With a recent project I wanted to offer a bunch of files as a single ZIP download to make redistribution easier. I started by thinking that this would involve creating a zip file on disk and then serving it to the client.

It doesn’t.

This method creates an in-memory zip image of a temporary directory in which the required files are created. The directory is deleted before the zip file is returned to the client. Note the use of send_file to return the content and how it allows us to specify the name of the downloaded file.

If running the app under uwsgi there are some precautions to take:

  • Import send_file from flask (the default wsgi version is broken)
  • Include ‘wsgi-disable-file-wrapper’ in the uwsgi settings (see the snippet below)
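The relevant uwsgi settings might look something like the following ini sketch; the module, callable and port here are placeholders rather than values from the original project:

[uwsgi]
; placeholder application module and callable
module = myapp
callable = app
http-socket = :8080
; required so that send_file can return an in-memory BytesIO object
wsgi-disable-file-wrapper = true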
import os
import io
import json
import shutil
import tempfile
import zipfile
from flask import send_file, Response
from peewee import *

# A simple function to write a file in a given directory
def makeFile(dir, filename, data):
    fname = os.path.join(str(dir), str(filename))
    with open(fname, "w") as fh:
        fh.write(data)
    return True
# Function to create a zip file containing the application configuration files.
# To create the zip file we need to create a memory file (BytesIO).
#  row:  the JSON content from the database record, listing the filenames and
#        content to be written, in the form
#        [ {"file1": "Content for file 1"}, {"file2": "Different content for file 2"} ]
#  data: BytesIO object containing the zipped files (the return value)
def mkZipFile(row):
    zipdir = tempfile.mkdtemp(prefix='/tmp/')
    oldpath = os.getcwd()

    jdata = json.loads(row)
    for conf in jdata:
        for f in conf:
            makeFile(zipdir, f, conf[f])

    # Create the in-memory zip image from the temporary directory
    data = io.BytesIO()
    os.chdir(zipdir)
    with zipfile.ZipFile(data, mode='w') as z:
        for fname in os.listdir("."):
            z.write(fname)
    os.chdir(oldpath)

    # Remove the temporary directory before handing back the zip image
    shutil.rmtree(zipdir)

    # Rewind the memory file so send_file reads it from the start
    data.seek(0)
    return data
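As a quick standalone check (my own example rather than part of the original post), the function can be exercised with a hand-built JSON string and the image written out to disk:

# Hypothetical usage outside the Flask app: build a zip from two small files
sample = '[{"file1": "Content for file 1"}, {"file2": "Different content for file 2"}]'
zipimage = mkZipFile(sample)
with open("sample.zip", "wb") as out:
    out.write(zipimage.getvalue())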

The route takes the form

# Route to create a zip file containing all the required data files
#  id:  the numeric id of the record in the database
#  ZIP: file in ZIP format containing the download files (the response)
@app.route('/download/<int:id>.zip', methods=['GET'])
def showZip(id):
    # 'Download' and its 'content' column are assumed names; the model
    # definition was not included in the original post
    rs = Download.select().where(Download.id == id).get()

    zfile = mkZipFile(rs.content)
    if not zfile:
        return Response(response="Invalid zip file", status=400)

    # attachment_filename lets us name the downloaded file
    # (newer Flask releases call this parameter download_name)
    return send_file(
        zfile,
        mimetype='application/zip',
        as_attachment=True,
        attachment_filename='download-{}.zip'.format(id))



Containing side-channel service monitoring

In a recent post, I produced a simple script that can be used to poll a set of ports and propagate their status, with a fail-safe (-safe, not -proof, i.e. it fails to safety) option to override that status (with an HTTP 503), allowing investigation and safe shutdown of critical services.

As a follow-up, but with no particular significance, this post is an excuse to prepare a Docker container that runs the service. On a Raspberry Pi.

My approach for this is to start with a base image and build a Python container, then use that to prepare a uwsgi container, which can then be used to launch a simple Flask service.

I use the Hypriot distribution for the Pi and they provide a good Dockerfile to get things started.

# Taken from
# Pull base image
FROM resin/rpi-raspbian:wheezy
MAINTAINER Cleverer Thanme <>

# Install dependencies
RUN apt-get update && apt-get install -y \
 python \
 python-dev \
 python-pip \
 python-virtualenv \
 --no-install-recommends && \
 rm -rf /var/lib/apt/lists/*

# Define working directory

# Define default command
CMD ["bash"]

Then we create the container image with,

docker build --rm=true --tag=localhost/my-user:hypriot-python .

Then we follow that with the wsgi image Dockerfile,

# Based on local python base image 
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python
MAINTAINER Inspired Byothers <>

# Install dependencies
RUN apt-get update && \
 apt-get install -y build-essential python-dev && \
 pip install \
 Flask \
 uwsgi \
 requests && \
 apt-get purge -y --auto-remove build-essential python-dev

# Define working directory

# Define default command
CMD ["bash"]

With the image coming from,

docker build --rm=true --tag=localhost/my-user:hypriot-python-wsgi .

And the service container is described by,

# Based on local python uwsgi image with Flask pip installed
#FROM resin/rpi-raspbian:wheezy
FROM localhost/my-user:hypriot-python-wsgi
MAINTAINER Thisone Onme <>

# Install application files
ADD . /var/www/uwsgi/flask/side-channel-monitor/

# Make a port externally available
EXPOSE 7071

# Define working directory
WORKDIR /var/www/uwsgi/flask/side-channel-monitor

# Define default command
CMD ["uwsgi", "--wsgi-file", "/var/www/uwsgi/flask/side-channel-monitor/", "--callable", "app", "--processes", "4", "--threads", "2", "--uid", "nobody", "--gid", "nogroup", "--http", "", "--logto", "/var/log/side-channel-monitor.log"]

And the image appears after,

docker build --rm=true --tag=localhost/my-user:python-wsgi-side-channel .

And we can then deploy a container using,

docker run --rm -p 9880:7071 -d --name side-channel-monitor localhost/my-user:python-wsgi-side-channel

This maps the exposed container port (7071) to host port 9880.
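As a quick sanity check (not from the original post), the mapping can be confirmed with docker itself,

docker port side-channel-monitor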

And then check the connection with,

curl -v http://localhost:9880/side-channel-monitor/healthcheck

(This obviously requires a backend service for the monitor to connect to).
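If you just need something local to answer those polls while testing, a throwaway stub along the following lines will do; the port and the /isalive paths are assumptions borrowed from the monitor script in the Pythonic balancer control post below, and it has to be reachable from inside the container (e.g. by running the container with --net=host) rather than relying on plain localhost:

#!/usr/bin/env python
# Hypothetical backend stub for testing only: answers the /isalive polls
from flask import Flask, Response

stub = Flask(__name__)

@stub.route('/<node>/isalive', methods=["GET"])
def isalive(node):
    # Always report healthy; change the status to 503 to simulate a failed node
    return Response(response="OK", status=200, content_type="text/plain")

if __name__ == "__main__":
    stub.run(host="0.0.0.0", port=7070)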


Although linked to in the post, the monitor script itself comes from the earlier Pythonic balancer control write-up below.

Pythonic balancer control

In my day job I work with a lot of services that sit behind a load balancer providing high availability across multiple backend hosts.

Now, inevitably there are times when these services or their hosting servers require maintenance. And sometimes we want to do some investigation and troubleshooting against a running service without it taking any live traffic; it simply isn’t practical to involve the network team every time to disable interfaces gracefully. The usual fallback is a server-side firewall rule to block inbound port access, but this can be clumsy and can actually impact live traffic, albeit briefly.

One solution I like is to use an intermediate monitor (or watchdog) service that provides a healthcheck URL for the load balancer, say /my-service/healthcheck, where the returned status is derived from the application ports being monitored.

Now, we want this to be as lightweight as possible, so we can choose something like Python’s Flask and uwsgi (or Ruby Sinatra) to provide a simple service listener like,

#!/usr/bin/env python
from flask import Flask, abort, request, Response, redirect
import os
import requests

app = Flask(__name__)

@app.route('/my-service/healthcheck', methods=["GET"])
def heartbeat():
    resp = Response(response = "OK", status = 200, content_type = "text/plain")

    # Now check the node statuses and propagate any failure
    nodes = ( "inbound", "outbound", "stats" )
    for node in nodes:
        req = requests.get("http://localhost:7070/" + node + "/isalive")
        if req.status_code != 200:
            resp.data = "FAILED"
            resp.status_code = req.status_code

    return resp

if __name__ == "__main__":
    # Port matches the cURL example below; in production this runs under uwsgi
    app.run(host="0.0.0.0", port=7070)

And obviously I have skipped the setup with pip, virtualenv and the like, but that’s routine enough.
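For completeness, a minimal sketch of that setup, assuming the listener above is saved as healthcheck.py (the filename is my own placeholder):

virtualenv venv
. venv/bin/activate
pip install flask uwsgi requests
uwsgi --wsgi-file healthcheck.py --callable app --http :7070 --processes 2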

The beauty of this kind of approach is that, with a few extra lines before the service port polling, we can spoof an outage and allow the load balancer to complete any existing client connections (which the firewall approach would prevent) while marking the node out of action,

    # Check for the maintenance file and signal graceful failure
    if os.path.exists("maintenance"):
        resp.status = "503 Under maintenance. Remove maintenance file when complete"
        return resp

Now, by simply touching a file called maintenance in the directory the application is run from, the next poll from the load balancer will register the failure, and we can test this with cURL,

$ curl -v http://localhost:7070/my-service/healthcheck
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 7070 (#0)
> GET /my-service/healthcheck HTTP/1.1
> Host: localhost:7070
> User-Agent: curl/7.52.1
> Accept: */*
< HTTP/1.1 503 Under maintenance. Remove maintenance file when complete
< Content-Type: text/plain
< Content-Length: 2

Remove the file and the traffic will flow again. Remote control of the load balancer without stopping any services, persistent across reboots, and allowing us time and space to investigate as we please.
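For reference, a quick way to toggle and verify the maintenance state from the shell, assuming the application runs from its own directory and listens on port 7070 as above (the path is a placeholder):

cd /path/to/my-service    # the directory the application is run from
touch maintenance         # the next healthcheck poll returns 503
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7070/my-service/healthcheck
rm maintenance            # back in service on the next poll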