Step 1: Disinterring and Reviving

I shelved/buried my blog when I upgraded to a new Linode six years ago. I didn't have it on GitHub or GitLab, or on my personal dev machine, and it seemed to be completely purged from my little server. My last hope was the tarball labelled fullbackup.tar.gz in the home dir of the server. Oh, but by the way, my server was out of disk space, and I was doing all this while on vacation in rural Nova Scotia, where high speed Internet is still a far-away dream.

Step 0: Bytes. Their Burdensome Storage. Their Inscrutable Movement.

One of the first things I learned was how to get Linux to expand its volume to use the new capacity generously granted to me by Linode. After trying lots of things and fretting about how I hadn't set up LVM and so on, here's what I found out:

  1. You go to the Linode manager site
  2. You click the volume you want to increase
  3. You set the new size (I pushed it up to the new maximum of my plan)
  4. That's it.
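One sanity check after a resize like this (generic Linux, nothing Linode-specific) is to ask the filesystem what it thinks its size is:

```shell
# Show the root filesystem's size and free space in human-readable units
df -h /

# If the filesystem hadn't grown on its own, ext filesystems can be grown
# online with resize2fs (the device name here is an assumption):
# sudo resize2fs /dev/sda
```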

Well, I think I had to power off my VM while I did that; I can't remember. Anyway, I'm still at rank Junior Pussywillow when it comes to ops stuff, so I don't understand how the interface between my Linode VM and the Linode infrastructure works. It's not LXC stuff (it predates all that, I think), but Linode boots my VM with a kernel it lets me configure on the Linode side, and it's able to resize not just the partitions of the virtual disks but the volume itself. Whatever. Witchcraft.

Anyway, once I had disk space, I still had it in my head that I wanted to untar the mystery backup tarball on my local computer, not on the server. But my server was in New Jersey, and my laptop was in the hinterlands, online via a strained WiMAX connection. So I decided to use Dropbox.

It turns out Dropbox has a command line client that seems to work pretty well. It even ships a separate Python script that can tell you how the Dropbox daemon is doing once it's up.
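For future-me's benefit, getting it going on a headless server looked roughly like this. This is a sketch from memory and Dropbox's Linux install docs of the time, so treat the URLs and paths as assumptions:

```shell
# Download and unpack the headless daemon into ~/.dropbox-dist
cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf -

# First run prints a link for tying the machine to your account
~/.dropbox-dist/dropboxd &

# The separate control script reports on the daemon
wget -O dropbox.py "https://www.dropbox.com/download?dl=packages/dropbox.py"
python dropbox.py status    # prints the current sync state
```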

So I let the daemon pull my Dropbox's contents to my server (and got a friendly note from Linode that my server was downloading a lot of stuff and they thought I should know). Then I moved my backup tarball in there and waited a minute or two while the 2.5 GB file uploaded. After that, I just had to wait about a day for my laptop's Dropbox client to trickle the file down to me.

The backup was just a tarball of the whole dang server. I don't remember why I made that backup, but I'm glad I did, because sure enough my blog code was in there, as were the SQL dumps from the db.

Things That Made Sense Ten Years Ago: I used to deploy by keeping a checkout of my git repo on my servers and pulling/pushing to them from my dev machine. I also used to think I should keep all assets (source images, design mockups, etc.) for a project in a single repo. Setting aside how crazy that seems now, that's two more bits of good fortune: I didn't just have the code and the data, I had the commit history and all the source assets. I don't always approve of 10-years-younger me, but hey, not bad.

me: *blows dust off* let's see if it still works

Before I did anything else, I created a private GitLab repo for my project and pushed everything up. It took a while to shovel hundreds of megabytes of unused astronomy concept photos up over backwater Internet, but it was worth it for the peace of mind before I went ahead and potentially messed it all up.

My plan was to get the site back up first and then try to spruce it up afterwards. The one thing I did do was carve all the source assets out of the repo. I moved them over to a dir that'd be backed up, because I wanted to keep them, but I decided they didn't need to be in version control and they definitely didn't need to be in my blog repo.
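The carving itself is a two-step dance if you want to keep the files on disk while dropping them from the repo. Here's a sketch in a throwaway repo ("assets" stands in for my actual source-asset dirs):

```shell
# Set up a disposable demo repo and a stand-in backup destination
repo=$(mktemp -d); backup=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir assets && echo mockup > assets/design.psd
git add . && git commit -qm "initial"

# Stop tracking the assets without deleting them from disk
git rm -r --cached -q assets
echo "assets/" >> .gitignore
git add .gitignore && git commit -qm "carve source assets out of the repo"

# Move them to a dir that gets backed up instead
mv assets "$backup"/
```

Note this only removes the assets from the current tree; they're still in the repo's history (which, in my case, I wanted to keep anyway).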

Above: Evidence of the purge

It wasn't too hard to get the blog running on my laptop.

Whoops. Backup was from prod, so it was using the prod settings. Past me left me an example settings file for use in local dev. Thanks past me!
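For anyone playing along, the pattern is: the repo carries a checked-in example settings file, and each machine copies it to the real (gitignored) name. A sketch with guessed names and Django 1.0-era keys, since my actual file may differ:

```python
# localdev settings, copied from the checked-in example file
DEBUG = True
TEMPLATE_DEBUG = DEBUG

# Django 1.0 used flat DATABASE_* settings, not today's DATABASES dict
DATABASE_ENGINE = 'sqlite3'
DATABASE_NAME = 'localdev.db'
```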

Better. And I recognize this one. Good old syncdb!

I had one other mystery error that turned out to be caused by my project's dependencies.

Things That Made Sense Ten Years Ago: Because I'd never heard of virtualenv or pip, my projects had minimal dependencies, and I kept copies of those dependencies in a lib dir within the project repo. So my blog repo had a copy of every library it used, including all 21 megs of the Django 1.0 checkout. But it gets worse: if the dependencies had compiled elements, like psycopg2, I'd just install them into the system Python.

In this case I just had to make myself a little virtualenv and pip install psycopg2, aggdraw, and Pillow (in place of the old PIL).
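Spelled out (the env dir name is my choice; the package list comes straight from the sentence above):

```shell
# Create an isolated environment and activate it
virtualenv env
. env/bin/activate

# Install the compiled/third-party deps into the env, not the system Python
pip install psycopg2 aggdraw Pillow
```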


Next up, I just needed to dump everything back onto my server.

Catching Up To 2014

I tried using scp to copy my slimmed-down project dir tree back over to my server (minus the .git dir this time), but it was so, so painfully slow. And I still had a lot to copy, because I was still hauling all of Django around in my repo with me.

Ah well. I decided to leave that for later. In the interim I switched to rsync, and man, rsync is still awesome. It ran like greased lightning, and it wasn't hard to get back into the groove of crafting a baroque rsync command. Here's what I wrote:

rsync --dry-run -avz --progress \
    --exclude build/site/pocketuniverse/localdev.db \
    --exclude build/site/hosting/prod/log \
    --exclude ".DS_Store" \
    --exclude "*.pyc" \
    --exclude "*.sw?" \
    ./build sam@<server>:projects/pocketuniverse/

I went looking for my nginx conf file to symlink into the sites-available dir and dang, no nginx conf. This is probably why I turned off my blog.

Things That Made Sense Ten Years Ago: My site deployed with Apache and mod_wsgi. At the time, Apache was old, but it was still the reliable standard, at least to me.

I vaguely remember that my server restructuring was driven by a need to deploy a new site with dependencies incompatible with everything else on the server. I decided, back in 2014, that it was time to start using virtualenv, and I think that drove the switch to my new stack: nginx, supervisor, and gunicorn.

Adapting the blog to the new stack wasn't as hard as I'd feared. I just cribbed from my other deployed projects. Here's what I ended up with:

My nginx config file:

# (<domain> stands in for the real hostname, in the spirit of <project-dir>)
server {
    listen 80;
    server_name www.<domain>;
    return 301 $scheme://<domain>$request_uri;
}

server {
    listen 80;
    server_name <domain>;

    access_log /<project-dir>/log/nginx-access.log;
    error_log /<project-dir>/log/nginx-error.log warn;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8030;
    }

    location /media/ {
        alias /<project-dir>/pocketuniverse/media/;
    }
}

My supervisor config file:

[program:pocketuniverse]
command = /<project-dir>/env/bin/gunicorn -c /<project-dir>/config/gunicorn.conf.py <wsgi-module>
autostart = true
autorestart = true

My gunicorn config file:

workers = 1
bind = 'localhost:8030'
chdir = '/<project-dir>'
accesslog = '/<project-dir>/log/gunicorn-access.log'
errorlog = '/<project-dir>/log/gunicorn-errors.log'
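With those three in place, supervisor needs a nudge to notice the new program (assuming supervisord itself is already running); the usual incantation is:

```shell
sudo supervisorctl reread   # re-parse config files and report what changed
sudo supervisorctl update   # apply the changes, starting any new programs
sudo supervisorctl status   # confirm the gunicorn process is RUNNING
```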

I re-created my postgres user and db, dumped all the contents back in, and surprisingly, the whole thing sort of worked.
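The restore itself was the standard createuser/createdb/psql dance. A sketch, with placeholder names in the same spirit as <project-dir>:

```shell
# Recreate the role and database, owned by the app's user
sudo -u postgres createuser <db-user>
sudo -u postgres createdb -O <db-user> <db-name>

# Replay the SQL dump from the backup tarball into the fresh database
psql -U <db-user> -d <db-name> -f backup-dump.sql
```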

And it's via this shuddered-back-to-life blog that I'm posting these very words.


It's a jumble of priorities: lots of stuff that wants and needs doing. Here's my tentative plan:

  1. Tidy up the dir structure - I don't want to fight with my old dir structure when I'm making changes, so I'm going to try to sanitize that first
  2. Switch to pipenv - Stop carrying my third-party libraries with me
  3. Write some tests
  4. Set up CI on gitlab
  5. Upgrade my dependencies
  6. Bug fixes
  7. Redesign
  8. New Features

How hard could it be?