workings
# a technical notebook
I’ve been working on a short book on the basics of the command
line. This one is something else altogether. It’s a long-form technical
logbook or journal, capturing the outlines of problems and their solutions as I
encounter them. It is dry, incomplete, and sometimes wrong. It is unlikely to
be useful for the general reader. I’m compiling it because I want a record of
what I learn, and I hope that writing documentation will help me do cleaner,
more reproducible work. If it helps people with similar problems along the
way, so much the better.
— bpb / p1k3
/ @brennen
/ ~brennen
# copying
CC BY-SA 4.0
# contents
- a technical notebook
- Wednesday, December 3, 2014
- Friday, December 5, 2014
- Sunday, December 7, 2014
- Monday, December 8, 2014
- Wednesday, December 10, 2014
- Thursday, December 18, 2014
- Friday, December 19, 2014
- Tuesday, December 23, 2014
- Sunday, December 28, 2014
- Saturday, January 3, 2015
- Wednesday, January 7, 2015
- Monday, January 12
- Tuesday, January 13
- Wednesday, January 14, 2015
- Friday, January 16
- Tuesday, January 20
- Thursday, January 22
- Sunday, January 25, 2015
- Tuesday, January 27
- Wednesday, January 28
- Thursday, January 29
- Monday, February 2
- Sunday, February 8
- Monday, March 2
- Thursday, April 9
- Monday, April 20
- Monday, January 18
- tools & toolchains for data munging & analysis
- systemd notes
# Wednesday, December 3, 2014
# makecitizen
{sysops, scripting, adduser, chfn}
Paul Ford sent out an e-mail to the tilde.club waitlist pointing at
~pfhawkins’s list of other tildes, so I’m getting signup requests. There are
enough that I want to write a script for adding a new squiggle.city user. I’m
not determined to be very fancy about this right now; I just want to save some
keystrokes.
The first thing I do is google “adduser”. adduser(1) is basically just a front
end to useradd(1). (This distinction will never stop being confusing, and
should probably be a lesson to anyone considering that naming pattern.) I learn
via Wikipedia that the metadata (name, room number, phone, etc.) which adduser
prompts for is called the GECOS field, and is a relic of something called the
General Electric Comprehensive Operating System, which ran on some machines at
Bell Labs.
You can change that info with chfn(1).
What my script needs to do is:
- create a user with a given $USERNAME
- generate a random password for the user and tell me
- do chage -d0 $USERNAME
- put a given public key in ~$USERNAME/.ssh/authorized_keys
You can’t log in to squiggle.city with a password, so why go to the trouble of
setting a random one and forcing users to change it at their first login?
Mostly because users are going to need to know a password for things like
changing their shell, or in case they get operator privileges one day.
This is what I come up with, after a couple of even dumber iterations:
#!/bin/bash
CITIZEN=$1
KEYSTRING=$2
# Complain and exit if we weren't given a username and a key:
if [[ ! $CITIZEN || ! $KEYSTRING ]]; then
  echo "usage: makecitizen <username> <key>"
  exit 64
fi
# this should actually check if a _user_ exists,
# not just the homedir
if [ -d /home/$CITIZEN ]; then
  echo "$CITIZEN already exists - giving up"
  exit 68
fi
PASSWORD=`apg -d -n2`
adduser --disabled-login $CITIZEN
echo "$CITIZEN:$PASSWORD" | chpasswd
chage -d 0 $CITIZEN
echo "$KEYSTRING" >> /home/$CITIZEN/.ssh/authorized_keys
echo "passwd: $PASSWORD"
exit 0
This is used like so:
root@squiggle:~# ./makecitizen jrandomuser "ssh-rsa ..."
It’ll still do adduser interactively, which is fine for my purposes.
I think this would be improved if it took a fullname and e-mail as input,
and then sent that person a message, or at least output the text of one,
telling them their password.
It’d probably be improved even more than that if it operated in batch mode, was
totally idempotent, and could be driven off some separate file or output
containing the set of users.
(Thoughts like this are how systems like Puppet and Chef are born.)
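If I do go down that road, the batch version probably doesn’t need to be much
more than a loop over a TSV file of usernames and keys. Something like this
untested sketch - the citizens.tsv filename, its layout, and the
skip-if-home-exists check are all just guesses at what I’d actually want:
#!/bin/bash
# makecitizens: feed makecitizen from a file of "username<TAB>ssh-key" lines.
# Hypothetical sketch - assumes makecitizen is in the same directory and
# skips anyone who already has a home directory, so re-runs are cheapish.
set -e
while IFS=$'\t' read -r citizen key; do
  # skip blank lines and comments
  [[ -z $citizen || $citizen == \#* ]] && continue
  if [ -d "/home/$citizen" ]; then
    echo "skipping $citizen - already exists"
    continue
  fi
  ./makecitizen "$citizen" "$key"
done < citizens.tsv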
# Friday, December 5, 2014
# notes on vim
Vim is a text editor. My slowly-evolving configuration can be found on GitHub,
in bpb-kit.
Tyler Cipriani is a lot smarter than I am about vim (and, in
fact, most things), but I am particular and don’t always share his preferences.
# keybindings
I’m starting in on this notebook, which uses a Makefile, and think it might be
nice to have a quick vim keybinding for :make. I would use F5, by analogy to
QBasic, but I’ve already bound that to :wall, which writes all the open buffers
with changes.
I think that maybe <leader>m, which in my case means ,m, would be ok. Then I’m
not sure if something is already mapped starting with that, so I try :map.
I want to search through the list produced by :map, and think it’d be nice if I
could just read it into a buffer. The first thing I google is “vim read output
of command into file”. This could easily enough give hits for reading the
output of a shell command, but the 3rd thing down the page is “Capture ex
command output” on the Vim Tips Wiki.
There are a bunch of interesting ideas there, but the first basic idea is this:
:redir @a
:set all
:redir END
Then you can open a new buffer - :new - and do "ap. This says “using the named
register a, paste”.
This seems to work with :set all, but not so much with :map. Why not? I skim
:help map and :help redir without getting very far. Updates to come.
With that digression still unanswered, the mapping I settled on is simple:
nmap <leader>m :make<CR>
I never know if these are going to take with me. The handful of custom
bindings that have actually entered my vocabulary are super-useful, but more
often than not I wind up forgetting about an idea not long after I’ve
implemented it.
# Sunday, December 7, 2014
# notes directory
On organizing todo lists, see the p1k3 entry from August of
2014.
For years now, I’ve kept that sort of thing in a notes.txt. At some point
notes.txt got its own directory with a haphazard jumble of auxiliary files. It
looks like I turned that directory into a git repository a couple of years ago.
Unlike a lot of what I keep in git, ~/notes/ isn’t meant for any kind of
publication. In fact, it’d be pretty dumb to let it out in the world. So I got
to thinking: I should really encrypt this.
So what’s the best way to encrypt a single directory on Linux?
Two search strings:
- linux encrypted directory
- encrypted git repo
It looks like maybe [eCryptfs](http://ecryptfs.org/) is the thing? This
machine’s an Ubuntu, so let’s see what we can find:
$ apt-cache search ecryptfs
ecryptfs-utils - ecryptfs cryptographic filesystem (utilities)
ecryptfs-utils-dbg - ecryptfs cryptographic filesystem (utilities; debug)
libecryptfs-dev - ecryptfs cryptographic filesystem (development)
libecryptfs0 - ecryptfs cryptographic filesystem (library)
python-ecryptfs - ecryptfs cryptographic filesystem (python)
zescrow-client - back up eCryptfs Encrypted Home or Encrypted Private Configuration
Google suggests that ecryptfs-utils might be what I’m looking for.
I become distracted reading about protests and leave this idea for another day.
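For whenever I do get back to it, my understanding (untested here, so treat
this as a sketch rather than a recipe) is that the minimal eCryptfs version of
the idea is to mount the directory over itself and answer the interactive
prompts for passphrase, cipher, and filename encryption:
$ sudo apt-get install ecryptfs-utils
$ sudo mount -t ecryptfs /home/brennen/notes /home/brennen/notes
$ # ...work on the notes...
$ sudo umount /home/brennen/notes
There’s also ecryptfs-setup-private, which builds a ~/Private directory for
you; that might be closer to what I actually want. The passphrase management is
the part I’d need to think harder about either way.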
# Monday, December 8, 2014
# ssh
I use SSH for damn near everything. We need SSH for damn near everything.
I have this thought that SSH is quite possibly the only end-user-exposed
implementation of acceptable crypto in wide use which actually satisfies the
“actual human beings can use this” constraint at the same time as satisfying
the “this makes your shit relatively secure” constraint. That’s not to say
it’s easy for the average mortal to comprehend, but it beats the shit out of
almost everything else I can think of.
In “almost everything else”, I include SSL/TLS/HTTPS, which sort-of works as
far as the general user population of browsers is concerned, much of the time,
but which is an absolute nightmare to administer and which is a fundamentally
broken design on a political / systems-of-control / economic /
regular-admins-get-this-right level. Arguably, the only thing that has been
worse for the wide adoption of crypto by normal users than SSL/TLS is PGP.
DISCLAIMER: I DON’T KNOW SHIT ABOUT CRYPTO. Tell me how I’m wrong.
✴
# mosh
I’m not exactly sure when mosh started to catch on with people I know, but I’d
say it’s on the order of a year or two that I’ve been aware of it. The basic
thing here is that it’s essentially OpenSSH with better characteristics for a
specific cluster of use cases:
- laggy, high-latency, intermittently-broken network connections
- client machines that frequently hop networks and/or suspend operations
- unreliable VPNs (which is to say very nearly all VPNs in actual use)
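Usage-wise it’s pretty much a drop-in replacement for ssh, assuming the package
is installed on both ends and UDP isn’t firewalled off somewhere in between:
$ sudo apt-get install mosh
$ mosh brennen@squiggle.city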
# time tracking
I’m about to start in on some remote contracting stuff, so I go looking for a
time tracking tool. For the moment I settle on this little tray widget called
hamster, which looks functional if not
precisely inspiring.
# noobs / raspbian
Last year I did a bunch of work on a Raspberry Pi, but it’s been a few months
since I booted one up. I got a model B+ (more USB ports, various hardware
tweaks, takes a microSD card instead of the full-size one) in my last employee
order at SparkFun, and I’m stepping through what seems to be the stock
recommended installation process.
I torrented NOOBS_v1_3_10.zip. Be careful unzipping this one - everything is at
the top level of the archive (advice to distributors of basically anything:
don’t do that).
If I’d been smart I probably would have done:
$ mkdir noobs && unzip NOOBS_v1_3_10.zip -d noobs/
The basic system here is “get an SD card, put the stuff in this zip file on the
SD card, put it in the Pi”. Everything about this has always felt kind of
weird (if not actively broken) to me, but it’s probably important to remember
that for most users “put some files on this media” is a lot easier than “image
this media with the filesystem contained in this file”.
✩
So I plug in all the stuff: microSD card, keyboard, HDMI cable to random spare
monitor, power.
Nothing. Well, almost nothing. Blinkenlights, no video output. Red light is
steady, green light blinks a couple of times periodically.
I am reminded that this is, fundamentally, a terrible piece of hardware.
Power down, remove SD card, mount SD card on Linux machine, google variously,
delete and recreate FAT32 partition using gparted, re-copy NOOBS files, unmount
SD card, replace card in Pi, re-apply power.
Green LED flashes spasmodically for a bit then seems mostly off, but is actually
flickering faintly on closer examination. Red light is solid.
This wiki page
suggests this means that no boot code has been executed at all. It’s failing to
read the card, or it’s missing some file, or something is corrupt.
Ok, so, mount SD card on Linux machine again; immediately discover that the
card is now a volume called “SETTINGS”, or seems to be.
$ ls /media/brennen/SETTINGS
lost+found
noobs.conf
$ cat /media/brennen/SETTINGS/noobs.conf
[General]
display_mode=0
keyboard_layout=gb
language=en
brennen@desiderata 15:52:24 /home/brennen ★ sudo parted /dev/mmcblk0
GNU Parted 2.3
Using /dev/mmcblk0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: SD SL16G (sd/mmc)
Disk /dev/mmcblk0: 15.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 823MB 822MB primary fat32 lba
2 826MB 15.9GB 15.1GB extended
3 15.9GB 15.9GB 33.6MB primary ext4
(parted)
Well, obviously something ran, because I definitely didn’t arrange anything
that way. And this seems a little telling:
brennen@desiderata 15:55:36 /home/brennen ★ dmesg | tail -12
[51329.226687] mmc0: card aaaa removed
[51776.154562] mmc0: new high speed SDHC card at address aaaa
[51776.154894] mmcblk0: mmc0:aaaa SL16G 14.8 GiB
[51776.169240] mmcblk0: p1 p2 < > p3
[51781.342106] EXT4-fs (mmcblk0p3): mounted filesystem with ordered data mode. Opts: (null)
[51791.757878] mmc0: card aaaa removed
[51791.773880] JBD2: Error -5 detected when updating journal superblock for mmcblk0p3-8.
[51793.651277] mmc0: new high speed SDHC card at address aaaa
[51793.651601] mmcblk0: mmc0:aaaa SL16G 14.8 GiB
[51793.666335] mmcblk0: p1 p2 < > p3
[51799.516813] EXT4-fs (mmcblk0p3): recovery complete
[51799.518183] EXT4-fs (mmcblk0p3): mounted filesystem with ordered data mode. Opts: (null)
(The “Error -5 detected” bit.)
Ok, so I bought a new Sandisk-branded card because I didn’t have a decently
fast microSD card laying around. What I’m going to check before I go any
further is whether I got one the Pi can’t deal with. (Or just one that’s bunk.
I bought this thing for 15 bucks at Best Buy, so who knows.)
Here’s an 8 gig class 4 card, branded Kingston, but I probably got it off the
shelves at SparkFun some time in the last 3 years, so its actual provenance is
anybody’s guess. Looking at what’s on here, I’ve already used it for a
Raspberry Pi of some flavor in the past. Let’s see if it’ll boot as-is.
Ok, no dice. I’m starting to suspect my problem lies elsewhere, but I’ll try
one more time on this card with NOOBS.
Again: No dice.
Also checked:
- the monitor with other inputs, because who knows
- tried a couple of different power supplies - USB cable from my laptop, 5V
wall wart purchased from SFE, cell phone charger.
- the usual plug-things-in-one-at-a-time routine.
✦
Time to try one of these cards with an older RasPi, if I can figure out where I
put any of them.
After much shuffling through stuff on my dining room table / workbench, I find
a model B. It fails in much the same way, which leads me to suspect again that
I’m doing something wrong with the card, but then I can’t quite remember if
this one still worked the last time I plugged it in. They can be fragile
little critters.
Here’s a thought, using a Raspbian image I grabbed much earlier this year:
brennen@desiderata 17:10:03 /home/brennen/isos ★ sudo dd if=/home/brennen/isos/2014-01-07-wheezy-raspbian.img of=/dev/mmcblk0
No dice on either the model B or model B+, using the new SanDisk.
Trying with the older card, dd spins through 800ish megs before giving me an
I/O error.
It may be time to start drinking.
✦
The next day: I swing through a couple of stores in town with the wiki list
of known cards in hand and buy a pile
of cards across a handful of brands, plus a $20 card reader (the Insignia
NS-CR20A1) since there’s not one built in to the laptop I’m carrying today.
The first card I try boots NOOBS instantly; an installer is running as I type
this.
Suddenly it occurs to me that the card reader on the laptop I was using last
night is likely dying or dead.
This is a really slick install process now, so good work to somebody on that.
# beaglebone black
I’ve got a Beaglebone Black sitting here new in the box. It comes with a USB
cable, so I plug it in. Instantly there are bright blue blinky lights, and my
laptop tells me I’m connected to an ethernet network and I’ve got a new drive
mounted with some README files in it.
This is kind of great.
Browsing to 192.168.7.2 gets a bunch of docs and a link to Cloud9, an
in-browser IDE that happens to include a root terminal.
I don’t really know what’s going on here. I think it might be a little
scattered and confused as a user experience, in some ways. But it immediately
strikes me as good tech in a bunch of ways.
Josh Datko, who I’ve gotten to know a little bit, has a book called Beaglebone
for Secret Agents. It’s been on my ever-growing to-read list for a while; I’m
going to have to give it a look sooner rather than later.
# reading list
# Wednesday, December 10, 2014
# listusers / squiggle.city repo
There’s now a squigglecity organization on GitHub.
What little is there is a classic duct-tape mess complete with a bunch of
commits made as root, but may contain a few useful bits.
I’m planning to clean up this version of listusers.pl into a more generic
listusers utility that just outputs TSV, and pipe that to csvkit / jq for HTML
& JSON.
Oh, right — about the JSON. ~ford proposed a standard tilde.json kind of like
this, which I think is not a terrible idea at all, though that one’s a bit
rough and the format could still use a little tweaking as of this writing.
This is the kind of thing it’s unbelievably easy to overthink. I’m hoping
we’ll give it enough thought to do a few smart things but not so much thought
that no one actually uses it.
# Thursday, December 18, 2014
# screencast gifs
Looking to make some GIFs of things that happen on my screen, found byzanz.
$ sudo apt-get install byzanz
byzanz-record -x 1 -y 1 --delay=4 -h 150 -w 700 hello_world.gif
Options:
- -x and -y set the origin of capture on screen
- -h and -w set the height and width to capture
I think I need a more clever way to trigger / manage this than just fiddling
with CLI options, but it works really well and produces lightweight image
files.
I think it would be cool if there were a utility that let me use arrow keys /
hjkl / the mouse cursor to visually select a region of the screen. It could
return x, y, height, and width, then I’d let byzanz handle the capture.
That can’t be the hardest thing in the world to do.
☆
xdotool seems like kind of a
swiss army knife, and has a getmouselocation
command. Theoretically, at
least, you can have it respond to events, including a mouse click. I can’t
quite wrap my head around how this is supposed to work, and my first few
attempts fall flat.
GNU xnee might also be promising, but I
don’t really get anywhere with it.
Eventually I find an
Ask Ubuntu
thread on creating screencast gifs, which points to
xrectsel, a tool for
returning the coordinates and size of a screen region selected with the mouse:
brennen@desiderata 22:06:28 /var/www/workings-book (master) ★ xrectsel "%x %y %w %h"
432 130 718 575%
I wind up with gif_sel:
#!/usr/bin/env bash
# requires:
# https://github.com/lolilolicon/xrectsel.git
eval `xrectsel "BYZANZ_X=%x; BYZANZ_Y=%y; BYZANZ_WIDTH=%w; BYZANZ_HEIGHT=%h"`
byzanz-record -x $BYZANZ_X -y $BYZANZ_Y --delay=4 -h $BYZANZ_HEIGHT -w $BYZANZ_WIDTH ~/screenshots/screencast-`date +"%Y-%m-%d-%T"`.gif
I’ll probably wind up with a couple of wrappers for this for different lengths
of recording (for starting with dmenu), though it would be nice if I could just
have it record until I press some hotkey.
# Friday, December 19, 2014
{timetracking}
So hamster really doesn’t scratch my particular itch all that well. Rather
than devote any serious brain energy to finding or writing a replacement that
does, I’ve decided to just use a text file.
It looks like the following:
2014-12-17 21:55 - 2014-12-17 23:40
2014-12-18 10:05 - 2014-12-18 12:50
2014-12-18 13:45 - 2014-12-18 16:00
This is just two datetimes for each range of time when I’m working on a given
thing, delimited by / - /. I just want a quick script to tally the time
represented. (Later, if I need to track more than one project, I’ll expand on
this by adding a project name and/or notes to the end of the line.)
It kind of seems like I should be able to do this with GNU date, but let’s find
out. Here are the official examples. This sounds about right:
To convert a date string to the number of seconds since the epoch (which is
1970-01-01 00:00:00 UTC), use the --date option with the ‘%s’ format. That
can be useful in sorting and/or graphing and/or comparing data by date. The
following command outputs the number of the seconds since the epoch for the
time two minutes after the epoch:
date --date='1970-01-01 00:02:00 +0000' +%s
120
As a test case, I start here:
$ cat ~/bin/timelog
#!/usr/bin/env bash
date --date="$1" +%s
$ timelog '2014-12-17 21:55'
1418878500
Ok, groovy.
I was going to do the rest of this in shell or awk or something, but then I
thought “I should not spend more than 10 minutes on this”, and wrote the following
Perl:
#!/usr/bin/env perl
use warnings;
use strict;
use 5.10.0;
my $total_hours = 0;
# while we've got input from a file/stdin, split it into two datestamps
# and feed that to date(1)
while (my $line = <>) {
chomp($line);
my ($start, $end) = map { get_seconds($_) } split / - /, $line;
my $interval = $end - $start;
my $hours = $interval / 3600;
$total_hours += $hours;
say sprintf("$line - %.3f hours", $hours);
}
say sprintf("%.3f total hours", $total_hours);
sub get_seconds {
my ($stamp) = @_;
my $seconds = `date --date="$stamp" +%s`;
chomp($seconds);
return $seconds;
}
Which gives this sort of output:
brennen@desiderata 14:54:38 /home/brennen/bin (master) ★ timelog ~/notes/some_employer.txt
2014-12-15 13:10 - 2014-12-15 14:35 - 1.417 hours
2014-12-16 10:00 - 2014-12-16 12:55 - 2.917 hours
2014-12-16 14:00 - 2014-12-16 17:15 - 3.250 hours
2014-12-17 15:00 - 2014-12-17 16:51 - 1.850 hours
2014-12-17 21:55 - 2014-12-17 23:40 - 1.750 hours
2014-12-18 10:05 - 2014-12-18 12:50 - 2.750 hours
2014-12-18 13:45 - 2014-12-18 16:00 - 2.250 hours
2014-12-18 17:00 - 2014-12-18 17:30 - 0.500 hours
16.683 total hours
This is me once again being lazy and treating Perl as a way to wrap shell
utilities when I want to easily chop stuff up and do arithmetic. It is many
kinds of wrong to do things this way, but right now I don’t care.
If this were going to be used by anyone but me I would do it in pure-Perl and
make it robust against stupid input.
# drawing tools
Ok, so because I’m starting to poke at drawing again for the first time in
quite a while (even to the extent that I’ll soon be publishing some stuff that
includes cartoon graphics, despite having no idea what I’m doing), I thought
I’d take some rough notes on where I’m at with toolset.
The first thing is that I’m not using any Adobe tools, or indeed any
proprietary software (unless you count the firmware on my cameras and maybe
Flickr) to work with images. I am fully aware that this is a ridiculous
limitation to self-impose, but I want to stick with it as best I can.
For a long time, I’ve sort of fumbled my way through GIMP whenever I needed to
do the kind of light image editing stuff that inevitably comes up in the life
of a web developer no matter how many things you foist off on your
Photoshop-skilled, design-happy coworkers. I think GIMP gets kind of an unfair
rap; it’s a pretty capable piece of software. That said, I’ve still never
really put the time in to get genuinely skilled with it, and it’s not the most
accessible thing for just doodling around.
Several years back, I bought a cheap Wacom tablet.
I was maybe a little optimistic in that writeup, but I still really enjoy
MyPaint. The problem is that, while it’s really
fun for a sketchy/painty/extemporaneous kind of workflow, and dovetails
beautifully with the tablet interface, it deliberately eschews a lot of features
that you start to want for editing an image. I don’t blame its developers for
that — they’re obviously trying to do a certain kind of thing, and constraints
often make for great art — but I’m wondering if I can’t get some of the same
vibe with a tool that also lets me easily cut/copy/scale stuff.
I’m giving Krita a shot with that in mind. It has a real
KDE vibe to it. Lots of modular GUI widgets, menus, etc. A little
bureaucratic. It doesn’t feel as fluid or immediate as MyPaint right out of
the gate, but it’s definitely got more in the way of features. Could grow on
me.
# Tuesday, December 23, 2014
# screenshots
Looking to streamline capture of static screenshots a bit. Options:
- gnome-screenshot - use this already, it’s fine, whatever.
- shutter - weirdness with my xmonad setup? Errors and I don’t feel like taking
  the time to find out why.
- scrot - buncha nice command line options
I wind up forking Tyler’s grab, a nice wrapper for scrot, which is pretty much
what I was going to write anyway.
This is pretty good at defining a region for a static screenshot.
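For reference, the underlying scrot invocation for a select-a-region screenshot
is roughly this - the output path is just where I happen to keep these, and the
quality setting is arbitrary:
$ scrot -s -q 90 ~/screenshots/%Y-%m-%d-%T.png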
# Sunday, December 28, 2014
# candles & candlemaking
A year ago at Christmastime, I decided to see what kind of candlemaking
supplies were still at my parents' house, and wound up digging a couple of big
Rubbermaid tubs worth of molds, dyes, additives, wick, wax, &c out of the
basement.
I used to do this a lot, but I’ve mostly forgotten the details of technique.
Rough notes:
- Wax temperature when pouring is important. I’m aiming for 210-220 F
with metal molds, but it’s hard to get there with the little hot plate I’m
using. I can usually get it just over 200, according to the thermometer
I’ve got. This doesn’t seem to be doing too much damage, but I do think
the results would be a little better with hotter wax.
- You’re supposed to use a proper double boiler or a purpose-built wax melter.
I put various sizes of can in some water in a medium size pan.
- I remember that I used to melt wax on the woodstove in my dad’s shop, but if
so we must have been running the stove hotter in those days or I had a lot
more patience. It does work well for holding wax at a reasonable
temperature until you have to do a second pour.
- With metal molds, keeping the wax from streaming out the wick hole at the
bottom is often kind of problematic. I think you’re supposed to affix the
wicking with a little screw and put some tacky putty-type stuff over the
screw, but if you’re low on the putty or don’t have just the right size
screw this doesn’t work so great. Things tried this time around: The
remaining putty and then everything kind of smashed down on a wood block
(Ben’s idea), pouring a little wax in the bottom and letting it harden first,
the wrong size screw, silicone caulk. The wood block and the silicone caulk
both worked pretty well.
- You can dye beeswax, but you have to keep in mind that the stuff is already
pretty yellow and opaque. Shades of green work well. Other colors… Well,
I wound up with some the color of a strange weird woodland fungus.
- Last time I did this, I wound up with a bunch of pillars that burned really
poorly and with a small flame. I think I wasn’t using a heavy enough wick.
Tried to go with heavier braided wicking this time. Guess I’ll see how that
pans out.
# Saturday, January 3, 2015
# ipv6
I was hanging out on the internet and heard that imt@protocol.club had set up
club6.nl, a tildebox reachable only over ipv6. I applied
for an account and got one (very speedy turnaround,
~imt).
The next problem was how to connect. I am an utter prole when it comes to
networking. The first thing I remembered was that DigitalOcean optionally
supports ipv6 when creating a new droplet, and sure enough they
also have a guide for enabling it on existing droplets.
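Basic sanity checks once that’s done should look something like the following,
assuming the DigitalOcean end is configured and the local network can actually
speak v6:
$ ping6 club6.nl
$ dig AAAA squiggle.city
$ ssh -6 brennen@club6.nl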
TODO: Get my own sites resolving and reachable via ipv6.
# Wednesday, January 7, 2015
# local webservers and static html generation
I haven’t always run an httpd on my main local machine, but I’ve been doing it
again for the last year or two now, and it feels like a major help. I started
by setting up a development copy of display under Apache, then noticed
that it was kind of nice to use it for static files. I’m not sure why it’s any
better than accessing them via the filesystem, except maybe that
localhost/foo is easier to type than file://home/brennen/something/foo, but it
has definitely made me better at checking things before I publish them.
(Why Apache? Well, it was easier to re-derive the configuration I needed for
p1k3 things under Apache than write it from scratch under nginx, although one
of these days I may make the leap anyway. I don’t see any reason Perl FastCGI
shouldn’t work under nginx. I also still think Apache has its merits, though
most of my domain knowledge has evaporated over the last few years of doing
mainly php-fpm under nginx.)
I’ve resisted the static blog engine thing for a long time now, but lately my
favorite way to write things is a super-minimal Makefile, some files in
Markdown, and a little bit of Perl wrapping Text::Markdown::Discount. I haven’t
yet consolidated all these tools into a single generically reusable piece of
software, but it would probably be easy enough, and I’ll probably go for it
when I start a third book using this approach.
I’d like to be able to define something like a standard book/ dir that would be
to a given text what .git/ is to the working copy of a repo. I suppose you
wouldn’t need much.
book/
  authors
  title
  description
  license
  toc
toc would just be an ordered list of files to include as “chapters” from the
root of the project. You’d just organize it however you liked and optionally
use commands like
book add chapter/index.md after other_chapter/index.md
book move chapter/index.md before other_chapter/index.md
to manage it, though really a text editor should be enough. (Maybe I’m
overthinking this. Maybe there should just be a directory full of chapters
sorted numerically on leading digits or something, but I’ve liked being able to
reorder things in an explicit list.)
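As a sanity check on the idea, the render step could stay almost as dumb as
this sketch - assuming a toc of paths relative to the project root and the
markdown binary from the Discount package on the path:
#!/usr/bin/env bash
# render.sh - hypothetical sketch: glue the chapters listed in book/toc
# together and run the result through a Markdown converter.
set -e
while read -r chapter; do
  cat "$chapter"
  printf '\n\n'
done < book/toc | markdown > book.html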
Before long I might well add handling for some
I should add a feature to Display.pm for outputting all of its content
statically.
# Monday, January 12
# Debian packaging
A lot of time today with the Debian New Maintainer’s Guide and google, for a
project that needs some simple packages.
This is one of those things where the simple cases are simple and then it’s
easy to get lost in a thicket of overlapping mechanisms and terminology.
Thought for providers of technical HOWTOs:
If you’re describing the cumulative assembly of a file structure, provide a
copy (repository, tarball, whatever) of that file structure.
(I should probably take this notion to heart.)
Things to remember:
# MS-DOS / AGT
So I was scrolling through archive.org’s newly-shiny MS-DOS archive (with the
crazy in-browser DOSBOX emulation), trying to think of what to look for.
I found some old friends:
- Crystal Caves
- Commander Keen
- Heretic — still a pretty solid game and maybe my favorite iteration of the Doom Engine
- Rise of the Triads — there is absolutely no way that ROTT actually
looked as bad as this emulation at the time on baseline hardware, but we’ll let
that slide — the graphics may have been better than they show here, but it
was the Duke Nukem property of its moment, which is to say ultimately a
regressive and not-very-consequential signpost on the way to later
developments
And then I got to thinking about the Adventure Game Toolkit, which was this
sort of declarative, not-really-programmable interpreter for simple adventure
games. The way I remember it, you wrote static descriptions of rooms, objects,
and characters. It was a limited system, and the command interpreter was
pretty terrible, but it was also a lot more approachable than things like TADS
for people who didn’t really know how to program anyway. (Like me at the time.)
I’d like to get AGT running on squiggle.city, just because. It turns out
there’s a portable interpreter called AGiliTY, although maybe not
one that’s well packaged. I’ll probably explore this more.
# Tuesday, January 13
# rtd / bus schedules / transit data
I’m taking the bus today, so I got to thinking about bus schedules. I use
Google Calendar a little bit (out of habit and convenience more than any
particular love), and I was thinking “why doesn’t my calendar just know the
times of transit routes I use?”
I thought maybe there’d be, say, iCal (CalDAV? What is actually the thing?)
data somewhere for a given RTD schedule, or failing that, maybe JSON or TSV or
something. A cursory search doesn’t turn up much, but I did find these:
I grabbed that last one.
brennen@desiderata 16:16:43 /home/brennen ★ mkdir rtd && mv google_transit_Jan15_Runboard.zip rtd
brennen@desiderata 16:16:51 /home/brennen ★ cd rtd
brennen@desiderata 16:16:53 /home/brennen/rtd ★ unzip google_transit_Jan15_Runboard.zip
Archive: google_transit_Jan15_Runboard.zip
inflating: calendar.txt
inflating: calendar_dates.txt
inflating: agency.txt
inflating: shapes.txt
inflating: stop_times.txt
inflating: trips.txt
inflating: stops.txt
inflating: routes.txt
Ok, so this is pretty minimalist CSV stuff from the look of most of it.
brennen@desiderata 16:22:12 /home/brennen/rtd ★ grep Lyons stops.txt
20921,Lyons PnR,Vehicles Travelling East, 40.223979,-105.270174,,,0
So it looks like stops have an individual id?
brennen@desiderata 16:24:41 /home/brennen/rtd ★ grep '20921' ./*.txt | wc -l
87
A lot of this is noise, but:
brennen@desiderata 16:26:23 /home/brennen/rtd ★ grep 20921 ./stop_times.txt
8711507,12:52:00,12:52:00,20921,43,,1,0,
8711508,11:32:00,11:32:00,20921,43,,1,0,
8711509,07:55:00,07:55:00,20921,43,,1,0,
8711512,16:41:00,16:41:00,20921,43,,1,0,
8711519,05:37:00,05:37:00,20921,3,,0,1,
8711517,16:47:00,16:47:00,20921,1,,0,1,
8711511,17:58:00,17:58:00,20921,43,,1,0,
8711514,13:02:00,13:02:00,20921,1,,0,1,
8711516,07:59:00,07:59:00,20921,1,,0,1,
8711515,11:42:00,11:42:00,20921,1,,0,1,
8711510,19:10:00,19:10:00,20921,43,,1,0,
8711513,18:05:00,18:05:00,20921,1,,0,1,
8711518,06:47:00,06:47:00,20921,1,,0,1,
brennen@desiderata 16:26:57 /home/brennen/rtd ★ head -1 stop_times.txt
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled
So:
brennen@desiderata 16:41:47 /home/brennen/code/rtd-tools (master) ★ grep ',20921,' ./stop_times.txt | cut -d, -f1,3 | sort -n
8711507,12:52:00
8711508,11:32:00
8711509,07:55:00
8711510,19:10:00
8711511,17:58:00
8711512,16:41:00
8711513,18:05:00
8711514,13:02:00
8711515,11:42:00
8711516,07:59:00
8711517,16:47:00
8711518,06:47:00
8711519,05:37:00
That first number is a trip_id, the second one a departure time. Trips are
provided in trips.txt:
brennen@desiderata 16:54:56 /home/brennen/code/rtd-tools (master) ★ head -2 trips.txt
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id
0,SA,8690507,Union Station,0, 0 2,793219
I don’t usually use join very much, but this seems like a logical place for it.
It turns out that join wants its input sorted on the join field, so I do this:
brennen@desiderata 16:54:38 /home/brennen/code/rtd-tools (master) ★ sort -t, -k1 stop_times.txt > stop_times.sorted.txt
brennen@desiderata 16:54:38 /home/brennen/code/rtd-tools (master) ★ sort -t, -k3 trips.txt > trips.sorted.txt
And then:
brennen@desiderata 16:51:07 /home/brennen/code/rtd-tools (master) ★ join -t, -1 1 -2 3 ./stop_times.sorted.txt ./trips.sorted.txt | grep 20921
,Y,WK,Lyons PnR,0, Y 16,79481043,,1,0,
,Y,WK,Lyons PnR,0, Y 16,79481043,,1,0,
,Y,WK,Lyons PnR,0, Y 15,79481043,,1,0,
,Y,WK,Lyons PnR,0, Y 41,79480943,,1,0,
,Y,WK,Lyons PnR,0, Y 41,79481043,,1,0,
,Y,WK,Lyons PnR,0, Y 41,79481043,,1,0,
,Y,WK,Boulder Transit Center,1, Y 41,794814
,Y,WK,Boulder Transit Center,1, Y 16,794812
,Y,WK,Boulder Transit Center,1, Y 16,794814
,Y,WK,Boulder Transit Center,1, Y 15,794812
,Y,WK,Boulder Transit Center,1, Y 41,794813
,Y,WK,Boulder Transit Center,1, Y 15,794813
,Y,WK,Boulder Transit Center,1, 206 1,794816
Ok, waitasec. What the fuck is going on here? The string 20921 appears nowhere
in these lines. It takes me too long to figure out that the text files have
CRLF line-endings and this is messing with something in the chain (probably
just output from grep, since it’s obviously finding the string). So:
brennen@desiderata 16:59:35 /home/brennen/code/rtd-tools (master) ★ dos2unix *.sorted.txt
dos2unix: converting file stop_times.sorted.txt to Unix format ...
dos2unix: converting file trips.sorted.txt to Unix format ...
Why does dos2unix operate in-place on files instead of printing to STDOUT? It
beats me, but I sure am glad I didn’t run it on anything especially breakable.
It does do what you’d expect when piped to, anyway, which is probably what I
should have done.
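Which, for the record, would have looked something like this for building the
sorted copies in one pass:
$ sort -t, -k1 stop_times.txt | dos2unix > stop_times.sorted.txt
$ sort -t, -k3 trips.txt | dos2unix > trips.sorted.txt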
So this seems to work:
brennen@desiderata 17:04:45 /home/brennen/code/rtd-tools (master) ★ join -t, -1 1 -2 3 ./stop_times.sorted.txt ./trips.sorted.txt | grep 20921
8711507,12:52:00,12:52:00,20921,43,,1,0,,Y,WK,Lyons PnR,0, Y 16,794810
8711508,11:32:00,11:32:00,20921,43,,1,0,,Y,WK,Lyons PnR,0, Y 16,794810
8711509,07:55:00,07:55:00,20921,43,,1,0,,Y,WK,Lyons PnR,0, Y 15,794810
8711510,19:10:00,19:10:00,20921,43,,1,0,,Y,WK,Lyons PnR,0, Y 41,794809
8711511,17:58:00,17:58:00,20921,43,,1,0,,Y,WK,Lyons PnR,0, Y 41,794810
8711512,16:41:00,16:41:00,20921,43,,1,0,,Y,WK,Lyons PnR,0, Y 41,794810
8711513,18:05:00,18:05:00,20921,1,,0,1,,Y,WK,Boulder Transit Center,1, Y 41,794814
8711514,13:02:00,13:02:00,20921,1,,0,1,,Y,WK,Boulder Transit Center,1, Y 16,794812
8711515,11:42:00,11:42:00,20921,1,,0,1,,Y,WK,Boulder Transit Center,1, Y 16,794814
8711516,07:59:00,07:59:00,20921,1,,0,1,,Y,WK,Boulder Transit Center,1, Y 15,794812
8711517,16:47:00,16:47:00,20921,1,,0,1,,Y,WK,Boulder Transit Center,1, Y 41,794813
8711518,06:47:00,06:47:00,20921,1,,0,1,,Y,WK,Boulder Transit Center,1, Y 15,794813
8711519,05:37:00,05:37:00,20921,3,,0,1,,Y,WK,Boulder Transit Center,1, 206 1,794816
Which seems kind of right for the South &
Northbound schedules, but they’re weirdly intermingled. I think
this pulls departure time and a direction_id
field:
brennen@desiderata 17:15:12 /home/brennen/code/rtd-tools (master) ★ join -t, -1 1 -2 3 ./stop_times.sorted.txt ./trips.sorted.txt | grep 20921 | cut -d, -f3,13 | sort -n
05:37:00,1
06:47:00,1
07:55:00,0
07:59:00,1
11:32:00,0
11:42:00,1
12:52:00,0
13:02:00,1
16:41:00,0
16:47:00,1
17:58:00,0
18:05:00,1
19:10:00,0
So southbound, I guess:
brennen@desiderata 17:15:59 /home/brennen/code/rtd-tools (master) ★ join -t, -1 1 -2 3 ./stop_times.sorted.txt ./trips.sorted.txt | grep 20921 | cut -d, -f3,13 | grep ',1' | sort -n
05:37:00,1
06:47:00,1
07:59:00,1
11:42:00,1
13:02:00,1
16:47:00,1
18:05:00,1
This should probably be where I think oh, right, this is a Google spec - maybe
there’s already some tooling. Failing
that, slurping them into SQLite or something would be a lot less painful. Or
at least using csvkit.
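For future reference, the SQLite version of this is probably something like the
following - untested here, and assuming a sqlite3 new enough that .import
against a nonexistent table takes its column names from the header row (and
that the CRLFs have already been stripped):
$ sqlite3 rtd.db <<'SQL'
.mode csv
.import stop_times.txt stop_times
.import trips.txt trips
SELECT st.departure_time, t.trip_headsign, t.direction_id
  FROM stop_times st
  JOIN trips t ON t.trip_id = st.trip_id
 WHERE st.stop_id = '20921'
 ORDER BY st.departure_time;
SQL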
# Wednesday, January 14, 2015
On making a web page remind me of a quality I never fully appreciated in
HyperCard.
So I generally am totally ok with scrolling on web pages. I think in
fact it’s a major advantage of the form.
Then again, I just got to indulging a few minutes of thinking about
HyperCard, and I think that this time rather than read the same old
articles about its ultimate doom over and over again, maybe I should do
something by way of recreating part of it that was different from the
web in general.
The web has plenty of stupid carousels and stuff, but despite their example I’m
curious whether HyperCard’s stack model could still hold up as an idea. I was
never sure whether it was the important thing or not. It was so obviously and
almost clumsily a metaphor. (A skeuomorphism which I have never actually
seen anyone bag on when they are playing that game, perhaps because Designer
Ideologues know there’s not much percentage in talking shit about HyperCard.)
Here is some JavaScript to start:
$('article').each(function (i, a) {
$(a).hide();
});
$('article').first().show();
I’ll spare you the usual slow-composition narrative of where I go from here,
and jump straight to my eventual first-pass solution.
(Ok, actually I just repurposed a terrible thing I did for some slides a while
back, after recreating about 75% without remembering that I had already written
the same code within the last couple of months. It’s amazing how often that
happens, or I guess it would be amazing if my short term memory weren’t so
thoroughly scrambled from all the evil living I do.)
# Friday, January 16
Wireless configuration under Raspbian.
# Tuesday, January 20
I wanted to figure out where I used a library in existing code.
This is what I wound up doing in zsh:
brennen@exuberance 11:48:07 /home/brennen/code $ for foo in `ls -f`; do; if [[ -d $foo/.git ]]; then cd $foo; echo '--' $foo '--'; git grep 'IPC::System::Simple'; cd ~/code; fi; done
-- thcipriani-dotfiles --
-- sfe-sysadmin --
-- pi_bootstrap --
-- bpb-kit --
-- batchpcb --
-- according-to-pete --
-- meatbags --
-- sfe-paleo --
-- instruct --
-- sfe-openstack --
-- YouTube_Captions --
-- batchpcb_rails --
-- userland-book --
slides/render.pl:use IPC::System::Simple qw(capturex);
-- sfe-custom-queries --
-- brennen-sparklib-fork --
-- tilde.club --
-- display --
-- sfe-chef --
-- xrectsel --
-- git-feed --
git-feed:use IPC::System::Simple qw(capturex);
sample_feed.xml: use IPC::System::Simple qw(capturex);
sample_feed.xml:+use IPC::System::Simple qw(capturex);
-- reddit --
-- rtd-tools --
-- sparkfun --
-- mru --
Lame-ish, but I’m perpetually forgetting shell loop and conditional syntax, so
it seems worth making a note of.
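A slightly less lame version of the same loop, mostly so I can find it again
next time (untested as written; git -C needs a reasonably recent git):
for repo in ~/code/*/.git; do
  dir="${repo%/.git}"
  echo "-- $(basename "$dir") --"
  git -C "$dir" grep 'IPC::System::Simple'
done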
# Thursday, January 22
# deleting files from git history
Working on a project where we included some built files that took up a bunch of
space, and decided we should get rid of those. The git repository isn’t public
yet and is only shared by a handful of users, so it seemed worth thinking about
rewriting the history a bit.
There’s reasonably good documentation for this in the usual places if you look,
but I ran into some trouble.
First, what seemed to work: David Underhill has a good short script from back
in 2009 for using git filter-branch to eliminate particular files from history:
I recently had a need to rewrite a git repository’s history. This isn’t
generally a very good idea, though it is useful if your repository contains
files it should not (such as unneeded large binary files or copyrighted
material). I also am using it because I had a branch where I only wanted to
merge a subset of files back into master (though there are probably better
ways of doing this). Anyway, it is not very hard to rewrite history thanks to
the excellent git-filter-branch tool which comes with git.
I’ll reproduce the script here, in the not-unlikely event that his writeup goes
away:
#!/bin/bash
set -o errexit
# Author: David Underhill
# Script to permanently delete files/folders from your git repository. To use
# it, cd to your repository's root and then run the script with a list of paths
# you want to delete, e.g., git-delete-history path1 path2
if [ $# -eq 0 ]; then
  exit 0
fi
# make sure we're at the root of git repo
if [ ! -d .git ]; then
  echo "Error: must run this script from the root of a git repository"
  exit 1
fi
# remove all paths passed as arguments from the history of the repo
files=$@
git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch $files" HEAD
# remove the temporary history git-filter-branch otherwise leaves behind for a long time
rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune
A big thank you to Mr. Underhill for documenting this one. filter-branch
seems really powerful, and not as brain-hurting as some things in git land.
The docs are currently pretty good, and worth a read if you’re trying to
solve this problem.
Lets you rewrite Git revision history by rewriting the branches mentioned in
the <rev-list options>, applying custom filters on each revision. Those
filters can modify each tree (e.g. removing a file or running a perl rewrite
on all files) or information about each commit. Otherwise, all information
(including original commit times or merge information) will be preserved.
After this, things got muddier. The script seemed to work fine, and after
running it I was able to see all the history I expected, minus some troublesome
files. (A version with --prune-empty added to the git filter-branch invocation
got rid of some empty commits.) But then:
brennen@exuberance 20:05:00 /home/brennen/code $ du -hs pi_bootstrap
218M pi_bootstrap
brennen@exuberance 20:05:33 /home/brennen/code $ du -hs experiment
199M experiment
That second repo is a clone of the original with the script run against it.
Why is it only tens of megabytes smaller, when minus the big binaries I zapped,
it should come in somewhere under 10 megs?
I will spare you, dear reader, the contortions I went through arriving at a
solution for this, partially because I don’t have the energy left to
reconstruct them from the tattered history of my googling over the last few
hours. What I figured out was that for some reason, a bunch of blobs were
persisting in a pack file, despite not being referenced by any commits, and no
matter what I couldn’t get git gc or git repack to zap them.
I more or less got this far with commands like:
brennen@exuberance 20:49:10 /home/brennen/code/experiment2/.git (master) $ git count-objects -v
count: 0
size: 0
in-pack: 2886
packs: 1
size-pack: 202102
prune-packable: 0
garbage: 0
size-garbage: 0
And:
git verify-pack -v ./objects/pack/pack-b79fc6e30a547433df5c6a0c6212672c5e5aec5f > ~/what_the_fuck
…which gives a list of all the stuff in a pack file, including
super-not-human-readable sizes that you can sort on, and many permutations of
things like:
brennen@exuberance 20:49:12 /home/brennen/code/experiment2/.git (master) $ git log --pretty=oneline | cut -f1 -d' ' | xargs -L1 git cat-file -s | sort -nr | head
589
364
363
348
341
331
325
325
322
320
…where cat-file is a bit of a Swiss army knife for looking at objects, with -s
meaning “tell me a size”.
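In retrospect, what I probably wanted was a sort over the verify-pack output
itself rather than over commits - something like this, where the third column
is the object’s size:
$ git verify-pack -v ./objects/pack/pack-*.idx | grep ' blob ' | sort -k3 -n | tail -10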
(An aside: If you are writing software that outputs a size in bytes, blocks,
etc., and you do not provide a “human readable” option to display this in
comprehensible units, the innumerate among us quietly hate your guts. This is
perhaps unjust of us, but I’m just trying to communicate my experience here.)
And finally, Aristotle Pagaltzis’s script for figuring out which commit
has a given blob (the answer is fucking none of them, in my case):
#!/bin/sh
obj_name="$1"
shift
git log "$@" --pretty=format:'%T %h %s' \
| while read tree commit subject ; do
if git ls-tree -r $tree | grep -q "$obj_name" ; then
echo $commit "$subject"
fi
done
Also somewhere in there I learned how to use git bisect (which is really cool
and likely something I will use again) and went through and made entirely
certain there was nothing in the history with a bunch of big files in it.
So eventually I got to thinking ok, there’s something here that is keeping
these objects from getting expired or pruned or garbage collected or whatever,
so how about doing a clone that just copies the stuff in the commits that still
exist at this point. Which brings us to:
brennen@exuberance 19:03:08 /home/brennen/code/experiment2 (master) $ git help clone
brennen@exuberance 19:06:52 /home/brennen/code/experiment2 (master) $ cd ..
brennen@exuberance 19:06:55 /home/brennen/code $ git clone --no-local ./experiment2 ./experiment2_no_local
Cloning into './experiment2_no_local'...
remote: Counting objects: 2874, done.
remote: Compressing objects: 100% (1611/1611), done.
remote: Total 2874 (delta 938), reused 2869 (delta 936)
Receiving objects: 100% (2874/2874), 131.21 MiB | 37.48 MiB/s, done.
Resolving deltas: 100% (938/938), done.
Checking connectivity... done.
brennen@exuberance 19:07:15 /home/brennen/code $ du -hs ./experiment2_no_local
133M ./experiment2_no_local
brennen@exuberance 19:07:20 /home/brennen/code $ git help clone
brennen@exuberance 19:08:34 /home/brennen/code $ git clone --no-local --single-branch ./experiment2 ./experiment2_no_local_single_branch
Cloning into './experiment2_no_local_single_branch'...
remote: Counting objects: 1555, done.
remote: Compressing objects: 100% (936/936), done.
remote: Total 1555 (delta 511), reused 1377 (delta 400)
Receiving objects: 100% (1555/1555), 1.63 MiB | 0 bytes/s, done.
Resolving deltas: 100% (511/511), done.
Checking connectivity... done.
brennen@exuberance 19:08:47 /home/brennen/code $ du -hs ./experiment2_no_local_single_branch
3.0M ./experiment2_no_local_single_branch
What’s going on here? Well, git clone --no-local:
--local
-l
When the repository to clone from is on a local machine, this flag
bypasses the normal "Git aware" transport mechanism and clones the
repository by making a copy of HEAD and everything under objects and
refs directories. The files under .git/objects/ directory are
hardlinked to save space when possible.
If the repository is specified as a local path (e.g., /path/to/repo),
this is the default, and --local is essentially a no-op. If the
repository is specified as a URL, then this flag is ignored (and we
never use the local optimizations). Specifying --no-local will override
the default when /path/to/repo is given, using the regular Git
transport instead.
And --single-branch:
--[no-]single-branch
Clone only the history leading to the tip of a single branch, either
specified by the --branch option or the primary branch remote’s HEAD
points at. When creating a shallow clone with the --depth option, this
is the default, unless --no-single-branch is given to fetch the
histories near the tips of all branches. Further fetches into the
resulting repository will only update the remote-tracking branch for
the branch this option was used for the initial cloning. If the HEAD at
the remote did not point at any branch when --single-branch clone was
made, no remote-tracking branch is created.
I have no idea why --no-local by itself reduced the size but didn’t really do
the job.
It’s possible the lingering blobs would have been garbage collected
eventually, and at any rate it seems likely that in pushing them to a remote
repository I would have bypassed whatever lazy local file copy operation was
causing everything to persist on cloning, thus rendering all this
head-scratching entirely pointless, but then who knows. At least I understand
git file structure a little better than I did before.
For good measure, I just remembered how old much of the software on this
machine is, and I feel like kind of an ass:
brennen@exuberance 21:20:50 /home/brennen/code $ git --version
git version 1.9.1
This is totally an old release. If there’s a bug here, maybe it’s fixed by
now. I will not venture a strong opinion as to whether there is a bug. Maybe
this is entirely expected behavior. It is time to drink a beer.
# postscript: on finding bugs
The first thing you learn, by way of considerable personal frustration and
embarrassment, goes something like this:
Q: My stuff isn’t working. I think there is probably a bug in this mature
and widely-used (programming language | library | utility software).
A: Shut up shut up shut up shut up there is not a bug. Now go and figure
out what is wrong with your code.
The second thing goes something like this:
Oh. I guess that’s actually a bug.
Which is to say: I have learned that I’m probably wrong, but sometimes I’m
also wrong about being wrong.
# Sunday, January 25, 2015
# background colors for tmux
I’m logged into too many machines. I make an effort to have prompt colors differ
between hosts, but tmux is the same everywhere.
You can do this sort of thing:
brennen@exuberance 11:54:43 /home/brennen/code $ cat ~/.tmux.conf
# Set window notifications
setw -g monitor-activity on
set -g visual-activity on
set -g status-bg blue
set -g status-fg white
…where status-bg and status-fg are colors for the status bar.
It seems like there may be ways to conditionalize this, but at this point I’m
tempted to just pull some simple templating system into my dotfile
stuff and generate a subset of config files on a per-host basis.
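One possibility short of full-on templating, if I’m remembering tmux’s config
language right, is if-shell, which runs a shell command and applies a tmux
command only when it succeeds - so per-host colors could look something like
this (hostnames and colors here are arbitrary):
# in ~/.tmux.conf
if-shell 'test "$(hostname -s)" = "exuberance"' 'set -g status-bg blue'
if-shell 'test "$(hostname -s)" = "squiggle"' 'set -g status-bg red'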
# Tuesday, January 27
# what version of what linux distribution is this?
Some luck may be had with one or more of:
root@beaglebone:~# uname -a
Linux beaglebone 3.8.13-bone47 #1 SMP Fri Apr 11 01:36:09 UTC 2014 armv7l GNU/Linux
root@beaglebone:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 7.8 (wheezy)
Release: 7.8
Codename: wheezy
root@beaglebone:~# cat /etc/debian_version
7.8
root@beaglebone:~# cat /etc/dogtag
BeagleBoard.org BeagleBone Debian Image 2014-04-23
root@beaglebone:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 7 (wheezy)"
NAME="Debian GNU/Linux"
VERSION_ID="7"
VERSION="7 (wheezy)"
ID=debian
ANSI_COLOR="1;31"
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support/"
BUG_REPORT_URL="http://bugs.debian.org/"
# armhf
Is it armhf or armel?:
During diagnosis, the question becomes, how can I determine whether my Linux
distribution is based on armel or armhf? Turns out this is not as
straightforward as one might think. Aside from experience and anecdotal
evidence, one possible way to ascertain whether you’re running on armel or
armhf is to run the following obscure command:
$ readelf -A /proc/self/exe | grep Tag_ABI_VFP_args
If the Tag_ABI_VFP_args tag is found, then you’re running on an armhf system.
If nothing is returned, then it’s armel. To show you an example, here’s what
happens on a Raspberry Pi running the Raspbian distribution:
pi@raspberrypi:~$ readelf -A /proc/self/exe | grep Tag_ABI_VFP_args
Tag_ABI_VFP_args: VFP registers
This indicates an armhf distro, which in fact is what Raspbian is. On the
original, soft-float Debian Wheezy distribution, here’s what happens:
pi@raspberrypi:~$ readelf -A /proc/self/exe | grep Tag_ABI_VFP_args
Nothing returned indicates that this is indeed armel.
On a recent-ish Beaglebone Black:
root@beaglebone:~# readelf -A /proc/self/exe | grep Tag_ABI_VFP_args
Tag_ABI_VFP_args: VFP registers
# Wednesday, January 28
# on replicating process
Ok, so here we are. It’s 2015. The gold standard for explaining how you
solved a technical problem to the internet at large is a blog post with things
you can copy and paste or maybe some pictures.
If you’re really lucky, someone actually has a reusable public repository of
some kind. If you’re really lucky, their code works, and if all the gods
are smiling on you at once, their code is documented.
It seems to me that we can do better than this. We possess a great many of the
right tools to do better than this, at least for a lot of common problems.
What does it take to make a given workflow both repeatable and legible to
people without the context we have for a given thing (including ourselves)?
Writing about it is surely desirable, but how do you approach a problem so
that, instead of being scattered across your short term memory and a dozen
volatile buffers, your work becomes a kind of document unto itself?
This is the (beautiful) root of what version control does, after all: It
renders a normally-invisible process legible, and in its newfound legibility,
at least a little susceptible to transmission and reuse.
What do I know works well for transmitting process and discovery, as far as it
goes?
- version control (so really git, which is severally horrible but also
brilliant and wins anyway)
- Makefiles (except that I don’t understand make at all)
- shell scripts (except that shell programming is an utter nightmare)
- Debian packages (which are more or less compounded of the above, and
moderately torturous to build)
- IRC, if you keep logs, because it’s amazing how much knowledge is most purely
conveyed in the medium of internet chat
- Stackoverflow & friends (I hate this, but there it is, it’s a fact, we have to
deal with it no matter how much we hate process jockies, just like Wikipedia)
- screenshots and screencasts (a pain to make, all-too-often free of context, and
yet)
Here are some things that I think are often terrible at this stuff despite
their ubiquity:
- mailing lists (so bad, so routinely pathological, so utterly necessary to
everything)
- web forums like phpBB and stuff (so bad, so ubiquitous, so going to show up
in your google results with the hint you desperately needed, but only if you’re
smart enough to parse it out of the spew)
Here’s one problem: There are a lot of relatively painless-once-you-know-them
tools, like “let’s just make this a dvcs repo because it’s basically free”,
that if you know they exist and you really want to avoid future suffering you
just get in the habit of using by default. But most people don’t know these
tools exist, or that they’re generally applicable tools and not just
specialist things you might use for the one important thing at your job because
somebody said you should.
# what makes programming hard?
- Most of the existing programs.
- Most of the existing programming languages.
- Other programmers.
- Human thought is brutally constrained in understanding complex systems.
- Ok you wrote some programs anyway now GOTO 0.
# debian packaging again
I’m starting here again.
# vagrant
Vagrant is a thing for quickly provisioning / tearing down / connecting to
virtual machines. It wraps VirtualBox, among other providers. I think the
basic appeal is that you get cheap, more-or-less disposable environments with a
couple of commands, and there’s scaffolding for simple scripts to configure a
box when it’s brought up, or share directories with the host filesystem. It’s
really lightweight to try out.
Go to the downloads page and install from
there. I used the 64 bit Ubuntu .deb.
$ sudo apt-get install virtualbox
$ sudo dpkg -i vagrant_1.7.2_x86_64.deb
$ mkdir vagrant_test
$ cd vagrant_test
$ vagrant init hashicorp/precise32
$ vagrant up
$ vagrant ssh
This stuff takes a while on the first run through, but is generally really
slick. hashicorp/precise32 is more or less just a preconfigured image pulled
from a central repository.
Their Getting Started is pretty
decent.
People around me have been enthusing about this kind of thing for ages, but I
haven’t really gotten around to figuring out why I should care until recently.
I will probably be using this tool for a lot of development tasks.
Other notes:
# Thursday, January 29
# raspberry pi kernels
# Monday, February 2
# kernel-o-matic & pi finder
Published Friday:
Published a week or so before:
These have taken up a lot of my working time these last couple of weeks: an overlapping set of projects aimed at making the Pi (and eventually other single-board computers) more usable, and Adafruit’s hardware and tutorials both more accessible. This has been frustrating and rewarding by turns. I’m trying to reduce the complexity of a domain I just barely understand, in a lot of ways, which may be a good summary of software development in general.
Vagrant is something I should have paid attention
to sooner. The interfaces to virtualization are finally starting to overcome
my innate laziness on the whole question.
I just booted a Windows XP box for some reason. It made that noise. You know
the one.
# raspberry pi 2
Announced today:
Expect this to prove interesting. I’ve been having a lot of conversations
about the relative merits of small computing systems, and while this can hardly
be said to address all the complaints you might have about the Pi, boosting
processor and RAM will do a lot for practical usability.
# telling composer to ignore php version requirements
Using Composer to set up a little project, I run
into the problem that the locally-installed PHP is a bit behind the times.
brennen@exuberance 0:09:32 /home/brennen/code/project $ ./composer.phar install
Loading composer repositories with package information
Installing dependencies (including require-dev)
Your requirements could not be resolved to an installable set of packages.
Problem 1
- sparkfun/sparklib 1.1.9 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.8 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.7 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.6 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.5 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.4 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.3 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.2 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.11 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.10 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.1 requires php >=5.5.17 -> no matching package found.
- sparkfun/sparklib 1.1.0 requires php >=5.5.17 -> no matching package found.
- Installation request for sparkfun/sparklib ~1.1 -> satisfiable by sparkfun/sparklib[1.1.0, 1.1.1, 1.1.10, 1.1.11, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, 1.1.8, 1.1.9].
Potential causes:
- A typo in the package name
- The package is not available in a stable-enough version according to your minimum-stability setting
see <https://groups.google.com/d/topic/composer-dev/_g3ASeIFlrc/discussion> for more details.
Read <http://getcomposer.org/doc/articles/troubleshooting.md> for further common problems.
Well, ok. I wrote a lot of that code, and I’m pretty sure nothing I want out of it will break under a slightly stale PHP. I check ./composer.phar help install, and sure enough, there’s an option to ignore this requirement:
brennen@exuberance 0:13:21 /home/brennen/code/project $ ./composer.phar install --ignore-platform-reqs
Loading composer repositories with package information
Installing dependencies (including require-dev)
- Installing sparkfun/sparklib (1.1.11)
Downloading: 100%
Writing lock file
Generating autoload files
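Passing --ignore-platform-reqs every time gets old. If I’m remembering Composer’s config options right (and assuming the local Composer is new enough to support it), you can instead pin a platform PHP version in composer.json, so resolution checks against the declared version rather than the local binary:
$ ./composer.phar config platform.php 5.5.17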
I never used to quite get the “install an executable utility script in the root
directory of your project” thing, but the whole paradigm is growing on me a little
as my projects accumulate little Makefiles and shell scripts to render HTML,
publish revisions, or deploy packages.
# Sunday, February 8
# systemd & fsck
I just hit my first real frustration with systemd, which is running on the Novena. The default storage here is a microSD card, and I’ve had to force-reboot this thing enough times that I’d like to run fsck on the root filesystem.
It used to be that you could call shutdown -F to force an fsck on boot. The old aliases still exist, but I think the thing I’m supposed to do here is systemctl reboot, and there doesn’t seem to be an analogous pattern any more.
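As far as I can tell, the closest equivalents under systemd are touching /forcefsck before a reboot, or booting once with fsck.mode=force on the kernel command line; I haven’t verified either on the Novena yet:
# flag file checked at the next boot:
$ sudo touch /forcefsck
$ sudo systemctl reboot

# or boot once with fsck.mode=force appended to the kernel command line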
On the other hand, some of the choices immediately evident in the design of systemctl and journalctl seem interesting and not without merit.
# Monday, March 2
# python
Significant whitespace isn’t exactly a disaster, but on balance still feels
to me like it causes more problems than it solves: Copy & paste headaches,
editor hassles, etc.
# Thursday, April 9
# CGI::Fast and multi_param()
A little while ago, changes were made to Perl’s CGI.pm because of a class of exploits arising from calling param() in list context. I had code in a wrapper for Display that called param() in list context deliberately:
# Handle input from FastCGI:
while (my $query = CGI::Fast->new) {
my @params = $query->param('keywords');
print $d->display(@params);
}
In due course, I started getting warnings about calling param() in list context. They looked sort of like this:
brennen@exuberance 18:46:13 /home/brennen/www (master) ★ perl display.fcgi 2>&1 | head -1
CGI::param called in list context from package main line 38, this can lead to vulnerabilities. See the warning in "Fetching the value or values of a single named parameter" at /usr/local/share/perl/5.20.1/CGI.pm line 408.
Problematic, since a variable containing that list is exactly what I want. On googling, I found that in addition to the warning, CGI.pm had been amended to include multi_param() for the cases where you explicitly want a list. Ok, cool, I’ll use that.
Fast forward to just now. display.fcgi is blowing up on my local machine. Why?
[Thu Apr 09 18:28:29.606663 2015] [fcgid:warn] [pid 13984:tid 140343326992128] [client 127.0.0.1:41335] mod_fcgid: stderr: Undefined subroutine CGI::Fast::multi_param
Well, ok, I upgraded Ubuntu a while back. Maybe I need to reinstall CGI::Fast
from CPAN because the Ubuntu packages aren’t up to date. So:
$ sudo cpan -i CGI::Fast
No dice. What am I missing here? Oh, right. CGI::Fast inherits from CGI.pm.
$ sudo cpan -i CGI
Golden.
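For future reference, a quick way to confirm which module versions the interpreter is actually loading (standard one-liners, nothing specific to this setup):
$ perl -MCGI -e 'print "$CGI::VERSION\n"'
$ perl -MCGI::Fast -e 'print "$CGI::Fast::VERSION\n"'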
Granted, I should probably stop using CGI.pm altogether.
# Monday, April 20
# getting recent posts from pinboard machine-readably
I’ve been experimenting again with using Pinboard to track links of interest,
and thought that maybe it’d be a good idea to use these to add a linkblog back
to p1k3.
First I thought ok, there’s probably an API, which, sure enough, is
true. Here’s a one-liner that will grab JSON of recent posts by
a user:
curl "https://brennen:[brennen's password goes here]@api.pinboard.in/v1/posts/recent?count=25&format=json"
…but then I thought ok, this is dumb. I know there’s RSS, so why not just use
a standard format that could pull from other sources as well?
curl https://feeds.pinboard.in/rss/u:brennen/
Further thoughts: Instead of doing this dynamically on the server, I could
just periodically pull data into the p1k3 archives and render it using some
service. I’m getting ever-more leery of running any dynamic code where I don’t
have to, and even considering rewriting all of the p1k3 stuff to generate
static files instead of building pages on the fly, so maybe this would be a
good experiment.
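If I go that route, the periodic pull could be as simple as a cron job dropping the feed into a dated file (a sketch; the output path and script location are invented):
#!/bin/sh
# Pull recent pinboard links into a dated file under the p1k3 tree.
# OUTDIR is hypothetical - adjust for the real archive layout.
OUTDIR="$HOME/p1k3/archives/links"
mkdir -p "$OUTDIR"
curl -s "https://feeds.pinboard.in/rss/u:brennen/" > "$OUTDIR/$(date +%Y-%m-%d).rss"

# crontab entry to run it once a day at 06:00:
# 0 6 * * * /home/brennen/bin/pull-pinboard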
# Monday, January 18
# moved to p1k3.com
I’ve decided to pick this project back up, but it seems like I’ll probably be
better at updating it if I integrate it into
p1k3.com. I’ve copied all of these entries over into
the p1k3 tree, and new ones will appear there, but I’ll leave this document
in place since I feel like it’s uncool to break links.
# tools & toolchains for data munging & analysis
# csvkit
This is super handy. Wish I’d started using it sooner:
csvkit is a suite of utilities for converting to and working with CSV, the
king of tabular file formats.
…
csvkit is to tabular data what the standard Unix text processing suite (grep,
sed, cut, sort) is to text. As such, csvkit adheres to the Unix philosophy.
- Small is beautiful.
- Make each program do one thing well.
- Build a prototype as soon as possible.
- Choose portability over efficiency.
- Store data in flat text files.
- Use software leverage to your advantage.
- Use shell scripts to increase leverage and portability.
- Avoid captive user interfaces.
- Make every program a filter.
– csvkit 0.9.0
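A couple of sample invocations from memory (file and column names are made up):
# Convert a spreadsheet to CSV, keep two columns, and pretty-print the result:
$ in2csv orders.xlsx | csvcut -c name,total | csvlook

# Summary statistics for every column of a CSV file:
$ csvstat orders.csv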
# jq
Also super handy, if a notch less intuitive. Powerful DSL / pretty-printer /
filter for working with JSON records at the command line.
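For instance, pretty-printing a document and plucking one field out of each record (the .posts[].href path is just for illustration, not any particular API’s schema):
# Pretty-print JSON from a file or a pipe:
$ jq . recent.json

# Extract a single field from each element of an array:
$ jq '.posts[] | .href' recent.json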
# systemd notes