$""$o
$" $$
$$$$
o "$o
o" "$
oo"$$$" oo$"$ooo o$ "$ ooo"$oo $$$"o
o o o o oo" o" "o $$o$" o o$"" o$ "$ "oo o o o o
"$o ""$$$" $$ $ " o "" o" $ "o$$" o$$
""o o $ $" $$$$$ o $ ooo o""
"o $$$$o $o o$ $$$$$" $o " $$$$ o"
""o $$$$o oo o o$" $$$$$" "o o o o" "$$$ $
"" "$" """"" ""$" """ """ "
"oooooooooooooooooooooooooooooooooooooooooooooooooooooo$
"$$$$"$$$$" $$$$$$$"$$$$$$ " "$$$$$"$$$$$$" $$$""$$$$
$$$oo$$$$ $$$$$$o$$$$$$o" $$$$$$$$$$$$$$ o$$$$o$$$"
$"""""""""""""""""""""""""""""""""""""""""""""""""""$
$" o
$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$"$$
Left to right: really old LG flip phone, hibernating Alcatel Go Flip V, active Sunbeam Orchid. It's too bad they get bigger over time!
My current phone
I currently use a Sunbeam Orchid, which is a flip phone running on a fork of Android called “BasicOS” that doesn’t have a browser or apps.
I like the philosophy of the company — they’re Mennonites making specific technology choices — but I’ve actually never had a phone with a browser or apps before, so a lot of the thoughtful software constraints of this phone go over my head.
Because I don’t use apps, I don’t need a data plan, and so having a phone is pretty cheap (around 30 bucks a month). I’m at my computer most of the day, and have an iPad, so I’m app literate but I don’t choose or need to bring apps with me when I leave the house.
The one “spec” I genuinely care about is battery life. Unfortunately, the Orchid doesn’t fare well here: I only get about a day and a half on a charge. My previous flip phone (an Alcatel Go Flip V) had a battery that would last about 7 days or so. The one I used before that lasted around 2 weeks. I don’t generally agree with the sentiment that technology has gotten worse during my lifetime, but having been largely free from a charger until relatively recently, it sucks to re-enter a charger-tethered lifestyle. Sunbeam is at least transparent about this mostly being the fault of VoLTE radios. We’ll see how long I last with the Orchid’s smartphone-esque battery life; I might just return to my older Alcatel flip phone, even though its software for things like group texting is really bad compared to the Orchid.
How I get around
Most of my friends bought a smartphone sometime in the past ten years, but there are a couple of holdouts (Josh and Nobu) and sometimes we compare notes on how you can make it work. Here are a number of useful patterns that have supported my flip phone lifestyle:
When I’m going somewhere, I’ll try to study directions beforehand. It’s relatively easy to memorize major transit or bike arterials, but the trick I’ve learned for remembering someone’s house address or apartment number is to write a temporary little song with it in the lyrics (something like this). This works really well! If a route is super tortuous, or I’ve got multiple stops, I’ll take notes beforehand on a tiny piece of paper.
When I’m lost, there’s usually a physical map nearby. Many subway stations have detailed bus maps, but by far the best tool for unexpected wayfinding is a Citibike kiosk. Even if you’re not a Citibike user, the kiosk lets you use a little map showing where you are, and you can zoom and drag it and see street names and everything. The LinkNYC kiosks are also sometimes useful when you just need to Google something nearby.
Because my phone’s battery life is usually pretty long, the fear of being “out of battery” largely goes away, and so the worst case scenario when I’m out and about is that I have to figure out who might be sitting near the internet and can help me look something up. I usually call Kathryn when this happens; it sort of feels like a quotidian version of when they call an “operator” in The Matrix.
I sometimes end up in a situation where some kind of gatekeeper has made an assumption that everyone is expected to have a smartphone in order to do something. This happened most recently at my dentist, where they wanted me to sign a waiver on a website to get my teeth cleaned. It’s oddly freeing to show them your phone and say “I can’t do that”; sometimes this works, and they’ll happily let you use some paper-based alternative instead. When that doesn’t work, my Hail Mary strategy is to ask to use the gatekeeper’s smartphone. Something about the personal phone boundary is so sacrosanct that often a gatekeeper will blink and just let you do whatever it is you were trying to do without requiring whatever website or app they wanted you to use (which is probably how it should be anyways!).
sms2tweet
One of the great pleasures of having a phone without a data plan used to be that I could tweet stupid stuff out in the field, untethered from the feedback loop of Twitter’s website, by texting Twitter’s official short code, 40404. In a textbook case of one person ruining something fun for everybody, Twitter shut down tweeting via SMS in 2019 after its then-CEO Jack Dorsey was hacked.
I’d always imagined it would be pretty easy to rebuild a simple tweet-via-SMS app myself, but it was perpetually one of those software projects that never quite justified the effort whenever I sat down to pre-write the code in my head. And so I just never built it. But when I got my latest flip phone I decided it was time to treat myself to this capability again, and I wrote a piece of software called sms2tweet that has reinstated write-only tweeting from my phone. It’s fun to truly shout into the void this way!
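The shape of an SMS-to-tweet relay is simple enough to sketch. Here’s a minimal version, with the caveat that it’s only a sketch: it assumes a Twilio-style inbound-SMS webhook and the Tweepy library for posting, neither of which is necessarily what sms2tweet actually uses.

# A minimal sketch of an SMS-to-tweet relay (assumptions: Flask, a Twilio-style
# webhook that POSTs the message body as "Body" and the sender as "From", and
# Tweepy for posting; the real sms2tweet may work differently).
import os
from flask import Flask, request
import tweepy

app = Flask(__name__)

# Only texts from my own number get tweeted.
ALLOWED_SENDER = os.environ["MY_PHONE_NUMBER"]

client = tweepy.Client(
    consumer_key=os.environ["TW_CONSUMER_KEY"],
    consumer_secret=os.environ["TW_CONSUMER_SECRET"],
    access_token=os.environ["TW_ACCESS_TOKEN"],
    access_token_secret=os.environ["TW_ACCESS_SECRET"],
)

@app.route("/sms", methods=["POST"])
def sms_to_tweet():
    if request.form.get("From") != ALLOWED_SENDER:
        return "", 403
    text = request.form.get("Body", "").strip()
    if text:
        client.create_tweet(text=text[:280])  # write-only: never read the timeline
    return "", 200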
To my fellow flip phone freaks
I love hearing from other people who use a flip phone. At this point I think I know about 4 or 5. But if you’re reading this and use one: email me!
About a year ago, while looking up directions, I noticed that a prominent new place name had been added to the map in western Queens:
I’d never heard of “Haberman” before. The name of the neighborhood that people who live here would recognize is Maspeth (which you can see up-and-to-the-right of Haberman). Is Haberman even a real neighborhood? Why did Google put this giant Haberman label on the map?
“Haberman” past & present
If you Google “haberman”, you will quickly discover that Haberman was the name of a former Long Island Railroad station that stopped running trains in 1998. It was located on the aptly-named Rust Street. The station was originally placed there to service workers at “Haberman’s National Enameling and Stamping Company”, a factory located nearby in the early 20th century.
But knowing that the label on the map draws from this rich piece of railroad trivia still doesn’t answer the question of why a big fat label was put on the map in 2019!
At some point, I had to check Haberman out. Is it a real place? Are there Haberman locals, cafes, bars, etc.? Its parent neighborhood, Maspeth, is mostly industrial, with some small residential pockets at the edges. Generally speaking, you run into very few people in New York City who live in Maspeth.
Luckily, an opportunity presented itself: we had bought a rug made in Turkey, and because it was bulky & from abroad, UPS saw fit to deliver it to the UPS regional package waystation rather than our house. This waystation happens to be located deep in the heart of Haberman! FedEx’s package waystation is also located nearby, which makes “cavernous package purgatory” a kind of cottage industry in Haberman.
But walking around Haberman, it was clear that none of this was the case. The area mostly felt like a rare 100% industrial neighborhood in New York City: lots of produce warehouses, factories, and repair shops. There aren’t many sidewalks, and the ones that exist are borderline unwalkable (thanks to double-parking by NYPD or DOT workers at nearby fleet repair shops) or unmaintained.
Names on the land
Returning to Google to try to find answers about why the fake name “Haberman” was placed there, I noticed a trend: a lot of other websites made reference to Haberman. Their unifying characteristic was that they all looked programmatically generated; sort of like how all lyrics websites feel like tiny tweaks to the same template, all of these websites were some variation on fake-local data (“Find local businesses in Haberman!”, “Haberman Bed Bug Exterminators”, etc.). This suggested that they were all using the same piece of data to generate their websites…
One of these sites gave a clue: the “GNIS ID for Haberman” is 972582. GNIS is the Geographic Names Information System, run by the U.S. Geological Survey. It’s a gazetteer, a dictionary of place names. The entry for Haberman notes several interesting pieces of information:
it’s a “Populated Place”, a category defined by the GNIS as “Place or area with clustered or scattered buildings and a permanent human population (city, settlement, town, village). A populated place is usually not incorporated and by definition has no legal boundaries. However, a populated place may have a corresponding ‘civil’ record, the legal boundaries of which may or may not coincide with the perceived populated place. Distinct from Census and Civil classes.”
it was added to the GNIS via the “U.S. Geological Survey. Geographic Names Phase I data compilation (1976-1981)”
specifically, it was added on January 23, 1980
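If you want to poke at the raw record yourself, the USGS distributes GNIS as pipe-delimited text files. A quick sketch for pulling the Haberman row, assuming the bulk “national file” layout with FEATURE_ID, FEATURE_NAME, and FEATURE_CLASS columns (check the header of whatever file you download, since the exact columns have shifted over the years):

# Sketch: find the Haberman record in a GNIS bulk download.
import csv

with open("NationalFile.txt", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="|")
    for row in reader:
        if row["FEATURE_ID"] == "972582":
            # e.g. Haberman / Populated Place
            print(row["FEATURE_NAME"], row["FEATURE_CLASS"])
            break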
From further reading about the GNIS, I learned that many place names were originally added by government employees manually transcribing the names from paper maps or surveys that the USGS maintains…which made me want to find that map, and understand what “Haberman” looked like on January 23, 1980!
Luckily, the USGS has a wonderful and free viewer for all of the historical maps in their collection called “TopoView” (check it out here). You can filter by date, and I looked at all maps within the area of “Haberman” that predated 1980. The latest map the USGS had from before 1980 is called “Brooklyn 1967” and looks like this — there’s Haberman!!!!!
The "Brooklyn 1967" map that the Haberman GNIS name is drawn from (viewable here). You can see labels for Haberman, Maspeth and Penny Bridge
Something stood out to me immediately: the letter spacing for “Haberman” is subtly different from the label for “Maspeth”.
Compare these former train station labels:
Bushwick Junction
Haberman
Penny Bridge
…with these neighborhood labels, where the letters are more spaced out:
Maspeth
Sunnyside
In GNIS, if you look up Maspeth or Sunnyside, they are also classified as a “Populated Place”; this designation makes sense to me, as they remain places where people live and do human activity! But looking at the former train stations, some of these are also classified as populated places — Haberman and Bushwick Junction — while others are classified as a “Locale” — Penny Bridge (a Locale is defined as “Place at which there is or was human activity; it does not include populated places, mines, and dams (battlefield, crossroad, camp, farm, ghost town, landing, railroad siding, ranch, ruins, site, station, windmill)”). There was clearly some inconsistency in how these were classified.
Here’s my theory
Back in the dark winter hours of January 23, 1980, a government employee was tasked with transcribing place names from the “Brooklyn 1967” map into the Geographic Names Information System. Maybe they got tripped up by the typography…or maybe they’d never been to New York, and wouldn’t have known that this place name, positioned on the map like a neighborhood, was just a lowly train station…at any rate, Haberman was entered into the literal public record as a “Populated Place”.
Fast forward to the present day, when corporate overlords like Google and Apple compete for their maps’ accuracy, acquiring mapping data at great cost. A data set like the Geographic Names Information System probably serves as a useful baseline for their maps in the US: it’s hand-curated, and (I assume) generally quite accurate. Probably when their own proprietary neighborhood data appears thin in a given area, they fall back to showing any “Populated Place” names as a best guess for a likely neighborhood name. But because this is done with no editorial oversight, this analog mistake from 1980 lives on in the devices of literally millions of Google Maps users as a 100% fake neighborhood.
Know anything further about Haberman? Got hot Haberman tips? Are you a Google Maps or USGS employee? Email me!
And long live Haberman!!!!!
Update 8/7/2019: Google erased Haberman!
After this percolated on the internet a bit, Google appears to have quietly erased the Haberman map label:
Booooo! Apple and Microsoft still have it, though:
Somewhere in the dead zone after the 2016 election, adrift in a season of wanting to pour some energy into something hopeful, I read Mike Migurski’s two blog posts about legislative redistricting and gerrymandering. These are worth reading in full, but the basic idea is that it should be possible to create a “court-friendly measure” for evaluating the partisan effects of the makeup of election districts at the state level, which would allow courts to determine which redistricting plans are representative and which are naked power grabs by Republicans.
One thing that’s clear from Mike’s posts is that the spatial data required to do this kind of work is hard to come by. These are essentially shapes on the map that represent something called an “election district”. Confusingly, this is NOT the same thing as a state legislative district (which determines who you vote for in state senate and assembly elections) or a federal legislative district (which determines who you vote for in elections for the House of Representatives), though it can determine who represents you in all three. The result is a “confusing patchwork quilt created by your state’s redistricting commission”.
Nathaniel Kelso and Mike Migurski’s election-geodata project on GitHub aims to be a repository for exactly this kind of spatial data that’s relevant to evaluating gerrymandering — “precinct shapes (and vote results) for US elections past, present, and future”. The data comes from a patchwork of sources, and there is this handy cell-phone-network-style “coverage” map of places that do or don’t have election district data:
Dark green = newer 2016-2017 precincts, Medium green = 2014-2015 precincts, Light green = 2011-2012-2013 precincts, Light brown = older 2010 precincts, Medium brown = missing precincts
I was immediately struck that the coverage of my own fair state of New York looked abysmal! Here’s a zoom:
That’s basically the New York City counties (Kings, Queens, New York, Bronx, Richmond), Ontario, and Rensselaer counties, + the 2010 census data (shown in white). I was curious whether this map could possibly be true — is NY state election data really this spotty? — and ran into the reality that each county in New York (there are 62) is responsible for maintaining its own district shapes! Some counties like NYC have great publicly available data, but most have none at all.
Digging in further, I arrived at an interesting website for a government entity called LATFOR, which somehow stands for “NYS Legislative Task Force on Demographic Research and Reapportionment”. This appeared to be a good potential source for election data!
In 2017 I emailed them, describing my quest for election district shape data for use in a public-spirited data project and asking how I might get it. I got a cryptic follow-up email with a phone number for a demographer who works for the committee. Getting on the phone with this person had the quality of Kafka’s “The Castle”: every round of correspondence required several back-and-forths of leaving and responding to voicemails, and our conversations felt observed, operating under unseen rules or procedures that prevented this person from communicating clearly about what election data LATFOR had and whether or not I could have it. It really felt like this:
“There is no telephone connection to the castle, there’s no switchboard passing on our calls; if we call someone in the castle from here, the telephones ring in all the lower departments, or perhaps they would if, as I know for a fact, the sound was not turned off on nearly all of them.
Now and then a tired official feels the need to amuse himself a little—especially in the evening or at night—and switches the sound back on, and then we get an answer, but an answer that is only a joke. It’s very understandable. Who has the right to disturb such important work, always going full steam ahead, with his own little private worries?”
Helpfully though, in the end the person I spoke to said that there might be more “formal” ways of requesting the data, at which point we concluded our correspondence. Reading between the lines, I assumed he meant that I should send him a Freedom of Information Law request, which allows certain information in the public trust to be made accessible if you know what to look for, and are willing to send formal letters to the appropriate government entities.
Typical of me, I proceeded to let this project lapse for an entire year, until I was inspired by an unrelated project to request on Muckrock the source code for the Seattle public transit “fare enforcement” software (which led to the requester being sued (!) by a software company). I had never heard of Muckrock before, which (in brief) automates all of the letter-writing and correspondence of making freedom of information requests for you! For the genuinely low cost of $5. It’s perfect for people like me who find it unreasonably difficult to do any errand involving the post office.
After a few rounds of back and forth with LATFOR, facilitated by the staff of Muckrock, I got the goods! Here is the related FOIL thread, publicly-available on Muckrock. The result is a ZIP file including “any geographic shapefile data for election districts within New York state, from the even election years of the past decade: 2008, 2010, 2012, 2014, 2016 and 2018”.
What followed was essentially a data entry task. The files were organized by county, but predictably, every county did things differently. File and column names were all over the place (“Electin Districts”, “nyed.shp”, “YATEVOTE.SHP”). Once the files were in place, there was still some work to be done assessing what the spatial reference system of each of the map files was, with intermediate results looking like the counties of NY had experienced tectonic drift:
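For the curious, fixing that drift is mostly a matter of declaring or re-projecting each file’s coordinate reference system. Here’s a sketch with geopandas; the file name is one from the dump, but the specific EPSG codes and steps are illustrative rather than exactly what I did:

# Sketch: normalize a county shapefile's coordinate reference system so every
# county lands in the same projection (here WGS84 lon/lat).
import geopandas as gpd

districts = gpd.read_file("nyed.shp")   # one of the per-county files from the FOIL dump
print(districts.crs)                    # e.g. a NY State Plane projection, or None

if districts.crs is None:
    # Some files ship without a .prj; you have to guess-and-check the source CRS.
    districts = districts.set_crs(epsg=2263)  # EPSG:2263 = NY State Plane Long Island (an assumption)

districts = districts.to_crs(epsg=4326)       # reproject everything to WGS84
districts.to_file("nyed-wgs84.geojson", driver="GeoJSON")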
After some help geographically wrangling the data from Nathaniel Kelso and Mike Migurski, the data was added, with the final New York state result looking like this:
Many more counties now have some more recent election coverage, though not all; the data dump only included shapefile data for 38 of New York’s 62 counties. The remaining white shapes on the map are counties that either had data from before 2010 (which is when all counties have some Census-provided data), or have no data at all beyond the Census-provided data.
This contribution to the election-geodata project is obviously one of many, and will hopefully assist the work that organizations like PlanScore, which evaluates the partisan outcomes of electoral redistricting shapes, are able to do in New York state.
After leaving a job where it felt like I’d orphaned a rich history of chats with friends that were completely unrelated to that job, marooned somewhere in Slack’s data centers, I wanted to recalibrate how & where I spent time chatting on the internet. My primary goal was to continue chatting with my friends Tom and Thomas, coworkers at this old job, but my loftier goal was to find or make a chat app that satisfied my increasingly obscure list of user demands. This is the origin story of big boy chat.
self-hosted: Let’s Chat
There are lots of lists on the internet of “self-hosted” equivalents of various popular cloud-based apps (like Slack). The principle of these lists, which seems admirable, is that by running your own version of a networked app like Slack or Facebook or whatever, you exert greater control on how these apps work. In practice, though, many of these self-hosted equivalents feel like uninspired or lesser versions of the thing they seek to replace.
The self-hosted chat app I initially landed on is called Let’s Chat. It satisfied some of my requirements (drag-and-drop image upload, persistence), and was amenable to modifications in order to support some things that weren’t available out of the box (private by default, some basic tweaks to the UI). We used this to chat for quite some time, and it was a nice “third space” that wasn’t Twitter, email, or group messages.
Over a longer period, a few aspects of running this chat app began to grind on me. It required a MongoDB database, which has non-trivial space and CPU requirements. But more importantly, running a database meant that I was on the hook for things like security, a.k.a. ensuring that our throwaway chat messages weren’t going to be hacked into the open.
While running a database and server could be justified as a necessary chore, I think it points to a weirder flaw of all self-hosted chat apps: it feels wrong to need to “self-host” a chat app in the first place! I’m happy to maintain a space like this for friends, but I had an abstract notion that something like chat between two or more people shouldn’t require a dedicated server and complex infrastructure. It should be as simple as software running on different peoples’ computers talking to each other directly, without any intermediaries.
peer-to-peer: hyperlog, hypercore and cabal
I started looking into peer-to-peer chat apps and protocols. I found substack’s chatwizard, which stored its data on hyperlog, a peer-to-peer friendly database. At a high level, writing a chat room app with hyperlog as its database would let different chatters each create their own chat log, reference any previously “seen” chat messages when adding new ones (to establish an ordering of the messages), and live-sync a chat history between two or more peers. The only thing missing was a built-in way to verify that chat messages came from their sender and not some imposter (crypto!); while there are API hooks for doing this in hyperlog, the actual implementation is left to the user.
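hyperlog itself is a Node.js library, but the underlying data model is easy to sketch: each message names the messages its author had already “seen”, and those links give you a partial ordering you can merge across peers. This is just an illustration of the idea, not the hyperlog API:

# Toy model of a hyperlog-style chat log: each message links to the messages
# its author had already seen, which establishes a partial order across peers.
# (Illustration only; the real hyperlog is a Node.js library with its own API.)
import hashlib
import json

log = {}       # hash -> message body, the merged view after syncing
heads = set()  # hashes of messages that nothing links to yet

def append(author, text):
    body = {"author": author, "text": text, "links": sorted(heads)}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log[digest] = body
    heads.difference_update(body["links"])  # the new message supersedes what it links to
    heads.add(digest)
    return digest

append("alice", "hello")
append("bob", "hi back")  # links to alice's message, so the ordering survives a merge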
Around the time I started tinkering with hyperlog, the community of people building and using the peer-to-peer Dat protocol and Beaker Browser was gaining momentum. Dat is built on top of hyperdrive, which in turn is built on top of hypercore, which is a lot like the hyperlog protocol I’d been looking into. Lots of hyper! Hypercore automatically signs and verifies individual messages appended to its log (crypto!), but doesn’t yet allow multiple people to write to a single log, which makes it harder to use for something like chat.
Enter cabal: some enterprising people in the Dat community decided to create a chat protocol called Cabal, which uses multifeed under the hood to allow chatting peers to copy any hypercore databases that a given peer has seen into their own hypercore, which is writeable. The distinction is subtle, but multifeed is what makes using hypercore as its peer-to-peer database possible for a chat use-case.
With cabal/multifeed/hypercore as a foundation, I was able to get a version of an app I started calling “p2p-party-line” working. This chat app uses cabal to create a single chat room, where people can choose handles and add some light markdown and HTML to chat messages. The chat log itself is synced using the WebRTC protocol, which requires a signalling server to introduce chatting peers to each other. While this breaks the purity of this being an exclusively peer-to-peer app, the server is only used at the beginning to find out who else is hanging out in the chat; once your computer finds those other chatters, all subsequent chat room syncing happens directly between your computers, and no chat messages are stored or read or intercepted by a server at all. In fact, because of the way that chat room identifiers are hashed (this is the long string of characters in the URL), the server doesn’t even know about individual chats, making them effectively private unless or until someone in the chat shares the URL publicly.
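That “the server doesn’t even know about individual chats” trick deserves a tiny illustration: peers meet under a hash of the real room key, so the signalling server only ever sees the hash, and knowing the hash isn’t enough to read or join the chat. A sketch of the general idea, not cabal’s exact scheme:

# Sketch: peers announce themselves under a "discovery key" derived from the room key.
import hashlib
import secrets

room_key = secrets.token_hex(32)                         # shared out-of-band, e.g. in the URL
discovery_key = hashlib.sha256(room_key.encode()).hexdigest()

# What the signalling server sees: just the discovery key and an address.
announce = {"topic": discovery_key, "peer": "203.0.113.7:9000"}
print(announce)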
last one out please turn off the lights
One unintentional consequence of this chat room architecture is something I’ve been calling “last one out please turn off the lights” mode. p2p-party-line is a browser app, and while it could save any data for any chat you’ve participated in for offline or later use, it doesn’t. But as long as someone is still in the chat room, the history is preserved, even if you leave and come back to the chat room. This creates an interesting dynamic: as long as people are still chatting, it operates like a regular chat room. But when everyone’s ready to start over or call it quits, none of the chat history sticks around or is saved somewhere. The network acts as the chat room’s hard drive: when no one’s online, the chat room disappears.
This distributes maintenance of the chat room to everyone participating, and away from any single participant. You can write a bot which “seeds” the chat history permanently, and I’ve done this on occasion to support situations where I want to chat with people who aren’t consistently available around a similar time frame. But the ephemeral aspect of the default mode encourages using big boy chat as something like a burner chat room, used for specific times and places but maybe not intended for permanent or perpetual use. The “grain” of p2p-party-line encourages you to create a new chat as needed (they’re cheap/free!) rather than maintaining a single, monolithic chat room that lives forever. Vive big boy chat!
I’ve had the itch to read an Elvis book for a while. I can’t really explain this. I’m not an Elvis fan and didn’t really know anything about him, but for whatever reason in 2018 “Elvis” felt like a blind spot I wanted to attempt to correct. I saw “Last Train to Memphis” on the shelves at my local library and checked it out.
“Last Train to Memphis” rarely treads far from the chronological telling of Elvis’ life, from when he was born in Tupelo to when his mom dies at Graceland (there’s a second book covering the 2nd half of his life/career). Pretty early on in the book I realized that the prose was going to be completely stuffed with musical references that meant basically nothing to me as read on the page: country & western musicians, rhythm and blues singer/songwriters, Memphis-famous producers and DJ’s, etc. What’s nice is that these references aren’t even very Elvis heavy; they’re a spring mix of songs and artists he was exposed to growing up, connections he made touring with Hank Snow, musicians hired to write songs for him, musicians that re-recorded his songs with new twists.
I was initially tempted to search YouTube/Spotify for every unfamiliar song that got mentioned, but this started to feel Sisyphean. I also felt hanging above my head the presence of something I think of as the “non-fiction shot clock”: if I don’t keep reading a non-fiction book at a reasonable pace, pretty quickly the book becomes unfinishable.
So an idea started developing in the back of my mind as I was reading: what if I can just mine the text for the musical footnotes after I’ve finished the book? This would allow me to carry on reading the book, confident with the knowledge that I’d catch up with all of the musical texture once I was done reading.
I managed to find a second-hand digital copy of the book which allowed me to process it as a plain text file, and set about trying to figure out the quickest path to extracting some of the musical annotations from the text. The heuristic I came up with for picking out songs was:
if it’s in double quotes
and the first word starts with a capital letter
and there’s something like 1 to 10 words
…it’s likely to be a song. Expressed as a regex this looked like:
grep -o -P "\"[A-Z](?:[A-Za-z0-9',]+[^A-Za-z0-9',\"]*){1,10}\""
# matches "Flip, Flop and Fly" or "Pins and Needles in My Heart,"
…to which I added further processing that favored quoted strings with greater than 50% capital-case letters. This ruleset is a little lossy (I’m sure it misses some songs), but it provided a good starting point, narrowing things down to a list of 500 or so potential song candidates, located chronologically where they were found in the text (thus loosely following the chronological framing of Elvis’ life in the book).
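The spirit of that filter is roughly the following; this is a sketch of one way to read “greater than 50% capital-case” (counting capitalized words), not necessarily the exact rule used:

# Sketch of the post-processing filter: keep quoted strings where more than
# half of the words are capitalized, which is what song titles tend to look like.
def looks_like_a_title(quoted):
    words = quoted.strip('"').replace(",", "").split()
    if not words:
        return False
    capitalized = sum(1 for w in words if w[:1].isupper())
    return capitalized / len(words) > 0.5

print(looks_like_a_title('"Flip, Flop and Fly"'))         # True
print(looks_like_a_title('"He said it was a great show"')) # False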
From there, I needed to further weed out false positives (lots of matches for “Elvis Presley” or headlines like “These Are the Cats Who Make Music for Elvis”), and also try to match song names with musicians’ names:
For this purpose I wrote a little utility which took each of the potential song names and looked nearby in the text for potential musician names, using something called named entity recognition (a computerized way of picking people’s names and other proper nouns out of text). From there, the tool presents all possible musician names for a given song and prompts you to choose the “correct” one. Many times the book would cite a song in a paragraph describing a long chain of artistic custody over who had written, recorded, or licensed it, so this was no trivial task!
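The named-entity step is easy to approximate with an off-the-shelf library. A sketch with spaCy, which is my stand-in here; the original utility may well have used a different NER library, and the window size is arbitrary:

# Sketch of the artist-matching step: look at the text surrounding a candidate
# song title and collect any PERSON entities as possible artist names.
import spacy

nlp = spacy.load("en_core_web_sm")

def nearby_people(text, song, window=300):
    i = text.find(song)
    if i == -1:
        return []
    context = text[max(0, i - window): i + len(song) + window]
    doc = nlp(context)
    return sorted({ent.text for ent in doc.ents if ent.label_ == "PERSON"})

book = open("last-train-to-memphis.txt").read()
print(nearby_people(book, "Flip, Flop and Fly"))  # candidate artist names to choose from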
After doing this data entry-ish task, I ended up with 96 or so songs paired with artist names. Despite the fact that we live in an era where there are multiple, competing, corporate, infinite jukeboxes, music metadata is still famously messy. There’s nothing like an ISBN or URL for a given song recorded by a given artist that might allow you to find it right away on Spotify or YouTube. The closest database that even approximates something like this (with open licensing) is Discogs, but its catalog isn’t completist in the same way that something like Wikipedia is. So to turn this list of songs/artists into something that I could play as music, I turned to YouTube search.
I experience YouTube as the only entity that in any way satisfies the spirit that Napster had when it first launched. I look for music on YouTube and it’s mostly just there, no matter how rare. As a cultural institution, YouTube feels like a shaky foundation on which to build a multimedia Library of Alexandria, but it’s the Library of Alexandria we have.
An experience that’s bound to be familiar to anyone seeking out music on YouTube is the set of snap judgements you make when trying to assess which of the YouTube search results contains the specific version of the song you’re looking for. I can’t relate to people who wax nostalgic for the experience of shopping for records in a record store, because we have something infinitely more insane and interesting on the YouTube search results page. Obscure video naming conventions. A whole set of aesthetics around video thumbnails. Impenetrable uploader and commenter jargon. The reputational marks of the uploader (filming a spinning record, “lyrics video”, ORIGINAL and RARE).
Though YouTube has an official API, I chose instead to write a tool which paid tribute to this messy process of picking a YouTube: it scrapes the search results and shows the titles and thumbnails of potential videos, piping the URLs of any picked videos out onto the command line for reuse elsewhere.
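The gist of the scraping half is small: fetch the results page, pull out video IDs, and let a human pick. A rough sketch; the page’s embedded JSON and field names change constantly, so treat the regex as illustrative rather than durable (and note the real tool also surfaces titles and thumbnails):

# Rough sketch: turn a search query into candidate YouTube watch URLs.
import re
import urllib.parse
import urllib.request

def candidate_videos(query, limit=5):
    url = "https://www.youtube.com/results?search_query=" + urllib.parse.quote(query)
    html = urllib.request.urlopen(url).read().decode("utf-8")
    seen, urls = set(), []
    for video_id in re.findall(r'"videoId":"([\w-]{11})"', html):
        if video_id not in seen:
            seen.add(video_id)
            urls.append("https://www.youtube.com/watch?v=" + video_id)
        if len(urls) == limit:
            break
    return urls

for u in candidate_videos("Flip, Flop and Fly"):
    print(u)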
One of my favorite Twitter bots, from Casey Kolderup, is called Sailor Clones. It does something deceptively simple: slowly but surely it trawls thru a secret list of terms, emitting iterations on the classic “red sky at morning / sailors’ delight” rhyme. For context, here’s the original rhyme:
Red sky at night, sailors’ delight. Red sky at morning, sailors take warning
And here are a few recent iterations, generated by Sailor Clones:
AMERICANS at morning, fish trimmers take warning. AMERICANS at night, fish trimmers’ delight
silence at morning, aerobics instructors take warning. silence at night, aerobics instructors’ delight
getting blood on it at morning, timekeeping clerks take warning. getting blood on it at night, timekeeping clerks’ delight
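The move underneath is a simple fill-in-the-blank template. A tiny sketch of the general idea, using words from the examples above (the bot’s actual word lists and code are Casey’s own):

# Sketch of a Sailor Clones-style generator: drop terms into the rhyme's skeleton.
import random

omens = ["red sky", "AMERICANS", "silence", "getting blood on it"]
crews = ["sailors", "fish trimmers", "aerobics instructors", "timekeeping clerks"]

def clone():
    omen = random.choice(omens)
    crew = random.choice(crews)
    return (f"{omen} at morning, {crew} take warning. "
            f"{omen} at night, {crew}' delight")

print(clone())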
What’s wonderful about this bot is that in re-examining a venerated turn of phrase (“Red sky at morning…”), it suggests an alternate universe where there might be other, equally-valid versions of the same aphorism. Which in turn reminds me of brute-force attacks, where a computer is put to work trying to guess a password or secret by trying as many combinations as possible.
The Sailor Clones bot has the advantage that the rhyme under attack already has some notable variations, making it feel natural to substitute pieces of it to find new versions. But this suggests that there might be other source material out there so stubbornly “canonical” that any brute-forced iteration on the original phrasing might result in a new version so unambiguously “right” that it would feel like discovering a new element in the periodic table.
One such phrase presented itself to me while I was doing the thing you do where you say something over and over until it sounds dumb — that phrase being “working hard, or hardly working?”. This is such a dumb phrase, it makes me laugh out loud to even think it. But would it be possible to invent a new “working hard, hardly working”?
My humble contribution towards this endeavor is the Twitter bot @hardlyworkingor. As with bots like this, you need not follow it to derive value from it; you can rest quietly knowing that one day, it might discover a new “Hardly working” and we will all be the better for its efforts.
I’ve always told people that for each person there is a sentence — a series of words — which has the power to destroy him. When Fat told me about Leon Stone I realized (this came years after the first realization) that another sentence exists, another series of words, which will heal the person. If you’re lucky you will get the second, but you can be certain of getting the first: that is the way it works. On their own, without training, individuals know how to deal out the lethal sentence, but training is required to deal out the second.
I’m struck by this piece of advice offered by GnuPG, a program used in cryptography:
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
The idea that random keystrokes help to make a “good random number” is pleasantly analog. A connoisseur might look at a random number and deduce that it was generated with an old computer, or a computer with a sticky keyboard, or a computer that was infrequently used.
When first encountering computers in elementary school, I remember a kid in my class who invariably tried to convince other kids that his random keystrokes, made while the computer started up, were the only thing that allowed the computer to start up at all. It’s wonderful that this turns out to be, in a twisted way, true!
I’m reminded of a favorite (fake) theory about ancient Greek pottery:
A pot or vase could be “read” like a gramophone record or phonograph cylinder for messages from the past, sounds encoded into the turning clay as the pot was thrown.
The underlying myth notwithstanding, it’s somehow easy to imagine ancient potters choosing to make pottery in purpose-built rooms because it “sounds better” — that the ambience of the room somehow makes its way into the clay form. In the same vein, you might imagine someone who downloads mp3s “naked” or “while on vacation”, on the theory that the music sounds better that way.
I used the yes command (Linux/Unix/Mac OS X) for the first time the other day. Here’s a description of what it does:
YES(1) BSD General Commands Manual YES(1)
NAME
yes -- be repetitively affirmative
SYNOPSIS
yes [expletive]
DESCRIPTION
yes outputs expletive, or, by default, ``y'', forever.
HISTORY
The yes command appeared in 4.0BSD.
By way of example:
17:01 ~ $ yes
y
y
y
y
y
y
y
y
y
y
y
y
Seems like an invaluable tool for promotion:
17:01 ~ $ yes "the regex king"
the regex king
the regex king
the regex king
the regex king
the regex king
the regex king
the regex king
the regex king
the regex king
the regex king
the regex king
the regex king
the regex king
…law enables individuals and institutions to send laser beams (of varying quality, depending on cost) from one point in time and space to another, saying “this is what will take place; this is what we agree has happened; this is what must happen; these are the conditions of co-operation”. Law, more than the media, allows money to be converted into publicly-agreed and enforceable statements.
This reminded me of “fine print” from radio ads. An example (from the internet):
This advertising style is very funny, but very possibly legally binding as well! Which seems to illustrate this legal “beam” concept: that by witnessing a spew of specially chosen legal words, you convert a virtual premise into a real premise. And that the only tools at your disposal are legal words pointing in the opposite direction.
is not a keyboard pattern, such as qwerty, asdfghjkl, or 12345678.
Given these definitions, it seems like a “strong password” might (strongly) correlate with having a “boring account”, while a “weak password” might lead to more of a “leisure account”.
Jacob wrote in about a video game called Faxanadu, which uses transcendental passwords to save state:
the way you save the game is by visiting a Guru, who tells you your mantra, which usually looks like LlkjjIOuJNLkjJ. when you restart, you enter your mantra and it takes you to the place you were at.
The mantras also reward saving state. Compared to playing the game straight thru, saving with a mantra gets you extra health, money, etc. He describes the mantras as less like cheats, more like “advantageous save states”, which is a useful category to define.
If passwords confer advantageous states, it stands to reason that logging out (saving state) often is the only way to get more money and health from an account. I’ve come to think of this practice as doing a Rip Van Winkle logout.
In Rip Van Winkle’s case, logging out for ~20 years conferred “the luxury of sleeping through the hardships of war”. The state that emerges from logging out of Gmail/Facebook/Twitter is quite a bit more opaque. Nonetheless, a few recommendations:
The Rip Van Winkle logout
use a password that includes:
a friend’s name
a family member’s name
a preferred dictionary word
a common pattern such as “666”, “420” or “The quick brown fox jumps over the lazy dog”
a previous password you remember fondly
logout & login with this password as needed (when the winds of health/money/luck change)
Allen inadvertently participated in a panel on International Art English, the Triple Canopy analysis of the English language as represented in art press releases.
I didn’t listen to the entire panel, but enjoyed Allen’s small contribution:
Using the traffic example, if formal language is “my work is about traffic”, or “this is a depiction of traffic” and you describe how it’s doing that, then theoretical language wouldn’t actually be “my work questions traffic”, it would be “what is traffic?” And you’d write a text about traffic that accompanies the piece. You wouldn’t even describe the piece. That would be theoretical writing.
So International Art English is the language that results from saying “my piece is questioning traffic, and this is how it does it.”
Where the International Art English article is concerned with objectively pointing, Allen’s description speculates that metaphysical consequences are implied (if not intended) by this style of writing.
Presumably most writers of art press releases don’t seek physical powers from their writing. But it’s interesting to consider how the writing might change if they did. By way of an example press release from the Triple Canopy article:
“Through an expansive practice that spans drawing, sculpture, video, and artist books, Kim contemplates a world in which perception is radically questioned. His visual language is characterized by deadpan humor and absurdist propositions that playfully and subversively invert expectations. By suggesting that what you see may not be what you see, Kim reveals the tension between internal psychology and external reality, and relates observation and knowledge as states of mind.”
In Game Genie style:
All perception questioned (World)
Expectations become inverted - playful
Expectations become inverted - subversive
Kim will show you false seeing
Add tension with internal psychology
Add tension with external reality
State of mind now has observation
State of mind now has knowledge
The Game Genie is a “plugin” that sits between a video game cartridge and the video game’s player. Because electrical signals pass thru the Game Genie before hitting the game console, it allows the player to “fold, spindle, and mutilate” how the game is played. Usually this meant inputting cheat codes.
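In software terms, the Genie is basically a read interceptor sitting on the cartridge bus: when the console asks for certain addresses, it answers with a patched value instead of what’s on the cartridge. A toy model of that idea (conceptual only; real codes pack the address, value, and sometimes a compare byte into those six- or eight-letter strings):

# Toy model of what a Game Genie does: sit between the console and the cartridge
# ROM, and answer reads at patched addresses with a substitute value.
rom = bytearray(32 * 1024)   # pretend cartridge ROM
rom[0x1234] = 3              # e.g. the byte holding the starting number of lives

patches = {0x1234: 9}        # "Start players 1 & 2 with 9 lives"

def cpu_read(address):
    """Every read passes through the Genie before reaching the console."""
    return patches.get(address, rom[address])

print(cpu_read(0x1234))      # 9 -- the patched value
print(cpu_read(0x1235))      # whatever the cartridge actually holds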
I’ve never used a Game Genie, but it’s a funny frame for how technology ends up getting used. There were a number of legal battles with Nintendo about whether or not it was legal to produce an unsanctioned “cheating system”. One wonders if the device was somehow the secret result of a fever dream of some Nintendo employee.
Notably, Wikipedia sez that the stress put on the cartridge by the Genie sometimes caused “units to be unplayable without the Game Genie present”. Perhaps over time, Game Genies slowly start to displace their hosts. If you leave them in long enough, the Genie plays.
Genie codes were released in booklets (periodically delivered by mail) which look like this:
CODE KEY IN . . . EFFECT . . .
1 AATOZA Start players 1 & 2 with 1 life
2 IATOZA Start players 1 & 2 with 6 lives
3 AATOZE Start players 1 & 2 with 9 lives
4 VATOLE Start player 1 with 8 lives and player 2 with 3
13 AEVAVIIA + AENEEITA Permanent turbo running
14 AXSETUAO + ESVAPUEV Super fast run for Mario
15 AZEEGKAO + EIEEYKEV Super fast run for Luigi
16 AXNAIUAO + ESNEAUEV Fast run for Toad
17 AZXALKAO + EIXATKEV Super fast run for Princess
The grammar and syntax are insanely consistent throughout. And there are reams and reams of these!
In the distant past, my friend Josh described a category of speech which performs an action in the process of being spoken. The existence of this category has stuck with me because it endows speech with powers similar to a spell or a computer command.
I wasn’t able to recall the name until finding a reference to How to Do Things With Words, a posthumous book by J. L. Austin. He defines a performative utterance as “the senses in which to say something may be to do something”. These utterances constitute a “speech act”. Wikipedia gets a little more nasty with the definition, referring to sentences “not being used to describe or state what one is ‘doing’, but being used to actually ‘do’ it.” The classic examples:
If you say “I name this ship the Queen Elizabeth,” and the circumstances are appropriate in certain ways, then you will have done something special, namely, you will have performed the act of naming the ship. Other examples include: “I take this man as my lawfully wedded husband,” used in the course of a marriage ceremony, or “I bequeath this watch to my brother,” as occurring in a will.
The way that Wikipedia phrases the Queen Elizabeth example is particularly striking, almost reading like a recipe or folklore! If you say [x] when “the circumstances are appropriate in certain ways” you will have done [y].
It’s easy to imagine footnotes buried in legal codes, stipulating what marriages emerge when you mess with the syntax of “lawfully wedded husband”: “legal wedding husband”, “wedding-law husband”, “legally weddable husband”, etc.