A Guile Steel smelting pot

By Christine Lemmer-Webber on Sat 06 August 2022

Last month I made a blogpost titled Guile Steel: A Proposal for a Systems Lisp. It got more attention than I anticipated, which is both a blessing and a curse. I mean, mostly the former; the curse isn't so serious, it's mostly that the post was aimed at a specific community and got more coverage than that, and funny things happen when things leave their intended context.

The blessing is that real, actual progress has happened, in terms of organization, actual development (thanks to others mostly!), and a compilation of potential directions. In many ways "Guile Steel" was meant to be a meta project, somewhat biased around Guile but more so a clever name to start brewing some ideas (and gathering intelligence) around, a call-to-arms for those who are likeminded, a test even to see if there are enough likeminded people out there. The answer to that one is: yes, and there's actually a lot that's happening or has happened historically. I actually think Lisp is going through a quiet renaissance and is on the verge of a major revival, but that's a topic for another post. The goal of this post is to give a lay of the landscape, as I've seen it since then. There's a lot out there.

If you enjoy this post by the way, there's an IRC channel: #guile-steel on irc.libera.chat. It's surprisingly well populated given that people have only shown up through word of mouth.

First, an aside on language (again)

Also by the way, it's debatable what "systems language" even means, and the previous post spent some time debating that. Language is mostly fuzzy, and subject to the constant battle between fuzzy and crisp systems, and "systems language" is something people invoke to make themselves sound like very crispy people, even though the term could hardly be fuzzier.

We're embracing the hand-waviness here; I've previously mentioned that "Blockchain" is to "Bitcoin" what "Roguelike" is to "Rogue". Similarly, "Systems Language" is to "C/Rust" what "Roguelike" is to "Rogue".

My friend Technomancy put things about as well or as succinctly as you could: "low-level enough to bootstrap a runtime". We'll extend "runtime" to not only mean "programming language runtime" but also "operating system runtime", and that's the scope of this post.

With that, let's start diving into directions.

Carp and Scopes

At the time of writing the previous post, I was unaware of two remarkable "systems lisps" that already exist, here, now, today, and which you can and maybe should use: Carp and Scopes. Both of them are statically typed, and both perform automatic memory management, without the overhead of a garbage collector or reference counting, in a style familiar from Rust.

They are also both kind of similar yet different. Carp is written on top of Haskell and looks a lot like Clojure in style. Scopes is written in C++, looks a lot like Scheme, and has an optional, non-parenthetical whitespace syntax which reminds me a lot of Wisp, so it's maybe more amenable to the type of people who fear parentheses or who must work with people who do.

I can't make a judgement about either; I would like to find some time to try each of them. Scopes looks more up my alley of the two. If someone packaged either of these languages for Guix I would try it in a heartbeat.

Anyway, Carp and Scopes already are systems lisps of sorts you can try today. (If you've made something cool with either of them, let me know.)

Pre-Scheme

There's a lot to say on this one, despite its obscurity, enough that I'm going to give it several sub-headings. I'll get the big one up front: Andrew Whatson is doing an incredible job porting Pre-Scheme to Guile. But I'll get more to that below.

What the heck is Pre-Scheme anyway?

PreScheme (or is it Pre-Scheme or prescheme or what? nobody is consistent, and I won't be either) is a "systems lisp" that is used to bootstrap that incredible "sleeper hit" (or shall we say "cult classic"?) of programming language runtimes, Scheme48. PreScheme compiles to C, is statically typed with type inference based on a modified version of Hindley-Milner, and uses manual memory management for heap-allocated resources (much like C) rather than garbage collection. (C is just the current main target; compiling directly to native architectures or to WebAssembly would also be possible.)

The wild things about PreScheme are that unlike C or Rust, you can hack on it live at the REPL just like Scheme, and you can even incorporate a certain amount of Scheme, and it mostly looks like Scheme. But it still compiles down efficiently to low-level code.
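
To give a purely illustrative sense of the flavor, here's a sketch of the kind of code that stays within a PreScheme-ish restricted subset of Scheme: fixed-size integer arithmetic, loops instead of escaping closures, nothing heap-allocated at runtime. This is not taken from the Scheme48 sources, and real PreScheme has its own primitives for memory access and calling out to C; treat it as a sketch of the shape, nothing more.

(use-modules (rnrs bytevectors))

;; A byte-wise checksum: the sort of loop that a PreScheme-like compiler
;; can turn into a tight C loop.  Named-let iteration, small-integer
;; arithmetic, no closures escaping, no runtime allocation.
(define (checksum bv len)
  (let loop ((i 0) (sum 0))
    (if (= i len)
        sum
        (loop (+ i 1)
              (modulo (+ sum (bytevector-u8-ref bv i)) 256)))))

;; (checksum (u8-list->bytevector '(1 2 3)) 3) => 6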

It's used to implement Scheme48's virtual machine and garbage collector, and is bootstrappable from a working Scheme48, but there's also apparently a version sitting around somewhere on top of some metacircular Scheme which Jonathan Rees wrote on top of Common Lisp, giving it a good bootstrapping story. While used for Scheme48, and usable from it today, there's no reason you can't use it for other things, and a few smaller projects have.

What's more wild about PreScheme is how incredibly good of an idea it is, how long it's sat around (since the 80s, with a lot of work happening in the 90s!), and how little attention it's gotten. PreScheme's thoughtful design actually follows from Richard Kelsey's amazing PhD dissertation, Compilation By Program Transformation, which really feels like the kind of obscure CS thing that, if you've made it this far in this writeup, you probably would love reading. (Thank you to Olin Shivers for reviving this dissertation in LaTeX, which otherwise would have been lost to history.)

guile-prescheme

Now, I did mention PreScheme, and how I thought it was a fairly interesting starting point, in the last Guile Steel blogpost. I actually had several people reach out to me saying they wanted to take up this initiative, and a few of them suggested maybe they should start porting PreScheme to Guile. I said "yes you should!" to all of them, but one person took up the initiative quickly and has been doing a straight and faithful port to Guile named guile-prescheme.

The emulator (which isn't too much code, really) has already been working for a couple of weeks, which means you can already hack PreScheme at Guile's REPL, and Andrew Whatson says that the "compile to C" compiler is already well on its way, and will likely be there in about a month.

The main challenge apparently is the conversion of macros, which are stuck in the r4rs era of Scheme. Andrew has been slowly converting everything to syntax-case, which is standardized in r6rs (r7rs-small only standardizes syntax-rules), and that raises the question: how general of a port is this? Does it really have to just be to Guile? More on that in the next subsection, but first, a quick illustration of the kind of macro conversion involved.
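
Here's a hypothetical sketch (not code from the actual PreScheme sources): an old-style, unhygienic define-macro next to a hygienic syntax-case version of the same thing.

;; Old-style, unhygienic: if the caller's code happens to use a variable
;; named "tmp", this macro silently captures it.
(define-macro (old-swap! a b)
  `(let ((tmp ,a))
     (set! ,a ,b)
     (set! ,b tmp)))

;; The syntax-case version is hygienic, and portable to any Scheme that
;; provides syntax-case.
(define-syntax swap!
  (lambda (stx)
    (syntax-case stx ()
      ((_ a b)
       #'(let ((tmp a))
           (set! a b)
           (set! b tmp))))))

;; (define x 1) (define y 2)
;; (swap! x y)   ; => x is 2, y is 1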

The Secret Society of PreScheme Revivalists

Okay there's not really a secret society, we just have an email thread going and I organized a video call recently, and we're likely to do another one (I hope). This one was really good, very productive. (We didn't record it, sadly. Maybe we should have.)

On said video call we got Andrew Whatson of course, who's doing the current porting effort, but also Richard Kelsey (the original brain behind PreScheme, co-author of much of Scheme48), Michael Sperber (current maintainer of Scheme48 and also someone who has previously used PreScheme commercially, to do some Monte Carlo simulation things for some financial firm or something curious like that), and Jonathan Rees (co-author of Scheme48, and one of my friends who I like to call up and talk about all sorts of curious topics). There were a few others, all cool people, and also me, hand-waving excitedly as usual.

As an aside, my wife Morgan says my superpower is that I'm good at "showing up in a room and being excited and getting everyone else excited", and she's right, I think. And... well there's just a lot of interesting stuff in computer science history, amazing people whose work has just been mostly ignored, stuff left on the shelf. It's not what the Tech Influencers (TM) are currently touting, it's not what a FAANG company is going to currently hire you to do, but if you're trying to solve the hard problems people don't even realize they have, your best bet is to scour history.

I don't know if it's true or not, but this felt like one of those times where the folks who worked on PreScheme historically seemed kind of surprised, but also happy, that here was a gathering of people extremely interested in the stuff they've done. Anyway, that was my reading; I like to think so, anyway. Andrew (I think?) said some nice things about how it was just exciting to be able to talk to the people who have done these things, and I agree. It is cool stuff. We are grateful to be able to talk about it.

The conversation was really nice, we got some interesting historical information (some of which I've conveyed here), and Richard Kelsey indicated that he's been doing work on microcontrollers and wishes he could be using PreScheme, but the thing clients/employers get nervous about is "will we be able to actually hire anyone to work on this stuff who isn't just you?" I'd like to think that we're building up enough enthusiasm where we can demonstrate in the affirmative, but that's going to take some time.

Anyway, I hinted in the last part that some of the more interesting conversation came down to: just how portable is this port? Andrew indicated that the port to Guile, as he's doing it, is already helping to make things more portable. He's focusing on Guile first, but he's avoiding the Guile-specific ice-9 namespace of modules (a name which, from a standardization perspective, becomes a little bit too appropriate here) and is using as much generic Scheme and as many SRFI extensions as possible. Once the Guile version is working, the goal is to try porting to a more standardized form of Scheme (probably r7rs-small), which would mean that any Scheme following that standard could use the same version of PreScheme. Michael Sperber seemed to indicate that maybe Scheme48 could use this version too.

This would actually be pretty incredible because it would mean that any version of Scheme following the Scheme standard would suddenly have access to PreScheme, and any of those could also be used to bootstrap a PreScheme based Scheme.
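
To make the portability point a little more concrete, here's a toy illustration (hypothetical, not from the actual port): if the code sticks to standard Scheme plus SRFIs, moving between Guile and an r7rs Scheme is mostly a matter of swapping the import boilerplate.

;; Guile-flavored imports: SRFI-1 list utilities rather than anything from
;; the (ice-9 ...) namespace.
(use-modules (srfi srfi-1))

(define (firsts lst)
  (map first lst))   ; `first' comes from SRFI-1

;; Under r7rs-small the body stays identical; only the import form changes
;; (assuming the implementation exposes SRFI-1 under this library name):
;;   (import (scheme base) (srfi 1))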

A PreScheme JIT?

I thought Andrew Whatson (flatwhatson here) said this well enough himself so I'm just going to quote it verbatim:

<flatwhatson> re: pre-scheme interest for bootstrapping, i think it's more
              interesting than just "compiling to C"
<flatwhatson> Michael Sperber's rejected paper "A Tractable Native-Code Scheme
              System" describes repurposing the pre-scheme compiler (more
              accurately called the transformational compiler) as a jit
              byte-code optimizer and native-code emitter
<flatwhatson> the prescheme compiler basically lowers prescheme code to a
              virtual machine-code and then emits that as C
<flatwhatson> i think it would be feasible to directly emit native code at
              that point instead
<flatwhatson> https://www.deinprogramm.de/sperber/papers/tractable-native-code-scheme-system.pdf
<flatwhatson> Kelsey's dissertation describes transforming a high-level
              language to a low-level language, not specifically scheme to C.
<flatwhatson> > The machine language is an assembly language written in the
              syntax of the intermediate language and has a much simpler
              semantics. The machine is assumed to be a Von Neumann machine
              with a store and register-to-register instructions. Identifiers
              represent the machine’s registers and primitive procedures are
              the machine’s instructions.
<flatwhatson> Also, we have code for the unreleased byte-code jit-compiling
              native-emitting version of Scheme 48:
              https://www.s48.org/cgi-bin/hgwebdir.cgi/s48-compiler/

(How the hell that paper was rejected btw, I have no idea. It's great.)

Future directions for PreScheme

One obvious improvement to PreScheme is: compile to WebAssembly (aka WASM)! This would be pretty neat and maybe, maybe, maybe could mean a good path to getting more Schemes in the browser without using Emscripten (which is a bit heavy-handed of an approach). Andrew and I both think this is a fun idea, worth exploring. I think once the "compile to C" part of the port to Guile is done, it's worth beginning to look at in earnest.

Relatedly, it would also, I think, be pretty neat if guile-prescheme was compelling enough for more of Guile to be rewritten in it. This would improve Guile's already-better-than-most bootstrapping story and also make hacking on certain parts of Guile's internals more accessible and pleasant to a larger part of Guile's existing userbase.

The other obvious improvement to PreScheme is exploring (handwave handwave handwave) the kinds of automated memory management which have become popular with Rust's borrow checker and also appear in Carp and Scopes, as discussed above.

3L: The Computing System of the Future (???)

I mentioned that an appealing use of PreScheme might be to write not just a language runtime, but also an operating system. A very interesting project called 3L exists and is real and does just that. In fact, it's also a capability-secure operating system, and it cites all the right stuff and has all the right ideas going for it. And it's using PreScheme!

Now the problem is, seemingly nobody I know who would be interested in exactly a project like this had even heard of it before (except for the incredible and incredibly nice hacker pukkamustard, who is the one who made me aware of it by mentioning it in the #guile-steel chatroom), and I couldn't even find the actual code from the main webpage. But there actually is source code. Not a lot of it, but it's there, and in a way "not a lot of it" is not a bad thing here, because what's there looks stunningly similar to a very familiar metacircular evaluator. Which raises the question: is that really enough, though?

And actually maybe it is, because hey look there's a demo video and a nice talk. And it's using Scheme48!

(As a complete aside: I'd be much more likely to play with Scheme48 if someone added Geiser support for it... that's something I've poked at doing every now and then but I haven't had enough of a dedicated block of time. If you, dear reader, feel inspired enough to add such support, or actually if you give 3L a try, let me know.)

Anyway, cool stuff, I've been meaning to reach out to the author, maybe I will after I post this. I wonder what's come of it. (It's also missing a license file or any indicators, but maybe we could get that fixed easily enough?)

WebAssembly

I had a call with someone recently who said WebAssembly was really only useful for C/Rust users, and I thought this was fairly surprising/confusing, but maybe that's because I think WebAssembly is pretty cool and have hand-coded a small amount of it for fun. Its text-based syntax is S-Expression based which makes it appealing for lispy type folks, and just easy to parse and work with in general.

It's stretching it a bit to call WebAssembly a Lisp; it's really just something that's designed to be an intermediate language (a compilation target, in the way compilers like GCC have intermediate languages), a place where compiler authors often deign it okay/acceptable to use s-expressions because they don't fear that they'll scare off non-lispers or PLT people, because hey, most users aren't going to touch this stuff anyway, right?

I dunno, I consider it a win at least that s-expressions have survived here. I showed up to an in-person WebAssembly meeting once and talked to one of the developers about it, praised them for this choice, and they said "Oh... yeah, well, we initially did it because it was the easiest thing to start with, and then eventually we came to like it, which I guess is the usual thing that happens with S-Expressions." (Literally true, look up the history of M-Expressions vs S-Expressions.)

At any rate, most people aren't coding WebAssembly by hand. However, you could, and if you're going to, a Lisp based environment is actually a really good choice. wasm-adventure is a really cool little demo game (try it!), all hand-written in WebAssembly kinda sorta. The README gives its motivations as "Is it possible (and enjoyable) to write a game directly in web assembly's text format? Eventually, would it be cool to generate wat from Scheme code using the Racket lang system?", and the answer becomes an emphatic "yes". What's interesting is that Lisp's venerable quasiquote does much of the heavy lifting to make, without too much work, a little DSL for authoring WebAssembly which results in some surprisingly easy to read code compared to generic WebAssembly. (The author, Zoé Martin, is another one of those quiet geniuses you run into on the internet; she has a lovely homebrew computer design too.)
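
Here's a minimal sketch of why that works so well (my own toy example, not wasm-adventure's actual code): since the WebAssembly text format is s-expressions, quasiquote turns "generate some WAT" into "fill in the holes of a template".

;; Build a WAT module exporting NAME, a function that adds two i32s.
;; Quasiquote is the whole "DSL": the unquoted bits get spliced straight
;; into the s-expression template.
(define (wat-adder name)
  (let ((id (string->symbol (string-append "$" name))))
    `(module
       (func ,id (param $a i32) (param $b i32) (result i32)
             (i32.add (local.get $a) (local.get $b)))
       (export ,name (func ,id)))))

;; (wat-adder "add") returns an s-expression; `write' it to a .wat file and
;; assemble it with a tool like wat2wasm to get a .wasm binary.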

So what I'd really like is to see more languages compiling directly to WebAssembly without Emscripten as an intermediate hack. Guile especially, of course. Andy Wingo gave an amazing little talk on this where he does a little (quasi, pre-recorded) live coding demo of compiling to WebAssembly, and I thought "YES!!! Compiling to WASM is probably right around the corner", and it turns out that's probably not the case, because Wingo would like to see some important extensions to WASM land first, and I guess, yes, that probably makes sense. Also, he's working on a new garbage collector which seems damn cool and like it'll be really good for Guile, and which maybe could even help the compiling-to-WASM story before the much-desired WASM-GC extension we all want lands, but who knows. I mean, it would also be nice to have, like, the tail call elimination extension, etc etc etc. But see also Wingo's writeup about targeting the web from last year, etc etc. (And on that note, I mean, is WebAssembly the new Kubernetes?)

As another aside, there are two interesting Schemes which are actually written in WebAssembly, or rather, one written directly in hand-coded WASM named scheme.wasm, and one which compiles itself to Webassembly called Schism (which has a cool paper, but sadly hasn't been updated in a couple of years).

As another another aside, I was on a video call with Douglas Crockford at one point and mentioned WebAssembly and how cool I thought it was, and Crock kinda went "meh" to it, and I was like what? I mean, it has ocap people we've both collaborated with working on it, overall its design seems pretty good, better than most of the things of its ilk that have been tried before, why are you meh'ing WebAssembly? And Crock said that well, it's not that WebAssembly is bad, it's just that it felt like a missed opportunity to do something more impactful: it's "just another von Neumann architecture", like, boring, can't we do better? But when I asked for specific alternatives, Crock didn't have a specific design in mind, just thought that maybe we could do better, maybe it could even incorporate actors at a more fundamental level.

Well... it turns out we both know someone who did just that, and (so I hear) both recently got nerdsniped by that very same person who had just such an architecture...

Mycelia and uFork

So I had a call with sorta-colleague, sorta-friend I guess? I'm talking about Dale Schumacher, and I don't know him super well, we don't get to talk that much, but I've enjoyed the amount we have. Dale has been trying to get me to have a video call for a while, we finally did, and I was expecting us to talk about our respective actor'y system projects, and we did... but the big surprise was hearing about Mycelia, Dale's ocap-secure hybrid-actor-model-lisp-machine-lambda-calculus operating system, and its equally astounding, actually maybe more astounding, virtual machine and maybe potentially CPU architecture design, uFork. We're going to take a major digression but I promise that it ties back in.

This isn't the first time Dale's reached out and it's resulted in me being surprised and nerdsniped. A few years ago Dale reached out to me to talk about this programming language he wrote called Humus. What's personally astounding about Humus is that it has an eerie amount of similarity to Spritely Goblins, the ocap distributed object architecture I've been working on the last few years, despite the fact that we designed our systems fully independently. Dale beat me to it, but it was an independent reinvention in the sense that I simply wasn't aware of Humus until Dale started emailing me.

The eerie similarity is because I think Dale's and my systems are the most seriously true-to-form implementations of the "Classic Actor Model" that have been implemented in recent times (more true than, say, Erlang, which does some other things, and "Classic" thrown on there because Carl Hewitt has some new ideas that he feels strongly should now be associated with "Actor Model", ideas which can be layered on our systems but are not there at the base layer). (Actually, Goblins supports one other thing that makes it more the vat model of computation, but that isn't important for this post.)

The Classic Actor Model says (hand-waving past determinism in the general case, at least from the perspective of a single actor, due to ordering concerns... but that too can be layered on) that you can do pretty much all computation in terms of just actors, which are these funky distributed objects which handle messages one at a time, and while handling them are only able to do some combination of three things: (1) send messages to actors they know about, (2) create new actors (and get their address in the process, which they can share with other actors should they choose... argument passing, basically), and (3) designate their behavior for the next time they are handling a message. It's pretty common to use "become" for that last operation, but the curious thing that both Dale and I did was use lambdas as the thing you become. (By the way, those familiar with Scheme history should notice something interesting and familiar here, and for that same reason Dale and I are also in the shared company of being scolded by Carl Hewitt for saying our actors are made out of lambdas, despite him liking our systems otherwise, I think...)
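
Here's a toy sketch of those three operations in plain Scheme. To be clear, this is not the actual API of Goblins or Humus (and real actor systems deliver messages asynchronously, whereas this toy calls the behavior immediately); it's just the minimal shape: an actor is a cell holding its current behavior, and a behavior is a procedure which handles one message and returns the procedure to become next.

;; (2) create a new actor and get its address back
(define (spawn behavior)
  (list 'actor (list behavior)))   ; inner one-element list acts as a mutable cell

;; (1) send a message to an actor you know about...
(define (send actor . message)
  (let* ((cell (cadr actor))
         (behavior (car cell))
         (next-behavior (apply behavior message)))
    ;; (3) ...and the behavior designates what to become for the next message
    (set-car! cell next-behavior)))

;; A counter actor: handling a message means announcing the new count and
;; becoming a counter holding that count.
(define (counter n)
  (lambda (msg)
    (display (list 'count-is (+ n 1))) (newline)
    (counter (+ n 1))))

;; (define c (spawn (counter 0)))
;; (send c 'tick)   ; prints (count-is 1)
;; (send c 'tick)   ; prints (count-is 2)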

I remarked off-hand that "well I guess one of the main differences between our systems, and maybe a thing you might not like, is that mine is lispy / based on Scheme, and..."

Dale waved his hand. "That's mostly surface..."

"Surface syntax, yeah I know. So I guess it doesn't..."

"No wait it does matter. What I'm trying to show you is that I actually do like that kind of stuff. In fact I have some projects which use it at a fundamental level. Here... let me show you..." And that's when we started talking about uFork, the instruction architecture he was working on, which I later found was actually part of a larger thing called Mycelia.

Well, I'm glad I got the lecture directly from Dale because, let's see, how does the Mycelia project brand itself (at the time of writing)? "A bare-metal actor operating system for Raspberry Pi." Well, maybe this underwhelming self-description is why seemingly nobody I know (yes, like with 3L above) has heard about it, despite it being extremely up the alley of the kind of programming people I tend to hang out with.

Mycelia is not just some throwaway raspberry pi project (which is certainly the impression I would have gotten from lazily scanning that page). Most of those are like, some cute repackaging of some existing FOSS POSIX'y thing. But Mycelia is an actually-working, you-can-actually-boot-it-on-real-hardware open source operating system (under Apache v2) with a ton of novel ideas which happens to be targeting the Raspberry Pi, but it could be ported to run on anything.

Anyway, there's a lot of interesting stuff in there, but here's a bulleted list summary. For Mycelia:

  • It's an object-capability-secure operating system
  • It has a Lisp-like language for coding, pretty much Scheme-like, to hack around on
  • The Kernel language / Vau calculus show up, which is... wild
  • It encodes the Actor model and the Lambda calculus in a way that is sensible and coherent afaict
  • It is a "Lisp Machine" in many senses of the term.

But the uFork virtual machine / abstract idea for a CPU is curious in its own right. I dunno, I spent the other night kind of wildly reading about it after our call. It also encodes the lambda calculus / actor model in fundamental instructions.

Dale was telling me he'd like to build an actual, physical CPU, but of course that takes a lot of resources, so he might settle for an FPGA for now. The architecture, should it be built, also somehow encodes a hardware garbage collector, which I haven't heard of anything doing since the actual physical Lisp Machines died out.

At any rate, Dale was really excited to tell me about why his system encoded instructions operating on memory split in quads. He asked me why I thought that would be; I'm not honestly sharp enough in this kind of area to know, sadly, though I said "I hope it's not because you're planning on encoding RDF at the CPU layer". Thankfully it's not that, but then he started mentioning how his system encodes a stream of continuations...

Wait, that sounds familiar. "Have you ever heard of something called sectorlisp?" I asked, with a raised eyebrow.

"Scroll to the bottom of the document," Dale said, grinning.

Oh, there it was. Of course.

sectorlisp

The most technically impressive thing I think I've ever seen is John McCarthy's "Lisp implemented in Lisp", also known as a "metacircular evaluator". If you aren't familiar with it, it's summarized well in William Byrd's talk The Most Beautiful Program Ever Written. I think the best way to really understand it, and (I'm biased) the easiest-to-read version of things, is the Scheme in Scheme section of A Scheme Primer (though I wrote that for my work, and as said, I'm biased... I don't think I did anything new there, just explained ideas as simply as I could).
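
For flavor, here's the idea compressed into a toy of my own (much smaller than the Primer's version): eval and apply defined in terms of each other, with environments as association lists, handling only quote, if, lambda, variables, and application.

(define (my-eval expr env)
  (cond ((symbol? expr) (cdr (assq expr env)))            ; variable lookup
        ((not (pair? expr)) expr)                         ; self-evaluating
        ((eq? (car expr) 'quote) (cadr expr))
        ((eq? (car expr) 'if)
         (if (my-eval (cadr expr) env)
             (my-eval (caddr expr) env)
             (my-eval (cadddr expr) env)))
        ((eq? (car expr) 'lambda)                         ; capture the environment
         (list 'closure (cadr expr) (caddr expr) env))
        (else                                             ; application
         (my-apply (my-eval (car expr) env)
                   (map (lambda (e) (my-eval e env)) (cdr expr))))))

(define (my-apply closure args)
  (let ((params (cadr closure))
        (body (caddr closure))
        (env (cadddr closure)))
    (my-eval body (append (map cons params args) env))))

;; ((lambda (x) x) 'hello), as evaluated by the evaluator itself:
;; (my-eval '((lambda (x) x) (quote hello)) '())  ; => hello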

The second most technically impressive thing I've ever seen is sectorlisp, and the two are directly related. According to its README, "sectorlisp is a 512-byte implementation of LISP that's able to bootstrap John McCarthy's meta-circular evaluator on bare metal." Where traditional metacircular evaluator examples can be misconstrued as being the stuff of pure abstractlandia, sectorlisp gets brutally direct about things. In one sector (half a kilobyte!!!), sectorlisp manages to encode a whole-ass lisp system that actually runs. And yet, the nature of the metacircular evaluator persists. (Take that, person on Hacker News who called metacircular evaluators "cheating"! Even if you think mine was, I don't think you can accuse sectorlisp's of that.)

If you do nothing else, watch the sectorlisp blinkenlights demo, even just as an astounding visual demo alone. (Blinkenlights is another project of Justine's, and also wildly impressive.) I highly recommend the following blogposts of Justine's: SectorLISP Now Fits in One Sector, Lisp with GC in 436 bytes, and especially Lambda Calculus in 383 Bytes. Hikaru Ikuta (woodrush) has also written some amazing blogposts, including Extending SectorLISP to Implement BASIC REPLs and Games and Building a Neural Network in Pure Lisp Without Built-In Numbers Using Only Atoms and Lists (and related, but not part of sectorlisp: A Lisp Interpreter Implemented in Conway's Game of Life, which gave off strong Wireworld Computer vibes for me). If you are only going to lazily scan through one of those blogposts, I recommend it be Lambda Calculus in 383 Bytes, which has some wild representations of the ideas (including visually), a bit too advanced for me at present admittedly, though I stare at them in wonder.

I had a bunch more stuff here, partly because the author is someone I find both impressive technically but who has also said some fairly controversial things... to say the least. But I think it was too much of a digression for this article. The short version is that Justine's stuff is probably the smartest, most mind-blowing tech I've ever seen, kinda scarily and intimidatingly smart, and it's hard to mentally reconcile that with some of those statements. I don't know, maybe she wants to move past that phase, I'd like to think so. I think she hasn't said anything like that in a long time, and it feels out of phase with the rest of this post but... it feels like something that needs to be acknowledged.

GOOL, GOAL, and OpenGOAL

Every now and then when people say Lisp couldn't possibly be performant, Lisp people like to bring up that Naughty Dog famously had its own Lisp implementations for most of its earlier games. Andy Gavin has written about GOOL, which was a mix of lisp and assembly and of course lisp generating assembly, and I don't think much was written about its followup GOAL until OpenGOAL came along, which... I haven't looked at too much, tbh. I guess it's getting interesting for some people for the ability to play Jak and Daxter on modern hardware (which I've never played, but it looked fun), but I'm more curious whether someone's gonna poke at it to do something completely different.

But I do know some of the vague legends. I don't remember if this is true or where I read it, but one of them is that Sony accused Naughty Dog of accessing some devkit features or APIs they weren't supposed to have access to, because they were pulling off a bunch of things performantly in Crash Bandicoot that were otherwise assumed not to be possible. But nope, just lisp nerds making DSLs that pull off some crazy shit, I guess.

BONUS: Shoutout to Kandria

Oh, speaking of games written in Lisp, I guess Kandria's engine is gonna be FOSS, and that game looks wicked cool, so maybe support their Kickstarter while you still can. It doesn't really fit the definition of "systems lisp" I gave earlier, but this blogpost about its lispy internals is really neat.

Okay, here's everything else, we're done

This was a long-ass post. There are other things that could maybe be discussed, but I'll leave those for another time.

Okay, that's it. Hopefully you found something interesting out of this post. Meanwhile, I was just gonna spend an hour on this. I spent my whole day! Eeek!

The Beginning of A Grueling Diet

By Christine Lemmer-Webber on Wed 13 July 2022

Today I made a couple of posts on the social medias, I will repost them in their entirety:

FUCK eating "DELICIOUS" food. I'm DONE eating delicious food.

What has delicious food ever done for me? Only made me want to eat more of it.

From now on I am only eating BORING-ASS GRUELS that I can eat as much as I want of which will not be much because they are BORING -- fediverse post birdsite post

And then:

I am making a commitment: I will be eating nothing but boring GRUELS until the end of 2022.

Hold me to it. fediverse post birdsite post

I am hereby committing to "a grueling diet" for the rest of 2022, as an experiment. Here are the rules:

  • Gruel, Pottage, and soup, in unlimited quantities. But gruel is preferred.
  • Fresh, steamed, pickled, and roasted fruit, vegetables, and tofu may be eaten in unlimited quantity. Unsweetened baking chocolate and cottage cheese are also permitted.
  • Tea, seltzer water, milk (any kind), and coffee are fine to have.
  • Pottage (including gruel) may be adorned as deemed appropriate, but not too luxuriously. Jams and molasses may be had for a nice breakfast or dessert, but not too much. Generally, only adorn pottages at mealtime; pottages in between mealtimes should be eaten with additions as sparing as possible.
  • Not meaning to be rude to visitors, guests, hosts, and other such good company, exceptions to the diet may be made when visiting, being visited, or going on dates.

I will provide followups to this post throughout the year, justifying the diet, describing how it has affected my health, putting it in historical context, providing recipes, etc.

In the meanwhile, to the rest of 2022: may that it be grueling indeed!

Edit (2022-07-15): Added tofu to the list of acceptable things to additionally eat. Clarified gruel/pottage adornment advice. Added roasting as an acceptable processing method for fruit/vegetables/tofu.

Guile Steel: a proposal for a systems lisp

By Christine Lemmer-Webber on Sat 09 July 2022

Before we get into this kind of stream-of-consciousness outline, I'd like to note that, very topically to this, over at the Spritely Institute (where I'm CTO... did I mention on this blog yet that I'm the CTO of a nonprofit working to improve networked communication on the internet? because I don't think I did) we published a Scheme Primer, and the feedback to it has been just lovely. This post isn't a Spritely Institute thing (at least, not yet, though if its ideas manifested it's possible we might use some of the tech), but since it's about Scheme, I thought I'd mention that.

This blogpost outlines something I've had kicking around in my head for a while: the desire for a modern "systems lisp", you know, kind of like Rust, except hopefully much better than Rust, and in Lisp. (And, if for no other reason, it might simply be better by being written in a Lisp.) But let's be clear: I haven't written anything, this blogpost is a ramble, it's just kind of a set of feelings about what I'd like, what I think is possible.

Let's open by saying that there's no real definition of what a "systems language" is... but more or less what people mean is, "something like C". In other words, what people nowadays consider a low-level language, even though C used to be considered a high level language. And what people really mean is: it's fast, it's statically typed, and it's really for the bit-fiddling types of speed demons out there.

Actually, let's put down a few asides for a moment. People have conflated two different benefits of "statically typed" languages because they've mostly been seen together:

  • Static typing for ahead-of-time more-correct programs
  • Static typing for faster or leaner programs (which subdivides in terms of memory and CPU benefits, more or less)

In the recent FOSS & Crafts episode What is Lisp? we talk a bit about how the assumption that dynamically typed languages are "slow" is really due to lack of hardware support: lisp machines actually had hardware support directly (tagged memory architecture and hardware garbage collection), even wrote low-level parts of their systems like the "graphics drivers" directly in lisp, and were plenty fast, and it would even be possible to have co-processors on which dynamic code (not just lisp) ran at "native speed" (this is what the MacIvory did). But this is all somewhat of an aside, because that's not the world we live in. So as much as I, Christine, would love to have tagged architecture (co-)processors, they probably won't happen. There are some RISC-V tagged architecture things, but I don't think they've gotten very far, and they seem mostly motivated by a security model that doesn't make any sense to me. But I'd love to be wrong on this! I would like tagged RISC-V to succeed! Still, there's the problem of memory management, and I don't think anyone's been working on a hardware garbage collector, or whether that would really be a better thing anyway.

The fact is, there's been a reinforcing effect over the last several decades since the death of the lisp machine: CPUs are optimized for C, and C is optimized for CPUs, and both of them try to optimize for each other. So "systems programming" really means "something like C", because that's what our CPUs like, because that's what our languages like, and the two keep reinforcing each other.

And besides, C is basically the lingua franca of programming languages, right? If you want to make something widely portable, you target the C ABI, because pretty much all programming languages have some sort of C FFI toolkit thing or just make C bindings, and everyone is happy. Except, oh wait, C doesn't actually have an ABI! Well, okay, I guess not, but it doesn't really matter, because the per-platform C ABIs (all those target triples) are what the world actually works with.
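
As a concrete, Guile-flavored illustration of what "everyone just binds against C" looks like in practice, here's a minimal sketch using Guile's dynamic FFI to call cos from libm. The library and symbol names are the usual ones on a GNU/Linux system and may differ elsewhere.

(use-modules (system foreign))

;; Open the shared library and wrap one of its symbols as a Scheme procedure.
;; On some systems you may need the full soname, e.g. "libm.so.6".
(define libm (dynamic-link "libm"))

(define c-cos
  (pointer->procedure double                     ; C return type
                      (dynamic-func "cos" libm)  ; pointer to the C function
                      (list double)))            ; C argument types

;; (c-cos 0.0) => 1.0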

Well also, you gotta target the web, right? And actually the story there is a bit nicer because WebAssembly is actually kinda awesome, and the hope and dream is that all programming languages in some way or another target WebAssembly, and then "you gotta write your thing in Javascript because it's the language of the web!!!" is no longer a thing I have to hear anymore. (Yes, all my friends who work on Javascript, I appreciate you for making it the one programming language which has mostly gotten better over time... hopefully it stays that way, and best of luck.) But the point is, any interesting programming language these days should be targeting WebAssembly, and hopefully not just via Emscripten but by actually targeting WebAssembly directly.

So okay, we have at least two targets for our "system language": C, or something that is C-compatible, and Webassembly. And static type analysis in terms of preventing errors, that's also a useful thing, I won't deny it. (I think the division of "statically typed" and "dynamically typed" languages is probably more of a false one than we tend to think, but that's a future blogpost, to be written.) And these days, it's also how you get speed while also being maximally bit-twiddly fast, because that's how our machines (including the abstract one in Webassembly) are designed. So okay, grumbling about conflating two things aside, let's run with that.

So anyway, I promised to write about this "Guile Steel" thing I've been musing about, and we've gotten this far in the article, and I haven't yet. So, this is, more than a concrete proposal, a call to arms to implement just such a systems language for Guile. I might make a prototype at some point, but you, dear reader, are free to take the idea of "Guile Steel" and run with it. In fact, please do.

So anyway. First, about the name. It's probably pretty obvious based on the name that I'm suggesting this be a language for Guile Scheme. And "Guile" as a name is itself both a continuation of the kind of playfully mischievous names in the Scheme family and its predecessors, and also a pun on the name of the co-creator of the Scheme language, Guy L. Steele. So "Guile Steel" kinda brings that pun home, and "Steel" sounds low-level, close to the metal.

But also, Guile has a lovely compiler tower. It would be nice to put some more lovely things on it! Why not a systems language?

There's some precedent here. The lovely Scheme 48's lowest levels of code (including its garbage collector) are written in an interesting language called PreScheme (more on PreScheme), which is something that's kind of like Scheme, but not really. It doesn't do automatic garbage collection itself, and I think Rust has shown that this area could be improved for a more modern PreScheme system. But you can hack on it at the REPL, and then it can compile to C, and it also has an implementation on top of Common Lisp, so you can bootstrap it a few different ways. PreScheme uses a Hindley-Milner type system; I suspect we can do even better with a propagator approach but that's untested. Anyway, starting by porting PreScheme from Scheme48 to Guile directly would be a good way to get going.

Guile also has some pretty good reasons to want something like this. For one thing, if you're a Guile person, then by gosh you're probably a Guix person. And Rust, it's real popular these days, and for good reasons, we're all better off with fewer memory vulnerabilities in our lives, but you know... it's kind of a pain, packaging-wise, I hear? Actually I've never tried packaging anything in Rust, but Efraim certainly has, and when your presentation starts with the slide "Packaging Rust crates in GNU Guix: How hard could it possibly be?" I guess the answer is going to be that it's a bit of a headache. So maybe it's not the end of the world, but I think it might be nice if on that ground we had our own alternative, but that's just a minor thing.

And I don't think there's anything wrong with Rust, but I'd love to see... can we do better? I feel like it could be hackable, accessible, and it also could, probably, be a lot of fun? That's a good reason, I know I'd like something like this myself, I'd like to play with it, I'd like to be able to use it.

But maybe also... well, let's not beat around the bush, a whole lot of Guile is written in C, and our dear wonderful Andy Wingo has done a lot of lovely things to make us less dependent on C, some half-straps and some baseline compilers and just rewriting a lot of stuff in Scheme and so on and so forth but it would be nice if we had something we could officially rally around as "hey this is the thing we're going to start rewriting things in", because you know, C really is kind of a hard world to trust, and I'd like the programming language environment I rely on to not be so heavily built on it.

And at this point in the article, I have to say that Xerz! pointed out that there is a thing called Carp which is indeed a lisp that compiles to C and you know what, I'm pretty embarrassed for not having paid attention to it... I certainly saw it linked at one point but didn't pay enough attention, and... maybe it needs a closer look. Heck, it's written in Haskell, which is a pretty cool choice.

But hey, the Guile community still deserves a thing of its own, right? What do we have that compiler tower for if we're not going to add some cool things to it? And... gosh, I'd really like to get Guile in the browser, and there are some various paths, and Wingo gave a fun presentation on compiling to Webassembly last year, but wouldn't it be nice if just our whole language stack was written in something designed to compile to either something C-like or... something?

I might do some weekend fiddling towards this direction, but sadly this can't be my main project. As a call to arms, maybe it inspires someone to take it up as theirs though. I will say that if you work on it, I promise to spend some time using whatever you build and trying it out and sending patches. So that's it, that's my stream-of-consciousness post on Guile Steel: currently an idea... maybe eventually a reality?

Site converted to Haunt

By Christine Lemmer-Webber on Tue 05 July 2022

Lo and behold, I've converted the last of the sites I've been managing for ages to Haunt.

Haunt isn't well known. Apparently I am responsible for, er, many of the sites listed on awesome.haunt.page. But you know what? I've been making website things for a long time, and Haunt is honestly the only static site generator I've worked with (and I've worked with quite a few) that's actually truly customizable and programmable and pleasant to work with. And hackable!

This site has seen quite a few iterations... some custom code when I first launched it some time ago, then I used Zine, then I used PyBlosxom, and for quite a few years everything was running on Pelican. But I never liked hacking on any of those... I always kind of begrudgingly opened up the codebase and regretted having to change anything. But Haunt? Haunt's a dream, it's all there and ready for you, and I've even gotten some patches upstream. (Actually I owe Dave a few more, heh.)

Everything is Scheme in Haunt. For instance, this site needed an archive page for ages, one that actually worked and was sensible, and I just never felt like doing it. But in Haunt, it's just delicious Guile flavored Scheme:

(define (archive-tmpl site posts)
  ;; build a map of (year -> posts)
  (define posts-by-year
    (let ((ht (make-hash-table)))      ; hash table we're building up
      (do ((posts posts (cdr posts)))  ; iterate over all posts
          ((null? posts) ht)           ; until we're out of posts
        (let* ((post (car posts))                   ; put this post in year bucket
               (year (date-year (post-date post)))
               (year-entries (hash-ref ht year '())))
          (hash-set! ht year (cons post year-entries))))))
  ;; sort all the years
  (define sorted-years
    (sort (hash-map->list (lambda (k v) k) posts-by-year) >))
  ;; rendering for one year
  (define (year-content year)
    `(div (@ (style "margin-bottom: 10px;"))
          (h3 ,year)
          (ul ,@(map post-content
                     (posts/reverse-chronological
                      (hash-ref posts-by-year year))))))
  ;; rendering for one post within a year
  (define (post-content post)
    `(li
      (a (@ (href ,(post-uri site post)))
         ,(post-ref post 'title))))
  ;; the whole page
  (define content
    `(div (@ (class "entry"))
          (h2 "Blog archive (by year)")
          (ul ,@(map year-content sorted-years))))
  ;; render within base template
  (base-tmpl site content))

Lambda, the ultimate static site generator!

At any rate, I expect some things are broken, to be fixed, etc. Let me know if you see 'em. Heck, you can browse the site contents should you be so curious!

But is there really anything more boring than a meta "updated my website code" post like this? Anyway, in the meanwhile I've corrected straggling instances of my deadname which were sitting around. The last post I made was me coming out as trans, and... well a lot has changed since then. So I guess I've got some more things to write. And also this whole theme... well I like some of it but I threw it together when I was but a wee web developer, back before CSS was actually nice to write, etc. So maybe I need to overhaul the look and feel too. And I always meant to put in that project directory, and ooh maybe an art gallery, and so on and so on...

But hey, I like updating my website again! So maybe I actually will!

Hello, I'm Christine Lemmer-Webber, and I'm nonbinary trans-femme

By Christine Lemmer-Webber on Mon 28 June 2021

NOTE: This post is out of date. I no longer go by "Chris", I only go by Christine at this point. See my newer blogpost about my transition for more information. I keep this here as a preservation of history and the journey of which I've undergone. I've updated the title to switch "Chris" -> "Christine", but otherwise I've left the rest as it was originally written.

A picture of Christine and Morgan together

I recently came out as nonbinary trans-femme. That's a picture of me on the left, with my spouse Morgan Lemmer-Webber on the right.

In a sense, not much has changed, and so much has changed. I've dropped the "-topher" from my name, and given the common tendency to apply gender to pronouns in English, please either use nonbinary pronouns or feminine pronouns to apply to me. Other changes are happening as I wander through this space, from appearance to other things. (Probably the biggest change is finally achieving something resembling self-acceptance, however.)

If you want to know more, Morgan and I did a podcast episode which explains more from my present standing, and also explains Morgan's experiences with being demisexual, which not many people know about! (Morgan has been incredible through this whole process, by the way.)

But things may change further. Maybe a year from now those changes may be even more drastic, or maybe not. We'll see. I am wandering, and I don't know where I will land, but it won't be back to where I was.

At any rate, I've spent much of my life not being able to stand myself for how I look and feel. For most of my life, I have not been able to look at myself in a mirror for more than a second or two due to the revulsion I felt at the person I saw staring back at me. The last few weeks have been a shift change for me in that regard... it's a very new experience to feel so happy with myself.

I'm only at the beginning of this journey. I'd appreciate your support... people have been incredibly kind to me by and large so far but like everyone who goes through a process like this, it's very hard in those experiences where people aren't. Thank you to everyone who has been there for me so far.

Beyond the shouting match: what is a blockchain, really?

By Christine Lemmer-Webber on Sat 24 April 2021

If there's one thing that's true about the word "blockchain", it's that these days people have strong opinions about it. Open your social media feed and you'll see people either heaping praises on blockchains, calling them the saviors of humanity, or condemning them as destroying and burning down the planet and making the rich richer and the poor poorer and generally all the other kinds of fights that people like to have about capitalism (also a quasi-vague word occupying some hotly contested mental real estate).

There are good reasons to hold opinions about various aspects of what are called "blockchains", and I too have some pretty strong opinions I'll be getting into in a followup article. The followup article will be about "cryptocurrencies", which many people also seem to think of as synonymous with "blockchains", but this isn't particularly true either; we'll deal with that one then.

In the meanwhile, some of the fighting on the internet is kind of confusing, but even more importantly, kind of confused. Some of it might be what I call "sportsballing": for whatever reason, for or against blockchains has become part of your local sportsball team, and we've all got to be team players or we're gonna let the local team down already, right? And the thing about sportsballing is that it's kind of arbitrary and it kind of isn't, because you might pick a sportsball team because you did all your research or you might have picked it because that just happens to be the team in your area or the team your friends like, but god almighty once you've picked your sportsball team let's actually not talk against it because that might be giving in to the other side. But sportsballing kind of isn't arbitrary either because it tends to be initially connected to real communities of real human beings and there's usually a deeper cultural web than appears at surface level, so when you're poking at it, it appears surface-level shallow but there are some real intricacies beneath the surface. (But anyway, go sportsball team.)

But I digress. There are important issues to discuss, yet people aren't really discussing them, partly because people mean different things. "Blockchain" is a strange term that encompasses a wide idea space, and what people consider or assume essential to it varies just as widely, and thus when two people are arguing they might not even be arguing about the same thing. So let's get to unpacking.

"Blockchain" as handwaving towards decentralized networks in general

Years ago I was at a conference about decentralized networked technology, and I was having a conversation with someone I had just met. This person was telling me how excited they were about blockchains... finally we have decentralized network designs, and so this seems really useful for society!

I paused for a moment and said yes, blockchains can be useful for some things, though they tend to have significant costs or at least tradeoffs. It's good that we also have other decentralized network technology; for example, the ActivityPub standard I was involved in had no blockchains but did rely on the much older "classic actor model."

"Oh," the other person said, "I didn't know there were other kinds of decentralized network designs. I thought that 'blockchain' just meant 'decentralized network technology'."

It was as if a light had turned on and illuminated the room for me. Oh! This explained so many conversations I had been having over the years. Of course... for many people, blockchains like Bitcoin were the first ever exposure they had (aside from email, which maybe they never gave much thought to as being decentralized) of something that involved a decentralized protocol. So for many people, "blockchain" and "decentralized technology" are synonyms, if not in technical design, then in terms of their understanding of the space.

Mark S. Miller, who was standing next to me, smiled and gave a very interesting followup: "There is only one case in which you need a blockchain, and that is in a decentralized system which needs to converge on a single order of events, such as a public ledger dealing with the double spending problem."

Two revelations at once. It was a good conversation... it was a good start. But I think there's more.

Blockchains are the "cloud" of merkle trees

As time has gone on, the discourse over blockchains has gotten more dramatic. This is partly because what a "blockchain" is hasn't been well defined.

All terminology exists on an ever-present battle between fuzziness and crispness, with some terms being much clearer than others. The term "boolean" has a fairly crisp definition in computer science, but if I ask you to show me your "stove", the device you show me today may be incomprehensible to someone's definition a few centuries ago, particularly in that today it might not involve fire. Trying to define a stove in terms of its functionality can also cause confusion: if I asked you to show me a stove, and you showed me a computer processor or a car engine, I might be fairly confused, even though technically people enjoy showing off that they can cook eggs on both of these devices when they get hot enough. (See also: Identity is a Katamari, language is a Katamari explosion.)

Still, some terms are fuzzier than others, and as far as terms go, "blockchain" is quite fuzzy. Hence my joke: "Blockchains are the 'cloud' of merkle trees."

This ~joke tends to get a lot of laughs out of a particular kind of audience, and confused looks from others, so let me explain. The one thing everyone seems to agree on is that it's a "chain of blocks", but all that really seems to mean is that it's a merkle tree... really, just an immutable datastructure where one node points at the parent node which points at the parent node all the way up. The joke then is not that this merkle tree runs on a cloud, but that "cloud computing" means approximately nothing: it's marketing speak for some vague handwavey set of "other peoples' computers are doing computation somewhere, possibly on your behalf sometimes." Therefore, "cloud of merkle trees" refers to the vagueness of the situation. (As everyone knows, jokes are funnier when fully explained, so I'll turn on my "STUDIO LAUGHTER" sign here.)

So, a blockchain is a chain of blocks, ie a merkle tree, and I mean, technically speaking, that means that Git is a blockchain (especially if the commits are signed), but when you see someone arguing on the internet about whether or not blockchains are "good" or "bad", they probably weren't thinking about git, which aside from having a high barrier to entry in its interface and some concerns about the hashing algorithm used, isn't really something likely to drag you into an internet flamewar.
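
If you want the "chain of blocks" part in miniature, here's a toy sketch (mine, not anything real): each block stores a hash of its parent, so rewriting history invalidates every later link. Guile's built-in string-hash stands in for a real cryptographic hash, and real systems hash a canonical serialization rather than printed output, so treat this purely as an illustration of the datastructure.

;; A block records the hash of its parent block (or #f for the genesis
;; block) plus some payload.
(define (make-block parent-hash payload)
  (list 'block parent-hash payload))

(define (block-hash blk)
  ;; Stand-in hash: a real chain would use something like SHA-256 over a
  ;; canonical serialization of the block.
  (string-hash (object->string blk)))

(define (extend-chain tip payload)
  (make-block (and tip (block-hash tip)) payload))

;; (define genesis (extend-chain #f "hello"))
;; (define next    (extend-chain genesis "world"))
;; Verifying `next' means recomputing (block-hash genesis) and checking it
;; against the parent hash stored in `next'.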

"Blockchain" is to "Bitcoin" what "Roguelike" is to "Rogue"

These days it's common to see people either heaping praises on blockchains or criticizing them, and those people tend to be shouting past one another. I'll save unpacking that for another post. In the meanwhile though, it's worth noting that people might not be talking about the same things.

What isn't in doubt is whether or not Bitcoin is a blockchain... trying to understand and then explore the problem space around Bitcoin is what created the term "blockchain". It's a bit like the video game genre of roguelikes, which started with the game Rogue, particularly explored and expanded upon in NetHack, and then suddenly exploding into the indie game scene as a "genre" of its own. Except the genre has become fuzzier and fuzzier as people have explored the surrounding space. What is essential? Is a grid based layout essential? Is a non-euclidean grid acceptable? Do you have to provide an ascii or ansi art interface so people can play in their terminals? Dare we allow unicode characters? What if we throw out terminals altogether and just play on a grid of 2d pixelart? What about 3d art? What about permadeath? What about the fantasy theme? What about random level generation? What are the key features of a roguelike?

Well now we're at the point where I pick up a game like Blazing Beaks and it calls itself a "roguelite", which I guess is embracing the point that terminology has gotten extremely fuzzy... this game feels more like Robotron than Rogue.

So... if "blockchain" is to Bitcoin what "roguelike" is to Rogue, then what's essential to a blockchain? Does the blockchain have to be applied to a financial instrument, or can it be used to store updateable information about eg identity? Is global consensus required? Or what about a "trusted quorum" of nodes, such as in Hyperledger? Is "mining" some kind of asset a key part of the system? Is proof of work acceptable, or is proof of stake okay? What about proof of space, proof of space-time, proof of pudding?

On top of all this, some of the terms around blockchains have been absorbed as if into them. For instance, I think to many people, "smart contract" means something like "code which runs on a blockchain" thanks to Ethereum's major adoption of the term, but the E programming language described "smart contracts" as the "likely killer app of distributed capabilities" all the way back in 1999, and was borrowing the term from Nick Szabo, but really the same folks working on E had described many of those same ideas in the Agoric Papers back in 1988. Bitcoin wasn't even a thing at all until at least 2008, so depending on how you look at it, "smart contracts" precede "blockchains" by one or two decades. So "blockchain" has somehow even rolled up terms outside of its space as if within it. (By the way, I don't think anyone has given a good and crisp definition for "smart contract" either despite some of these people trying to give me one, so let me give you one that I think is better and embraces its fuzziness: "Smart contracts allow you to do the kinds of things you might do with legal contracts, but relying on networked computation instead of a traditional state-based legal system." It's too bad more people don't know about the huge role that Mark Miller's "split contracts" idea plays in this space, because that's what finally makes the idea make sense... but that's a conversation for another time.) (EDIT: Well, after I wrote this, Kate Sills lent me her definition, which I think is the best one: "Smart contracts are credible commitments using technology, and outside a state-provided legal system." I like it!)

So anyway, the point of this whole section is to say that, kind of like with roguelikes, people are thinking of different things as essential to blockchains. Everyone roughly agrees on the jumping-off point of ideas, but since not everyone agrees from there, it's good to check in when we're having the conversation. Wait, you do/don't like this game because it's a roguelike? Maybe we should check in on what features you mean. Likewise for blockchains. Because if you're blaming blockchains for burning down the planet, more than likely you're not condemning signed git repositories (or at least, if you are, you're probably objecting to something other than the fundamental datastructure... probably).

This is an "easier said than done" kind of thing though, because of course, I'm kind of getting into some "in the weeds" level of details here... but it's the "in the weeds" where all the substance of the disagreements really are. The person you are talking with might not actually even know or consider the same aspects to be essential that you consider essential though, so taking some time to ask which things we mean can help us lead to a more productive conversation sooner.

"Blockchain" as an identity signal

First, a digression. One thing that's kind of curious about the term "virtue signal" is that in general it tends to be used as a kind of virtue signal. It's kind of like the word hipster in the previous decade, which weirdly seemed to be obsessively and pejoratively used by people who resembled hipsters more than anyone else. Hence I used to make a joke called "hipster recursion", which is that since hipsters seem more obsessed with pejoratively labeling hipsterism than anyone else, there's no way to call someone a "hipster" without yourself taking on hipster-like traits, and so inevitably even this conversation is N-levels deep into hipster recursion for some numerical value of N.

"Virtue signaling" appears similar, but even more ironically so (which is a pretty amazing feat given how much of hipsterdom seems to surround a kind of inauthentic irony). When I hear someone say "virtue signaling" with a kind of sneer, part of that seems to be acknowledging that other people are sending signals merely to impress others that they are some kind of the same group but it seems as if it's being raised as in a you-know-and-I-know-that-by-me-acknowledging-this-I'm-above-virtue-signaling kind of way. Except that by any possible definition of virtue signaling, the above appears to be a kind of virtue signaling, so now we're into virtue signaling recursion.

Well, one way to claw our way out of this rabbithole is to drop the pejorative aspect of it and just acknowledge that signaling is something that everyone does. Hence me saying "identity signaling" here. You can't really escape identity signaling, or even sportsballing, but you can acknowledge that it's a thing that we all do, and there's a reason for it: people only have so much time to find out information about each other, so they're searching for clues that they might align and that, if they introduce you to their peer group, you might align with them as well, without access to a god-like view of the universe where they know exactly what you think and exactly what kinds of things you've done and exactly what way you'll behave in the future or whether or not you share the same values. (After all, what else is virtue ethics but an ethical framework that takes this in its most condensed form as its foundation?) But it's true that at its worst, this seems to result in shallow, quick, judgmental behavior, usually based on stereotypes of the other side... which can be unfortunate or unfair to whomever is being talked about. But on the flip side, people also identity signal to each other because they want to create a sense of community and bonding. That's what a lot of culture is. It's worth acknowledging then that this occurs, recognizing its use and limitations, without pretending that we are above it.

So wow, that's quite a major digression, so now let's get back to "identity signaling". There is definitely a lot of identity signaling that tends to happen around the word "blockchain", for or against. With the critiques of the worst of this, I tend to agree: I find much of the machismo and hyper-white-male-privilege that surrounds some of the "blockchain" space uncomfortable or cringey.

But I also have some close friends who are not male and/or are people of color, and they tend to suffer the worst of it from these communities internally, yet also seem to find things of value in them, and particularly seem to feel squeezed externally when the field is reduced to these kinds of (anti?-)patterns. There's something sad about that: on the one hand I see friends complaining about blockchain from the outside on behalf of people who, on the inside, seem to be both struggling within those communities and then kind of crushed by being lumped into the same identified problems externally. This is hardly a unique problem, but it's worth highlighting for a moment, I think.

But anyway, I've taken a bunch of time on this, more than I care to, maybe because (irony again?) I feel that too much of public conversation is also hyperfocusing on this aspect... whether there's a subculture around blockchain, whether or not that subculture is good or bad, etc. There's a lot worthwhile in unpacking this discourse-wise, but some of the criticisms of blockchains as a technology (to the extent it even is coherently one) seem to get lumped into all of this. It's good to provide thoughtful cultural critique, particularly one which encourages healthy social change. And we can't escape identity signaling. But as someone who's trying to figure out what properties of networked systems we do and don't want, I feel like I'm trying to navigate the machine and, for whatever reason, my foot keeps getting caught in the gears here. Well, maybe that itself is pointing to some architectural mistakes, but socially architectural ones. Still, it's useful to be able to draw boundaries around it so that we know where this part of the conversation begins and ends.

"Blockchain" as "decentralized centralization" (or "decentralized convergence")

One of the weird things about people having the idea of "blockchains" as being synonymous with "decentralization" is that it's kind of both very true and very untrue, depending on what abstraction layer you're looking at.

For a moment, I'm going to frame this in harsh terms: blockchains are decentralized centralization.

What? How dare I! You'll notice that this section is in harsh contrast to the "blockchain as handwaving towards decentralized networks in general" section... well, I am acknowledging the decentralized aspect of it, but the weird thing about a blockchain is that it's a decentralized set of nodes converging on (creating a centrality of!) a single abstract machine.

Contrast this with classic actor model systems like CapTP in Spritely Goblins, or, as less good examples (because they aren't quite as behavior-oriented as they are correspondence-oriented, usually), ActivityPub or SMTP (ie, email). All of these systems involve decentralized computation and collaboration stemming from sending messages to actors (aka "distributed objects"). With CapTP this is especially clear and extreme: computations happen in parallel across many collaborating machines (and even better, many collaborating objects on many collaborating machines), and the behavior of other machines and their objects is often even opaque to you. (CapTP survives this in a beautiful way, being able to do well on anonymous, peer to peer, "mutually suspicious" networks. But maybe read my rambling thoughts about CapTP elsewhere.)

While to some degree there are some very clever tricks in the world of cryptography that may let you get back some of that opacity, they tend to be very expensive, adding yet another cost on top of the overhead a blockchain already cannot escape. A multi-party blockchain with some kind of consensus will always, by definition, be slower than a single machine operating alone.

If you are irritated by this framing: good. It's probably good to be irritated by it at least once, if you can recognize the portion of truth in it. But maybe that needs some unpacking to get there. It might be better to say "blockchains are decentralized convergence", but I have some other phrasing that might be helpful.

"Blockchain" as "a single machine that many people run"

There's value in having a single abstract machine that many people run. The most famous source of value is the "double spending problem". How do we make sure that when someone has money, they don't spend that money twice?

Traditional accounting solves this with a linear, sequential ledger, and it turns out that the right solution boils down to the same thing in computers. Emphasis on sequential: in order to make sure money balances out right, we really do have to be able to order things.

Here's the thing though: the double spending problem was, in a sense, solved in terms of single computers a long time ago in the object capability security community. Capability-based Financial Instruments was written about a decade before blockchains even existed and showed off how to make a "mint" (kind of like a fiat-currency bank) that can be implemented in about 25 lines of code in the right architecture (I've ported it to Goblins, for instance), and yet it both supports distributed accounts and is robust against corruption on errors.

However, this seems to be running on a "single-computer based machine", and again operates like a fiat currency. Anyone can create their own fiat currency like this, and they are cheap, cheap, cheap (and fast!) to make. But it does rely on sequentiality to some degree to operate correctly (avoiding a class of attacks called "re-entrancy attacks").
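For a flavor of what that looks like, here's a rough closure-based sketch of the mint/purse pattern in plain Guile. This is not the actual E code nor my Goblins port, and it ignores concurrency entirely; it's just meant to show how little machinery the core idea needs:

    ;; A toy ocap-style mint: the mint can make purses, purses hold
    ;; balances, and deposits only move money between purses of the same
    ;; mint.  (A sketch of the pattern, not production code.)
    (define (make-mint)
      (define ledger (make-hash-table))              ; purse -> balance
      (define (balance-of purse) (hashq-ref ledger purse 0))
      (define (make-purse initial-balance)
        (define purse
          (lambda (msg . args)
            (case msg
              ((balance) (balance-of purse))
              ;; (purse 'deposit AMOUNT SOURCE): move AMOUNT out of SOURCE
              ;; (another purse of this same mint) and into this purse.
              ((deposit)
               (let ((amount (car args))
                     (source (cadr args)))
                 (unless (hashq-get-handle ledger source)
                   (error "not a purse of this mint" source))
                 (unless (<= 0 amount (balance-of source))
                   (error "insufficient funds" amount))
                 (hashq-set! ledger source (- (balance-of source) amount))
                 (hashq-set! ledger purse  (+ (balance-of purse) amount))))
              (else (error "unknown message" msg)))))
        (hashq-set! ledger purse initial-balance)
        purse)
      make-purse)

    ;; Whoever holds the mint capability can conjure money from thin air;
    ;; everyone else can only move balances they already hold.
    (define make-purse (make-mint))
    (define alice-purse (make-purse 100))
    (define bob-purse   (make-purse 0))
    (bob-purse 'deposit 30 alice-purse)
    (list (alice-purse 'balance) (bob-purse 'balance))   ; => (70 30)

Purses from some other mint simply aren't in this mint's ledger, so there's nothing to deposit from; that's the whole access-control story, no identity checks required.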

But this "single-computer based machine" might bother you for a couple reasons:

  • We might be afraid the server might crash and service will be interrupted, or worse yet, we will no longer be able to access our accounts.

  • Or, even if we could trade these on an open market, and maybe diversify our portfolio, maybe we don't want to have to trust a single operator or even some appointed team of operators... maybe we have a lot of money in one of these systems and we want to be sure that it won't suddenly vanish due to corruption.

Well, if our code operates deterministically, then what if from the same initial conditions (or saved snapshot of the system) we replay all input messages to the machine? Functional programmers know: we'll end up with the same result.

So okay, we might want to be sure this doesn't accidentally get corrupted, maybe for backup reasons. So maybe we submit the input messages to two computers, and then if one crashes, we just continue on with the second one until the other comes up, and then we can restore the first one from the progress the second machine made while the first one was down.

Oh hey, this is already technically a blockchain. Except our trust model is that we implicitly trust both machines.

Hm. Maybe we're now worried that we might have top-down government pressure to coerce some behavior on one of our nodes, or maybe we're worried that someone at a local datacenter is going to flip some bits to make themselves rich. So we actually want to spread this abstract machine out over three countries. So okay, we do that, and now we set a rule agreeing on what the series of input messages is... if two of three nodes agree, that's good enough. Oh hey look, we've just invented the "small-quorum-style" blockchain/ledger!

(And yes, you can wire up Goblins to do just this; a hint as to how is seen in the Terminal Phase time travel demo. Actually, let's come back to that later.)
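To make the determinism-plus-replay point concrete, here's a tiny plain-Guile sketch (a toy, not Goblins or any real ledger): the "machine" is just a pure step function folded over a log of input messages, so any replica that starts from the same snapshot and replays the same log lands on exactly the same state.

    (use-modules (ice-9 match)
                 (srfi srfi-1))

    ;; Toy deterministic machine: the state is an alist of balances and
    ;; each input message is a (transfer FROM TO AMOUNT) list.
    (define (set-balance state who amount)
      (cons (cons who amount) (alist-delete who state eq?)))

    (define (step state input)
      "One deterministic state transition; invalid inputs change nothing."
      (match input
        (('transfer from to amount)
         (let ((from-bal (assq-ref state from))
               (to-bal   (assq-ref state to)))
           (if (and from-bal to-bal (<= 0 amount from-bal))
               (set-balance (set-balance state from (- from-bal amount))
                            to (+ to-bal amount))
               state)))
        (_ state)))

    (define (replay snapshot inputs)
      "Fold the input log over the snapshot."
      (fold (lambda (input state) (step state input)) snapshot inputs))

    ;; Two machines as failover, or three spread across three countries:
    ;; feed them the same log and they all compute exactly the same thing.
    (define snapshot '((alice . 100) (bob . 0)))
    (define input-log '((transfer alice bob 30) (transfer bob alice 5)))
    (replay snapshot input-log)   ; => ((alice . 75) (bob . 25))

All the interesting differences between the failover, quorum, and public-chain setups are about who gets to decide what goes into the log and in what order; the replay part is the same everywhere.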

Well, okay. This is probably good enough for a private financial asset, but what about if we want to make something more... global? Where nobody is in charge!

Well, we could do that too. Here's what we do.

  • First, we need to prevent a "swarming attack" (okay, this is generally called a "sybil attack" in the literature, but for a multitude of reasons I won't get into, I don't like that term). If a global set of peers are running this single abstract machine, we need to make sure there aren't invocations filling up the system with garbage, since we all basically have to keep that information around. Well... this is exactly where those proof-of-foo systems first come in; in fact Proof of Work's origin is in something called Hashcash, which was designed to add "friction" to disincentivize spam for email-like systems (a small sketch of the idea follows below). If we don't do something friction-oriented in this category, our ledger is going to be too easily filled with garbage too fast. We also need to agree on what the order of messages is, so we can use this mechanism in conjunction with a consensus algorithm.

  • When are new units of currency issued? Well, in our original mint example, the person who set up the mint was the one given the authority to make new money out of thin air (and they can hand out attenuated versions of that authority to others as they see fit). But what if instead of handing this capability out to individuals we handed it out to anyone who can meet an abstract requirement? For instance, in zcap-ld an invoker can be any kind of entity which is specified with linked data proofs, meaning those entities can be something other than a single key... for instance, what if we delegated to an abstract invoker that was specified as being "whoever can solve the state of the machine's current proof-of-work puzzle"? Oh my gosh! We just took our 25-line mint and extended it for mining-style blockchains. And the fundamental design still applies!

With these two adjustments, we've created a "public blockchain" akin to Bitcoin. And technically we don't need to use proof-of-work for either... we could swap in different mechanisms of friction / qualification.
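Here's the hashcash-style sketch mentioned above, again leaning on guile-gcrypt's sha256. To be clear, this illustrates the friction idea, not Bitcoin's or Hashcash's actual encoding: producing a qualifying nonce takes many hash attempts, while checking one takes a single hash.

    (use-modules (gcrypt hash)          ; sha256 : bytevector -> bytevector
                 (rnrs bytevectors))

    (define (hash-with-nonce message nonce)
      (sha256 (string->utf8
               (string-append message ":" (number->string nonce)))))

    (define (enough-work? digest difficulty)
      "True if DIGEST begins with DIFFICULTY zero bytes."
      (let loop ((i 0))
        (or (= i difficulty)
            (and (zero? (bytevector-u8-ref digest i))
                 (loop (+ i 1))))))

    (define (mine message difficulty)
      "Search for a nonce whose hash meets DIFFICULTY: costly to produce."
      (let loop ((nonce 0))
        (if (enough-work? (hash-with-nonce message nonce) difficulty)
            nonce
            (loop (+ nonce 1)))))

    ;; Verifying costs one hash; producing the nonce cost many:
    (define nonce (mine "pay bob 3 quatloos" 2))
    (enough-work? (hash-with-nonce "pay bob 3 quatloos" nonce) 2)  ; => #t

Swap enough-work? for some other qualification rule (a stake check, a quorum of signatures) and the surrounding machine doesn't have to change.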

If the set of inputs are stored as a merkle tree, then all of the system types we just looked at are technically blockchains:

  • A second machine as failover in a trusted environment

  • Three semi-trusted machines with small-scale private consensus

  • A public blockchain without global trust, with swarming-attack resistance and an interesting abstract capability accessible to anyone who can meet the abstract requirement (in this case, to issue some new currency).

The difference for choosing any of the above is really a question of: "what are your trust/failover requirements?"

Blockchains as time travel plus convergent inputs

If this doesn't sound believable to you, that you could create something like a "public blockchain" on top of something like Goblins so easily, consider how we might extend time travel in Terminal Phase to add multiplayer. As a reminder, here's an image:

Time travel in Spritely Goblins shown through Terminal Phase

Now, a secret thing about Terminal Phase is that the gameplay is deterministic (the random starfield in the background is not, but the gameplay is) and runs at a fixed frame rate. This means that given the same set of keyboard inputs, the game will always play the same, every time.

Okay, well let's say we wanted to hand someone a way to replay our last game. Chess games can be fully replayed with a very condensed notation, meaning that merely by handing someone a short list of codes, they can precisely replay the same game, every time, deterministically.

Well okay, as a first attempt at thinking this through, what if for some game of Terminal Phase I played we wrote down each keystroke I entered on my keyboard, on every tick of the game? Terminal Phase runs at 30 ticks per second. So okay, if you replay these, each one at 30 ticks per second, then yeah, you'd end up with the same gameplay every time.

It would be simple enough for me to encode these as a linked list (cons, cons, cons!) and hand them to you. You could descend all the way to the root of the list and start playing them back up (ie, play the list in reverse order) and you'd get the same result as I did. I could even stream new events to you by giving you new items to tack onto the front of the list, and you could "watch" a game I was playing live.
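In code, that replay log is about as small as it sounds. A sketch (plain Guile, not the actual Terminal Phase internals):

    ;; Newest tick at the head, like consing onto a stream:
    (define (record-tick input-log keys)
      (cons keys input-log))

    ;; Walk back to the oldest entry and play forward one tick at a time,
    ;; calling PLAY-TICK! (the deterministic game step) on each tick's keys.
    (define (replay input-log play-tick!)
      (for-each play-tick! (reverse input-log)))

Streaming a live game is just handing you new items to tack onto the front of your copy of the list as I produce them.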

So now imagine that you and I want to play Terminal Phase together, over the network. Let's imagine there are two ships, and for simplicity, we're playing cooperatively. (The same ideas can be extended to competitive play, but for narrating how real-time games work it's easier to start with a cooperative assumption.)

We could start out by wiring things up on the network so that I am allowed to press certain keys for player 1 and you are allowed to press certain keys for player 2. (Now it's worth noting that a better way to do this doesn't involve keys on the keyboard but capability references, and really that's how we'd do things if we were to bring this multiplayer idea live, but I'm trying to provide a metaphor that's easy to think about without introducing complicated-sounding terms like "c-lists" and "vat turns" that we ocap people seem to like.) So, as a first attempt, maybe if we were playing on a local area network or something, we could synchronize at every game tick: I share my input with you and you share yours, and then and only then do both of our systems actually input them into that game-tick's inputs. We'll have achieved a kind of "convergence" as to the current game state on every tick. (EDIT: I wrote "a kind of consensus" instead of "a kind of convergence" originally, and that was an error, because it misleads on what consensus algorithms tend to do.)

Except this wouldn't work very well if you and I were living far away from each other and playing over the internet... the lag time for doing this for every game tick might slow the system to a crawl... our computers wouldn't get each others' inputs as fast as the game was moving along, and would have to pause until we received each others' moves.

So okay, here's what we'll do. Remember the time-travel GUI above? As you can see, we're effectively restoring from an old snapshot. Oh! So okay. We could save a snapshot of the game every second, and then send each other our inputs as fast as we can, knowing they'll lag. So, without having seen your inputs yet, I could move my ship up and to the right and fire (and send that I did that to you). My game would be in a "dirty state"... I haven't actually seen what you've done yet. Now suddenly I get the last set of moves you did over the network... in the last five frames, you moved down and to the left and fired. Now we've got each others' inputs... what our systems can do is secretly time travel behind the scenes to the last snapshot, then fast forward, replaying both of our inputs on each tick up until the latest state where we've both seen each others' moves (but we wouldn't show the fast forward process, we'd just show the result with the fast forward having been applied). This can happen fast enough that I might see your ship jump forward a little, and maybe your bullet will kill the enemy instead of mine and the scores shift so that you actually got some points that for a moment I thought I had, but this can all happen in realtime and we don't need to slow down the game at all to do it.
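Here's that rollback idea as a rough sketch, with the capability wiring and all the rendering hand-waved away (the step function and the per-tick input lists are stand-ins, not real Terminal Phase structures):

    (use-modules (srfi srfi-1))

    (define (fast-forward snapshot ticks step)
      "Replay TICKS (per-tick input lists, oldest first) on top of SNAPSHOT."
      (fold (lambda (tick-inputs state) (step state tick-inputs))
            snapshot ticks))

    (define (merge-ticks local remote)
      "Pair my per-tick inputs with yours (both oldest first, same ticks)."
      (map append local remote))

    (define (catch-up snapshot local-ticks remote-ticks step)
      "Rewind to the snapshot, then fast-forward through the merged inputs."
      (fast-forward snapshot (merge-ticks local-ticks remote-ticks) step))

    ;; Toy usage with a step that just accumulates each tick's inputs:
    (define (toy-step state tick-inputs) (cons tick-inputs state))
    (catch-up '() '((p1-up) (p1-fire)) '((p2-left) (p2-fire)) toy-step)
    ;; => ((p1-fire p2-fire) (p1-up p2-left))

The snapshots are what keep the fast-forward cheap: you only ever replay the last second or so of inputs, never the whole game.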

Again, all the above can be done, but with actual wiring of capabilities instead of the keystroke metaphor... and actually, the same set of ideas can be done with any kind of system, not just a game.

And oh hey, technically, technically, technically if we both hashed each of our previous messages in the linked list and signed each one, then this would qualify as a merkle tree and then this would also qualify as a blockchain... but wait, this doesn't have anything to do with cryptocurrencies! So is it really a blockchain?

"Blockchain" as synonym for "cryptocurrency" but this is wrong and don't do this one

By now you've probably gotten the sense that I really was annoyed with the first section of "blockchain" as a synonym for "decentralization" (especially because blockchains are decentralized centralization/convergence) and that is completely true. But even more annoying to me is the synonym of "blockchain" with "cryptocurrency".

"Cryptocurrency" means "cryptographically based currency" and it is NOT synonymous with blockchains. Digicash precedes blockchains by a dramatic amount, but it is a cryptocurrency. The "simple mint" type system also precedes blockchains and while it can be run on a blockchain, it can also run on a solo computer/machine.

But as we saw, we could perceive multiplayer Terminal Phase as technically, technically a blockchain, even though it has nothing to do with currencies whatsoever.

So again a blockchain is just a single, abstract, sequential machine, run by multiple parties. That's it. It's more general than cryptocurrencies, and it's not exclusive to implementing them either. One is a kind of programming-plus-cryptography-use-case (cryptocurrencies), the other one is a kind of abstracted machine (blockchains).

So please. They are frequently combined, but don't treat them as the same thing.

Blockchains as single abstract machines on a wider network

One of my favorite talks is Mark Miller's Programming Secure Smart Contracts talk. Admittedly, I like it partly because it well illustrates some of the low-level problems I've been working on, and that might not be as useful to everyone else. But it has this lovely diagram in it:

Machines / Vats / Ocaps / Erights layers of abstractions

This is better understood by watching the video, but the abstraction layers described here are basically as follows:

  • "Machines" are the lowest layer of abstraction on the network, but there a variety of kinds of machines. Public blockchains are one, quorum blockchains are another, solo computer machines yet another (and the simplest case, too). What's interesting then is that we can see public chains and quorums abstractly demonstrated as machines in and of themselves... even though they are run by many parties.

  • Vats are the next layer of abstraction; these are basically the "communicating event loops"... actors/objects live inside them, and more or less these things run sequentially.

  • Replace "JS ocaps" with "language ocaps" and you can see actors/objects in both Javascript and Spritely living here.

  • Finally, at the top are "erights" and "smart contracts", which feed into each other... "erights" are "exclusive electronic rights", and "smart contracts" are generally patterns of cooperation involving achieving mutual goals despite suspicion, generally involving the trading of these erights things (but not necessarily).

Okay, well cool! This finally explains the worldview through which I see blockchains. And we can see a few curious things:

  • The "public chain" and "quorum" kinds of machines still boil down to a single, sequential abstract machine.

  • Object connections exist between the machines... ocap security. No matter whether it's run by a single computer or multiple.

  • Public blockchains, quorum blockchains, solo-computer machines all talk to each other, and communicate between object references on each other.

Blockchains are not magical things. They are abstracted machines on the network. Some of them have special rules that let whoever can prove they qualify for them access some well-known capabilities, but really they're just abstracted machines.

And here's an observation: you aren't ever going to move all computation to a single blockchain. Agoric's CEO, Dean Tribble, explained beautifully why on a recent podcast:

One of the problems with Ethereum is it is as tightly coupled as possible. The entire world is a single sequence of actions that runs on a computer with about the power of a cell phone. Now, that's obviously hugely valuable to be able to do commerce in a high-integrity fashion, even if you can only share a cell phone's worth of compute power with the entire rest of the world. But that's clearly gonna hit a brick wall. And we've done lots of large-scale distributed systems whether payments or cyberspace or coordination, and the fundamental model that covers all of those is islands of sequential programming in a sea of asynchronous communication. That is what the internet is about, that's what the interchain is about, that's what physics requires you to do if you want a system to scale.

Put this way, it should be obvious: are we going to replace the entire internet with something that has the power of a cell phone? To ask the question is to know the answer: of course not. Even when we do admit blockchain'y systems into our system, we're going to have to have many of them communicating with each other.

Blockchains are just machines that many people/agents run. That's it.

Some of these are encoded with some nice default programming to do some useful things, but all of them can be done in non-blockchain systems because communicating islands of sequential processes is the generalization. You might still want a blockchain, ie you might want multiple parties running one of those machines as a shared abstract machine, but how you configure that blockchain from there might depend on your trust and integrity requirements.

What do I think of blockchains?

I've covered a wide variety of perspectives of "what is a blockchain" in this article.

On the worse end of things are the parts involving hand-wavey confusion about decentralization, mistaken ideas of them being tied to cryptocurrencies, marketing hype, cultural assumptions, and some real, but not intrinsic, cultural problems.

In the middle, I am particularly keen on highlighting the similarity between the term "blockchain" and the term "roguelike", how both of them might boil down to some key ideas or not, but more importantly they're both a rough family of ideas that diverge from one highly influential source (Bitcoin and Rogue respectively). This is also the source of much of the "shouting past each other", because many people are referring to different components that they view as essential or inessential. Many of these pieces may be useful or harmful in isolation, in small amounts, in large amounts, but much of the arguing (and posturing) involves highlighting different things.

On the better end of things is a revelation, that blockchains are just another way of abstracting a computer so that multiple parties can run it. The particular decisions and use cases layered on top of this fundamental design are highly variant.

Having made the waters clear again, we could muddy them. A friend once tried to convince me that all computers are technically blockchains, that blockchains are the generalization of computing, and the case of a solo computer is merely one where a blockchain is run by only one party and no transaction history or old state is kept around. Maybe, but I don't think this is very useful. You can go in either direction, and I think the time travel and Terminal Phase section maybe makes that clear to me, though I'm not so sure how it lands with others. But a term tends to be useful in terms of what distinctions it introduces, and calling everything a blockchain seems to make the term even less useful than it already is. While a blockchain could be one or more parties running a sequential machine as the generalization, I suggest we stick to two or more.

Blockchains are not magic pixie dust; putting something on a blockchain does not make it work better or more decentralized... indeed, what a blockchain really does is converge (or re-centralize) a machine from a decentralized set of computers. And it always does so at some cost, some set of overhead... but what those costs and overheads are varies depending on the configuration decisions. Those decisions should always stem from some careful thinking about what your trust and integrity needs are... one of the more frustrating things about blockchains being a technology of great hype and low understanding is that such care is much less common than it should be.

Having a blockchain, as a convergent machine, can be useful. But how that abstracted convergent machine is arranged can diverge dramatically; if we aren't talking about the same choices, we might shout past each other. Still, it may be an unfair ask to request that those without a deep technical background go into technical specifics, and I recognize that, and in a sense there is something gained from speaking towards broad-sweeping, fuzzy sets and the patterns they seem to carry. A gut-sense assertion from a set of loosely observed behaviors can be a useful starting point. But to get at the root of what those gut senses actually map to, we will have to be specific, and we should encourage that specificity where we can (without being rude about it) and help others see those components as well.

But ultimately, as convergent machines, blockchains will not operate alone. I think the system that will hook them all together should be CapTP. But no matter the underlying protocol abstraction, blockchains are just abstract machines on the network.

Having finally disentangled what blockchains are, I think soon I would like to move onto what cryptocurrencies are. Knowing that they are not necessarily tied to blockchains opens us up to considering an ecosystem, even an interoperable and exchangeable one, of varying cryptographically based financial instruments, and the different roles and uses they might play. But that is another post of its own, for whenever I can get to it, I suppose.

ADDENDUM: After writing this post, I had several conversations with several blockchain-oriented people. Each of them roughly seemed to agree that Bitcoin was roughly the prototypical "blockchain", but each of them also seemed to highlight different things they thought were "essential" to what a "blockchain" is: some kinds of consensus algorithms being better than others, what kinds of social arrangements are enabled, whether transferable assets are encoded on the chain, etc. To start with, I feel like this does confirm some of the premise of this post: that Bitcoin is the starting point, but like Rogue and "roguelikes", "blockchains" are an exploration space stemming from a particular influential technical piece.

However my friend Kate Sills (who also gave me a much better definition for "smart contracts", added above) highlighted something that I hadn't talked about much in my article so far, which I do agree deserves expansion. Kate said: "I do think there is something huge missing from your piece. Bitcoin is amazing because it aligns incentives among actors who otherwise have no goals in common."

I agree that there's something important here, and this definition of "blockchain" maybe does explain why, even though from a computer science perspective signed git trees do resemble blockchains, they don't seem to fit within the realm of what most people are thinking about... while git might be a tool used by several people with aligned incentives, it is not generally itself the layer of incentive-alignment.

The hurt of this moment, hopes for the future

By Christine Lemmer-Webber on Wed 31 March 2021

Of the deeper thoughts I might give to this moment, I have given them elsewhere. For this blogpost, I just want to speak of feelings... feelings of hurt and hope.

I am reaching out, collecting the feelings of those I see around me, writing them in my mind's journal. Though I hold clear positions in this moment, there are few roots of feeling and emotion about the moment I feel I haven't steeped in myself at some time. Sometimes I tell this to friends, and they think maybe I am drifting from a mutual position, and this is painful for them. Perhaps they fear this could constitute or signal some kind of betrayal. I don't know what to say: I've been here too long to feel just one thing, even if I can commit to one position.

So I open my journal of feelings, and here I share some of the pages collecting the pain I see around me:

The irony of a movement wanting to be so logical and above feelings being drowned in them.

The feelings of those who found a comfortable and welcoming home in a world of loneliness, and the split between despondence and outrage for that unraveling.

The feelings of those who wanted to join that home too, but did not feel welcome.

The pent up feelings of those unheard for so long, uncorked and flowing.

The weight and shadow of a central person who seems to feel things so strongly but cannot, and does not care to learn to, understand the feelings of those around them.

I flip a few pages ahead. The pages are blank, and I interpret this as new chapters for us to write, together.

I hope we might re-discover the heart of our movement.

I hope we can find a place past the pain of the present, healing to build the future.

I hope we can build a new home, strong enough to serve us and keep us safe, but without the walls, moat, and throne of a fortress.

I hope we can be a movement that lives up to our claims: of justice, of freedom, of human rights, to bring these to everyone, especially those we haven't reached.

Vote for Amy Guy on the W3C TAG (if you can)

By Christine Lemmer-Webber on Mon 21 December 2020

My friend Amy Guy is running for election on the W3C TAG (Technical Architecture Group). The TAG is an unusual group that sets a lot of the direction of the future of standards that you and I use everyday on the web. Read their statement on running, and if you can, ie if you're one of those unusual people labeled as "AC Representative", please consider voting for them. (Due to the nature of the W3C's organizational and funding structure, only paying W3C Members tend to qualify... if you know you're working for an organization that has paying membership to the W3C, find out who the AC rep is and strongly encourage them to vote for Amy.)

So, why vote for Amy? Quite simply, they're running on a platform of putting the needs of users first. Despite all the good intents and ambitions of those who have done founding work in these spaces, this perspective tends to get increasingly pushed to the wayside as engineers are pressured to shift their focus to the needs of their immediate employers and large implementors. I'm not saying that's bad; sometimes this even does help advance the interest of users too, but... well, we all know the ways in which it can end up not doing so. And I don't know about you, but the internet and the web have felt an awful lot at times like they've been slipping from those early ideals. Amy's platform shares in a growing zeitgeist (sadly, still in the wispiest of stages) of thinking and reframing from the perspective of user empowerment, privacy, safety, agency, autonomy. Amy's platform reminds me of RFC 8890: The Internet Is For End Users. That's a perspective shift we desperately need right now... for the internet and the web both.

That's all well and good for the philosophical-alignment angle. But what about the "Technical" letter in TAG? Amy's standing there is rock-solid. And I know because I've had the pleasure of working side-by-side with Amy on several standards (including ActivityPub, of which we are co-authors).

Several times I watched with amazement as Amy and I talked about some changes we thought were necessary and Amy just got in the zone, this look of intense hyperfocus (really, someone should record the Amy Spec Editing Zone sometime, it's quite a thing to see), and they refactored huge chunks of the spec to match our discussion. And Amy knows, and deeply cares, about so many aspects of the W3C's organization and structure.

So, if you can vote for, or know how to get your organization to vote for, an AC rep... well, I mean do what you want I guess, but if you want someone who will help... for great justice, vote Amy Guy to the W3C TAG!

Identity is a Katamari, language is a Katamari explosion

By Christine Lemmer-Webber on Wed 09 December 2020

I said something strange this morning:

Identity is a Katamari, language is a continuous reverse engineering effort, and thus language is a quadratic explosion of Katamaris.

This sounds like nonsense probably, but there's a lot of thought behind it. I have spent a lot of time in the decentralized-identity community and the ocap communities, both of which have spent a lot of time hemming and hawing about "What is identity?", "What is a credential or claim?", "What is authorization?", "Why is it unhygienic for identity to be your authorization system?" (that mailing list post is the most important writing about the nature of computing I've ever written; I hope to have a cleaned up version of the ideas out soon).

But that whole bit about "what is identity, is it different than an identifier really?" etc etc etc...

Well, I've found one good explanation, but it's a bit silly.

Identity is a Katamari

There is a curious, surreal, delightful (and proprietary, sorry) game, Katamari Damacy. It has a silly story, but the interesting thing here is the game mechanic, involving rolling around a ball-like thing that picks up objects and grows bigger and bigger kind of like a snowball. It has to be seen or played to really be understood.

This ball-like thing is called a "Katamari Damacy", or "soul clump", which is extra appropriate for our mental model. As it rolls around, it picks up smaller objects and grows bigger. The ball at the center is much like an identifier. But over time that identifier becomes obscured, it picks up things, which in the game are physical objects, but these metaphorically map to "associations".

Our identity-katamari changes over time. It grows and picks up associations. Sometimes you forget something you've picked up that's in there, it's buried deep (but it's wiggling around in there still and you find out about it during some conversation with your therapist). Over time the katamari picks up enough things that it is obscured. Sometimes there are collisions, you smash it into something and some pieces fly out. Oh well, don't worry about it. They probably weren't meant to be.

Language is reverse engineering

Shout out to my friend Jonathan Rees for saying something that really stuck in my brain (okay actually most things that Rees says stick in my brain):

"Language is a continuous reverse engineering effort, where both sides are trying to figure out what the other side means."

This is true, but its truth is the bane of ontologists and static typists. This doesn't mean that ontologies or static typing are wrong, but that the notion that they're fixed is an illusion... a useful, powerful illusion (with a great set of mathematical tools behind it sometimes that can be used with mathematical proofs... assuming you don't change the context), but an illusion nonetheless. Here are some examples that might fill out what I mean:

  • The classic example, loved by fuzzy typists everywhere: when is a person "bald"? Start out with a person with a "full head" of hair. How many hairs must you remove for that person to be "bald"? What if you start out the opposite way... someone is bald... how many hairs must you add for them to become not-bald?

  • We might want to construct a precise recipe for a mango lassi. Maybe, in fact, we believe we can create a precise typed definition for a mango lassi. But we might soon find ourselves running into trouble. Can a vegan non-dairy milk be used for the Lassi? (Is vegan non-dairy milk actually milk?) Is ice cream acceptable? Is added sugar necessary? Can we use artificial mango-candy powder instead of mangoes? Maybe you can hand-wave away each of these, but here's something much worse: what's a mango? You might think that's obvious, a mango is the fruit of mangifera indica or maybe if you're generous fruit of anything in the mangifera genus. But mangoes evolved and there is some weird state where we had almost-a-mango and in the future we might have some new states which are no-longer-a-mango, but more or less we're throwing darts at exactly where we think those are... evolution doesn't care, evolution just wants to keep reproducing.

  • Meaning changes over time, and how we categorize does too. Once someone was explaining the Web Ontology Language (which got confused somewhere in its acronym ordering and is shortened to OWL (update: it's a Winnie the Pooh reference, based on the way the Owl character spells his name... thank you Amy Guy for informing me of the history)). They said that it was great because you could clearly define what is and isn't allowed and terms derived from other terms, and that the simple and classic example is Gender, which is a binary choice of Male or Female. They paused and thought for a moment. "That might not be a good example anymore."

  • Even if you try to define things by their use or properties rather than as an individual concept, this is messy too. A person from two centuries ago would be confused by the metal cube I call a "stove" today, but you could say it does the same job. Nonetheless, if I asked you to "fetch me a stove", you would probably not direct me to a computer processor or a car engine, even though sometimes people fry an egg on both of these.

Multiple constructed languages (Esperanto most famously) have been made by authors who believed that if everyone spoke the same language, we would have world peace. This is a beautiful idea, that conflict comes purely from misunderstandings. I don't think it's true, especially given how many fights I've seen between people speaking the same language. Nonetheless there's truth in the observation that many fights are about a conflict of ideas.

If anyone was going to achieve this though, it would be the Lojban community, which actually does have a language which is syntactically unambiguous, so you no longer have ambiguity such as "time flies like an arrow". Nonetheless, even this world can't escape the problem that some terms just can't be easily pinned down, and the best example is the bear goo debate.

Here's how it works: both of us can unambiguously construct a sentence referring to a "bear". But when is that bear no longer a bear? If it is struck in the head and killed, when in that process has it become a decompositional "bear goo" instead? And the answer is: there is no good answer. Nonetheless many participants want there to be a pre-defined bear, they want us to live in a pre-designed universe where "bear" is a clear predicate that can be checked against, because the universe has a clear definition of "bear" for us.

That doesn't exist, because bears evolved. And more importantly, the concept and existence of a bear is emergent, cutting across many different domains, from evolution to biology to physics to linguistics.

Sorry, we won't achieve perfect communication, not even in Lojban. But we can get a lot better, and set up a system with fewer stumbling blocks for testing ideas against each other, and that is a worthwhile goal.

Nonetheless, if you and I are camping and I shout, "AAH! A bear! RUN!!", you and I probably don't have to stop to debate bear goo. Rees is right that language is a reverse engineering effort, but we tend to do a pretty good job of gaining rough consensus on what the other side means. Likewise, if I ask you, "Where is your stove?", you probably won't lead me to your computer or your car. And if you hand me a "sugar free vegan mango lassi made with artificial mango flavor" I might doubt its cultural authenticity, but if you then referred to the "mango lassi" you had just handed me a moment ago, I wouldn't have any trouble continuing the conversation. Because we're more or less built to construct language contextually.

Language is a quadratic explosion of Katamaris

Language is partly syntax, but mostly the arrangement of symbolic terms. Which is another way of saying that the non-syntactic elements of language are mostly identifiers, mentally substituted for identities and all the associations therein.

Back to the Katamari metaphor. What "language is a reverse-engineering effort" really means is that each of us are constructing identities for identifiers mentally, rolling up katamaris for each identifier we encounter. But what ends up in our ball will vary depending on our experiences and what paths we take.

Which really means that if each person is rolling up a separate, personal identity-katamari for each identifier in the system, then, barring passing through a singularity-type event horizon past which participants can do direct shared memory mapping, this is an O(n^2) problem: every pair of participants has its own reverse-engineering effort to do for each identifier they share!

But actually this is not a problem, and is kind of beautiful. It is amazing, given all that, just how good we are at finding shared meaning. But it also means that we should be aware of what this means topologically, and that each participant in the system will have a different set of experiences and understanding for each identity-assertion made.

Thank you to Morgan Lemmer-Webber, Stephen Webber, Corbin Simpson, Baldur Jóhannsson, Joey Hess, Sam Smith, Lee Spector, and Jonathan Rees for contributing thoughts that led to this post (if you feel like you don't belong here, do belong here, or are wondering how the heck you got here, feel free to contact me). Which is not to say that everyone, from their respective positions, agrees here; I know several disagree strongly with me on some points I've made. But everyone did help contribute to reverse-engineering their positions against mine to help come to some level of shared understanding, and the giant pile of katamaris that is this blogpost.

Spritely website launches, plus APConf video(s)!

By Christine Lemmer-Webber on Wed 30 September 2020

Note: This originally appeared as a post on my Patreon account... thanks to all who have donated to support my work!

Hello, hello! Spritely's website has finally launched! Whew... it's been a lot of work to get it to this state! Plus check out our new logo:

Spritely logo

Not bad, eh? Also with plenty of cute characters on the Spritely site (thank you to David Revoy for taking my loose character sketches and making them into such beautiful paintings!)

But those cute characters are there for a reason! Spritely is quite ambitious and has quite a few subprojects. Here's a video that explains how they all fit together. Hopefully that makes things more clear!

Actually that video is from ActivityPub Conference 2020, the talks of which now all have their videos live! I also moderated the intro keynote panel about ActivityPub authors/editors. Plus there's an easter egg, the ActivityPub Conference Opening Song! :)

But I can't take credit for APConf 2020... organization and support are thanks to Morgan Lemmer-Webber and Sebastian Lasse, to FOSSHost for hosting the website and BigBlueButton instance, and to conf.tube for generously hosting all the videos. There's a panel about the organization of APConf you can watch if you're interested in more of that! (And of course, all the other great videos too!)

So... what about that week I was going to work on Terminal Phase? Well... I'm still planning on doing it but admittedly it hasn't happened yet. All of the above took more time than expected. However, today I am working on my talk about Spritely Goblins for RacketCon, and as it turns out, extending Terminal Phase is a big part of that talk. But I'll announce more soon when the Terminal Phase stuff happens.

Onwards and upwards!