Why is it hard to move from one machine to another? An analysis. [x-post from Userops]

By Christine Lemmer-Webber on Wed 08 April 2015

NOTE: This is a shameless cross-post of something I originally sent to the userops list, where we discuss deployment things.


Hello all,

For a while I've been considering: why is it so much harder for me to migrate from server to server than from desktop to desktop? For years, ever since I discovered rsync, migrating between machines has not been hard. I simply rsync my home directory over to the new machine (or maybe even just keep the old /home/ directory's partition where it is!) and bam, I am done. Backing this up is easy; it's just another rsync away. (I use dirvish as a simple wrapper around rsync so it can manage incremental backups.)

If I set up a new machine, it is no worry. Even if my current machine dies, it is mostly no worry. Rsync back my home directory, and done. I will spend a week or so discovering that certain programs I rely on are not there, and I'll install them one by one. In a way it's refreshing: I can install the programs I need, and the old cruft is gone!

This is not true for servers. At the back of my mind I realized this, but it hadn't totally congealed in my head until the end of Stefano Zacchiroli's excellent LibrePlanet talk, when I posed a question about this situation: why is it so much harder for me to move from server to server? Assume I even have the old server around and I want to move. It isn't easy!

So here are some thoughts that come out of this:

  • For my user on my workstation, configuration and data are in the same place: /home/ (including lots of little dotfiles for the configs, and the rest is mostly data). Sure, there's some configuration stuff in /etc/ and data in /var/ but it mostly doesn't really matter, and copying that between machines is not hard.
  • Similarly, for my workstation user experience, it causes very little stress if I set up a machine and am missing some common packages. I can just install them again as I find them missing.
  • Neither of these is true for my server! In addition to caring about /home/, I also, and even more importantly, have to worry about configuration in /etc/ and data in /var/. These are both pains in the butt for me, for different reasons.
  • Lots of stuff in /etc/ is configuration that interacts with the rest of the system in specific ways. I could rsync it to a new machine, but I feel like that's just blindly copying over stuff where I really need to know how it works and how it was set up with the rest of the machine in the first place.
  • This is compounded by the fact that people rarely set up one machine these days; usually they have to set up several machines for several users. Remembering how all that stuff worked is hard. The only solution seems to be to have some sort of reproducible configuration system. Hence the rise of salt, ansible, etc. But these aren't really "userops" systems, they're "devops"... developer focused. Not only do you need to know how they work, you need to know how the rest of the system works. And it's not easy to share that knowledge.
  • /var/ is another matter. Theoretically, most of my program data goes there (unless, of course, it went to /srv/, god help us). But I can't just rsync it! Some processes are very persnickety about the stuff there. I have to dump my databases and the like before I can move them or back them up. Nothing sets up an automatic cronjob for me on these; I have to know to dump postgres, and hopefully I remembered to set up a cronjob!
  • While I as a workstation user don't stress too much if I'm missing some packages (just install them as I go), that is NOT true of my servers. If my mail servers aren't running, if jabber isn't on, (if SSH isn't running!!!), there are other servers expecting to communicate with my machine, and if I don't set them up, I miss out.
  • Not only this: even assuming I have moved between servers correctly, and even once my machine is set up and has become a perfectly okay running special snowflake, there are certain routine tasks that require a lot of manual intervention, and I have only picked up the right steps by knowing the right friends, having run across the right tutorials which hopefully showed me the right setup, etc. SSL configuration, I'm looking at you; my only savior is that I have written myself little orgmode notes on what to do the next time my certs expire.
  • My servers do become special snowflakes, and that is very stressful to me. I will, in the future, need to set up one more server, and remembering what I did in the past will be very hard.
  • Assuming I use all the mainstream tools, not talking about "upcoming" ones, a better configuration management solution is probably the answer, right? That's a lot to ask of users though: it's not a solution to existing deployment pain, because it doesn't remove the need to know about all the layers underneath; it just adds a new layer to understand.

Those are all headaches, and they are not the only headaches. But here are some thoughts on things that can help:

  • If I recognize which parts of my system are "immutable" and which parts are "mutable", it's easier to frame how my system works.

    • /var/ is mutable, it's data. There's no making this "reproducible" really: it needs to be backed up and moved around.

    • My packages and system are immutable, or mostly should be. Even if not using a perfectly immutable system like guix/nix, it's helpful to act like this part of the system is pseudo-immutable, and simply derived from some listing of things I said I wanted installed. (Nix/Guix probably do this the most nicely though.)

    • /etc/ is similarly "immutable but derived" in the best case. I should be able to give the same system configuration inputs and always get the same system of packages and configuration files.

  • I like Guix/Nix, but my usage of Debian and Fedora and friends is not going away anytime soon. Nonetheless, configuration management systems like puppet/ansible/salt help give the illusion of an immutable system derived from a set of inputs, even though they are working within a mutable one.

  • Language packaging for deployment needs to die. Yes, I say this as someone whose project advocates that very route. We're doing it wrong, and I want to change it. (Language packaging for development though: that's great!)

  • Asking people to use systems like ansible/salt/puppet is asking too much of users. You're just asking them to learn one more layer on top of knowing how the whole system works. Sharing common code is mostly copy and paste. There are some layers built on top of these to mitigate that, but as far as I can tell they aren't good enough. (I am working on something to solve this...)

  • Pre-built containers are not the solution. Sorry, container people! Containers can be really useful, but only if they are built in some reproducible way, and very few people using Docker and the like seem to be doing this. But here's another thing: Docker and friends contain their own deployment domain-specific languages, which is dumb. If a reproducible configuration system is good enough, it should be good enough for a VM or a container or a vanilla server or a desktop. So maybe we can use containers as lightweight and even sandboxed VMs, but we shouldn't be installing prebuilt containers on our servers as a system all on their own.

    Otherwise you're running 80 heavy and expensive Docker images that slowly go out of date... now you're not maintaining 1 distribution install, you're maintaining 81 of them. Yikes! Good luck with the next Shellshock!

  • Before Asheesh jumps in here: yes, I will say that Sandstorm is taking maybe the best route among systems that use containers heavily (and unlike Docker's, its containers seem actually sandboxed), in that it has a separation between mutable parts and immutable parts: from what I can tell, the container is more or less an immutable machine with /var/ mounted into it, which is a pretty good route.

    In this sense I think Sandstorm has a good picture of things. There are other things that I am still very unsure about, and Asheesh knows because I have expressed them to him (I sure hope that iframe thing goes away, and that daemons like Celery can run, etc.!), but at least in this sense, Sandstorm's container story is more sane.

So those are some reflections, in case you are planning on debugging why these things are hard.

-- Chris

PS: If you haven't gotten the sense, the direction I'm thinking of is more along the lines of Guix becoming our Glorious Future (TM) assuming something like GuixOps can happen (go Dave Thompson, go Guix crew!) and a web UI can be built on top of it with some sort of common recipe system.

But I don't think our imperative systems like Debian are going away anytime soon; I certainly don't intend to move all my stuff over to Guix at this time. For that reason, I think there needs to be another program to fit the middle ground: something like salt/ansible/puppet, but with less insane one-off domain specific languages, with a sharable recipe system, and usable both from developer-oriented scripts and through a user-friendly web interface. I've begun working on this tool, and it's called Opstimal. Expect to hear more about it soon.

Interviewed on Ryno the Bearded

By Christine Lemmer-Webber on Tue 07 April 2015

I was interviewed on Ryno the Bearded in an episode with the curious title "My Origin Story". I'm not sure whose story that refers to, maybe both of ours, since we both talked about our backstories. (Though, I think if I were going to lay out my "free software origin story", it would probably include some other things... but maybe it would get to be fairly rambly. I've thought about trying to write up what that is before, but I guess there were a lot of "moments" for my free software origins, not any one moment like "I was bitten by a libre radioactive spider".)

I really enjoyed doing this one, and maybe you'll enjoy listening to it. It's kind of rambly and conversational, and we came in with very little in terms of questions, but I think I tend to do fairly well in that format.

I did cut off the end of the interview by saying I had to go to the bathroom though. Not really the most dignified of exits on my part. Oh well.

Fever Dream: cryptocurrency

By Christine Lemmer-Webber on Sun 29 March 2015

I've been sick the last two days, and for some reason, I've been dreaming in lisp. It's not the first time I've dreamt in code, but all times have been in lisp. Maybe lisp is not as readable as many would like, but it seems dreamable.

Just got up from another nap, another lisp-based fever dream. This one was about a cryptocurrency on an actor model system, built for a MUD/AR type environment... no blockchain required (note: may be inspired by reading Rainbows End between naps):

  • Various actors represent "reserves"; like the Federal Reserve, they can print their own money, as much as they like, but there may be various community enforcements of this
  • You might have different central banks / reserves on different servers
  • The "value" or exchange rate determined by the market, like international currency.
  • Currency is non-divisible (you have 100 rupees, but not .1 rupees)
  • The bank has a private key, and so does each actor.
  • Basically it's more or less passing along capabilities (I think???), but maybe even the central bank signing whoever has "possession". The bank does not know who things are transferred to, but does verify that the transferring owner has the right capability currently belonging to that unit's ID, and does issue a new capability to the new owner.

Does this make sense? I'm too sick to know for now. But transactions flying everywhere, wrapped in parentheses, in my fevered mind.

If I have more crazy lisp fever dreams today I will record them here. No guarantee of sanity.

LibrePlanet and W3C Social Working Group 2015

By Christine Lemmer-Webber on Sun 29 March 2015

Great trip, spent 2 weeks doing the following:

  • Attending the W3C Social Working Group. Good things happened; things felt much clearer by the second day than by the first. Lesson learned: raising issues (normally I'm reasonably quiet, with a certain amount of impostor syndrome while participating in the group) was helpful, and some people said they were thinking about the same things. Anyway, much work to do ahead.
  • Hanging out at Deb Nicholson's place with friends
  • Gave a talk at LibrePlanet which I'm happy with
  • Had an awesome night of vegan salted caramel root beer floats (the best thing ever it turns out) and Hanabi with Aeva Palecek, Amy Guy, and Jessica Tallon; all friends of federation, but it was nice getting to decompress without worrying about that stuff for a bit. Way better than a trip to the bar!
  • So many great people at LibrePlanet. I'm kind of sad that I didn't get to spend a lot of time with everyone, but the upside is that I got to spend a lot of time with some particularly great people; the main problem was that there were just too many good people at once. A good problem.
  • Mako and Karen both gave good keynotes, and I liked a lot of talks
  • Lots of Veggie Galaxy, eaten
  • Then took a trip to hang out with @Bassam Kurdali, Fateh, Libby Reinish, Tristen, at their respective places. It was nice to do this with @Tsyesika / Jessica Tallon joining me.
  • Much of the week was spent at the "Nerdodrome", Hampshire College's animation studio, where Bassam works. I spent most of the week working on a deployment system I'm speccing out. If it turns out to be something I use, I'll update with more info.
  • Got to spend some time exploring Western Mass, which was great.

The trip was really worth it. I spent a lot of time with people I care about, and it felt very productive.

I'm not planning another conference for the remainder of 2015 though. Amongst other things, my back and wrists are killing me right now, and doing this morning's stretches was a painful wake-up. And I have a lot of difficult personal things in the year ahead. Oh, and let's not forget about how much work there is to do on MediaGoblin that I haven't gotten to.

But! This trip really helped set some directions for me for the year ahead, and I'm really grateful for that.

Looking forward to LibrePlanet 2016!

Fauxnads

By Christine Lemmer-Webber on Sat 14 March 2015

So off and on, I've been trying to understand monads. It turns out I have a use case: making web applications in the style I'd like, but having them be asynchronous, leads to trouble, because you need a way of passing along context that doesn't rely on global variables. I've tried thinking of some solutions, but a friend of mine convinced me that monads, if I could wrap my head around them, would solve the problem for me.

With that in mind, I did some reading... there are plenty of resources out there, some of them with nice pictures, and at one point I tried to understand monads by just reading about them, without writing any code relevant to them. This led to me guessing how they might work by trying to contextualize them to the problems I wanted to solve. Along that path, I had a misunderstanding, a mistaken vision of how monads might work. While I was wrong, it turned out that this vision of ~monads is kind of fun and has some interesting properties, so I decided to code it up into a paradigm I'm jokingly calling "fauxnads".

Brace yourself, we're about to get into code, pseudo- and real, and it's going to be in Guile. (All code in this blogpost, GPLv3+ or LGPLv3+, as published by the FSF!)

So here's the rundown of fauxnads:

  • They still pass along a context, and under the hood, they do pass it in still as the first argument to a function!
  • The context gets passed up (and optionally, down... more on that in a few) in some kind of associative array... but we don't want to accidentally change the context that we passed to other functions already, so we'll use something immutable for that.
  • The user doesn't really access the context directly. They specify what variables they want out of it, and the fauxnad macro extracts it for them.
  • Fauxnads can add properties to the context that they'll call subroutines with so that subsequent fauxnad calls can have access to those.
  • Calling child fauxnads happens via invoking a function (=>) exposed to the rest of the fauxnad via some lexical scope hacks.

So when sketching this out, I tried to boil down the idea to a quick demo:

;; fleshed out version of what a fauxnad should approx expand to
(define (context-test1 context arg1 arg2)
  (letrec ((new-context context)
           ;; Define function for manipulating the context
           (context-assoc
            (lambda (key value)
              (set! new-context
                    (vhash-cons key value new-context))))
           ;; a kind of apply function
           (=>
            (lambda (func . args)
              (apply func (cons new-context args))))) ;; should be gensym'ed

    ;; This part would be filled in by the macro.
    ;; The user would set which variables they want from the context
    ;; as well as possibly the default values
    (let ((a (or (vhash-assoc 'a context) "default-value for a"))
          (b (or (vhash-assoc 'b context) "default-value for b"))
          (c (or (vhash-assoc 'c context) "default-value for c")))
      (values
       (begin
         (context-assoc 'lol "cats")
         (=> context-test2 "sup cat")
         (context-assoc 'a "new a")
         (=> context-test2 "sup cat")
         (format #t "a is ~s, b is ~s, and c is ~s\n"
                 a b c)
         (string-append arg1 " " arg2))
       new-context))))

;; intentionally simpler, not a "real fauxnad", to demo
;; the fauxnad concept at its most minimal
(define (context-test2 context arg1)
  (begin
    (format #t "Got ~s from the context, and ~s from args, mwahaha!\n"
            (vhash-assoc 'a context)
            arg1))
  (values
   "yeahhh"
   context))

Then calling in the console:

scheme@(guile-user)> (context-test1 (alist->vhash '((a . 1) (b . 2))) "passed arg1" "passed arg2")
Got (a . 1) from the context, and "sup cat" from args, mwahaha!
Got (a . "new a") from the context, and "sup cat" from args, mwahaha!
a is (a . 1), b is (b . 2), and c is "default-value for c"
$79 = "passed arg1 passed arg2"
$80 = #<vhash 2205920 4 pairs>

Okay, whaaa? Let's look at the requirement again. We'll be passing a context in as the first argument to the function, along with some other args, and then passing that context along to subsequent functions. So more or less, we know that looks like this. (I know, not the most useful or pretty or functional code, but it's just a demo of the problem!)

(define (main-request-handler context request)
  ;; print out hello world in the right language
  (display (translate-this (context-get context 'lang) "Hello, world"))
  (newline)

  ;; now call another function
  (next-function context (smorgify request)))

(define (next-function context what-to-smorgify)
  (write-to-db
   ;; Lots of functions need access to the
   ;; current database connection, so we keep it in the context...
   (context-get context 'db-conn)
   (smorgify-this what-to-smorgify)))

But wouldn't it be cool if we didn't have to pass around the context? And what if we just said, "we want this and this from the context", then forgot about the rest of the context? We'd never need to call context-get again! It would also be cool to have a way to set things in the context for subsequent calls. Ooh, and if we could avoid having to type "context" over and over again when passing it into functions, that would also be awesome.

So how about a syntax like this:

(define-fauxnad (our-special-function arg1)
  ((lang "en"))  ;; we want the language variable, but if not set,
                 ;; default to "en" or english
  ;; (body is below:)
  ;; print out hello world in the right language
  (display (translate-this lang "Hello, world"))
  (newline)
  ;; now call another function
  (=> next-function (smorgify arg1)))

We also know we want to use some sort of immutable hashmap. Guile provides vhashes, which offer "typically constant-time" data access. There are some caveats (a new vhash made by appending a key/value pair whose key already exists just keeps the old pair around, shadowed... but on our pseudo-stack that shouldn't happen very often), so vhashes work fine for our purposes.
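
To make that caveat concrete, here's a tiny sketch of the vhash behavior we're relying on (nothing fauxnad-specific, just plain (ice-9 vlist) from Guile's standard library):

(use-modules (ice-9 vlist))

(define ctx (alist->vhash '((lang . "en") (a . 1))))

;; "Updating" a key really just prepends a new pair; the old vhash is untouched
(define ctx2 (vhash-cons 'lang "ja" ctx))

(vhash-assoc 'lang ctx)   ;; => (lang . "en")
(vhash-assoc 'lang ctx2)  ;; => (lang . "ja")... the new pair shadows the old one,
                          ;;    but the "en" pair is still in there underneath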

Okay, cool. So what would that look like, expanded? Something along the lines of:

(define (our-special-function context arg1)
  (letrec ((new-context context)
           ;; Define function for manipulating the context
           (context-assoc
            (lambda (key value)
              (set! new-context
                    (vhash-cons key value new-context))))
           ;; a kind of apply function
           (=>
            (lambda (func . args)
              (apply func (cons new-context args))))) ;; should be gensym'ed

    ;; This part would be filled in by the macro.
    ;; The user would set which variables they want from the context
    ;; as well as possibly the default values
    (let ((lang (or (vhash-assoc 'lang context) "en")))
      (values
       (begin
         ;; print out hello world in the right language
         (display (translate-this lang "Hello, world"))
         (newline)

         ;; now call another function
         (next-function new-context (smorgify arg1)))
       new-context))))

As a bonus, we've taken advantage of Guile's multiple-value return support, so any parent function which cares can get back the new context we built up for subsequent calls, in case we want to merge contexts or something. Functions not aware of this will simply ignore the second returned value. (I'm not sure whether this is a useful feature or not, but it's nice that Guile makes it easy to implement!)
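
Here's a minimal sketch of what grabbing both values looks like, using receive from (ice-9 receive) with the context-test1 demo from above (a normal single-value call site would just get the string and never notice the second value):

(use-modules (ice-9 receive)
             (ice-9 vlist))

(receive (result new-context)
    (context-test1 (alist->vhash '((a . 1) (b . 2)))
                   "passed arg1" "passed arg2")
  ;; result is the ordinary return value...
  (format #t "result: ~s\n" result)
  ;; ...and new-context is the context the fauxnad built up for its children
  (format #t "lol is now: ~s\n" (vhash-assoc 'lol new-context)))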

That expanded version is clearly quite a complicated thing to write out manually though... so it's time to write some code to write code. That's right, it's macro time! Guile has some pretty cool hygienic macro support that uses "syntax transformation"... a bit nicer than Common Lisp's defmacro, but also less low-level. Anyway, if you're not familiar with that syntax, trust me that this does the right thing, I guess:

(define-syntax define-fauxnad
  (lambda (x)
    (syntax-case x ()
      ((_ (func-name . args)
          ((context-key context-default) ...)
          body ...)
       ;; Intentionally punch a hole in the macro hygiene so that the
       ;; user's body can refer to => and context-assoc
       (with-syntax ((=> (datum->syntax x '=>))
                     (context-assoc (datum->syntax x 'context-assoc)))
         #'(define (func-name context . args)
             (letrec ((new-context context)
                      ;; Define function for manipulating the context
                      (context-assoc
                       (lambda (key value)
                         (set! new-context
                               (vhash-cons key value new-context))))
                      ;; a kind of apply function
                      (=>
                       (lambda (func . func-args)
                         (apply func (cons new-context func-args))))) ;; should be gensym'ed

               ;; This part would be filled in by the macro.
               ;; The user would set which variables they want from the context
               ;; as well as possibly the default values
               (let ((context-key (or (vhash-assoc (quote context-key) context)
                                      context-default))
                     ...)
                 (values
                  (begin
                    body ...)
                  new-context)))))))))

Nice, now writing our fauxnads is dead-simple:

(define-fauxnad (our-special-function arg1)
  ((lang "en"))  ;; we want the language variable, but if not set,
  ;; default to "en" or english
  ;; (body is below:)
  ;; print out hello world in the right language
  (display (translate-this lang "Hello, world"))
  (newline)
  ;; now call another function
  (=> next-function (smorgify arg1)))

(define-fauxnad (next-function what-to-smorgify)
  ((db-conn #nil))
  (write-to-db
   ;; Lots of functions need access to the
   ;; current database connection, so we keep it in the context...
   db-conn
   (smorgify-this what-to-smorgify)))

Okay, again, my demos don't make this look very appealing I suppose. We can now transform the original demos I sketched up into fauxnads though:

(define-fauxnad (context-test1 arg1 arg2)
  ((a "default value for a")
   (b "default value for b")
   (c "default value for c"))
  (context-assoc 'lol "cats")
  (=> context-test2 "sup cat")
  (context-assoc 'a "new a")
  (=> context-test2 "sup cat")
  (format #t "a is ~s, b is ~s, and c is ~s\n"
          a b c)
  (string-append arg1 " " arg2))

(define-fauxnad (context-test2 arg1)
  ((a #nil))
  (format #t "Got ~s from the context, and ~s from args, mwahaha!\n"
          a arg1)
  "yeahhh")

And calling it:

scheme@(guile-user)> (context-test1 (alist->vhash '((a . 1) (b . 2))) "passed arg1" "passed arg2")
Got (a . 1) from the context, and "sup cat" from args, mwahaha!
Got (a . "new a") from the context, and "sup cat" from args, mwahaha!
a is (a . 1), b is (b . 2), and c is "default value for c"
$81 = "passed arg1 passed arg2"
$82 = #<vhash 2090ae0 4 pairs>

Okay, so what's the point? I doubt this blogpost would really sell anyone on fauxnads, and you might ask: why use fauxnads when you can use real monads? But here are some interesting properties:

  • fauxnads are still super simple functions that you can call manually: just pass in the context (a vlist) as the first parameter.
  • the "binding" function for calling sub-fauxnads is sugar, but hidden (and otherwise inaccessible, because the function hygeine keeps you from accessing "new-context") from the user.
  • I still like that you can get back the new context via multiple value return, but totally ignore it if you don't care about it.
  • I understand how they work.

And on that last note, I still don't understand monads, but I feel like I'm getting closer to it. It was fun to document, and put to code, a misunderstanding though!

Why XUDD is stuck (or: why Python needs better immutable structures)

By Christine Lemmer-Webber on Mon 23 February 2015

Update: Well, you post a thing, and sometimes that's enough for people to come and help you realize how wrong you are. Which is good! There are a number of ways forward (some obvious in retrospect). For one, pyrsistent does exist and looks nice and... well, it's even actively developed. But even aside from that, there are several clean solutions: wrapper objects which "lock" the child object with getters but no setters, or even just using alist-style tuples of tuples as a fake hashmap. Options indeed abound.

And the exception thing? Well, that wasn't listed as a permanent problem below, but the solution is even easier! It would be simple to have a MessageError("some_identifier") which has a minimalist identifier which can be passed across the wire, and the directive of this error can be a special case.

Anyway, you can read the original post below. But it's good to be wrong. XUDD no longer has reason to be dead. Long live XUDD!

tl;dr: Kind of a long post, but basically the lack of good functional data structures in Python has kinda killed the project.

One of my favorite projects ever has been to work on XUDD, an asynchronous actor model system for Python. It was originally born out of a quest to build a MUD (hence the name), but eventually the actor model itself became the focus, and it was a really interesting exploration for me.

There's a lot of things I like about the actor model... for one thing, functional programming is all the rage, right? But not all systems are easy to express in a purely functional style, and done right, an actor model can be fairly object-oriented'y: you can have your mutable cake and eat it too, safely! Your actor can mutate some of its own variables, but when it communicates across the wire, since it's a "shared nothing" environment, you can get even better scale-to-the-moon type functionality than in many functional language systems: it's trivial to write code where actors communicate across multiple processes or machines in just the same way as if they were all on the same machine in the same process and thread.

I put a lot of thought into XUDD, and I've looked into some of the alternatives like Pykka, and I still think XUDD has some ideas that kick butt on other systems. I still think the use of coroutines feels very clean and easy to read, the "hive" model is pretty nice, and the way it's built on top of the awesome asyncio system for Python makes me happy.

So, every once in a while I get an email from someone who reads the XUDD documentation and also gets excited and asks me what's going on.

The sad reality is: I'm stuck. I'm stuck on two fronts, and one I can figure my way out of, but the other one doesn't seem easy to deal with in Python as-is.

The first issue is error propagation. This is solvable: when an exception is raised, it would be nice to propagate it back to the original actor. There are some side issues I'm not sure about: in an inter-hive-communication (read: multiple machines or processes) scenario, should we use standard exceptions and try to import and reproduce the same exception that was raised elsewhere? That seems like it could be... gnarly to do. Raising the error inside the original routine is also a bit tricky, but not too hard; Python's coroutines can support it and I just need to think about it. So exceptions are annoying but solvable.

But the other issue... I'm not sure what to do about it. Basically, it's a lack of immutable types, or we might even say "purely functional data structures", that are robust enough to continue with. Why does that matter? Messages sent between actors shouldn't have any mutable data. It's fine and well, and even a nice feature, for actors to have mutable data within themselves (it actually provides a nice way to pull off things that are just damned hard in purely functional systems), but between actors, mutable data is a no-no.

It's easy to see why: say we have a function that has a list in it, say of a number of children in my classroom, and I send this list over to an actor that controls some sort of database, right? I'm doing things in a nice, fancy coroutine'ish type way, which means my function can just suspend mid-execution while it waits for that database actor to generate some sort of reply and send it back to me. What happens if that other function pops one of the items off the list, or appends to it, or in some way actually mutates the list? Now, when my function continues, it'll be operating on a differently formed list than the one it thinks it has. I might have a reference to the third item in the list, but it turns out that there isn't a third item in the list anymore, because the other function popped it off. This can introduce all sorts of subtle bugs, and it's bumming me out that I don't have a good solution to them.

Now, there's a way around this: you can serialize each and every message completely before sending it to another actor. And of course, if actors are on different threads, processes, or even entirely different machines, of course we'd do this. But XUDD has the concept of actors being on the same hive, and there are a number of reasons for this, but one of them is that for local message passing, packing and unpacking data in some sort of serialized format for every call slows things down by a lot. When I originally began designing XUDD, the plan was for games that might need to shard out to a number of different servers but have players that can traverse different parts of the system and communicate with other shards (without the player, or mostly even the code, knowing that it's communicating with actors that are technically remote). I want to be able to pass many messages at once to actors that are on the same hive, while still having a totally safe time of doing so. But there's no way to do so without a nice set of immutable / "purely functional" types, and Python just doesn't have this right now. None of the third party libraries I've found seem well maintained (am I missing something?), and the standard library is fairly deficient here. Why? I'm not really sure. I guess Python's history is just synchronously imperative enough that it just hasn't mattered.

I'd like to continue research into the actor model... I have some projects I'd like to work on for which the actor model seems perfectly tuned. What to do?

Well, I'm not really sure... I guess I could just serialize everything all the time, but it's kind of a bummer to me that so many cycles would be wasted for local computation. Maybe it's a dumb reason to feel exhausted with things, but that's the state of it. I'm not enough of a datastructure wizard to implement these things myself, but they exist. I've thought about giving up on XUDD being a Python project and to move over to something else... Guile has a cooperative REPL which would be great for debugging, and I really like the community there, so maybe that would be a nice place to go. Not really sure there's anything else I'm interested enough in at the moment. I think I'd miss Python. Or maybe I'm over-thinking everything in the first place? (Wouldn't be the first time.)

Maybe there's another way out. If you have any ideas, contact me.

Thoughts on LambdaMOO from a newbie

By Christine Lemmer-Webber on Sun 22 February 2015

While working on MediaGoblin's upcoming release, and (dare I say) feeling a bit bored and anxious and frustrated about the whole process, I decided that maybe it was time to look into some sort of entertainment. But for the most part, I feel like I've played all the interesting free software games out there. I've mostly beaten even the games of my old proprietary-console-using past. What to do?

I wondered what state good ol' fashioned MUD systems are in these days. I've had dreams for years of building such a system from scratch (hence my interest in the actor model). Answer: they feel like they're in the same place they were around 2000. But maybe this isn't bad? I tried playing some dungeon crawling'y MUDs but found myself a bit bored. But I remembered that Rob Myers has talked plenty about LambdaMOO, and that seemed like a good enough reason to connect and look around. (Note, connecting is not hard! I am using Mudlet at the moment.)

So, in-between actual coding and fretting about this and that, I've been getting in some downtime, poking around the LambdaMOO environment. And at first I was fairly unimpressed... this thing is what, 20 rooms big? What can I do here? But I've been finding some interesting things:

  • That LambdaMOO is both the software and one particular server is fairly confusing, but no matter: both are of interesting construction. They're both also very old, 25 years old at this point (egads), but particularly for the world, that age leads to much depth.
  • The main rooms of the house (see "help geography") are not as interesting as the places they lead to. You can enter a model railroad and actually walk around that imaginary city and end up in some interesting places, reading graffiti off of nightclub walls, etc. Find an old teletype and type on it, or play one of the functioning (!) board games sitting around. A dusty hallway leads to a full hack and slash style game engine, coded live inside the system itself.
  • Most of the world is not really a dungeon crawler, though there is one hidden in it. Most of walking around LambdaMOO is exploring the many curiosities that users have provided there.
  • Speaking of coding live inside the system, what's really interesting about LambdaMOO is that you're really manipulating a big database, and many features are live coded from within the game itself. Everyone can experiment with adding rooms, though not everyone can program; you may be able to get help with that from a certain buffer in the smoking room, if you can find it. (I just got access... I'll have to try things.)
  • LambdaMOO is best known for a famous essay, but I think that essay is maybe not as interesting as it was in the 90s, when online experiences were becoming a new thing? The effect of the essay and the event it chronicles is still clear, though, including in the way the world is introduced to you.
  • It's interesting to think of LambdaMOO in terms of copyright and licensing, of which... well, there seems to be no clear answer as to what is going on, but sharing doesn't seem impeded by it.
  • Being from the early 90s, it's amazing how much is here to explore, and how everything still lives. Since most of the world is pretty "low burn", that's nice for me, because I can just type in a command to look around somewhere every once in a rare while and leave LambdaMOO open in the background.
  • I'm enjoying exploring the world, but I sense that LambdaMOO had a heyday, and while there are users connected, the sheer size of it vs the volume of connected users gives me the sense that today is not it. There's a certain amount of feeling of exploring a secret city, which is wonderful and sad. (It would be nice to have some friends who were interested in exploring with me, maybe I should cajole some into joining me.) It's an interesting piece of internet history, still here to explore, but since it's in a "living" database, what will happen when it shuts down? There won't be any Internet Archive to preserve the LambdaMOO memories, I fear. But maybe nothing is forever, anyway.

I have more, maybe crazier, thoughts on MUDs and maybe a way they could be more useful in this modern world. Maybe I'll explore that soon. I think the popularity of MUDs has diminished with World of Warcraft and Skyrim and the like, but maybe they still have a place if we can do some new things with them? And those are fairly hack-and-slash type systems; maybe MOO/MUD/MUSH systems have more to offer than popular MMO systems provide? Roguelikes seem more popular than ever, if anything... things can come back... maybe new interfaces are needed?...

In the meanwhile, if you drop by LambdaMOO, send me a message! I'm "paroneayea" there.

SLIB's topsort for standard Guile

By Christine Lemmer-Webber on Wed 18 February 2015

I found that I needed a topological sort algorithm in Guile, and it turns out that one already exists anyway in SLIB. Unfortunately, it seems that SLIB still doesn't work in Debian, and do I really want to depend on all that just for a topsort algorithm?

So here's SLIB's topsort algorithm, but with the hash table switched over from SLIB's own to Guile's native one, which maybe makes it faster anyway. It was a pretty trivial change:

;;; "tsort.scm" Topological sort
;;; Copyright (C) 1995 Mikael Djurfeldt
;;;               2015 Chris Lemmer-Webber
;;; This code is in the public domain.

;;; The algorithm is inspired by Cormen, Leiserson and Rivest (1990)
;;; "Introduction to Algorithms", chapter 23

;;@code{(require 'topological-sort)} or @code{(require 'tsort)}
;;@ftindex topological-sort
;;@ftindex tsort

;;@noindent
;;The algorithm is inspired by Cormen, Leiserson and Rivest (1990)
;;@cite{Introduction to Algorithms}, chapter 23.

;;@body
;;@defunx topological-sort dag pred
;;where
;;@table @var
;;@item dag
;;is a list of sublists.  The car of each sublist is a vertex.  The cdr is
;;the adjacency list of that vertex, i.e. a list of all vertices to which
;;there exists an edge from the car vertex.
;;@item pred
;;is one of @code{eq?}, @code{eqv?}, @code{equal?}, @code{=},
;;@code{char=?}, @code{char-ci=?}, @code{string=?}, or @code{string-ci=?}.
;;@end table
;;
;;Sort the directed acyclic graph @1 so that for every edge from
;;vertex @var{u} to @var{v}, @var{u} will come before @var{v} in the
;;resulting list of vertices.
;;
;;Time complexity: O (|V| + |E|)
;;
;;Example (from Cormen):
;;@quotation
;;Prof. Bumstead topologically sorts his clothing when getting
;;dressed.  The first argument to @0 describes which
;;garments he needs to put on before others.  (For example,
;;Prof Bumstead needs to put on his shirt before he puts on his
;;tie or his belt.)  @0 gives the correct order of dressing:
;;@end quotation
;;
;;@example
;;(require 'tsort)
;;@ftindex tsort
;;(tsort '((shirt tie belt)
;;         (tie jacket)
;;         (belt jacket)
;;         (watch)
;;         (pants shoes belt)
;;         (undershorts pants shoes)
;;         (socks shoes))
;;       eq?)
;;@result{}
;;(socks undershorts pants shoes watch shirt belt tie jacket)
;;@end example

(define (tsort dag pred)
  (if (null? dag)
      '()
      (let* ((adj-table (make-hash-table)) ; SLIB was smarter about
                                           ; setting the length...
             (sorted '()))
        (letrec ((visit
                  (lambda (u adj-list)
                    ;; Color vertex u
                    (hashq-set! adj-table u 'colored)
                    ;; Visit uncolored vertices which u connects to
                    (for-each (lambda (v)
                                (let ((val (hashq-ref adj-table v)))
                                  (if (not (eq? val 'colored))
                                      (visit v (or val '())))))
                              adj-list)
                    ;; Since all vertices downstream u are visited
                    ;; by now, we can safely put u on the output list
                    (set! sorted (cons u sorted)))))
          ;; Hash adjacency lists
          (for-each (lambda (def)
                      (hashq-set! adj-table (car def) (cdr def)))
                    (cdr dag))
          ;; Visit vertices
          (visit (caar dag) (cdar dag))
          (for-each (lambda (def)
                      (let ((val (hashq-ref adj-table (car def))))
                        (if (not (eq? val 'colored))
                            (visit (car def) (cdr def)))))
                    (cdr dag)))
        sorted)))

(define topological-sort tsort)
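
For what it's worth, here's what calling it looks like at the REPL, using the Cormen dressing example from the docstring above. (One caveat: since this version hardwires Guile's hashq tables, keys are effectively compared with eq? regardless of which pred you pass.)

scheme@(guile-user)> (tsort '((shirt tie belt)
                              (tie jacket)
                              (belt jacket)
                              (watch)
                              (pants shoes belt)
                              (undershorts pants shoes)
                              (socks shoes))
                            eq?)
$1 = (socks undershorts pants shoes watch shirt belt tie jacket)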

Well there you go, it's in the public domain, so I guess you can just drop it straight into your project. Maybe Guile ought to have this included in its standard library? Lots of projects seem to reinvent it... maybe I should submit a patch...

Guix package manager without "make install"

By Christine Lemmer-Webber on Sun 08 February 2015

I've been interested in Guix, the GNU functional package manager based on Guile and drawing much from Nix (though, I think, as a complete system, Guix is much more interesting... maybe a topic of its own post). After a great FOSDEM 2015, where amongst other things I got to speak to and hear from the super nice Guix developers (and even communicated much about the challenges with deploying libre webapps, a topic of one of the presentations I gave), I decided to finally install the thing.

Trouble is, I'm mostly interested in playing with the package manager (there's now a complete Guix distro called the "Guix System Distribution", but it's a bit too alpha yet, and I'd like to play around with the package manager first anyhow), and I run Debian... but Guix isn't yet packaged for Debian (though I would like it to be!). And I hate... hate hate hate... doing a "make install" of a package onto my system. "make uninstall" is so unreliable, and I like to keep my system as a combination of system packages (so, stuff through apt/dpkg) and stuff I've built but keep in my user's home directory. (Though adding Guix to the mix now adds a third middle ground!) Plus, what if I want to hack on Guix? It seems like being able to have Guix use the code straight from my dev checkout is best, right?

Dave Thompson suggested that I do what he does: run Guix on top of Debian without a "make install", straight from the git repo. This was indeed what I wanted to do, but it took me a while to figure out the exact process to get it to work (especially getting the emacs mode going with this setup), so I'm documenting it here... maybe it can be helpful to someone else as well?

So we're mostly going to follow the install docs, right up until the point where we'd do a "make install"... which we obviously won't. So:

# git clone the repo and cd into it
$ git clone git://git.savannah.gnu.org/guix.git
$ cd guix

# Install appropriate dependencies... insert your distro package manager here
# Not actually sure if -dev needed on libgcrypt20 or not :)
$ sudo apt-get install guile-2.0 libgcrypt20 libgcrypt20-dev build-essential \
    guile-gnutls guile-2.0-dev libsqlite3-dev pkg-config

# Build the package
$ ./bootstrap && ./configure && make

# Make the "worker users" and their group... this allows the daemon
# to offload package building while keeping things nicely contained
$ sudo groupadd guix-builder
$ for i in `seq 1 10`; do
    sudo useradd -g guix-builder -G guix-builder           \
                 -d /var/empty -s `sudo which nologin`          \
                 -c "Guix build user $i" --system          \
                 guix-builder$i;
  done

# Make the /gnu/store directory, where packages are kept/built
$ sudo mkdir -p /gnu/store
$ sudo chgrp guix-builder /gnu/store
$ sudo chmod 1775 /gnu/store

(Supposedly there's a way to run things without the extra users and /gnu/store, but you lose a lot of benefits... personally it seems to me that having the store and the daemon is worth it.)

Okay, you're done! Now you should be able to run guix straight from your checkout. But you can't run it directly; you need to use the ./pre-inst-env script, which has many environment variables set... after all, we aren't doing a make install, but guix needs to know where all our stuff is! First let's start the daemon... it runs as root, but don't worry, all package building happens as the guix-builder users we set up above:

$ sudo ./pre-inst-env guix-daemon --build-users-group=guix-builder

So, maybe leave that running in a screen session, or daemonize it, or something, but leave it running. For now we're just testing, so whatever.

This is optional, but you may want to "authorize" the Guix substitutes repository. (Thanks to mthl in #guix on freenode for pointing this out!) Otherwise, Guix will spend a lot of time rebuilding things itself that have already been built elsewhere:

$ sudo ./pre-inst-env guix archive --authorize < hydra.gnu.org.pub

Now we can actually run guix! (Maybe from another terminal?):

# a bunch of stuff should run by... but the package will install
$ ./pre-inst-env guix package -i hello

$ ~/.guix-profile/bin/hello
Hello, world!

Great! It ran. But okay... all three of the last commands were annoying. We probably want a way to launch these much more easily, so let's add aliases for guix-daemon and guix, as well as putting our user's guix profile on our PATH. (Yes, that's right... different users can have different packages installed with guix pretty easily.) So, let's edit our ~/.bashrc and add the following lines:

function guix-enable() {
    # Guix stuff

    alias guix="~/devel/guix/pre-inst-env guix"
    alias guix-daemon="sudo ~/devel/guix/pre-inst-env guix-daemon --build-users-group=guix-builder"

    # add guix's bin to the path
    export PATH=$HOME/.guix-profile/bin:$PATH
    # and others
    export PYTHONPATH="$HOME/.guix-profile/lib/python3.4/site-packages"
    export GUILE_LOAD_PATH="$GUILE_LOAD_PATH:$HOME/.guix-profile/share/guile/site/2.0/"
    export GUILE_LOAD_COMPILED_PATH="$GUILE_LOAD_COMPILED_PATH:$HOME/.guix-profile/share/guile/site/2.0/"
}

You can "source ~/.bashrc" if you have existing terminals open, or start new terminals / bash sessions... whatever. :) Anyway, it should now be much easier to run things:

$ guix-enable   # run this whenever you want to play with guix things
$ guix package -i xeyes
$ xeyes

Okay, great! So, now we have a reasonable hacking setup, right?

Well, assuming you want to be using Guix as the emacs of distros, you probably want to take advantage of Guix's great emacs integration.

Unfortunately, I found that the docs on this subject frustratingly did not work with the method used above. Luckily, I got some help on IRC, and with the following additions to my ~/.emacs, things work:

(add-to-list 'load-path "/home/cwebber/devel/guix/emacs")

(setq guix-guile-program '("/home/cwebber/devel/guix/pre-inst-env" "guile"))

(setq guix-load-path "/home/cwebber/devel/guix/emacs")

(require 'guix-init)

Obviously, change directory names as appropriate.

Now M-x guix-all-available-packages should work!

Hopefully I'll have more to say on why I think Guix is pretty interesting, but at the very least, it might be compelling to hear that Guix can be used as a sort of language-agnostic virtualenv. Pretty cool!

Hopefully that helps someone else out there!

Life Update: January 2015

By Christine Lemmer-Webber on Tue 13 January 2015

Hey, so where did 2014 go, amirite guys? Amirite??

2014 in bulleted list form:

  • Most stressful year of my life (and that includes the years I both worked full time and went to school full time), but not a bad one: high peaks and low valleys even out to a good year, but a turbulent one. It wasn't just high and low events contributing to stress; much of the stress has been ambient, from ongoing and difficult events, most of it not really fit for describing on this blog.

  • MediaGoblin campaign went well, but I am tired of doing crowdfunding campaigns. Probably the last I will ever run... or at least the last for a long while. Nearly 5 months from start to wrapup of 60 hour high-stress weeks. But again, it went well! And hey, that video came out pretty great.

  • Hiring Jessica was the smartest move we could have made, and I'm glad we made it.

  • MediaGoblin federation work is going well; next release should make that clearer I hope.

  • Both Jessica and I are on the W3C Social Working Group trying to standardize federation, and I'm excited about that.

  • Hiring Jessica is great, but what to do about my own income? Happy to say I've started contracting for Open Tech Strategies who are great. Working under Karl Fogel is wonderful, and so is contracting for an org that mandates all code written there be free software. Plus, contracting 10 hours a week means I have plenty of other (including MediaGoblin) time left over.

  • Also it's great to have a boss again who is reviewing my performance and seems happy with my work, especially when that boss is almost certainly more technically capable than I am; I forgot how much external affirmation from that is helpful in probably some base human way. I had some great bosses in the not too distant past, and while I think I'm a pretty decent boss to others, Morgan has pointed out that I am a really mean boss to myself.

  • Despite ambient stress for both of us, my relationship with Morgan goes well, maybe better this year than ever.

  • Got nerdier, started playing tabletop role playing games with friends a lot. Board games too.

  • Living in Madison is good.

  • We are currently caring for a dog since another family member can't keep her where she is staying. Aside from temporary dog-sitting, I've never lived somewhere with a dog that I am caring for... it's interesting.

  • The first half of the year was crazy due to the MediaGoblin campaign (again, I think it went great, and I had lots of help from great friends, just stressful), the second half crazy due to... well, a pile of things that are too personal for a blog (yeah I know I already said that). But everything came to a head right at the end of the year. This year burnt me the hell out.

  • This made me pretty useless in December, and makes me feel terrible because I pushed vocally for a MediaGoblin release and failed to hold up my end of things to make it happen. I need to get back on track. This will happen, but in the meanwhile, I feel shitty.

  • Burnout recovery has been productive; an odd thing to say maybe, but I seem to be getting a lot done, but not always on the things I think I should be.

  • I feel more confident in myself as a programmer than before... I've always felt a large amount of impostor syndrome because I don't really have a computer science degree... I'm a community trained hacker (not to knock that, but it's hard to not feel insecure because of it).

    But this year I did some cool things, including getting patches in to a fun language, and I worked on an actor model system that I think has a hell of a lot of promise if I could just get the damned time for it. (If only I had time to solve error propagation and the inter-hive-communication demos...) I did every exercise in The Little Schemer (a real joy to work through) and I feel like hey, I finally understand how to write code recursively in a way that feels natural. And it turns out there is a MELPA-installable texinfo version of SICP and I've been slowly working my way through it when I'm too tired to do anything else but want to pretend to be productive (which has been a lot of the last month). Still so much to learn though, but I appreciate the bottomless well aspect of programming.

  • Aside from the MediaGoblin campaign video, not a lot of artwork done this year. Hrm.

  • A couple of friends this year have said the "I've been doing nothing but Python webdev for years and I need to mix it up" thing, and to those friends: I hear you. Maybe hence the above?

  • Aside from MediaGoblin I've been doing a lot more chipping away at tiny bits of some free software projects, though maybe nothing significant enough to blog about yet; there's a deployment system in there, and a game thing, and some other stuff. Nothing MediaGoblin sized, though. (Whew!)

  • Enjoying learning functional reactive programming (and expanding my understanding of Scheme) with Sly. Unfortunately still under-documented, but it's getting better, and davexunit is answering lots of questions for me. I might write a tutorial, or tutorials, soon.

  • Another ASCII art logo I made for a FOSS project got vectorized and made way better than my original version, but that deserves its own post.

  • I continue to be able to work on free software full time, which is great.

I feel like the start of 2015 has already been moving upward, and has been much less stressful than the end of 2014. I hope that curve can keep moving up. And I hope I can keep up what I feel, despite me nearly going insane from various facets of it, is a fairly productive life.