Chicago GNU/Linux talk on Guix retrospective

By Christine Lemmer-Webber on Thu 01 October 2015

GuixSD logo

Friends... friends! I gave a talk on Guix last night in Chicago, and it went amazingly well. That feels like an undersell actually; it went remarkably well. There were 25 people, and apparently there was quite the waitlist, but I was really happy with the set of people who were in the room. I haven't talked about Guix in front of an audience before and I was afraid it would be a dud, but it's hard to explain the reaction I got. It felt like there was a general consensus in the room: Guix is taking the right approach to things.

I didn't expect this! I know some extremely solid people who were in the audience, and some of them are working with other deployment technologies, so I expected at some point to be told "you are wrong", but that moment didn't come. Instead, I met a large amount of enthusiasm for the subject, a lot of curious questions, and well... there was some criticism of the talk, though it mostly came down to presentation style and approach. Also, I had promised to give two talks, one about federation and one about Guix, but instead I just gave the latter and splatted over the former's time as well. Though people seemed to enjoy themselves enough that I was asked to come back again and give the federation talk too.

Before coming to this talk, I had wondered whether I had gone totally off the yaks in heading in this direction, but giving this talk was worth it, at least in that the community reaction has been a huge confidence booster. It's worth pursuing!

So, here are some things that came out of the talk for me:

  • The talk was clear; people generally said that though I went into the subject in quite some depth, things were well understood and unambiguous to them.
  • This is important, because it means that once people understood the details of what I was saying, they had a better opportunity to evaluate whether it was true or not... and so the general sense of the room that this is the right approach was reassuring.
  • A two-tier strategy for pushing early adoption with "practical developers" probably makes sense:

    • Developers seem really excited about the "universal virtualenv" aspect of Guix (using "guix environment"; see the sketch after this list), and this is probably a good feature with which to start gaining adoption.
    • Getting GuixOps working and solidly demonstrable
  • The talk was too long. I think everything I said was useful, but I literally filled two talk slots. There are some obvious things that can be cut or reduced from the talk.
  • In a certain sense, this is also because the talk was not one, but multiple talks. Each of these items could be cut to a brief slide or two and then expanded into its own talk:

    • An intro to functional programming. I'm glad to say this intro was very clear, and though already concise, it could be reduced within the scope of this talk to two quick slides rather than four with code examples.
    • An "Intro to Guile"
    • Lisp history, community, and its need for outreach and diversity
    • "Getting over your parenthesis phobia"
  • I simply unfolded an orgmode tree while presenting the talk, and while this made things easy on me, it's not very interesting for most audience members (though my friend Aeva clearly enjoyed it)
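
For those who haven't seen it, here's roughly what that "universal virtualenv" usage looks like (a sketch only; the package names are just examples, and exact flags may vary by Guix version):

    # spawn a shell in an environment with all the dependencies of Guix itself
    guix environment guix

    # or an ad-hoc environment containing exactly the tools you ask for
    guix environment --ad-hoc guile python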

Additionally, upon hearing my talk, my friend Karl Fogel seemed quite convinced about Guix's direction (and there are few people whose analysis I'd rate higher). He observed that Guix's fundamentals seem solid, but that what it probably needs at this point is institutional adoption to bring it to the next level, and he's probably right. He also pointed out that it's not too much to ask for an organization to invest in Guix at this point, considering that developers are using far less stable software than Guix to do deployments. He suggested I try to give this talk at various companies, which could be interesting... well, maybe you'll hear more about this in the future. Maybe as a trial run I should submit some podcast episodes to Hacker Public Radio or something!

Anyway, starting next week I'm putting my words to action and working on doing actual deployments using Guix. Now that'll be interesting to write about! So stay tuned, I guess!

PS: You can download the orgmode file of the talk or peruse the html rendered version or even better check out my talks repo!

Userops Acid Test v0.1

By Christine Lemmer-Webber on Sun 27 September 2015

Hello all!

So a while ago we started talking about this userops thing. Basically, the idea is "deployment for the people", focusing on user computing / networking freedom (in contrast to "devops"; benefits to large institutions are sure to come as a side effect, but are not the primary focus). There's kind of a loose community surrounding the term now, and a number of us are working towards solutions. But I think something that has been missing, for me at least, is something to test against. Everyone in the group wants to make deployment easier. But what does that mean?

This is an attempt to sketch out requirements. Keep in mind that I'm writing out this draft on my own, so it might be that I'm missing some things. And of course, some of this can be interpreted in multiple ways. But it seems to me that if we want to make running servers something for mere mortals to do for themselves, their friends, and their families, these are some of the things that are needed:

  1. Free as in Freedom:

    I think this one's a given. If your solution isn't free and open source software, there's no way it can deliver proper network freedoms. I feel like this goes without saying, though it's not considered a requirement in the "devops" world... the focus is different there. We're aiming to liberate users, so your software solution should of course itself start with a foundation of freedom.

  2. Reproducible:

    It's important that users be able to have the same system produced over and over again. This is important for experimenting with a setup before deployment, and for ensuring that issues are reproducible, so friends and communities can help each other debug problems when they run into them. It's also important for security: you should be able to be sure that the system you have described is the system you are really running, and that if someone has compromised your system, you are able to rebuild it. And you shouldn't be relying on someone to build a binary version of your system for you, unless there's a way to rebuild that binary version yourself and a way to be sure that the binary corresponds to the system's description and source. (Use the source, Luke!)

    Nonetheless, I've noticed that when people talk about reproducibility, they sometimes are talking about two distinct but highly related things.

    1. Reproducible packages:

    The ability to compile from source any given package in a distribution, and to have clear methods and procedures to do so. While this has been a given in the free software world for a long time, there's been a trend in the devops-type world towards deciding that packaging and deployment in modern languages has gotten too complex, and so simply relying on some binary deployment. For the reasons described above and more, you should be able to rebuild your packages... *and* all of your packages' dependencies... and their dependencies too. If you can't do this, it's not really reproducible.

    An even better goal is to guarantee not only that packages can be built, but that the results are byte-for-byte identical when built upon the same dependencies on the same architecture. The Debian Reproducibility Project is a clear example of this principle in action.

    2. Reproducible systems:

    Take the package reproducibility description above and apply it to a whole system. It should be possible, one way or another, to either describe or keep a record of (or even better, both) the system that is to be built, and to rebuild it again. Given the selected packages, configuration files, and anything else that is not "user data" (which is addressed in the next section), it should be possible to end up with the very same system that existed previously.

    As with many things on this list, this is somewhat of a gradient. But one extrapolation, if taken far enough, is I believe a useful one (and ties in with the "recoverable system" part below): systems should not necessarily be dependent upon the date and time they are deployed. That is to say, if I deployed a system yesterday, I should be able to redeploy that same system today on an entirely new machine using all the packages that were installed yesterday, even if my distribution now has newer, updated packages. It should be possible for a system to be reproduced towards any state, no matter what fixed point in time we were originally referring to.
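
    As one concrete illustration (a sketch only; I'm assuming a GuixSD-style setup here, and my-system.scm is a hypothetical file name): because the whole system is derived from one declaration, redeploying yesterday's system is mostly a matter of keeping that declaration (and, ideally, the exact package set it referred to) around:

        # build the system described in my-system.scm without activating it
        guix system build my-system.scm

        # instantiate that same description, today or any other day
        guix system reconfigure my-system.scm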

  3. Recoverable:

    Few things are more stressful than having a computer that works, is doing something important for you, and then something happens... and suddenly it doesn't work, and you can't get back to the point where your computer was working anymore. Maybe you even lost important data!

    If something goes wrong, it should be possible to set things right again. A good userops system should make this possible. There are two domains to which this applies:

    1. Recoverable data:

    In other words, backups. Anything that's special, mutable data the user wants to keep fits in this territory. As much as possible, a userops system should seek to make running backups easy. Identifying, based on the system configuration, which files to copy and providing this information to a backup system, or simply keeping all mutable user data in one easy-to-back-up location, would save users from having to determine what to back up on their own, which can easily be overwhelming and error-prone for an individual.

    Some data (such as data in many SQL databases) is a bit more complex than just copying over files. For something like this, it would be best if the system could help with dumping this data out to a more appropriate backup serialization.
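
    For example (a sketch only; the database name, schedule, and path are all hypothetical), the system could arrange a nightly dump that lands somewhere the file-level backup already covers:

        # crontab entry: dump the "webapp" postgres database at 3am nightly,
        # into a directory that the rsync/dirvish-style backup already picks up
        0 3 * * * pg_dump webapp > /var/backups/webapp.sql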

    2. Recoverable system:

    Linking somewhat to the "reproducible system" concept, a user should be able to upgrade without fear of becoming stuck. Upgrade paralysis is something I know I and many others have experienced. Sometimes it even appears that an upgrade will go totally fine (you may have tested carefully to make sure it will), but you do the upgrade, and suddenly things are broken. The state of the system has moved to a point where you can't get back! This is a problem.

    If a user experiences a problem in upgrading their system software and configuration, they should have a good means of rolling back. I believe this would remove much of the anxiety from server administration, especially for smaller scale deployments... I know it would for me.

  4. Friendly GUI:

    It should be possible to install the system via a friendly GUI. This should probably be optional; there may be lower level interfaces to the deployment system that some users would prefer to use. But many things probably can be done from a GUI, and thus should be able to be.

    Many aspects of configuring a system require filling in data shared between components; a system should generally follow a Don't Repeat Yourself philosophy. A web application may require the configuration details of a mail transfer agent, and the same web application may also need to provide its own details to a web server such as Nginx or Apache. Users should have to fill in each of these details in one place only, and the system should propagate the configuration to the other components.
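
    To make that concrete, here's a sketch in Guile of the sort of thing I mean (every name here is hypothetical): the user states the shared facts exactly once, and each component's configuration is derived from them rather than re-entered.

        (use-modules (srfi srfi-9))

        ;; the user fills these in exactly once...
        (define-record-type <site-config>
          (make-site-config domain admin-email smtp-host)
          site-config?
          (domain      site-config-domain)
          (admin-email site-config-admin-email)
          (smtp-host   site-config-smtp-host))

        (define site
          (make-site-config "example.org" "admin@example.org" "mail.example.org"))

        ;; ...and each component's configuration is derived from the record
        (define (nginx-server-block config)
          (string-append "server {\n"
                         "  server_name " (site-config-domain config) ";\n"
                         "}\n"))

        (define (webapp-mail-settings config)
          `((smtp-host   . ,(site-config-smtp-host config))
            (admin-email . ,(site-config-admin-email config))))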

  5. Scriptable:

    Not everyone should have to work with this layer directly, but everyone benefits from scriptability. Having your system be scriptable means that users can properly build interfaces on top of it, and additional components that extend it in directions you may not be able to pursue directly. For example, you might not have to build a web interface yourself; if your system exposes its internals in a language capable of building web applications, someone else can do that for you. Similarly with provisioning, etc.

    Building on the previous section, bonus points if the GUI can "guide users" into learning how to work with the lower level components; the Blender UI is a good example of this. Most of its users are artists who are not programmers, but hovering over user interface elements exposes their Python equivalents, and so many artists who do not start out as developers become so by extending the program for their needs bit by bit. (Emacs has similar behavior, but is already built for developers, so it is not as good an example.) "Self extensibility" is another way to look at this.

  6. Collaboration friendly:

    Though many individuals will be deploying on their own, many server deployments are set up to serve a community. It should be possible for users to collaborate on deployment. This may mean a variety of things, from being able to collaborate on configuration, to having an easy means to reproduce a system locally.

    Additionally, many deployments share steps. Users should be able to help each other out and share "recipes" of deployment steps. The most minimalist (and least useful) version of this is something akin to snippet sharing on a wiki. Wikis already exist, so more interestingly, it should be possible to share deployment strategies as code, proceduralized in some form. Ideally, deployment recipes would be made available much as packages are in present distributions, with the appropriate slots left open for customization for a particular deployment.

  7. Fleet manageable:

    Many users have not one but many computers to take care of these days. Keeping so many systems up to date can be very hard; being able to do so for many systems at once (especially if your system allows them to share configuration components) can help a user actually keep on top of things and lead to fewer neglected systems.

    There may be different sets, or "fleets", of computers to take care of... a user may discover that she needs to take care of a set of computers for her (and maybe her loved ones') personal use, but that she also has servers to take care of for a hobby project, and another set of servers for work.

    Not all users require this, and perhaps this can be provided at another layer via some other scripting. But enough users are in "maintenance overload", keeping track of too many computers, that this should probably be provided.

  8. Secure:

    One of the most important and yet most open ended requirements: proper security is critical. Security decisions usually involve tradeoffs, so which security decisions are made is left somewhat open ended here, but there should be a focus on security within your system. Most importantly, good security hygiene should be made easy for your users, ideally as easy as or easier than not following good hygiene.

    Particular areas of interest include: encrypted communication, preferring or enforcing key-based authentication over passwords, and isolating and sandboxing applications.

To my knowledge, at this time no system provides all the features above in a way that is usable for many everyday users. (I've also left some ambiguity in how to achieve these properties, so in a sense this is not a pass/fail test, but rather a set of properties to measure a system against.) In an ideal future, more userops-type systems will provide the above properties, and ideally not all users will have to think too much about their benefits. (Though it's always great to give the opportunity to users who are interested in thinking about these things!) In the meanwhile, I hope this document provides a useful basis for implementing these properties, and for thinking about how one's implementation maps against them!

    Wisp: Lisp, minus the parentheses

    By Christine Lemmer-Webber on Wed 23 September 2015

    Arne Babenhauserheide has built a really cool syntax alternative for Scheme, Wisp (not to be confused with a different lisp-related Wisp), or in its standards version, SRFI 119. It looks pretty nice:

    ;; hello world example
    display                             ;    (display
      string-append "Hello " "World!"   ;      (string-append "Hello " "World!"))
    display "Hello Again!"              ;    (display "Hello Again!")
    
    ;; hello world function
    define : hello who                  ;    (define (hello who)
      display                           ;      (display 
        string-append "Hello " who "!"  ;        (string-append "Hello " who "!")))
    

    Actually, let's see that in emacs, just to be sure.

    Wisp and hello world

    How about something slightly more substantial? How about a real life Guix package for GNU Grep:

    Wisp, Emacs, Guix and Grep

    Wow, not bad... not bad at all! I'd say that's quite readable! (Too bad the lines don't line up exactly in that screenshot; that's not the code but rather my emacs theme bolding the wisp code.)

    What's nice is that unlike most s-expression alternatives, it doesn't lack any of the power of Lisp; it's "just lisp" with the parentheses hidden by vaguely pythonesque indentation, which means even macros work.

    Now me personally? I've learned to love the parens, and there's nothing that beats an editor that knows how to do cool structural s-expression editing and navigation. But I admit that learning to read through all the parentheses was a tough thing for me initially, and certainly is for many others. Maybe this can help boil the lisp frog for some.

    Now what would really be hylarious would be to port this to Hy...

    More careful exceptions in Guile

    By Christine Lemmer-Webber on Sat 05 September 2015

    So as I've probably said before, I've been spending a lot more time hacking in Guile lately. I like it a lot!

    However, there is one thing that really irks me: error handling. Though a programmer in Guile has a lot of flexibility to define their own error handling mechanisms, really I think a language should be providing good builtin ways of doing so. Guile does provide some builtin methods, but I have problems with both of them.

    The first is the more egregious of the two, and is a procedure known simply as error, which takes one argument: a string describing what went wrong. Usage looks like so:

    (if (something-bad? thing)
      (error "You shouldn't have done that!"))

    This is fast to toss into your code without thinking, but at serious cost. The problem is that this follows the "diaper pattern" (or "diaper anti-pattern"?). Guile provides a catch procedure, but if you try catching these errors, you'll find they are all thrown with the 'misc-error symbol, so there is no way to catch just the errors you mean to.

    (catch 'misc-error
      ;; the code we're running
      (lambda ()
        (let ((http-response (get-some-url)))
          (if http-response
              ;; all went well, continue with our webby things
              (do-web-things http-response)
              ;; Uhoh!
              (error "the internet's tubes are filled"))))
      ;; The code to catch things
      (lambda _ (display "sorry, someone broke the internet\n")))

    But wait... what if the user gave a keyboard interrupt and your database execution code caught it instead? If you can't catch errors precisely, things might bubble up to the wrong place.

    This is not an abstract problem; this happened to me in an extremely well written Guile program, Guix: I was working on adding a new package and had screwed up the definition, so somewhere up the chain Guix threw an error about my malformed package, but I didn't know that... instead, when I attempted to run the "guix package" command to test out my change, suddenly the "guix package" command disappeared entirely. Whaaaaat? I did some debugging and found a (catch 'misc-error) in the command line argument handling code. Whew! Well, that usage of (error) got replaced with some more careful code, but what if I hadn't been able to find it, or had been a greener developer?

    So, luckily, Guile does provide a better exception handling system, mostly. There's throw, which looks a bit like this in your code:

    (catch 'http-tubes-error
      ;; the code we're running
      (lambda ()
        (let ((http-response (get-some-url)))
          (if http-response
              ;; all went well, continue with our webby things
              (do-web-things http-response)
              ;; Uhoh!
              (throw 'http-tubes-error "the internet's tubes are filled"))))
      ;; The code to catch things
      (lambda _ (display "sorry, someone broke the internet\n")))

    Okay, great! This is much more specific, yay!

    Except... it still kind of bothers me. Maybe I'm being overly pedantic here, but what if you and I both had 'json-error exceptions in our own separate libraries? The problem is that (unlike in Common Lisp) there are no module-specific symbols in Guile! This means we could catch someone else's 'json-error when we really wanted to catch our own.

    Okay, maybe this is rare, but I really don't like running into these kinds of problems. I want my exception symbols to be unique per package, damnit!

    So in the interest of doing so, let me present you with a terrible hack of scheme code (which like all other code content in this blogpost, I both waive under CC0 1.0 Universal (and also do waive any potential patent "rights") and also release under LGPLv3 or later, your choice):

    (define-syntax-rule (define-error-symbol error-symbol)
      (define error-symbol
        (gensym 
         ;; gensym can take a prefix
         (symbol->string (quote error-symbol)))))

    Okay, it's kind of hacky, but what this does is give you a nice convenient way to define unique symbols. (Edit: turns out gensym can take a prefix, so the above code is even easier and less hacky now! Thanks for the tip, taylanub!) You can use it like so:

    (define-error-symbol http-tubes-error)
    
    (catch http-tubes-error
      ;; the code we're running
      (lambda ()
        (let ((http-response (get-some-url)))
          (if http-response
              ;; all went well, continue with our webby things
              (do-web-things http-response)
              ;; Uhoh!
              (throw http-tubes-error "the internet's tubes are filled"))))
      ;; The code to catch things
      (lambda _ (display "sorry, someone broke the internet\n")))

    See? All you have to do is do a simple definition above and you have a unique-per-your-program error symbol (thanks to the gensym). Now if users want to catch your errors, but only your errors, they can import the error symbol directly from your package.

    So the lesson from this post is: if you're going to use exceptions in your code, please be careful... and specific!

    Update: Apparently I'm not the only one who has found the need for this; it turns out that prompts (which have a similar "unwinding" property to exceptions) also take tags, and usefully there's (make-prompt-tag), which does pretty much exactly the same thing as define-error-symbol above. So I must not be totally crazy!
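
    In fact (a sketch, assuming make-prompt-tag keeps behaving as the thin gensym wrapper it is in Guile 2.0), it can stand in for define-error-symbol directly:

    ;; make-prompt-tag returns a fresh symbol built from the given prefix,
    ;; so this behaves like the define-error-symbol version above
    (define http-tubes-error (make-prompt-tag "http-tubes-error"))
    (throw http-tubes-error "the internet's tubes are filled")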

    In which I receive the O'Reilly Open Source Award

    By Christine Lemmer-Webber on Tue 01 September 2015

    Well, I'm late in putting this one out there, but it's still worth putting on the record! About a month ago, I was fortunate enough to receive the O'Reilly Open Source Award. In fact, here's a picture of me receiving it!

    receiving the award
    Photo taken by Brandin Grams, CC BY 4.0, originally microblogged by Karen Sandler

    (... there's a video too!)

    So, getting the award was exciting and unexpected. Exciting, because just look at the list of people who have received this award previously!

    Even to just share the stage with friends Marina Zhurakhinskaya of Outreachy, Stefano Zacchiroli of Debian, and Sarah Mei was a huge honor (okay, Sarah Mei is not someone I know, but I know very well of Railsbridge and the many ways it has paved the way for diversity initiatives in free software). Plus, even though I don't know much about Hadoop, I know others who do, and the looming head of Doug Cutting behind us in the video recording excited them! And looking at former recipients, nearly everyone on this list is a person who has made humongous strides in shaping the world of free and open source software, and let's face it, almost everyone on this list has done more in that capacity than I have.

    Which leads me to the surprise part... I was really not expecting this award, so when I saw the email informing me I was being awarded it, I began to flag it as spam, assuming it was something that snuck past spamassassin. But wait, I actually know this award! I then proceeded to try to get Morgan's attention (she was on the phone) by exuberantly waving my arms, which led our dog to jump up and also wave her forelegs in a very similar fashion.

    So that is all to say I'm very honored to have received the thing!

    So then I knew I had received the award, but I didn't know until the ceremony the text under which I was nominated, and I was really happy when I heard it said: for my free software advocacy (in those words) and my work on GNU MediaGoblin (GNU pronounced with each letter, as if a university chant... G! N! U!). I suspected as much, and I'm glad I received the award under that description.

    And that is also to say I am really glad for the recognition!

    The recognition does help, in some way, as a counterbalance to other feelings. In another sense, I also look at the list of other people who have received the award and I feel a bit embarrassed, because I think maybe I don't deserve it. I had confided this to some friends (I guess it's no longer confiding now that I'm writing it in a blogpost), and the response has mostly been something along the lines of "of course you do, just feel good about the thing", and of course that's also the kind of thing you want to hear from your friends, and it would be pretty embarrassing if they didn't say those kinds of things in response, and you begin to wonder whether you're all playing some sort of scripted roles, and typing this I even wonder if this "I don't deserve it" text is doing that too, and maybe it is, maybe everything is... but I still think sometimes that I am fooling everyone, and that when I am given recognition it's because I am some kind of tricky person, tricky enough to trick people into giving me an award for a pile of incomplete things.

    I think there are a variety of reasons for this, one of the more obvious being that MediaGoblin is moving along but isn't there yet, something I think nobody is more deeply aware of than myself. And people are still locked down by proprietary network services, we still don't have a generally agreed upon federation standard, and it is still really hard to run your own server. These aren't my battles alone, and a good number of us are working on them (I even have interesting things to report in these areas that I have not yet blogged about), but when I think about things not being done, I feel personally responsible. Part of the challenge also is that I do not generally look at my life in terms of the things I have accomplished, but in terms of the things I haven't, and obviously that is not a great strategy for feeling good about the things you've done.

    But let me turn this blogpost back around again, before I look like some kind of jerk who gets an award and then mopes about it!

    In this same sense the recognition has helped a lot. Not everything that I want to see done is done yet, but the O'Reilly Open Source Award selection process happens through previous winners, so clearly people who know better than I do have recognized that the things we've done and the direction we're heading are the right ones.

    You may notice that I'm switching between "I" and "we", so a bit more on that... it's worth noting that any accomplishments I have are connected to some significant free software community, so they're hardly just "my" accomplishments... that statement about the roads we're heading down being the right ones applies to all the people in the communities of which I've become some apparent type of minor figurehead. (Be wary of pedestals; with coordination like mine, they're a great way to dash oneself against the floor...)

    So! That's a lot of words, but the crux of it is that I'm still excited to have gotten the award, and as much of a worrier as I am, the recognition is especially nice as a kind of reassurance. In the meanwhile, I have a fancy new bookshelf and a nice little display for this hunk of glass:

    On the bookshelf

    On the bookshelf, zoomed in a bit

    O'Reilly award, on display

    Pretty cool hunk of glass, right? I think so!

    PS: John Sullivan snapped this image of Stefano Zacchiroli and me right after the ceremony, on a trip to Powell's Books.

    Award Winners at Powell's
    "Award Winners" photo taken by John Sullivan CC BY-SA 4.0

    Why I Am Pro-GPL

    By Christine Lemmer-Webber on Tue 21 July 2015

    Last night at OSCON I attended the lightning talks (here called "Ignite Talks"). Most of them were pretty good (I especially loved Emily Dunham's "First Impressions (the value of the 'noob')" talk), but the last talk of the night was titled "Why I don't use the GPL" by Shane Curcuru, "VP of Brand Management at the Apache Software Foundation" (an association he invoked during his talk last night, which made me wonder if he was speaking on behalf of the ASF; that seemed surprising). (Edit: it has been confirmed that this was not intended to be speaking on behalf of the ASF, which is good to hear. I don't have a recording, so I'm not sure if Shane invoked his association or if the person doing the introductions did.)

    It was a harsh talk. It was also the last talk of the night, and there was really no venue to respond to it (I looked to see if there would be future lightning talk slots at this conference, but there aren't). Though the only noise from the audience was applause, I know that doesn't mean everyone was happy, just polite... a number of my friends got up and left in the middle of the talk. But it needs a response... even if the only venue I have at the moment is my blog. That'll do.

    So let me say it up front: my name is Chris Lemmer-Webber, and I am pro-GPL and pro-copyleft. Furthermore, I'm even pro-permissive (or "lax") licensing; I see no reason our sides should be fighting, and I think we can work together. This is one reason why this talk was so disappointing to me.

    There's one particular part of the talk that really got to me though: at one point Shane said something along the lines of "I don't use copyleft because I don't care about the source code, I care about the users." My jaw dropped open at that point... wait a minute... that's our narrative. I've written on this before (indeed, at the time I thought that was all I had to say on this subject, but it turns out that's not true), but the most common form of anti-copyleft argument is a "freedom to lock down" position (see how this is a freedom to remove freedoms position?), and the most common form of pro-copyleft argument is a "freedom for the end-user" position.

    Now, there is an anti-copyleft position which does take a stance that copyleft buys into a nonfree system -- you might see this from the old school BSD camps especially -- a position that copyright itself is an unjust system, and that to use copyright at all, even to turn the mechanisms of an evil machine against itself as copyleft does, is to support this unjust system. I can respect this position, though I don't agree with it (I think copyleft is a convenient tactical move to keep software and other works free). One difficulty with this position though is that to really stay true to it, you logically must be against proprietary software far more than you are against copyleft, and so you had better be against all those companies who are taking permissively licensed software and locking it down. This is decidedly not the position that Shane took last night: he explicitly said that the main reason to use lax licensing and avoid copyleft is that businesses are more willing to participate. Now, there are a good number of businesses which do work with copyleft, but I agree that anti-copyleft sentiments are being pushed from the business world. So let me parse that phrasing for you: copyleft means that everyone has to give back the changes that build upon your work, and not all businesses want to do this. "Businesses are more willing to participate" means that businesses can use your project as a stepping stone for something they can lock down. Some businesses are looking for a "proprietary differentiation point": a way to lock down software and distinguish themselves from their competitors.

    As I said, I am not only pro-copyleft, I am also pro-permissive licensing. The difference between these is tactics: the first tactic is towards guaranteeing user freedom, the second is towards pushing adoption. I am generally pro-freedom, but sometimes pushing adoption is important, especially if you're pushing standards and the like.

    But let's step back for a moment. One thing that's true is that over the last many years we've seen an explosion of free and open source software... at the same time that computers have become more locked down than ever before! How can this be? It seems like a paradox; we know that free and open source software is supposed to free users, right? So why do users have fewer freedoms than ever? Mobile computing, the rise of the executable web, all of this has FOSS at its core, and developers seem to enjoy a lot of maneuverability, but computers seem to be telling us what we can and can't do these days more than we tell them. And notice... the arguments for permissive/lax licensing have grown simultaneously with this trend.

    Free Speech Zone by Mustafa and Aziza
    Free Speech Zone by Mustafa and Aziza, CC BY-SA 2.0

    This is no coincidence. The fastest way to develop software which locks down users for maximum monetary extraction is to use free software as a base. And this is where the anti-copyleft argument comes in, because copyleft may effectively force an entity to give back at this stage... and they might not want to.

    In Shane's talk last night, he argued against copyleft because software licenses should have "no strings attached". But the very strategy that is advocated above is all about attaching strings! Copyleft's strings say "you can use my stuff, as long as you give back what you make from it". But the proprietary differentiation strategy's strings say "I will use your stuff, and then add terms which forbid you to ever share or modify the things I build on top of it." Don't be fooled: both attach strings. But which strings are worse?

    To return to the arguments made last night: though copyleft defends source, in my view this is merely a strategy towards defending users. And indeed, in terms of where the freedoms lie, between the source-and-code side of things and the end-user-application side of things, one might notice a trend: there are very few permissively licensed projects which aim at end users. Most of them are stepping stones towards further software development. And this is great! I am glad that we have so many development tools available, and it seems that permissive/lax licensing is an excellent strategy here. But when I think of projects I use every day which are programs I actually run (for example, as an artist I use Blender, Gimp and Inkscape regularly), most of these are under the GPL. How many truly major end-user-facing software applications can you think of that are under permissive licenses? I can think of many under copyleft, and very few under permissive licenses. This is no coincidence. Assuming you wish to fight for the freedom of the end user, and to ensure that your software remains free for that end user, copyleft is an excellent strategy.

    I have heard a mantra many times over the last number of years to "give away everything but your secret sauce" when it comes to software development. But I say to you, if you really care about user freedom: give away your secret sauce. And the very same secret sauce that others wish to lock down, that's the kind of software I tend to release under a copyleft license.

    There is no reason to pit permissive and copyleft licensing against each other. Anyone doing so is doing a great disservice to user freedom.

    My name is Chris Lemmer-Webber. I fight for the users, and I'm standing up for the GPL.

    Addendum: Simon Phipps points out that all free licenses are "permissive" in a sense. I agree that "permissive" is a problematic term, though it is the most popular term in the field (hence my inclusion also of the term "lax" for non-copyleft licenses). If you are writing about non-copyleft licenses, it is probably best to use the term "lax" rather than "permissive".

    Let's Package jQuery: A Javascript Packaging Dystopian Novella

    By Christine Lemmer-Webber on Fri 01 May 2015

    The state of packaging of libre web applications is, let's face it, a sad one. I say this as one of the lead authors of such a libre web application myself. It's just one component of why deploying libre web applications is also such a sad state of affairs (hence userops). It doesn't help that, for a long time, the status quo in all free software web applications (and indeed all web applications) was to check javascript and similar served-to-client web assets straight into your repository. This is as bad as it sounds, and leads to an even further disconnect (one of many) between the packages that a truly free distro might provide (and have to manually link in after the fact) and those bundled with your own package. Your package is likely to become stuck on a totally old version of things, and that's no good.

    So, in an effort to improve things, MediaGoblin and many other projects have kicked the bad habit of including such assets directly in our repositories. Unfortunately, the route we are taking to do this in the next release is to make use of npm and bower. I really did not want to do this... our docs already include instructions to use Python's packaging ecosystem and virtualenv, which is fine for development, but since we don't have proper system packaging, this means that this is the route users go for deployment as well. Which I guess would be fine, except that my experience is that language package managers break all the time, and when they break, they generally require an expert in that language to get you out of whatever mess you're in. So we've added more language package management... not so great. Now users are even more likely to hit language package management problems, now also ones that our community is less expert at helping debug.

    But what can we do? I originally thought of home-rolling our own solution, but as others rightly pointed out, this would be inventing our own package manager. So, we're sucking it up and going the npm/bower route.

    But wait! There may be a way out... recently I've been playing with Guix quite a bit, and I came to realize that, at least for myself in development, it could be nice to have all the advantages of transactional rollbacks and the like. There is a really nice feature in Guix called guix environment which is akin to a "universal virtualenv" (also similar to JHBuild in Gnome land, but not tied to Gnome specifically)... it can give you an isolated environment for hacking, not restricted to just Python or Javascript or Ruby or C... great! (Nix has something similar called nix-shell.) I know that I can't expect that Guix is usable for everyone right now, but for many, maybe this could be a nice replacement for Virtualenv + Bower, something I wrote to the mailing list about.

    (As an aside, the challenge wasn't the "virtualenv" side of things (pulling in all the server-side dependencies)... that's easy. The challenge is replacing the Bower part: how to link all the statically-served assets from the Guix store right into the package? It's kind of a dynamic linking problem, but for various reasons, linking things into the package you're working on is not really easy to do in a functional packaging environment. But thanks to Ludo's advice, and thanks to g-expressions, things are working!)

    I'm happy to say that today, thanks to help from the list, I came up with such a Virtualenv + Bower replacement prototype using "guix environment". And of course I wanted to test this on MediaGoblin. So here I thought, well, how about just for tonight I test it on something simple. How about jQuery? How hard could that be? I mean, it just compiles down to one file, jquery.js. (Well, two... there's also jquery.min.js...)

    Luckily, Guix has Node, so it has npm. Okay, the docs say to do the following:

    # Enter the jquery directory and run the build script:
    cd jquery && npm run build

    Okay, it takes a while... but it worked! That seemed surprisingly easy. Hm, maybe too easy. Remember that I'm building a package for a purely functional distribution: we can't have any side effects like fetching packages from the web; every package used has to be an input, and also packaged for Guix. We need dependencies all the way up the tree. So let's see, are there any dependencies? There seems to be a node_modules directory... let's check that:

    cwebber@earlgrey:~/programs/jquery$ ls node_modules/
    commitplease          grunt-contrib-uglify  grunt-npmcopy        npm                   sinon
    grunt                 grunt-contrib-watch   gzip-js              promises-aplus-tests  sizzle
    grunt-cli             grunt-git-authors     jsdom                q                     testswarm
    grunt-compare-size    grunt-jscs-checker    load-grunt-tasks     qunitjs               win-spawn
    grunt-contrib-jshint  grunt-jsonlint        native-promise-only  requirejs

    Yikes. Okay, that's 24 dependencies... that'll be a long night, but we can do it.

    Except, wait... I mean, there's nothing so crazy here as in dependencies having dependencies, is there? Let's check:

    cwebber@earlgrey:~/programs/jquery$ ls node_modules/grunt/node_modules/
    async          eventemitter2  glob               iconv-lite  nopt
    coffee-script  exit           grunt-legacy-log   js-yaml     rimraf
    colors         findup-sync    grunt-legacy-util  lodash      underscore.string
    dateformat     getobject      hooker             minimatch   which

    Oh hell no. Okay, jeez, just how many of these node_modules directories are there? Luckily, it's not so hard to check (apologies for the hacky bash pipes which are to follow):

    cwebber@earlgrey:~/programs/jquery$ find node_modules -name "node_modules" | wc -l
    158

    Okay, yikes. There are 158 dependency directories that were pulled down recursively. Wha?? To look at the list is to look at madness. Okay, how many unique packages are in there? Let's see:

    cwebber@earlgrey:~/programs/jquery$ find node_modules -name "node_modules" -exec ls -l {} \; | grep -v total | awk '{print $9}' | sort | uniq | wc -l
    265

    No. Way. 265 unique packages (the list in its full glory), all to build jquery! But wait... there were 158 node_modules directories... each one of these could have its own repeat of, say, the minimatch package. How many non-unique copies are there? Again, easy to check:

    cwebber@earlgrey:~/programs/jquery$ find node_modules -name "node_modules" -exec ls -l {} \; | grep -v total | awk '{print $9}' | wc -l
    493

    So, most of these packages are duplicated about twice over. Hrm... (Update: I have been told that there is an npm dedupe feature. I don't think this reduces the onerousness of packaging outside of npm, but I'm glad to hear it has this feature!)

    Well, there is no way I am compiling jQuery and all its dependencies in this state any time soon. Which makes me wonder, how does Debian do it? The answer seems to be, currently just ship a really old version from back in the day before npm, when you could just use a simple Makefile.

    Well for that matter then, how does Nix do it? They're also a functional package management system, and perhaps Guix can take inspiration there as Guix has in so many other places. Unfortunately, Nix just downloads the prebuilt binary and installs that, which in the world of functional package management is kind of like saying "fuck it, I'm out."

    And let's face it, "fuck it, I'm out" seems to be the mantra of web application packaging these days. Our deployment and build setups have gotten so complicated that I doubt anyone really has a decent understanding of what is going on. Who is to blame? Is it conventional distributions, for being so behind the times and for not providing nice per-user packaging environments for development? Is it web developers, for going their own direction, for not learning from the past, for just checking things in and getting going because the boss is leaning over your shoulder and oh god virtualenv is breaking again on the server since we did an upgrade and I just have to make this work this time? Whose fault is it? Maybe pinning blame is not really the best use of our time anyway, but I feel that these are conversations we should have been having, between distributions and web applications, at least a decade ago. And it's not just Javascript; we're hardly better in the Python world. But it's no wonder that the most popular direction of deployment is the equivalent of rolling a whole distro up into a static binary, and I don't have to tell you what a sad state that is.

    For me, at the moment, I'd like to be more conscious of what it takes to build software, not less. Reproducibility is key to long-term software freedom, else how can we be sure that the software we're running is really the software we say it is? But given all the above, it's hard to not have empathy for those who instead decide to toss in that towel and take a "fuck it, I'm out" approach to deployment.

    But I hope we can do better. In the meanwhile, ensuring that users can actually build and package from top to bottom the software I'm encouraging them to use is becoming more of a priority for me, not less.

    And I guess that may mean, if it isn't really feasible to reproduce your software, I can't depend on it in my own.

    Why is it hard to move from one machine to another? An analysis. [x-post from Userops]

    By Christine Lemmer-Webber on Wed 08 April 2015

    NOTE: This is a shameless cross-post of something I originally sent to the userops list, where we discuss deployment things.


    Hello all,

    For a while I've been considering: why is it so much harder for me to migrate from server to server than it is for me to migrate from desktop to desktop? For years, ever since I discovered rsync, migrating between machines has not been hard. I simply rsync my home directory over to the new machine (or maybe even just keep the old /home/ directory's partition where it is!) and bam, I am done. Backing this up is easy; it's just another rsync away. (I use dirvish as a simple wrapper around rsync so it can manage incremental backups.)
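
    Concretely, the whole migration is roughly one command (a sketch; the user and host names are hypothetical):

    # mirror the old home directory onto the new machine, preserving attributes
    rsync -avz /home/alice/ newbox:/home/alice/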

    If I set up a new machine, it is no worry. Even if my current machine dies, it is mostly no worry. Rsync back my home directory, and done. I will spend a week or so discovering that certain programs I rely on are not there, and I'll install them one by one. In a way it's refreshing: I can install the programs I need, and the old cruft is gone!

    This is not true for servers. At the back of my mind I realized this, but until the end of Stefano Zacchiroli's excellent LibrePlanet talk, when I posed a question surrounding this situation, it hadn't totally congealed in my head: why is it so much harder for me to move from server to server? Assume I even have the old server around and I want to move. It isn't easy!

    So here are some thoughts that come out of this:

    • For my user on my workstation, configuration and data are in the same place: /home/ (including lots of little dotfiles for the configs, and the rest is mostly data). Sure, there's some configuration stuff in /etc/ and data in /var/ but it mostly doesn't really matter, and copying that between machines is not hard.
    • Similarly, in my workstation experience, it is very low stress if I set up a machine and am missing some common packages. I can just install them again as I find them missing.
    • Neither of these are true for my server! In addition to caring about /home/, and even more importantly, I have to worry about configuration in /etc/ and data in /var/. These are both pains in the butt for me for different reasons.
    • Lots of stuff in /etc/ is configuration that interacts with the rest of the system in specific ways. I could rsync it to a new machine, but I feel like that's just blindly copying over stuff where I really need to know how it works and how it was set up with the rest of the machine in the first place.
    • This is compounded by the fact that people rarely set up one machine these days; usually they have to set up several machines for several users. Remembering how all that stuff worked is hard. The only solution seems to be to have some sort of reproducible configuration system. Hence the rise of salt, ansible, etc. But these aren't really "userops" systems, they're "devops"... developer focused. Not only do you need to know how they work, you need to know how the rest of the system works. And it's not easy to share that knowledge.
    • /var/ is another matter. Theoretically, most of my program data goes there (unless, of course, it went to /srv/, god help us). But I can't just rsync it! There are some processes that are very persnickety about the stuff there. I have to dump my databases and such before I can move them or back them up. Nothing sets up an automatic cronjob for me on these; I have to know to dump postgres. Hopefully I set up a cronjob!
    • While I as a workstation user don't stress too much if I'm missing some packages (just install them as I go), that is NOT true of my servers. If my mail servers aren't running, if jabber isn't on, (if SSH isn't running!!!), there are other servers expecting to communicate with my machine, and if I don't set them up, I miss out.
    • Not only this, assuming I have moved between servers correctly, even once I have set up my machine and it has become a perfectly okay running special snowflake, there are certain routine tasks that require a lot of manual intervention, and I have only picked up the right steps by knowing the right friends, having run across the right tutorials which hopefully have shown me the right setup, etc. SSL configuration, I'm looking at you; the only savior that I have is that I have written myself my own little orgmode notes on what to do the next time my certs expire.
    • My servers do become special snowflakes, and that is very stressful to me. I will, in the future, need to set up one more server, and remembering what I did in the past will be very hard.
    • Assuming I use all the mainstream tools, not talking about "upcoming" ones, a better configuration management solution is probably the answer, right? That's a lot to ask of users though: it's not a solution to existing deployment problems, because it doesn't remove the need to know about all the layers underneath; it just adds a new layer to understand.

    Those are all headaches, and they are not the only headaches. But here are some thoughts on things that can help:

    • If I recognize which parts of my system are "immutable" and which parts are "mutable", it's easier to frame how my system works.

      • /var/ is mutable, it's data. There's no making this "reproducible" really: it needs to be backed up and moved around.

      • My packages and system are immutable, or mostly should be. Even if not using a perfectly immutable system like guix/nix, it's helpful to act like this part of the system is pseudo-immutable, and simply derived from some listing of things I said I wanted installed. (Nix/Guix probably do this the most nicely though.)

      • /etc/ is similarly "immutable but derived" in the best case. I should be able to give the same system configuration inputs and always get the same system of packages and configuration files.

    • I like Guix/Nix, but my usage of Debian and Fedora and friends is not going away anytime soon. Nonetheless, configuration management systems like puppet/ansible/salt help give the illusion of an immutable system derived from a set of inputs, even though they are working within a mutable one.

    • Language packaging for deployment needs to die. Yes, I say this as a project that advocates that very route. We're doing it wrong, and I want to change it. (Language packaging for development though: that's great!)

    • Asking people to use systems like ansible/salt/puppet is asking users too much. You're just asking them to learn one more layer on top of knowing how the whole system works. Sharing common code is mostly copy and paste. There are some layers built on top of here to mitigate this but afaict they aren't really good, not good enough. (I am working on something to solve this...)

    • Pre-built containers are not the solution. Sorry container people! Containers can be really useful but only if they are built in some reproducible way. But very few people using Docker and etc seem to be doing this. But here's another thing: Docker and friends contain their own deployment domain specific languages, which is dumb. If a reproducible configuration system is good enough, it should be good enough for a VM or a container or a vanilla server or a desktop. So maybe we can use containers as lightweight and even sandboxed VMs, but we shouldn't be installing prebuilt containers on our servers alone as a system.

      Otherwise you're running 80 heavy and expensive Docker images that slowly go out of date... now you're not maintaining 1 distribution install, you're maintaining 81 of them. Yikes! Good luck with the next Shellshock!

    • Before Asheesh jumps in here: yes, I will say that Sandstorm is taking maybe the best route in terms of a system that uses containers heavily (and unlike Docker's, they seem actually sandboxed), in that it has a separation between mutable parts and immutable parts: from what I can tell, the container is more or less an immutable machine with /var/ mounted into it, which is a pretty good route.

      In this sense I think Sandstorm has a good picture of things. There are other things that I am still very unsure about, and Asheesh knows because I have expressed them to him (I sure hope that iframe thing goes away, and that daemons like Celery can run, and etc!) but at least in this sense, Sandstorm's container story is more sane.

    So there are some reflections in case you are planning on debugging why these things are hard.

    -- Chris

    PS: If you haven't gotten the sense, the direction I'm thinking of is more along the lines of Guix becoming our Glorious Future (TM) assuming something like GuixOps can happen (go Dave Thompson, go Guix crew!) and a web UI can be built on top of it with some sort of common recipe system.

    But I don't think our imperative systems like Debian are going away anytime soon; I certainly don't intend to move all my stuff over to Guix at this time. For that reason, I think there needs to be another program to fit the middle ground: something like salt/ansible/puppet, but with less insane one-off domain specific languages, with a sharable recipe system, and scalable both from developer-oriented scripts but also having a user-friendly web interface. I've begun working on this tool, and it's called Opstimal. Expect to hear more about it soon.

    Interviewed on Ryno the Bearded

    By Christine Lemmer-Webber on Tue 07 April 2015

    I was interviewed on Ryno the Bearded in an episode with the curious title "My Origin Story". I'm not sure whose origin that refers to, maybe both of ours, because we both talked about our backstories. (Though, I think if I were going to lay out my "free software origin story", it would probably include some other things... but maybe it would get fairly rambly. I've thought about trying to write up what that is before, but I guess there were a lot of "moments" for my free software origins, not any one moment like "I was bitten by a libre radioactive spider".)

    I really enjoyed doing this one, and maybe you'll enjoy listening to it. It's kind of rambly and conversational and we came in with very little in terms of questions, but I think I tend to do fairly well in that format.

    I did cut off the end of the interview by saying I had to go to the bathroom though. Not really the most dignified of exits on my part. Oh well.

    Fever Dream: cryptocurrency

    By Christine Lemmer-Webber on Sun 29 March 2015

    I've been sick the last two days, and for some reason, I've been dreaming in lisp. It's not the first time I've dreamt in code, but all times have been in lisp. Maybe lisp is not as readable as many would like, but it seems dreamable.

    Just got up from another nap, another lisp-based fever dream. This one was about a cryptocurrency on an actor model system, built for a MUD/AR type environment... no blockchain required (note: may be inspired by reading Rainbows End between naps):

    • Various actors represent "reserves", which, like the Federal Reserve, can print their own money, as much as they like, but there may be various community enforcements of this
    • You might have different central banks / reserves on different servers
    • The "value" or exchange rate determined by the market, like international currency.
    • Currency is non-divisible (you have 100 rupees, but not .1 rupees)
    • Bank has a private key, so does each actor.
    • Basically more or less passing along capabilities (I think???), but maybe even the central bank signing whoever has "possession". The bank does not know who things are transferred to, but does verify that the transferring owner holds the capability currently attached to that unit's ID, and does issue a new capability to the new owner. (I've tried to sketch this below.)
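
    Attempting to transcribe the fevered idea into code before it evaporates (a sketch only; every name is hypothetical, and no guarantee this is what dream-me meant):

    ;; the bank tracks, per coin id, the capability currently authorized
    ;; to spend that coin; a transfer checks the offered capability and
    ;; mints a fresh one for the new owner
    (define (make-bank) (make-hash-table))

    (define (mint! bank coin-id)
      (let ((cap (gensym "cap")))
        (hash-set! bank coin-id cap)
        cap))

    (define (transfer! bank coin-id offered-cap)
      (if (eq? (hash-ref bank coin-id) offered-cap)
          (let ((new-cap (gensym "cap")))
            (hash-set! bank coin-id new-cap)
            new-cap)                 ; hand this to the new owner
          (error "not the current holder of this coin")))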

    Does this make sense? I'm too sick to know for now. But transactions flying everywhere, wrapped in parentheses, in my fevered mind.

    If I have more crazy lisp fever dreams today I will record them here. No guarantee of sanity.