Posts with tag "scheme"

Hash tables are easy (in Guile)

By Christopher Allan Webber on Mon 09 November 2015

As a programmer, I use hash tables of varying kinds pretty much all day, every day. But one of the odd and embarrassing parts of being a community-trained programmer is that I've never actually implemented one. Eek! Well, today I pulled an algorithms book off the shelf and decided to see how long it would take me to implement their simplest example in Guile. It turns out that it takes less than 25 lines of code to implement a basic hash table with O(1) best time, O(1) average time, and O(n) worst case time. The worst case won't be too common depending on how we size things so this isn't so bad, but we'll get into that as we go along.

Here's the code:

;;; Simple hash table implementation -- (C) 2015 Christopher Allan Webber
;;; Released under the "Any Free License 2015-11-05", whose terms are the following:
;;;   This code is released under any of the free software licenses listed on
;;;     https://www.gnu.org/licenses/license-list.html
;;;   which for archival purposes is
;;;     https://web.archive.org/web/20151105070140/http://www.gnu.org/licenses/license-list.html

(use-modules (srfi srfi-1))

(define (make-dumbhash size)
  "Make a dumb hash table: an array of buckets"
  (make-array '() size))

(define* (dumbhash-ref dumbhash key #:optional (default #f))
  "Pull a value out of a dumbhash"
  (let* ((hashed-key (hash key (array-length dumbhash)))
         (bucket (array-ref dumbhash hashed-key)))
    (or (find (lambda (x) (equal? (car x) key))
              bucket)
        default)))

(define (dumbhash-set! dumbhash key val)
  "Set a value in a dumbhash"
  (let* ((hashed-key (hash key (array-length dumbhash)))
         (bucket (array-ref dumbhash hashed-key)))
    ;; Only act if it's not already a member
    (if (not (find (lambda (x) (equal? (car x) key))
                   bucket))
        (array-set! dumbhash
                    ;; extend the bucket with the key-val pair
                    (cons (cons key val) bucket)
                    hashed-key))))

You might even notice that some of these lines are shared between dumbhash-ref and dumbhash-set!, so this could be even shorter. As-is, sans comments and docstrings, it's a mere 17 lines. That's nothing.
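
For instance, the shared bucket lookup could be pulled out into a little helper. Here's a sketch of that refactoring (dumbhash-bucket is a name I'm introducing here, not code from the listing above):

(define (dumbhash-bucket dumbhash key)
  "Return the bucket (a list of key/value pairs) that KEY hashes into."
  (array-ref dumbhash (hash key (array-length dumbhash))))

;; dumbhash-ref then shrinks to:
(define* (dumbhash-ref dumbhash key #:optional (default #f))
  (or (find (lambda (x) (equal? (car x) key))
            (dumbhash-bucket dumbhash key))
      default))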

We also cheated a little: we're using hash and equal? to generate a hash and to test for equality, which are arguably the hard parts of the job. But these are provided by Guile, and it's one less thing to worry about. Here's a brief demonstration though:

(equal? 'a 'a)               ;; => #t, or true
(equal? 'a 'b)               ;; => #f, or false
(equal? "same" "same")       ;; => #t
(equal? "same" "different")  ;; => #f
(hash "foo" 10)              ;; => 6
(hash 'bar 10)               ;; => 5

equal? is self-explanatory. The important thing to know about hash is that it'll pick a hash value for a key (the first parameter) for a hash table of some size (the second parameter).

So let's jump into an example. make-dumbhash is pretty simple. It just creates an array of whatever size we pass into it. Let's make a simple hash now:

scheme@(guile-user)> (define our-hash (make-dumbhash 8))
scheme@(guile-user)> our-hash
$39 = #(() () () () () () () ())

This literally made an array of 8 items, each of which starts out with the empty list as its value (that's nil for you Common Lispers). (You can ignore the $39 part, which may be different when you try this; Guile's REPL lets you refer to previous results at your prompt by number for fast & experimental hacking.)

So our implementation of hash tables is of fixed size. That doesn't limit the number of items we can put into it, since buckets can hold multiple entries in case of collision (and collisions tend to happen a lot in hash tables, so we come prepared for that), but it does mean we need an up-front guess at how many buckets we'll want for efficiency. (Resizing hash tables is left as an exercise for the reader.) Our hash table also uses simple linked lists for its buckets, which isn't too uncommon as it turns out.

Let's put something in the hash table. Animal noises are fun, so:

scheme@(guile-user)> (dumbhash-set! our-hash 'monkey 'ooh-ooh)
scheme@(guile-user)> our-hash
$40 = #(() () () ((monkey . ooh-ooh)) () () () ())

The monkey went into bucket number 3 (counting from zero). This makes sense, because the hash of monkey for size 8 is 3:

scheme@(guile-user)> (hash 'monkey 8)
$41 = 3

We can get back the monkey:

scheme@(guile-user)> (dumbhash-ref our-hash 'monkey)
$42 = (monkey . ooh-ooh)

We've set this up so that it returns a pair when we get a result, but if we try to access something that's not there, we get #f instead of a pair, unless we set a default value:

scheme@(guile-user)> (dumbhash-ref our-hash 'chameleon)
$43 = #f
scheme@(guile-user)> (dumbhash-ref our-hash 'chameleon 'not-here-yo)
$44 = not-here-yo

So let's try adding some more things to our-hash:

scheme@(guile-user)> (dumbhash-set! our-hash 'cat 'meow)
scheme@(guile-user)> (dumbhash-set! our-hash 'dog 'woof)
scheme@(guile-user)> (dumbhash-set! our-hash 'rat 'squeak)
scheme@(guile-user)> (dumbhash-set! our-hash 'horse 'neigh)
scheme@(guile-user)> ,pp our-hash
$45 = #(()
        ((horse . neigh))
        ()
        ((rat . squeak) (monkey . ooh-ooh))
        ((cat . meow))
        ()
        ((dog . woof))
        ())

(,pp is a shortcut to pretty-print something at the REPL, and I've taken the liberty of doing some extra alignment of its output for clarity.)

So we can see we have a collision in here, but it's no problem. Both rat and monkey are in the same bucket; when we look up a key in our implementation, we pull out the bucket's list and then search that list for the key we want.

We can figure out why this is O(1) average / best time, but O(n) worst time. Assume we made a hash table of the same size as the number of items we put in... assuming our hash procedure gives pretty good distribution, most of these things will end up in an empty bucket, and if they end up colliding with another item (as the rat and monkey did), no big deal, they're in a list. Even though linked lists are of O(n) complexity to traverse, assuming a properly sized hash table, most buckets don't contain any or many items. There's no guarantee of this though... it's entirely possible that we could have a table where all the entries end up in the same bucket. Luckily, given a reasonably sized hash table, this is unlikely. Of course, if we ended up making a hash table that started out with 8 buckets, and then we added 88 entries... collisions are guaranteed in that case. But I already said resizing hash tables is an exercise for the reader. :)
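
Since I keep bringing up that exercise, here's one rough sketch of what a resize could look like, building on the procedures above (dumbhash-resize is hypothetical, not part of the original code): make a bigger table and re-insert every pair, since each key's bucket index depends on the table's size.

(define (dumbhash-resize dumbhash new-size)
  "Return a new dumbhash of NEW-SIZE holding all the entries of DUMBHASH."
  (let ((new-hash (make-dumbhash new-size)))
    ;; every key has to be re-hashed, because its bucket index
    ;; depends on how many buckets the table has
    (array-for-each
     (lambda (bucket)
       (for-each (lambda (entry)
                   (dumbhash-set! new-hash (car entry) (cdr entry)))
                 bucket))
     dumbhash)
    new-hash))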

If you're familiar enough with any Scheme (or probably any other Lisp), reading dumbhash-ref and dumbhash-set! should be pretty self-explanatory. If not, go read an introductory Scheme tutorial, and come back! (Relatedly, I think there aren't many good introductory Guile tutorials... I have some ideas though!)

What lessons are there to be learned from this post? One might be that Guile is a pretty dang nice hacking environment, which is true! Another might be that it's amazing how far I've gotten in my career without ever writing a hash table, which is also true! But the lesson I'd actually like to convey is: most of these topics are not as unapproachable as they seem. I had a long-time fear that I would never understand such code until I took the time to actually sit down and attempt to write it.

As an additional exercise for the reader, here's a puzzle: is the Any Free License this code released under actually a free license? And what commentary, if any, might the author be making? :)

A conversation with Sussman on AI and asynchronous programming

By Christopher Allan Webber on Wed 14 October 2015

[Photo: Sussman!]

A couple weeks ago I made it to the FSF's 30th anniversary party. It was a blast in many ways, and a good generator of many fond memories, but I won't go into depth about them here. One particularly exciting thing that happened for me though was I got to meet Gerald Sussman (of SICP!). The conversation has greatly impacted me, and I've been spinning it over and over again in my mind over the last few weeks... so I wanted to capture as much of it here as I still could. There are things Sussman said that I think are significant... especially in the ways he thinks contemporary AI is going about things wrong, and what a better path forward might be. So here's an attempt to write it all down... forgive me that I didn't have a real tape recorder, so I've written some of this in a conversational style as I remember it, but of course these are not the precise things said. Anyway!

I wasn't sure initially if the person I was looking at was Gerald Sussman or not, but then I noticed that he was wearing the same "Nerd Pride" labeled pocket protector I had seen him wear in a lecture I had watched recently. When I first introduced myself, I asked, "Are you Sussman?" (His first reply was to look astonished and say something like, "Am I known?") I explained that I've been reading the Structure and Interpretation of Computer Programs and that I'm a big fan of his work. He grinned and said, "Good, I hope you're enjoying it... and the jokes! There's a lot of jokes in there. Are you reading the footnotes? I spent a lot of time on those footnotes!" (And at this point my friend David Thompson joined me, and they both chuckled about some joke about object oriented programmers in some footnote I either hadn't gotten to or simply hadn't gotten.)

He also started to talk enthusiastically about his other book, the Structure and Interpretation of Classical Mechanics, in which classical engineering problems and electrical circuits are simply modeled as computer programs. He expressed something similar to what he had said in the aforementioned talk, that conventional mathematical notation is unhelpful, and that we ought to be able to express things more clearly as programs. I agreed that I find conventional mathematical notation unhelpful; when I try to read papers, there are concepts I easily understand as code but can't parse when written as symbolic math. "There's too much operator overloading", I said, "and that makes it hard for me to understand in which way a symbol is being used, and papers never seem to clarify." Sussman replied, "And it's not only the operator overloading! What's that 'x' doing there! That's why we put 'let' in Scheme!" Do you still get to write much Scheme these days, I asked? "Yes, I write tens of thousands of lines of Scheme per year!" he replied.

I mentioned that I work on distributed systems and federation, and that I had seen that he was working on something that was called the propagator model, which I understood was some way of going about asynchronous programming, and maybe was an alternative to the actor model? "Yes, you should read the paper!" Sussman replied. "Did you read the paper? It's fun! Or it should be. If you're not having fun reading it, then we wrote it wrong!" (Here is the paper, as well as the documentation/report on the software... see for yourself!) I explained that I was interested in code that can span multiple processes or machines, are there any restrictions on that in the propagator model? "No, why would there be? Your brain, it's just a bunch of hunks of independent grey stuff sending signals to each other."

At some point Sussman expressed how he thought AI was on the wrong track. He explained that most AI directions were not interesting to him, because they were about building up some solid foundation and then having the AI system run as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and which at first I mistook for being very science-fiction'y, along the lines of, "If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court." (I know, that definitely sounds like out-there science fiction; bear with me... keeping that frame of mind is useful for the rest of this.)

"Oh! This is very interesting to me, I've been talking with some friends about how AI systems and generative software may play in with software freedom, and if our traditional methods of considering free software still applied in that form," I said. I mentioned a friend of a friend who is working on software that is generated via genetic programming, and how he makes claims that eventually that you won't be looking at code anymore, that it'll be generating this black box of stuff that's running all our computers.

Sussman seemed to disagree with that view of things. "Software freedom is a requirement for the system I'm talking about!" I liked hearing this, but didn't understand fully what he meant... was he talking about the foundations on top of which the AI software ran?

Anyway, this all sounded interesting, but it also sounded very abstract. Is there any way this could be made more concrete? So I asked him: if he had a student who was totally convinced by this argument and wanted to start working on this, where would he recommend they start their research? "Read the propagators paper!" Sussman said.

OH! Prior to this moment, I thought we were having two separate conversations, one about asynchronous programming, and one about AI research. Suddenly it was clear... Sussman saw these as interlinked, and that's what the propagator system is all about!

One of the other people who were then standing in the circle said, "Wait a minute, I saw that lecture you gave recently, the one called 'We Don't Really Know How to Compute!', and you talked about the illusion of seeing the triangle when there wasn't the triangle" (watch the video) "and what do you mean, that you can get to that point, and it won't be a black box? How could it not be a black box?"

"How am I talking to you right now?" Sussman asked. Sussman seemed to be talking about the shared symbolic values being held in the conversation, and at this point I started to understand. "Sure, when you're running the program, that whole thing is a black box. So is your brain. But you can explain to me the reasoning of why you did something. At that point, being able to inspect the symbolic reasoning of the system is all you have." And, Sussman explained, the propagator model carries its symbolic reasoning along with it.

Here's a somewhat contrived real-life analogy I've been thinking about: if a child knocks over a vase, you might be angry at them, and they might have done the wrong thing. But why did they do it? If a child can explain to you that they knew you were afraid of insects, and swung at a fly going by, that can help you debug that social circumstance so you and the child can work together towards better behavior in the future.

So now, hearing the above, you might start to wonder if everything Sussman is talking about means needing a big complicated suite of natural language processing tools, etc. Well, after this conversation, I got very interested in the propagator model, and to me at least, it's starting to make a lot of sense... or at least seems to. Cells' values are propagated from the results of other cells, but they also carry the metadata of how they achieved that result.

I recommend that you read the materials yourself if this is starting to catch your interest. (A repeat: here is the paper, as well as the documentation/report on the software... see for yourself!). But I will highlight one part that may help drive the above points more clearly.

The best way to catch up on this is to watch the video of Sussman talking about this while keeping the slides handy. The whole thing is worth watching, but about halfway through he starts talking about propagators, and then he gets to an example where you're trying to measure the height of a building using a variety of methods, and you have these relationships set up where, as information is filled in by a cell's dependencies, that cell merges what it already knew with what it just learned. In that way, you might use multiple measurements to "narrow down" the information. Again, watch the video, but the key part that comes out of the demonstration is this:

(content fall-time)
=> #(contingent #(interval 3.0255 3.0322)
                (shadow super))

What's so special about this? Well, the fall-time has been updated to a narrower interval... but that last part (shadow and super) names the other cells which propagated the information leading to this updated state. Pretty cool! And no fancy natural language parsing involved.

There's certainly more to be extrapolated from that, and more to explore (the Truth Maintenance Systems are particularly something interesting to me). But here were some interesting takeaways from that conversation, things I've been thinking over since:

  • AI should be "accountable", in the sense that it should be able to express its symbolic reasoning, and be checked against whether or not its assumptions actually held.
  • Look more into the propagators model... it's like asynchronous programming meets functional programming meets neural nets meets a bunch of other interesting AI ideas that have been, like so many things, dropped on the floor for the last few decades from the AI winter, and which people are only now realizing they should be looking at again.
  • On that note, there's so much computing history to look at, yet so many people are focused only on whatever the "new hotness" is in web development or deployment. But sometimes looking backwards can help us better look forwards. There are ideas in SICP that people are acting as if they just discovered today. (That said, the early expressions of these ideas are not always the best, and so the past should be a source of inspiration, but we should be careful not to get stuck there.)
  • Traditional mathematical notation and electrical engineering diagrams might not convey clearly their meaning, and maybe we can do better. SICM seems to be an entire exploration of this idea.
  • Free software advocates have long known that if you can't inspect a system, you're held prisoner by it. Yet this applies not just to the layers that programmers currently code on, but also into new and more abstract frontiers. A black box that you can't ask to explain itself is a dangerous and probably poorly operating device or system.
  • And for a fluffier conclusion: "If you didn't have fun, we were doing it wrong." There's fun to be had in all these things, and don't lose sight of that.

Wisp: Lisp, minus the parentheses

By Christopher Allan Webber on Wed 23 September 2015

Arne Babenhauserheide has built a really cool alternative syntax for Scheme, Wisp (not to be confused with a different lisp-related-wisp), or in its standardized form, SRFI 119. It looks pretty nice:

;; hello world example
display                             ;    (display
  string-append "Hello " "World!"   ;      (string-append "Hello " "World!"))
display "Hello Again!"              ;    (display "Hello Again!")

;; hello world function
define : hello who                  ;    (define (hello who)
  display                           ;      (display 
    string-append "Hello " who "!"  ;        (string-append "Hello " who "!")))

Actually, let's see that in emacs, just to be sure.

[Screenshot: Wisp and hello world]

How about something slightly more substantial? How about a real life Guix package for GNU Grep:

[Screenshot: Wisp, Emacs, Guix and Grep]

Wow, not bad... not bad at all! I'd say that's quite readable! (Too bad the lines don't line up exactly in that screenshot; that's not the code but rather my emacs theme bolding the wisp code.)

What's nice is that unlike most alternative syntaxes for s-expressions, it doesn't lose any of the power of Lisp; it's "just lisp" with the parentheses hidden by vaguely pythonesque indentation, which means even macros work.

Now me personally? I've learned to love the parens, and there's nothing that beats an editor that knows how to do cool structural s-expression editing and navigation. But I admit that learning to read through all the parentheses was a tough thing for me initially, and certainly for many others. Maybe this can help boil the lisp frog for some.

Now what would really be hylarious would be to port this to Hy...

Fauxnads

By Christopher Allan Webber on Sat 14 March 2015

So off and on, I've been trying to understand monads. It turns out I have a use case: making web applications in the style I'd like while having them be asynchronous leads to trouble, because you need a way of passing along context that doesn't rely on global variables. I've tried thinking of some solutions, but a friend of mine convinced me that monads, if I could wrap my head around them, would solve the problem for me.

With that in mind, I did some reading... there are plenty of resources out there, some of them with nice pictures, and at one point I tried to understand them by just reading about them and not writing any code relevant to them. This led to me guessing how they might work by trying to contextualize them to the problems I wanted to solve. So along that path, I had a misunderstanding, a mistaken vision of how monads might work, but while I was wrong, it turned out that this vision of ~monads is kind of fun and had some interesting properties, so I decided to code it up into a paradigm I'm jokingly calling "fauxnads".

Brace yourself, we're about to get into code, pseudo- and real, and it's going to be in Guile. (All code in this blogpost, GPLv3+ or LGPLv3+, as published by the FSF!)

So here's the rundown of fauxnads:

  • They still pass along a context, and under the hood, it's still passed in as the first argument to the function!
  • The context gets passed up (and optionally, down... more on that in a few) in some kind of associative array... but we don't want to accidentally change the context that we passed to other functions already, so we'll use something immutable for that.
  • The user doesn't really access the context directly. They specify what variables they want out of it, and the fauxnad macro extracts it for them.
  • Fauxnads can add properties to the context that they'll call subroutines with so that subsequent fauxnad calls can have access to those.
  • Calling child fauxnads happens via invoking a function (=>) exposed to the rest of the fauxnad via some lexical scope hacks.

So when sketching this out, I tried to boil down the idea to a quick demo:

;; fleshed out version of what a fauxnad should approx expand to
;; (vhash-cons, vhash-assoc, and alist->vhash come from (ice-9 vlist))
(use-modules (ice-9 vlist))

(define (context-test1 context arg1 arg2)
  (letrec ((new-context context)
           ;; Define function for manipulating the context
           (context-assoc
            (lambda (key value)
              (set! new-context
                    (vhash-cons key value new-context))))
           ;; a kind of apply function
           (=>
            (lambda (func . args)
              (apply func (cons new-context args))))) ;; should be gensym'ed

    ;; This part would be filled in by the macro.
    ;; The user would set which variables they want from the context
    ;; as well as possibly the default values
    (let ((a (or (vhash-assoc 'a context) "default-value for a"))
          (b (or (vhash-assoc 'b context) "default-value for b"))
          (c (or (vhash-assoc 'c context) "default-value for c")))
      (values
       (begin
         (context-assoc 'lol "cats")
         (=> context-test2 "sup cat")
         (context-assoc 'a "new a")
         (=> context-test2 "sup cat")
         (format #t "a is ~s, b is ~s, and c is ~s\n"
                 a b c)
         (string-append arg1 " " arg2))
       new-context))))

;; intentionally simpler, not a "real fauxnad", to demo
;; the fauxnad concept at its most minimal
(define (context-test2 context arg1)
  (begin
    (format #t "Got ~s from the context, and ~s from args, mwahaha!\n"
            (vhash-assoc 'a context)
            arg1))
  (values
   "yeahhh"
   context))

Then calling in the console:

scheme@(guile-user)> (context-test1 (alist->vhash '((a . 1) (b . 2))) "passed arg1" "passed arg2")
Got (a . 1) from the context, and "sup cat" from args, mwahaha!
Got (a . "new a") from the context, and "sup cat" from args, mwahaha!
a is (a . 1), b is (b . 2), and c is "default-value for c"
$79 = "passed arg1 passed arg2"
$80 = #<vhash 2205920 4 pairs>

Okay, whaaa? Let's look at the requirement again. We'll be passing a context in as the first argument to the function, along with some other args. We'll then pass that context along to subsequent functions. So more or less, we know that looks like this. (I know, not the most useful or pretty or functional code, but it's just a demo of the problem!)

(define (main-request-handler context request)
  ;; print out hello world in the right language
  (display (translate-this (context-get context 'lang) "Hello, world"))
  (newline)

  ;; now call another function
  (next-function context (smorgify request)))

(define (next-function context what-to-smorgify)
  (write-to-db
   ;; Lots of functions need access to the
   ;; current database connection, so we keep it in the context...
   (context-get context 'db-conn)
   (smorgify-this what-to-smorgify)))

But wouldn't it be cool if we didn't have to pass around the context? And what if we just said, "we want this and this from the context", then forgot about the rest of the context? We'd never need to call context-get again! It would also be cool to have a way to set things in the context for subsequent calls. Ooh, and if we could avoid having to type "context" over and over again when passing it into functions, that would also be awesome.

So how about a syntax like this:

(define-fauxnad (our-special-function arg1)
  ((lang "en"))  ;; we want the language variable, but if not set,
                 ;; default to "en" or english
  ;; (body is below:)
  ;; print out hello world in the right language
  (display (translate-this lang "Hello, world"))
  (newline)
  ;; now call another function
  (=> next-function (smorgify arg1)))

We also know we want to use some sort of immutable hashmap. Guile offers vhashes, which give "typically constant-time" data access, and while there are some caveats (a new vhash made by consing on a key/value pair where that key already existed will just keep the old pair around, shadowed... but on our pseudo-stack that shouldn't happen very often, so vhashes should be fine), they work for our purposes.
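
Here's a quick illustration of that caveat (a small sketch; vhash-cons, vhash-assoc, and alist->vhash all live in the (ice-9 vlist) module):

(use-modules (ice-9 vlist))

(define ctx1 (alist->vhash '((a . 1))))
(define ctx2 (vhash-cons 'a 2 ctx1))  ;; shadows the old pair rather than removing it

(vhash-assoc 'a ctx2)  ;; => (a . 2) -- lookups see the newest pair first
(vhash-assoc 'a ctx1)  ;; => (a . 1) -- the original vhash is untouched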

Okay, cool. So what would that look like, expanded? Something along the lines of:

(define (our-special-function context arg1)
  (letrec ((new-context context)
           ;; Define function for manipulating the context
           (context-assoc
            (lambda (key value)
              (set! new-context
                    (vhash-cons key value new-context))))
           ;; a kind of apply function
           (=>
            (lambda (func . args)
              (apply func (cons new-context args))))) ;; should be gensym'ed

    ;; This part would be filled in by the macro.
    ;; The user would set which variables they want from the context
    ;; as well as possibly the default values
    (let ((lang (or (vhash-assoc 'lang context) "en")))
      (values
       (begin
         ;; print out hello world in the right language
         (display (translate-this lang "Hello, world"))
         (newline)

         ;; now call another function
         (next-function new-context (smorgify arg1)))
       new-context))))

As a bonus, we've taken advantage of Guile's multi-value return support, so any parent function which cares can get back the new context we defined for subsequent calls, in case we want to merge contexts or something. But functions not aware of this will simply ignore the second returned parameter. (I'm not sure this is a useful feature or not, but it's nice that Guile makes it easy to implement!)
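
To make that concrete, here's a small sketch using the hand-written context-test1 from the top of the post: a caller that cares about the updated context can catch it with receive (from (ice-9 receive)), while a caller that only wants the primary value just uses it as an ordinary single return.

(use-modules (ice-9 receive))

;; grab both the result and the new context...
(receive (result new-context)
    (context-test1 (alist->vhash '((a . 1) (b . 2))) "passed arg1" "passed arg2")
  (vhash-assoc 'lol new-context))  ;; => (lol . "cats")

;; ...or, as the post notes, callers that don't care just see the string:
(string-append "got: " (context-test1 (alist->vhash '()) "x" "y"))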

That's clearly quite a complicated thing to implement manually though... so it's time to write some code to write code. That's right, it's macro time! Guile has some pretty cool hygienic macro support that uses "syntax transformation"... a bit nicer than Common Lisp's defmacro, but also less low-level. Anyway, if you're not familiar with that syntax, trust me that this does the right thing I guess:

(define-syntax define-fauxnad
  (lambda (x)
    (syntax-case x ()
      ((_ (func-name . args)
          ((context-key context-default) ...)
          body ...)
       (with-syntax ((=> (datum->syntax x '=>))
                     (context-assoc (datum->syntax x 'context-assoc)))
         #'(define (func-name context . args)
             (letrec ((new-context context)
                      ;; Define function for manipulating the context
                      (context-assoc
                       (lambda (key value)
                         (set! new-context
                               (vhash-cons key value new-context))))
                      ;; a kind of apply function
                      (=>
                       (lambda (func . func-args)
                         (apply func (cons new-context func-args))))) ;; should be gensym'ed

               ;; This part would be filled in by the macro.
               ;; The user would set which variables they want from the context
               ;; as well as possibly the default values
               (let ((context-key (or (vhash-assoc (quote context-key) context)
                                      context-default))
                     ...)
                 (values
                  (begin
                    body ...)
                  new-context)))))))))
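
If you're curious what the macro actually spits out for a given definition, Guile's REPL can show you the expansion with the ,expand meta-command (output elided here, since it's long):

scheme@(guile-user)> ,expand (define-fauxnad (context-test2 arg1)
                               ((a #nil))
                               "yeahhh")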

Nice, now writing our fauxnads is dead-simple:

(define-fauxnad (our-special-function arg1)
  ((lang "en"))  ;; we want the language variable, but if not set,
  ;; default to "en" or english
  ;; (body is below:)
  ;; print out hello world in the right language
  (display (translate-this lang "Hello, world"))
  (newline)
  ;; now call another function
  (=> next-function (smorgify arg1)))

(define-fauxnad (next-function what-to-smorgify)
  ((db-conn #nil))
  (write-to-db
   ;; Lots of functions need access to the
   ;; current database connection, so we keep it in the context...
   db-conn
   (smorgify-this what-to-smorgify)))

Okay, again, my demos don't make this look very appealing I suppose. We can now transform the original demos I sketched up into fauxnads though:

(define-fauxnad (context-test1 arg1 arg2)
  ((a "default value for a")
   (b "default value for b")
   (c "default value for c"))
  (context-assoc 'lol "cats")
  (=> context-test2 "sup cat")
  (context-assoc 'a "new a")
  (=> context-test2 "sup cat")
  (format #t "a is ~s, b is ~s, and c is ~s\n"
          a b c)
  (string-append arg1 " " arg2))

(define-fauxnad (context-test2 arg1)
  ((a #nil))
  (format #t "Got ~s from the context, and ~s from args, mwahaha!\n"
          a arg1)
  "yeahhh")

And calling it:

scheme@(guile-user)> (context-test1 (alist->vhash '((a . 1) (b . 2))) "passed arg1" "passed arg2")
Got (a . 1) from the context, and "sup cat" from args, mwahaha!
Got (a . "new a") from the context, and "sup cat" from args, mwahaha!
a is (a . 1), b is (b . 2), and c is "default value for c"
$81 = "passed arg1 passed arg2"
$82 = #<vhash 2090ae0 4 pairs>

Okay, so what's the point? I doubt this blogpost would really sell anyone on fauxnads, and anyway, why use fauxnads when you could use real monads? But here are some interesting properties:

  • fauxnads are still super simple functions that you can call manually: just pass in the context (a vhash) as the first parameter.
  • the "binding" function for calling sub-fauxnads is sugar, but hidden (and otherwise inaccessible, because the function hygeine keeps you from accessing "new-context") from the user.
  • I still like that you can get back the new context via multiple value return, but totally ignore it if you don't care about it.
  • I understand how they work.

And on that last note, I still don't understand monads, but I feel like I'm getting closer to it. It was fun to document, and put to code, a misunderstanding though!