Remember those Real Men of Genius Bud Light commercials on the radio? You know, the ones that salute unsung heroes like “Mr Hawaiian Shirt Pattern Designer,” “Mr Beach Metal Detector Guy,” and “Mr Male Fur Coat Wearer” with David Bickler singing rock ballad style in the background?

If they did one of these for Mads Torgersen, it would have to be:

“Mr Creates Recursive Lambda Expressions in C# via Y-Combinator Generated Fixed Point Functions Guy.”

And it wouldn’t be mocking anything because seriously Mads, you are my freakin’ hero.

I don’t come from a deep academic CS background rich with experience in the Lisp and ML families, so the functional paradigm has been something I’ve slowly grown into. Right now I’m working my way through Expert F# (by Don Syme [F#’s creator] and others) and F# for Scientists, while trying to reinforce my understanding of each topic by devising comparable implementations of each example (where possible) in C#, Ruby, or some other familiar imperative curly-braced language.

For instance, right now I’m looking at the subjects of precomputation and memoization. You can think of memoization as an optimization technique that involves caching the results of each call to a given function to avoid the hit of continual recalculation. In other words, a function remembers its own results for previously applied inputs. So if there is some expensive computation needed to arrive at an answer, you hang on to the result (and the input) for future reference so if someone calls the function again with the same input you can turn around and give the result without all the hairy computation. This is primarily applicable to recursive functions which happen to be very common in functional programming.
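Stripped of any particular application, the wrapper shape is only a few lines. Here’s a quick JavaScript sketch of the idea (my own sketch, with made-up names, not something from the books above):

```javascript
// memoize: wrap a single-argument function so repeated calls with the
// same input are served from a cache instead of being recomputed
const memoize = f => {
  const cache = new Map();
  return n => {
    if (!cache.has(n)) cache.set(n, f(n)); // compute once, remember forever
    return cache.get(n);
  };
};

// stand-in for some expensive computation
const slowSquare = n => n * n;
const fastSquare = memoize(slowSquare);

fastSquare(12); // computed
fastSquare(12); // served straight from the cache
```

The wrapped function behaves identically to the original; the only observable difference is that each distinct input pays the computation cost once.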

For example, consider Fibonacci:

```
let rec fib n =
    match n with
    | 1 | 2 -> 1
    | _ -> fib(n-1) + fib(n-2)
```

The Fibonacci implementation is very straightforward and really simple with recursion, thus it’s become sort of the functional hello world. Something you’ll notice is that running this for large numbers is heinously slow. The reason for this is that the recursion makes the same calls over and over again. Look how much redundancy you get even for fib 7.

If we wanted to quantify how many calls are going on for any given n we could wire up our fib function like so:

```
open System.Collections.Generic

let fib n =
    let calls = new Dictionary<_,_>()
    let rec trackedFib n =
        if calls.ContainsKey(n) then calls.[n] <- calls.[n] + 1
        else calls.Add(n,1)
        match n with
        | 1 | 2 -> 1
        | _ -> trackedFib(n-1) + trackedFib(n-2)
    (trackedFib n, calls)
```

All this is doing is embedding a dictionary that keeps track of how many times fib is called with each input. The function returns a tuple of the actual calculation result as well as the dictionary that tracked each call. So let’s see what this yields for an input of 10.

` let (res, calls) = fib 10`

And if we examine this in F# interactive we get:

```
> calls;;
val it : Dictionary<int,int>
= dict
[(10, 1); (9, 1); (8, 2); (7, 3); (6, 5); (5, 8); (4, 13); (3, 21);
(2, 34); (1, 21)]
```

Hey wait a minute. Look at that output again.

[(10, **1**); (9, **1**); (8, **2**); (7, **3**); (6, **5**); (5, **8**); (4, **13**); (3, **21**); (2, **34**); (1, 21)]

Now THAT isn’t what I expected, but yeah, it makes sense that the number of calls would follow the Fibonacci sequence given we are recursively counting it. The odd value for the 1 case occurs because if we hit 2, we terminate without proceeding to 1 (look at the tree for fib 7 again). Had each 2 continued on to a 1, we would have 34 more 1s, which would give the expected 55 total calls.
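To convince myself the arithmetic holds up, here’s the same counting replayed as a quick JavaScript sketch (my own, mirroring the trackedFib logic above):

```javascript
// count how many times each n is visited by the naive Fibonacci recursion
function countCalls(n, calls = new Map()) {
  calls.set(n, (calls.get(n) || 0) + 1);
  if (n > 2) {                       // base cases 1 and 2 stop the recursion
    countCalls(n - 1, calls);
    countCalls(n - 2, calls);
  }
  return calls;
}

const calls = countCalls(10);
const total = [...calls.values()].reduce((a, b) => a + b, 0);
// total comes out to 109, and the base cases together account for
// calls.get(1) + calls.get(2) = 21 + 34 = 55 terminating calls — fib 10 itself
```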

Ok, so the standard recursive implementation is really slow because of all the repeated calls and will get slower and slower as input gets bigger. So if we update the fib implementation to remember the values of each call, we can short circuit most of the recurring chains.

```
let mem_fib n =
    let cache = new Dictionary<_,_>()
    let rec cachedFib n =
        match n with
        | 1 | 2 -> 1
        | _ ->
            if cache.ContainsKey(n) then cache.[n]
            else
                let result = cachedFib(n-1) + cachedFib(n-2)
                cache.Add(n,result)
                result
    cachedFib n
```

Notice how the dictionary is now caching the results of the call instead of the number of times called. We can test this and see we get the right answer.

```
> mem_fib 10;;
val it : int = 55
```

Good. And to demonstrate that it’s way faster, you can wire the memoized version up with call tracking:

```
let fast_fib n =
    let calls = new Dictionary<_,_>()
    let cache = new Dictionary<_,_>()
    let rec cachedTrackedFib n =
        if calls.ContainsKey(n) then calls.[n] <- calls.[n] + 1
        else calls.Add(n,1)
        match n with
        | 1 | 2 -> 1
        | _ ->
            if cache.ContainsKey(n) then cache.[n]
            else
                let result = cachedTrackedFib(n-1) + cachedTrackedFib(n-2)
                cache.[n] <- result
                result
    (cachedTrackedFib n, calls)
```

Run for fib 10:

` let (mem_res, mem_calls) = fast_fib 10`

And see that the call graph is far smaller:

```
> mem_calls;;
val it : Dictionary<int,int>
= dict
[(10, 1); (9, 1); (8, 2); (7, 2); (6, 2); (5, 2); (4, 2); (3, 2);
(2, 2); (1, 1)]
```

Nice. We went from 109 calls down to 17.

My imperative roots are beginning to show in that last example, so let’s make the result a little more functional. First of all, the standard generic memoize function looks like this:

```
let memoize f =
    let cache = new Dictionary<_,_>()
    fun n ->
        if cache.ContainsKey(n) then cache.[n]
        else
            let result = f n
            cache.Add(n,result)
            result
```

and then we can add a generic tracking function:

```
let calls = new Dictionary<int,int>()

let trackCalls f =
    fun n ->
        if calls.ContainsKey(n) then calls.[n] <- calls.[n] + 1
        else calls.Add(n,1)
        f n
```

and finally the fib impl:

```
let rec memoizedFib =
    (fun n ->
        match n with
        | 1 | 2 -> 1
        | _ -> memoizedFib(n-1) + memoizedFib(n-2))
    |> memoize |> trackCalls
```

Much better. And we reverify:

```
> memoizedFib 10;;
val it : int = 55
> calls;;
val it : Dictionary
= dict
[(10, 1); (9, 1); (8, 2); (7, 2); (6, 2); (5, 2); (4, 2); (3, 2);
(2, 2); (1, 1)]
```

Looks like it works to me. Ok, so how do we implement this in C#? Well, my first inclination was to write it up as a standard lambda expression with the same look and feel.

```
Func<int, int> fib = n => n <= 2 ? 1
: fib(n - 1) + fib(n - 2);
```

But this comes back with an error reading: *Use of unassigned local variable ‘fib’.* What? Use of unassigned local variable? I thought that I was **declaring** fib with that expression, right? Well, yes and no. The lambda syntax is just syntactic sugar for an anonymous method, and the body of that method needs to reference fib in order to make the recursive call. But by the compiler’s definite assignment rules, fib isn’t assigned until the whole initializing expression has been evaluated, so the lambda body isn’t allowed to read it yet. That implies that at the end of the day, the current C# implementation as of 3.0 does not support directly recursive lambda expressions. Crap.

It turns out there is, of course, a workaround where you can define your delegate ahead of time and refer back to it to get a pseudorecursive result.

```
Func<int, int> fib = null;
fib = n => n <= 2 ? 1
: fib(n - 1) + fib(n - 2);
```

This is good enough for many applications, but it isn’t pure and requires two statements to implement, which is problematic for use in fluent interfaces like LINQ.
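The same trick, and its fragility, can be sketched in JavaScript (my example, not from the post): the lambda recurses through the mutable *variable*, not through itself, so rebinding that variable silently breaks every saved copy of the closure.

```javascript
// the two-step pseudorecursive definition
let fib = null;
fib = n => n <= 2 ? 1 : fib(n - 1) + fib(n - 2);

const before = fib(10); // 55 — fine, as long as nobody touches the binding

// but the closure calls whatever `fib` happens to hold at call time:
const saved = fib;
fib = n => 0;            // rebind the variable...
const after = saved(10); // ...and saved is broken: fib(9) + fib(8) is now 0 + 0
```

That’s the impurity in a nutshell: the “recursive” function only works while the external binding stays untouched.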

So I started searching around to find out what the deal with this was. That’s when I stumbled over an ancient (2007) post by Mads Torgersen. This is a gem. It’s freakin’ awesome. I mean seriously, go read it. Right now.

Here’s the synopsis:

- C# can’t implement a recursive definition of factorial as a lambda.
- A fixed point for a function *f* is a value *x* for which *f(x) = x*.
- It’s hard to find a fixed point for an arbitrary function, but finding fixed points for *functions of functions* is a solved problem.
- If we define our factorial as a function of a function, its fixed point would be the implementation of factorial itself (which is what we want).
- We can define *omega* as a function of a function that self-applies. This is recursive, but endless, since it doesn’t describe input or a base case.
- We can leverage the knowledge of our Lambda Calculus forefathers and use the Y-Combinator to create a self-applying function that also applies another input function. In this context, we get a fixed point generator.
- Apply our factorial function to the fixed point finder and we get a true recursive lambda in C#.
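The C# version is the real payoff, but the shape of the trick is easier to see with less type noise. Here’s the same construction sketched in JavaScript (an applicative-order Y, sometimes called the Z combinator; my translation, not Mads’ code):

```javascript
// factorial as a function of a function: note there is no self-reference
const factGen = self => n => n <= 1 ? 1 : n * self(n - 1);

// Y combinator (applicative-order): a self-applying function that feeds
// the resulting fixed point back into f, lazily, one call at a time
const Y = f => (x => f(n => x(x)(n)))
               (x => f(n => x(x)(n)));

// the fixed point of factGen IS factorial — recursion with no named recursion
const factorial = Y(factGen);
factorial(5); // 120
```

The eta-expansion (`n => x(x)(n)` instead of plain `x(x)`) is what keeps a strictly-evaluated language from self-applying forever before the base case can kick in.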

It is granted that the C# generics syntax is really, really noisy and distracting, but it helps to make you appreciate Hindley-Milner languages all the more. Mads’ walkthrough is surprisingly clear for such potentially opaque subject matter (to newbies like me, at least). And well, it’s just really, really interesting.

If you didn’t actually read it, really, go do so.

Mads you are a giant among men. Here’s to you, Mr Creates Recursive Lambda Expressions in C# via Y-Combinator Generated Fixed Point Functions Guy.

on April 10, 2009 at 3:25 pm, John Haugeland: Dude, the fixed point combinator was created by Church in the 1940s, before the existence of any programming languages. This comes from math, not comp sci, and _definitely_ not from Mads Torgersen in 2007.

on April 10, 2009 at 4:02 pm, Jason Kikel: Hi John,

I didn’t mean to imply that Mads invented the fixed point combinator. My intent was to celebrate a really good explanation, presentation, and application of the subject matter. Thanks!

on April 10, 2009 at 5:53 pm, Mads Torgersen: Glad you like it! Alonzo (Church) and folks really rock, and it felt so great, back in ’07, to finally be able to go wild with their stuff even in C#.

Since then we’ve added F# to our portfolio of languages, and I am happy to see that you are using that. While we are borrowing a little functional goodness into C#, if you want the whole package F# is really the way to go. I can’t wait till we ship it!

Thanks for your kind mention! I blushed a lot and colleagues made fun of me at a meeting, so that was all as it should be :-)

Mads

on April 10, 2009 at 6:09 pm, Dustin Campbell: I get to work with Mads on a daily basis, and he’s still my hero.

Seriously.