
### Transcript

Today, I'm going to be talking about notations and programming languages and ways in which they're similar and ways in which I think that our programming language design could learn from some things about mathematical notations that already exist. So first, I'm going to talk about some notations that are very visual, that give us an intuition for what's going on by how they're designed. Lots of the notations that we use in computer science do this. There are also some notations in math, in physics, and in biology.

So to give some examples from computer science, you have a binary tree. One way of representing the binary tree could just be with this naive, code-like representation where you say, OK, tree of one, comma, tree of two, dot, dot, dot. But for whatever reason, we don't usually do that. Instead, if you see somebody writing at a blackboard, they're usually going to draw the tree with the nodes and the leaves, this branching structure so that you can get this visual sense for what's going on with this data structure.
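For concreteness, the naive, code-like representation might look something like this in Python (the `Tree` class here is hypothetical, just to illustrate the contrast with a blackboard drawing):

```python
from dataclasses import dataclass

# A minimal sketch of a binary tree as linear, code-like text.
@dataclass
class Tree:
    value: int
    left: "Tree" = None
    right: "Tree" = None

# "Tree of one, tree of two, tree of three" reads linearly,
# while a blackboard drawing shows the branching at a glance.
t = Tree(1, Tree(2), Tree(3))
```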

And this doesn't happen for just trees. You also have hash maps. Other data structures have their own way of being visually represented on a blackboard instead of just with code. Some more examples-- state machines. Instead of listing out the entire list of states and all of the transitions between states, when you see people talk about state machines, they'll usually draw out-- OK, here's what's going on here. It transitions over to this part this way so that you can get a sense of what the state machine is doing.

Final example from computer science-- Boolean circuits. It makes more sense that these are visual because the original circuitry is very physical. You have to lay out the gates and the wires. So instead of representing circuits, usually, with this code-like syntax, when we're communicating about them to each other, we have this particular way of drawing them with the gates and the wires as a way of communicating more quickly and more intuitively what inputs are going where.

So those are some examples from computer science. I also want to tell you some examples of visual notations from fields outside of computer science. The first thing I want to talk about is protein signaling pathways in biology. So there are a lot of different abstraction levels in biology that you might want to talk about if you're modeling a biological system. One of them is let's just take some proteins and see how they interact with each other. So you might have a list of rules-- like, OK, protein A inhibits protein B because we've seen experiments that say it does this. You might collect a bunch of these rules from a bunch of different papers.

There's actually a programming language that you can use-- many programming languages, in fact-- that will allow you to take rules like this, plug them into a system, and see what happens. So that's something that you can do. But there's also a visual notation for describing rules like this where you say a inhibits b, a inhibits c, b activates c. You can draw this out as a network and see what the network is doing via this notation.
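A rule system like this can be sketched very simply. The rule format and update policy below are hypothetical (real tools like the rule-based languages mentioned here are far richer); this is only meant to show the idea of plugging rules in and seeing what happens:

```python
# Hypothetical activate/inhibit rules, one triple per rule.
rules = [
    ("a", "inhibits", "b"),
    ("a", "inhibits", "c"),
    ("b", "activates", "c"),
]

def step(active, rules):
    """One synchronous update: a protein becomes active if some
    active protein activates it, and is switched off if any
    active protein inhibits it."""
    activated = {t for s, kind, t in rules if kind == "activates" and s in active}
    inhibited = {t for s, kind, t in rules if kind == "inhibits" and s in active}
    return (active | activated) - inhibited
```

For example, starting from `{"a", "b"}`, a inhibits both b and c, so after one step only a remains active.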

Another example outside of computer science-- Feynman diagrams. I've actually been really interested in the history of Feynman diagrams and did some reading about this. So Feynman diagrams were invented for the field of quantum electrodynamics. This is sometimes abbreviated QED. An example of a problem in this field-- you might have two electrons scattering, coming into close contact with each other, and going out. You might want to find what's the probability of that event happening as opposed to some other event that might happen when the two electrons are in close proximity to each other.

Before Feynman diagrams, the state of the art of this was that you would have a whole bunch of algebra that you had to do. You had to do a kind of tally of all of the different ways that the event might happen, take some integrals, sum them all up, simplify them. In this article about the history of Feynman diagrams, this historian of physics, David Kaiser, talks about how this formalism was notoriously cumbersome, an algebraic nightmare of distinct terms to track and evaluate. Individual contributions to the overall calculation would stretch over four or five lines of algebra. And this was pretty unpleasant to work with.

There's also this issue where you would have some terms go to infinity and some terms not go to infinity. So this entire thing needed a more intuitive way of working with it and figuring out how to cancel out the right things. So this is an example of one of the terms that you might have to add up in order to do this calculation.

What the Feynman diagram does-- Feynman went to this conference with a bunch of physicists, and he went, and he presented this new notation to them. And he started drawing on the board some lines to represent electrons and some lines to represent photons, and then from there, went and did the multiplication of all of the different events that might happen-- the probability that the electrons come close to each other at these particular points, and then the probability that a photon is released, the probability that the photon is absorbed. You take all of those, multiply them together.

And then from here, instead of having to do all of your tallying with algebra, what you can instead do is do all of your tallying with the diagram. So you can go and say, OK, this is one kind of diagram that might happen. You could draw a different one where the photon scatters in a different direction. You could have two different photons being exchanged. You can do the tallying of different terms in this different, more visual domain. So that was another innovation in notation that's happened that I want to bring up.

So those are all examples of visual notations-- some from computer science, some not. But I think we could have more use of visual notation in computer science. The reason I think this is because my day-to-day work-- I work with Python a lot-- looks like this. This is some code taken from the datetime module. It's pretty normal Python, but it contrasts pretty strongly with those intuitive diagrams that we saw, being able to see visually what's going on in the code.

So what I want to know is: what is the visual equivalent of something like this, the code that I work with on a day-to-day basis? Is there anything clever that we can do to take this and make it more visual for me? And to some extent, I found tools that I like for this. One tool that I want to mention is snakefood for Python, which visualizes dependencies in your Python code base. This is an example. This is what happens if you take snakefood and run it on Flask. So this is pretty nice.

But I think I want something even more nuanced than this that will take the file that I'm working in, take the different functions that I have, and show me their interdependencies, show me which functions are being pulled in from other files, get a better sense for what's going on. So what I want is, in some sense, a notation for the entire language. I'm not going to give you a notation for the entire language.

What I am going to give to you is a notation for a specific language. So my claim is it's possible to create a visual notation for an entire programming language. If you know anything about me, you know that I will prove this claim using lambda calculus. And as we've shown, lambda calculus and Python are equivalent. So this should be good.

So for the next little while, I'm going to be going into detail in this thing that I've invented called lambda circuitry, which takes lambda calculus and makes it a visual notation. Then we're going to back out of that and talk about notations for a while and how notations relate to programming abstractions. So this will be a deep dive into lambda calculus for a little bit.

And if you haven't seen lambda calculus before, you can take this as an opportunity to learn about it. If you have, I hope you'll find the notation part to be an interesting topping on top of that. So lambda calculus was this model of computation devised by Alonzo Church in 1935, kind of contemporary with Turing. They were both coming up with formalizations of computation to solve similar problems. But they came up with very different formalizations of computation.

In particular, lambda calculus is different because it consists of expressions made only of functions and the arguments to those functions. So everything in lambda calculus is a function. An example of a function that you could write in lambda calculus, to show you the syntax-- lambda x x is the identity function. The lambda x denotes the argument, and then x is what is returned by the function. It's the argument that was passed in. Lambda x x of 2 is 2. To give an example with more than just one argument, you could have lambda x lambda y x plus y, which takes in two arguments: applied to 2 and 3, it gives 5.
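These syntax examples translate directly into Python lambdas, which may help if the lambda calculus notation is new to you:

```python
# λx. x  -- the identity function.
identity = lambda x: x

# λx. λy. x + y  -- a two-argument function in curried form:
# it takes x and returns a function that takes y.
add = lambda x: lambda y: x + y

identity(2)   # the argument passes straight through
add(2)(3)     # apply to 2, then apply the result to 3
```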

And another final example-- if you have lambda x x of x itself applied to lambda x x of x, a function can be passed in as an argument. So you take that second function, you pass it in, and what you get is it becomes applied to itself, which then loops forever. So we're going to take this language and see if we can make a visual notation for it. When designing this notation, the important thing that I want to represent is that the inputs flow to outputs. And along the way, they might pass through functions that modify them in different or interesting ways.
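The self-application example can also be tried in Python, where the loop shows up as the interpreter hitting its recursion limit:

```python
# λx. x x  -- apply the argument to itself.
omega = lambda x: x(x)

looped = False
try:
    # (λx. x x)(λx. x x): reduces to itself forever.
    omega(omega)
except RecursionError:
    # Python gives up once it exceeds its recursion limit.
    looped = True
```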

So the way we're going to write the identity function is with this arrow that goes from input to output. I have a little divot representing the argument that needs to be passed in. And then at the end, we have what is the value that is returned. For more complication to this, you could have multiple arguments. In order to represent one function being applied to another, what I've decided on is this box notation. So you have x, which is the first argument, going top down just by convention. x is applied to y, which is the second argument of this lambda expression.

And then the result of x applied to y is what is the return value. So you have x being applied to y. You can see that y is passing through this box and getting modified. This is going to be how we represent lambda calculus terms.

An example of how this might go-- if you have the identity applied to 5, we're going to represent this subexpression as a box containing an arrow all on its own. When you evaluate this, normally you'd get 5. In this case, you have this function comes down into this box, gets plugged in. You can imagine the wires plugging in on either side. And what you have is 5 is the return value.

Some more examples of basic things from lambda calculus and what they look like-- so we can build Boolean logic in lambda calculus using just functions. In order to build Boolean logic, you need true and you need false. Usually, one implementation of these is that true is the function that takes in two arguments and returns the first one and that false is the function that takes in two arguments and returns the second one. So if we want to draw these, what they would look like in lambda circuitry would be this. You have two arguments. One of them is pass-through. The other one just fizzles and dies and is forgotten. So that's true and false.
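These Church booleans can be written out in Python, where you can see directly that one argument passes through and the other is forgotten:

```python
# Church booleans: true returns its first argument,
# false returns its second.
TRUE = lambda a: lambda b: a    # λa. λb. a
FALSE = lambda a: lambda b: b   # λa. λb. b

TRUE("kept")("dropped")    # the first argument passes through
FALSE("dropped")("kept")   # the second argument passes through
```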

If you have true and false, you need to be able to use true and false in order to direct code flow. So we also need a construct that does if a, then b, else c-- that takes in that Boolean and actually uses it. One implementation of this in the lambda calculus, using those true and false definitions that we have, is that you take in a and b and c, and then you call a with b and c. So a is going to be a Boolean value. It takes in b and c. If it's true, it returns b. If it's false, it returns c. That's just how true and false work in the formalism that we've defined.

The way that this gets drawn is something like this. I've annotated it in green, but I don't consider the green to be part of the notation. So you have the arguments going in top down again. a applies to b, which then applies to c. You could instead, if you wanted to, write this as a box which takes in both b and c at the same time instead of drawing it so that you have a and b and then c. This is called currying. It will work the same either way.

So to give an example of what this might look like when you execute it, if we want to compute if true, then 1, else 0, naturally, we know the result of this is going to be 1 because it's true. What we would write down in lambda calculus would be first the if then else construct, then the Boolean true. Then 1 and 0 are just placeholders for whatever we decide 1 and 0 are going to be later.

The way this ends up looking in this notation is that you have your if then else structure. It first is applied to the Boolean true, and then it's applied to each of the arguments in turn. And as you go evaluating this, what you do is you plug the boxes into the other boxes, and you wire the wires together. So for the first step of evaluation, you get true gets passed in as the first input to this partially evaluated if then else block. You can do this for the 1 and the 0 also so that you get first the true, then the 1 and the 0 now being passed in to what was originally the structure in the if then else block.

Then after this, we can imagine this box going down into each of the boxes and being applied. So what we get is that 1 is the return value and 0 fizzles and is lost. So that's an example of this and how you would evaluate it. I'm going to go through some other examples pretty quickly. Naturally, this is the one that we wanted returned out. So linked lists are also something that you can build in lambda calculus. I'll do these pretty fast.
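The whole if-then-else evaluation can be checked in Python, with the boolean simply selecting one of its two branches:

```python
# Church booleans plus the if-then-else construct from the talk:
# call the boolean with both branches and let it pick one.
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
IF = lambda a: lambda b: lambda c: a(b)(c)   # λa. λb. λc. a b c

IF(TRUE)(1)(0)    # TRUE selects its first argument; 0 fizzles
IF(FALSE)(1)(0)   # FALSE selects its second argument; 1 fizzles
```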

You can actually implement linked lists using the true, false, and if-then-else that we've already got. So one way that you can implement this-- you need the functions first, second, and pair. You can make pair be something that takes in two values a and b and then waits for a third thing and then checks the value of the third thing, a Boolean, and returns you a or b. Then your implementations of first and second are pretty simple. For first, you pass in true to the pair that you receive; for second, you pass in false to the pair that you receive.

So just to give you a perception of what these might look like if we wrote them out-- first, substituting true false and if then else for the stuff that we already had before, this is what you get in pure lambda calculus. And then if you want to draw these using the circuits, what you'll get is something like this. So first and second each take in one argument, and then apply a Boolean to it-- or apply it to a Boolean, and then the pair takes in three arguments, but waits for the third argument before it can do anything because there's this dependency here where it needs to be applied to each of the other two arguments.
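The pair construction also works as Python lambdas; the pair holds its two elements and waits for a boolean selector:

```python
# Booleans as before.
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b

# λa. λb. λf. f a b -- the pair waits for a selector f
# and hands it both elements.
PAIR = lambda a: lambda b: lambda f: f(a)(b)
FIRST = lambda p: p(TRUE)     # pass true to the pair you receive
SECOND = lambda p: p(FALSE)   # pass false to the pair you receive

p = PAIR(1)(2)
FIRST(p)    # TRUE picks out the first element
SECOND(p)   # FALSE picks out the second element
```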

You can also do-- to give two more examples with combinators, the omega combinator and the y combinator, the omega combinator is a lambda calculus term which applies an input to itself. This is the thing that we saw looped forever if you applied it to itself. You can draw this like this. You have one input, and that input gets applied to itself. So there's this little splitting motion that occurs.

If you have two of them, you have one being applied to the other. And then if you try to evaluate this, you'll plug it in. You get something like this where that's the function that was applied. But then this, in turn, splits again, so it loops back up to what it was originally, and then back down, up and down forever.

One last example here, since talking about the lambda calculus is not complete without talking about the y combinator. There's this thing called the fixed point combinator, which is what allows you to do recursion in the lambda calculus. So it's what allows you to write recursive functions that have access to their own value. One implementation of the fixed point combinator is the y combinator, which is lambda f, lambda x f of x of x, applied to lambda x f of x of x. What this looks like if you draw it out in the circuitry is this. You can see there are some little omega pieces in there, so we can get the sense that something crazy with loops is going to happen.

If you want to double check that this is, in fact, a fixed point combinator, you can apply it to f, and then f is this input that gets passed in. Evaluating this a couple of more steps, you have f becomes the function in this block. We then apply this block to itself. And all of a sudden, there's another f here. So the result is f of y of f where y of f is equal to this earlier stage. But then we've applied it, and now there's another f right before the very last chunk of output.
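One caveat if you try this in Python: Python evaluates arguments eagerly, so the pure y combinator loops forever. The eta-expanded "Z" variant below delays the self-application so it works in a strict language; this is a standard workaround, not something specific to this talk:

```python
# Z combinator: λf. (λx. f (λv. x x v)) (λx. f (λv. x x v))
# The inner λv delays evaluation so Python doesn't loop forever.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(
              lambda x: f(lambda v: x(x)(v)))

# Recursion without the function ever referring to its own name:
# `rec` is the fixed point handed in by Z.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
fact(5)   # 5 * 4 * 3 * 2 * 1
```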

So that's the end of me talking about lambda calculus. The point of this is to show you an example of a notation for a programming language. If you're interested in this, suggested further exercises-- read the Wikipedia page about lambda calculus. You can learn there about how you implement a bunch of these different interesting pieces. You can try to implement numbers-- addition, subtraction, multiplication, and exponentiation on your numbers, like how we implemented Booleans. You can try to do map, reduce, and filter for a linked list implementation, like the one we showed earlier. And I recommend the book To Mock a Mockingbird, which is full of different puzzles about combinators and about the calculus of combinators.

I also want to mention that this was not the first-- I'm not the first person to come up with this idea. Other people have come up with an idea like this before. In 1996, there's this thing published on the internet that says "To Dissect a Mockingbird," which has a pretty similar notation, maybe with some differences. Depending on what your aesthetics are, you might prefer this one, or you might prefer the other one. And this actually goes through some of the exercises in To Mock a Mockingbird and shows what happens.

There are other implementations of this, too. There's this cute thing called the Alligator Eggs Game, which is actually something you can print out and cut out and play with people by moving these alligators and eggs around where there are rules governing how the alligators behave, and it ends up simulating a lambda calculus evaluator. And lastly, there is this thing called Visual Lambda Calculus, which is this colored bubble notation which comes with a GUI that you can play around with. So people have thought about this before, and there are lots of competing standards for which thing might be the most interesting. But I think they're all cool.

So that's the end of that tangent. Now I want to take a step back and talk about notations again and talk about notations with respect to abstractions. So I talked about these notations before. One of the things that's interesting about these notations is that each one is associated with a very specific abstraction. When you're talking about these circuit diagrams, they can only pertain to talking about circuits. Trees can only pertain to talking about trees. State machines can only pertain to talking about state machines.

You can maybe take the visual ideas that are there and build on them, but that would be a different abstraction. Each one is very specific to a very specific problem, and you can't take it and be like, OK, I'm going to use circuit diagrams to do my algebra homework. That's just not going to work. So each of these is tied to a very specific abstraction, and the notation can only exist where there is an abstraction already present.

Another thing that I want to point out is about abstractions-- people talk about abstractions, abstractions, abstractions. I actually had to refrain from titling this slide just "Abstractions" three times when I was making this presentation. I think abstractions are very valuable because the thing about them is that you can choose to use them or not. I am not impeded in doing my algebra homework by the fact that circuit diagrams are out there for me to use. But if I wanted to, I could use them, and they would be useful.

I think abstractions are a trade-off. Any abstraction is going to trade the freedom of what you could use it to solve for the specificity that it gives you for the particular problem it's solving. But I think this is still valuable because it's nice to have a toolkit where you can say, I want to be this specific in how I'm dealing with the problem, and choose the abstraction that's the right one for you to be dealing with right now. So I think building this toolkit is really valuable, and I think building this toolkit will give us more opportunities to create notations on top of the parts of our toolkit.

I want to talk a bit about abstractions that I like that I want to encourage people to read more about or make more things like. I am fond of abstractions that limit the code that you're writing to be more correct. And I want to see notations for dealing with those, too. I'm a fan of static type checking. I'm also a fan of dependent types for code correctness.

One interesting project that I recommend you look at is called Dafny. It's a project by Microsoft Research, an experimental programming language where it's like static type checking, but more. In addition to specifying types, you can also specify-- OK, this function that I'm writing has this precondition and this post-condition. And that will be checked for you. And it will be like, did you mean to do something else here, if something goes wrong.
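To give a flavor of the precondition/postcondition idea in Python terms: the `contract` decorator below is a hypothetical sketch, and unlike Dafny (which verifies these statically, before the program runs) it only checks the conditions at run time:

```python
# A hypothetical run-time sketch of pre/post-conditions.
def contract(pre, post):
    def wrap(fn):
        def checked(*args):
            assert pre(*args), "precondition failed"
            result = fn(*args)
            assert post(result, *args), "postcondition failed"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,                      # input must be non-negative
          post=lambda r, x: abs(r * r - x) < 1e-9)   # result squared gives x back
def sqrt(x):
    return x ** 0.5
```

Calling `sqrt(-1)` would trip the precondition; Dafny would flag the same mistake at compile time.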

I also want to talk about executable biology really quickly. I mentioned that there are lots of different abstraction levels in biology. You could have proteins and protein folding. You could have protein-signaling networks. You can have cells communicating with each other. You can have tissues and entire organisms. So biology, as a field, is full of different abstractions. And I think there's this huge space that needs to be filled for creating programming languages that will deal with each of the different layers and that can communicate with each other. I think there's a real absence there right now.

An example of a programming language for executable biology is Kappa, which does modeling of rule-based protein interaction networks. So it has rules for which proteins interact with which other ones, and what complexes they form, and what the results of those are. But I think there is room for more programming languages that deal with different layers here.

One last thing I want to mention-- I think it's really interesting to come up with new programming models that might provide us better abstractions on top of what we're already working with. One example that I want to point at-- there is another research programming language called Pict, which does concurrent programming. It's based on a model of computation called the pi calculus, which is another calculus that nobody has heard of before.

The really interesting thing about this is that it does both sequential composition of code-- so you can say, I want this to come after this to come after this-- and parallel composition of code. So it allows you the space to say, hey, I have this long process that needs to run and this other long process to run. I don't care what order they happen in. Just do them. They aren't going to be interdependent amongst each other, which isn't really a thing that we have a language for dealing with in many other programming languages.
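This is not Pict or the pi calculus itself, but the distinction between sequential and parallel composition can be gestured at with Python's asyncio, where `gather` says "run these in whatever order, I don't care":

```python
import asyncio

async def long_process(name, results):
    await asyncio.sleep(0)   # stand-in for real work
    results.append(name)

async def main():
    results = []
    # Sequential composition: this, then that.
    await long_process("first", results)
    await long_process("second", results)
    # Parallel composition: no ordering promised between a and b.
    await asyncio.gather(long_process("a", results),
                         long_process("b", results))
    return results

order = asyncio.run(main())
```

The first two entries of `order` are guaranteed; the last two could land in either order.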

While I'm talking about notations in programming languages, there is one final bit of research that I couldn't resist including in this presentation. One interesting thing about the history of notations is that it is also full of wars about which ones are correct and which ones are not correct, going back to the late 1800s at least. That's the example I have. So vectors versus quaternions-- in the late 1800s, a bunch of mathematicians were trying to come up with formalisms for talking about electromagnetism in electromagnetic theory. And they were getting into fights about which notation would be better to use for this.

Oliver Heaviside wrote this textbook Electromagnetic Theory in 1893. And in his textbook, he writes this of the other people working with quaternions. He was talking about this operator that they were calling a quaternion. He says, "it's really a vector. It is as unfair to call a vector a quaternion as it is to call a man a quadruped--" in his textbook. He also writes, "students who had found quaternions quite hopeless could understand my vectors very well--" in his textbook.

So as this went on, standards proliferated. People were coming up with different notations for vector operations. There's this great book called A History of Mathematical Notations, written by Florian Cajori in 1928, where Florian lists all of the different vector notations that proliferated. Just for the way of writing the operation curl, you'd have curl, quirrel, wrought, vort, rot where the vector has a tilde, and of course, the usual nabla cross, which is the one that I'm used to. So that was the state of things in the early 1900s-- this is a history book, so it's written in 1928.

At that point, they started appointing committees to try to solve this problem. In 1903, the committee couldn't decide. In 1908, they had this Special Commission of the International Congress of Mathematicians. They couldn't decide, either. In A History of Mathematical Notations, he actually writes that whenever they had a committee and two of the committee members tried to compromise, what would be the result of their compromise is that they would come up with three more different notations which everyone would then have to fight over.

So I guess the moral of the story is wars over different notations and picking the right notation-- they happen. And I think maybe having these many multiple competing standards is the price of innovation. So maybe it's good that we're innovating. We have lots of different things that are competing with each other. We can eventually pick which one we like the most.

So here's what I think programming languages can learn from notations. One is: be visual. I want to see more visual tools for interacting with programming languages. Another is: build more abstractions, since notations can only exist on top of them. The last one is: standardization wars happen sometimes. And if you're interested in the things I've talked about in this talk, I'd recommend at least these three books. Drawing Theories Apart is by the historian David Kaiser and talks about the history of Feynman diagrams, and how people communicated them to each other, and how they evolved and spread across communities of physicists.

To Mock a Mockingbird contains combinatory logic puzzles. And A History of Mathematical Notations is pretty fun to read. It has that vector stuff as well as everything back to ancient Egypt.

So thanks for attending this talk.

[APPLAUSE]

### Bibliography

To Mock a Mockingbird (Raymond Smullyan)

To Dissect a Mockingbird (article) (David C Keenan)

Alligator Eggs (article) (Bret Victor)

Electromagnetic Theory (Oliver Heaviside)

Drawing Theories Apart (David Kaiser)

A History of Mathematical Notations (Florian Cajori)

(Editor's note: transcripts don't do talks justice. This transcript is useful for searching and reference, but we recommend watching the video rather than reading the transcript alone! For a reader of typical speed, reading this will take 15% less time than watching the video, but you'll miss out on body language and the speaker's slides!)

So I'm Chelsea. I'm a software engineer at Wave. You might know me from this talk that I gave in the past at PyCon called "One Linerizer" in which I demonstrate how to write any Python program as one line of code via lambda calculus, so taking every piece of functionality in Python and rewrapping as something in lambda calculus or list comprehensions, abusing the syntax as much as possible. So that's what I've given a talk on before.Today, I'm going to be talking about notations and programming languages and ways in which they're similar and ways in which I think that our programming language design could learn from some things about mathematical notations that already exist. So first, I'm going to talk about some notations that are very visual, that give us an intuitive for what's going on by how they're designed. Lots of the notations that we use in computer science do this. There are also some notations in math, in physics, and biology.

So to give some examples from computer science, you have a binary tree. One way of representing the binary tree could just be with this naive, code-like representation where you say, OK, tree of one, comma, tree of two, dot, dot, dot. But for whatever reason, we don't usually do that. Instead, if you see somebody writing at a blackboard, they're usually going to draw the tree with the nodes and the leaves, this branching structure so that you can get this visual sense for what's going on with this data structure.

And this doesn't happen for just trees. You also have hash maps. Other data structures have their own way of being visually represented on a blackboard instead of just with code. Some more examples-- state machines. Instead of listing out the entire list of states and all of the transitions between states, when you see people talk about state machines, they'll usually draw out-- OK, here's what's going on here. It transitions over to this part this way so that you can get a sense of what the state machine is doing.

Final example from computer science-- Boolean circuits. It makes more sense that these are visual because the original circuitry is very physical. You have to lay out the gates of the wires. So instead of representing circuits, usually, with this code-like syntax, when we're communicating about them to each other, we have this particular way of drawing them with the gates and the wires as a way of communicating more quickly and more intuitively what inputs are going where.

So those are some examples from computer science. I also want to tell you some examples of visual notations from fields outside of computer science. The first thing I want to talk about is protein signaling pathways in biology. So there are a lot of different abstraction levels in biology that you might want to talk about if you're modeling a biological system. One of them is let's just take some proteins and see how they interact with each other. So you might have a list of rules-- like, OK, protein A inhibits protein B because we've seen experiments that say it does this. You might collect a bunch of these rules from a bunch of different papers.

There's actually a programming language that you can use-- many programming languages, in fact-- that will allow you to take rules like this, plug them into a system, and see what happens. So that's something that you can do. But there's also a visual notation for describing rules like this where you say a inhibits b, a inhibits c, b activates c. You can draw this out as a network and see what the network is doing via this notation.

Another example outside of computer science-- Feynman diagrams. I've actually been really interested in the history of Feynman diagrams and did some reading about this. So Feynman diagrams were invented for the field of quantum electrodynamics. This is sometimes abbreviated QED. An example of a problem in this field-- you might have two electrons scattering, coming into close contact with each other, and going out. You might want to find what's the probability of that event happening as opposed to some other event that might happen when the two electrons are in close proximity to each other.

Before Feynman diagrams, the state of the art of this was that you would have a whole bunch of algebra that you had to do. You had to do a kind of tally of all of the different ways that the event might happen, take some integrals, sum them all up, simplify them. In this article about the history of Feynman diagrams, this historian of physics, David Kaiser, talks about how this formalism was notoriously cumbersome, an algebraic nightmare of distinct terms to track and evaluate. Individual contributions to the overall calculation would stretch over four or five lines of algebra. And this was pretty unpleasant to work with.

There's also this issue where you would have some terms go to infinity and some terms not go to infinity. So this entire thing needed a more intuitive way of working with it and figuring out how to cancel out the right things. So this is an example of one of the terms that you might have to add up in order to do this calculation.

What the Feynman diagram does-- Feynman went to a conference with a bunch of physicists and presented this new notation to them. He started drawing on the board some lines to represent electrons and some lines to represent photons, and then from there, went and did the multiplication of all of the different events that might happen-- the probability that the electrons come close to each other at these particular points, then the probability that a photon is released, the probability that the photon is absorbed. You take all of those and multiply them together.

And then from here, instead of having to do all of your tallying with algebra, what you can instead do is do all of your tallying with the diagram. So you can go and say, OK, this is one kind of diagram that might happen. You could draw a different one where the photon scatters in a different direction. You could have two different photons being exchanged. You can do the tallying of different terms in this different, more visual domain. So that was another innovation in notation that's happened that I want to bring up.

So those are all examples of visual notations-- some from computer science, some not. But I think we could make more use of visual notation in computer science. The reason I think this is because my day-to-day work-- I work with Python a lot-- looks like this. This is some code taken from the datetime module. It's pretty normal Python, but it contrasts pretty strongly with those intuitive diagrams that we saw, where you could see visually what's going on.

So what I want to know is: what is the visual equivalent of something like this, the code that I work with on a day-to-day basis? Is there anything clever that we can do to take this and make it more visual? And to some extent, I have found tools that I like for this. One tool I want to mention is snakefood for Python, which visualizes dependencies in your Python code base. This is an example of what happens if you run snakefood on Flask. So this is pretty nice.

But I think I want something even more nuanced than this that will take the file that I'm working in, take the different functions that I have, and show me their interdependencies, show me which functions are being pulled in from other files, get a better sense for what's going on. So what I want is, in some sense, a notation for the entire language. I'm not going to give you a notation for the entire language.

What I am going to give to you is a notation for a specific language. So my claim is it's possible to create a visual notation for an entire programming language. If you know anything about me, you know that I will prove this claim using lambda calculus. And as we've shown, lambda calculus and Python are equivalent. So this should be good.

So for the next little while, I'm going to be going into detail in this thing that I've invented called lambda circuitry, which takes lambda calculus and makes it a visual notation. Then we're going to back out of that and talk about notations for a while and how notations relate to programming abstractions. So this will be a deep dive into lambda calculus for a little bit.

And if you haven't seen lambda calculus before, you can take this as an opportunity to learn about it. If you have, I hope you'll find the notation part to be an interesting topping on top of that. So lambda calculus was this model of computation devised by Alonzo Church in 1935, kind of contemporary with Turing. They were both coming up with formalizations of computation to solve similar problems. But they came up with very different formalizations of computation.

In particular, lambda calculus is different because it consists of expressions made only of functions and the arguments to those functions. So everything in lambda calculus is a function. To show you the syntax, here's an example of a function you could write in lambda calculus: λx.x is the identity function. The λx denotes the argument, and the x after it is what the function returns-- the argument that was passed in. So (λx.x) applied to 2 is 2. To give an example with more than one argument, you could have λx.λy.x+y, which takes in two arguments, and (λx.λy.x+y) applied to 2 and 3 is 5.
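Since Python's lambda keyword is a direct descendant of this notation, the examples above can be checked directly:

```python
# λx.x, the identity function.
identity = lambda x: x
print(identity(2))   # → 2

# λx.λy.x+y: a curried two-argument function, applied one argument at a time.
add = lambda x: lambda y: x + y
print(add(2)(3))     # → 5
```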

And one final example-- if you have λx.x x applied to itself, that is, (λx.x x)(λx.x x), a function can be passed in as an argument. So you take that second function, pass it in, and it becomes applied to itself, which then loops forever. So we're going to take this language and see if we can make a visual notation for it. When designing this notation, the important thing that I want to represent is that inputs flow to outputs. And along the way, they might pass through functions that modify them in different or interesting ways.
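The looping-forever example can also be run in Python, where the infinite reduction shows up as a stack overflow:

```python
# (λx.x x)(λx.x x): self-application applied to itself reduces to itself
# forever. Python's eager evaluation turns that into a RecursionError.
omega = lambda x: x(x)
try:
    omega(omega)
except RecursionError:
    print("looped until the stack ran out")
```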

So the way we're going to write the identity function is with this arrow that goes from input to output. I have a little divot representing the argument that needs to be passed in. And then at the end, we have what is the value that is returned. For more complication to this, you could have multiple arguments. In order to represent one function being applied to another, what I've decided on is this box notation. So you have x, which is the first argument, going top down just by convention. x is applied to y, which is the second argument of this lambda expression.

And then the result of x applied to y is what is the return value. So you have x being applied to y. You can see that y is passing through this box and getting modified. This is going to be how we represent lambda calculus terms.

An example of how this might go-- if you have the identity applied to 5, we're going to represent this subexpression as a box containing an arrow all on its own. When you evaluate this, normally you'd get 5. In this case, you have this function comes down into this box, gets plugged in. You can imagine the wires plugging in on either side. And what you have is 5 is the return value.

Some more examples of basic things from lambda calculus and what they look like-- so we can build Boolean logic in lambda calculus using just functions. In order to build Boolean logic, you need true and you need false. Usually, one implementation of these is that true is the function that takes in two arguments and returns the first one and that false is the function that takes in two arguments and returns the second one. So if we want to draw these, what they would look like in lambda circuitry would be this. You have two arguments. One of them is pass-through. The other one just fizzles and dies and is forgotten. So that's true and false.
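The true and false encodings described above translate straight into Python lambdas:

```python
# Church booleans: true takes two arguments and returns the first;
# false takes two arguments and returns the second.
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b

print(TRUE("yes")("no"))    # → yes
print(FALSE("yes")("no"))   # → no
```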

If you have true and false, you need to be able to use them to direct control flow. So we also need a construct that does if a, then b, else c-- something that takes in that Boolean and actually uses it. One implementation of this in the lambda calculus, using the true and false definitions that we have, is λa.λb.λc.a b c: you take in a, b, and c, and then you call a with b and c. So a is going to be a Boolean value. It takes in b and c. If it's true, it returns b. If it's false, it returns c. That's just how true and false work in the formalism that we've defined.
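Here's that if-then-else construct in Python, using the Church booleans from before (note that under Python's eager evaluation both branches get evaluated, which is fine for plain values):

```python
# λa.λb.λc.a b c: apply the Boolean itself to the two branches.
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
IF = lambda a: lambda b: lambda c: a(b)(c)

print(IF(TRUE)(1)(0))    # → 1
print(IF(FALSE)(1)(0))   # → 0
```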

The way that this gets drawn is something like this. I've annotated it in green, but I don't consider the green to be part of the notation. So you have the arguments going in top down again. a applies to b, which then applies to c. You could instead, if you wanted to, write this as a box which takes in both b and c at the same time, instead of drawing it so that a takes b and then c one at a time. Passing the arguments in one at a time like that is called currying. It will work the same either way.

So to give an example of what this might look like when you execute it, if we want to compute if true, then 1, else 0, naturally, we know the result of this is going to be 1 because it's true. What we would write down in lambda calculus would be first the if then else construct, then the Boolean true. Then 1 and 0 are just placeholders for whatever we decide 1 and 0 are going to be later.

The way this ends up looking in this notation is that you have your if then else structure. It first is applied to the Boolean true, and then it's applied to each of the arguments in turn. And as you go evaluating this, what you do is you plug the boxes into the other boxes, and you wire the wires together. So for the first step of evaluation, you get true gets passed in as the first input to this partially evaluated if then else block. You can do this for the 1 and the 0 also so that you get first the true, then the 1 and the 0 now being passed in to what was originally the structure in the if then else block.

Then after this, we can imagine this box going down into each of the boxes and being applied. So what we get is that 1 is the return value and 0 fizzles and is lost-- and naturally, 1 is the one we wanted returned. So that's an example of this notation and how you would evaluate it. I'm going to go through some other examples pretty quickly. Linked lists are also something that you can build in lambda calculus. I'll do these pretty fast.

You can actually implement linked lists using the true, false, and if-then-else that we've already got. So one way that you can implement this-- you need the functions first, second, and pair. You can make pair be something that takes in two values a and b, then waits for a third thing, a Boolean, checks its value, and returns you a or b. Then your implementations of first and second are pretty simple: first passes true to the pair that it receives, and second passes false.

So just to give you a sense of what these might look like written out-- first, substituting in the true, false, and if-then-else that we already had, this is what you get in pure lambda calculus. And then if you want to draw these using the circuits, what you'll get is something like this. first and second each take in one argument and apply it to a Boolean. pair takes in three arguments, but waits for the third argument before it can do anything, because there's a dependency here: the third argument needs to be applied to each of the other two.
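The pair construction sketched above also runs directly as Python:

```python
# Church pairs: pair stores two values and waits for a Boolean selector;
# first and second just hand the pair true or false.
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
PAIR = lambda a: lambda b: lambda sel: sel(a)(b)
FIRST = lambda p: p(TRUE)
SECOND = lambda p: p(FALSE)

p = PAIR(10)(20)
print(FIRST(p), SECOND(p))   # → 10 20
```

Chaining pairs (a value paired with the rest of the list) is exactly how the linked lists mentioned here get built.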

To give two more examples with combinators-- the omega combinator and the y combinator. The omega combinator is a lambda calculus term which applies its input to itself. This is the thing that we saw loop forever when applied to itself. You can draw it like this: you have one input, and that input gets applied to itself, so there's this little splitting motion that occurs.

If you have two of them, you have one being applied to the other. And then if you try to evaluate this, you'll plug it in. You get something like this where that's the function that was applied. But then this, in turn, splits again, so it loops back up to what it was originally, and then back down, up and down forever.

One last example here, since talking about the lambda calculus is not complete without talking about the y combinator. There's this thing called the fixed point combinator, which is what allows you to do recursion in the lambda calculus. So it's what allows you to write recursive functions that have access to their own value. One implementation of the fixed point combinator is the y combinator, which is λf.(λx.f (x x)) (λx.f (x x)). What this looks like if you draw it out in the circuitry is this. You can see there are some little omega pieces in there, so we can get the sense that something crazy with loops is going to happen.

If you want to double check that this is, in fact, a fixed point combinator, you can apply it to f, and then f is this input that gets passed in. Evaluating this a couple of more steps, you have f becomes the function in this block. We then apply this block to itself. And all of a sudden, there's another f here. So the result is f of y of f where y of f is equal to this earlier stage. But then we've applied it, and now there's another f right before the very last chunk of output.
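You can try this in Python too, with one caveat: the y combinator as written assumes lazy evaluation, so under Python's eager evaluation it loops forever. This sketch uses the eta-expanded variant (usually called the Z combinator), which behaves the same way:

```python
# Z combinator: the y combinator with its self-application wrapped in
# λv. ... so Python doesn't evaluate it eagerly into an infinite loop.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(
              lambda x: f(lambda v: x(x)(v)))

# Recursion without any function ever referring to its own name:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # → 120
```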

So that's the end of me talking about lambda calculus. The point of this is to show you an example of a notation for a programming language. If you're interested in this, some suggested further exercises: read the Wikipedia page about lambda calculus, where you can learn how to implement a bunch of these different pieces. You can try to implement numbers-- addition, subtraction, multiplication, and exponentiation-- like how we implemented Booleans. You can try to do map, reduce, and filter for a linked-list implementation, like the one we showed earlier. And I recommend the book To Mock a Mockingbird, which is full of puzzles about combinators and the calculus of combinators.
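As a head start on the numbers exercise, here's the standard Church-numeral encoding in Python (encode n as "apply f n times"; the to_int helper is just for inspection):

```python
# Church numerals: a number n is a function that applies f to x n times.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

to_int = lambda n: n(lambda k: k + 1)(0)  # decode by counting applications

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(ADD(TWO)(THREE)))   # → 5
```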

I also want to mention that I'm not the first person to come up with this idea. Other people have come up with ideas like this before. In 1996, there was this thing published on the internet called "To Dissect a Mockingbird," which has a pretty similar notation, maybe with some differences. Depending on your aesthetics, you might prefer this one, or you might prefer the other one. And it actually goes through some of the exercises in To Mock a Mockingbird and shows what happens.

There are other implementations of this, too. There's this cute thing called the Alligator Eggs Game, which is actually something you can print out and cut out and play with people by moving these alligators and eggs around where there are rules governing how the alligators behave, and it ends up simulating a lambda calculus evaluator. And lastly, there is this thing called Visual Lambda Calculus, which is this colored bubble notation which comes with a GUI that you can play around with. So people have thought about this before, and there are lots of competing standards for which thing might be the most interesting. But I think they're all cool.

So that's the end of that tangent. Now I want to take a step back and talk about notations again and talk about notations with respect to abstractions. So I talked about these notations before. One of the things that's interesting about these notations is that each one is associated with a very specific abstraction. When you're talking about these circuit diagrams, they can only pertain to talking about circuits. Trees can only pertain to talking about trees. State machines can only pertain to talking about state machines.

You can maybe take the visual ideas that are there and build on them, but that would be a different abstraction. Each one is very specific to a very specific problem, and you can't take it and be like, OK, I'm going to use circuit diagrams to do my algebra homework. That's just not going to work. So each of these is tied to a very specific abstraction, and the notation can only exist where there is an abstraction already present.

Another thing that I want to point out: people talk about abstractions, abstractions, abstractions-- I actually had to refrain from titling this slide "Abstractions" three times when I was making this presentation. I think abstractions are very valuable because you can choose to use them or not. I am not impeded in doing my algebra homework by the fact that circuit diagrams are out there for me to use. But if I wanted to, I could use them, and they would be useful.

I think abstractions are a trade-off. Any abstraction trades away the freedom of what you could use it to solve for the specificity it gives you on the particular problem it's solving. But I think this is still valuable, because it's nice to have a toolkit where you can decide how specific you want to be in dealing with a problem and choose the abstraction that's right for it. So I think building this toolkit is really valuable, and I think building this toolkit will give us more opportunities to create notations on top of the parts of our toolkit.

I want to talk a bit about abstractions that I like, that I want to encourage people to read more about or make more things like. I am fond of abstractions that constrain the code you're writing so that it's more correct. And I want to see notations for dealing with those, too. I'm a fan of static type checking. I'm also a fan of dependent types for code correctness.

One interesting project that I recommend you look at is called Dafny. It's a project by Microsoft Research, an experimental programming language that's like static type checking, but more. In addition to specifying types, you can also specify that the function you're writing has this precondition and this postcondition. That will be checked for you, and if something goes wrong, it will ask whether you meant to do something else.
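Dafny verifies those conditions statically, before the program runs. As a rough runtime analogue in Python (the decorator name and the conditions here are made up for illustration, and this only checks at call time, not at compile time), you can assert a precondition and postcondition around each call:

```python
# Sketch of pre/postcondition checking via a decorator; Dafny does this
# statically, whereas this version only fails when the code actually runs.
def contract(pre, post):
    def wrap(fn):
        def checked(*args):
            assert pre(*args), "precondition failed"
            result = fn(*args)
            assert post(result, *args), "postcondition failed"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: r * r <= x < (r + 1) ** 2)
def int_sqrt(x):
    r = 0
    while (r + 1) ** 2 <= x:
        r += 1
    return r

print(int_sqrt(10))   # → 3
```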

I also want to talk about executable biology really quickly. I mentioned that there are lots of different abstraction levels in biology. You could have proteins and protein folding. You could have protein-signaling networks. You can have cells communicating with each other. You can have tissues and entire organisms. So biology, as a field, is full of different abstractions. And I think there's this huge space that needs to be filled: creating programming languages that deal with each of the different layers and that can communicate with each other. I think there's a real absence there right now.

An example of a programming language for executable biology is Kappa, which does modeling of rule-based protein interaction networks. So it has rules for which proteins interact with which other ones, and what complexes they form, and what the results of those are. But I think there is room for more programming languages that deal with different layers here.

One last thing I want to mention-- I think it's really interesting to come up with new programming models that might provide us better abstractions on top of what we're already working with. One example that I want to point at-- there is another research programming language called Pict, which does concurrent programming. It's based on a model of computation called the pi calculus, which is another calculus that nobody has heard of before.

The really interesting thing about this is that it does both sequential composition of code-- so you can say, I want this to come after this to come after this-- and parallel composition of code. So it gives you the space to say, hey, I have this long process that needs to run and this other long process that needs to run; I don't care what order they happen in, just do them, they aren't interdependent. That isn't really something that many other programming languages give us a way to express.
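The sequential-versus-parallel distinction can be sketched in Python with asyncio (this is only an analogy to pi-calculus composition, not Pict itself): awaiting one step after another is sequential composition, while asyncio.gather says "run these, I don't care in what order":

```python
# Sequential vs. parallel composition, sketched with asyncio.
import asyncio

async def step(name):
    await asyncio.sleep(0.01)   # stand-in for a long-running process
    return name

async def main():
    first = await step("a")     # sequential: a strictly before b
    second = await step("b")
    # Parallel: c and d run concurrently; results come back in order.
    both = await asyncio.gather(step("c"), step("d"))
    print(first, second, both)  # → a b ['c', 'd']

asyncio.run(main())
```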

While I'm talking about notations in programming languages, there is one final bit of research that I couldn't resist including in this presentation. One interesting thing about the history of notations is that it is also full of wars about which ones are correct and which ones are not correct, going back to the late 1800s at least. That's the example I have. So vectors versus quaternions-- in the late 1800s, a bunch of mathematicians were trying to come up with formalisms for talking about electromagnetism in electromagnetic theory. And they were getting into fights about which notation would be better to use for this.

Oliver Heaviside wrote the textbook Electromagnetic Theory in 1893. In it, he writes this about the other people working with quaternions, who were talking about an operator they were calling a quaternion: it's really a vector, and "it is as unfair to call a vector a quaternion as it is to call a man a quadruped." He also writes, "students who had found quaternions quite hopeless could understand my vectors very well"-- in his textbook.

So as this went on, standards proliferated. People were coming up with different notations for vector operations. There's this great book called A History of Mathematical Notations, written by Florian Cajori in 1928, where he writes about all of the different vector notations that proliferated. Just for the operation curl, you'd have curl, quirrel, wrought, vort, rot with a tilde over the vector-- and of course the usual nabla cross, which is the one I'm used to. So that was the state of things in the early 1900s; the history was written in 1928.

At that point, they started appointing committees to try to solve this problem. In 1903, the committee couldn't decide. In 1908, there was a Special Commission of the International Congress of Mathematicians; they couldn't decide, either. In A History of Mathematical Notations, Cajori writes that whenever two committee members tried to compromise, the result of their compromise would be three more notations, which everyone would then have to fight over.

So I guess the moral of the story is wars over different notations and picking the right notation-- they happen. And I think maybe having these many multiple competing standards is the price of innovation. So maybe it's good that we're innovating. We have lots of different things that are competing with each other. We can eventually pick which one we like the most.

So here's what I think programming languages can learn from notations. One: be visual. I want to see more visual tools for interacting with programming languages. Two: build more abstractions, since notations can only exist on top of them. And the last one: standardization wars happen sometimes. If you're interested in the things I've talked about in this talk, I'd recommend at least these three books. Drawing Theories Apart, by the historian David Kaiser, talks about the history of Feynman diagrams, how people communicated them to each other, and how they evolved and spread across communities of physicists.

To Mock a Mockingbird contains combinatory logic puzzles. And A History of Mathematical Notations is pretty fun to read. It has that vector stuff as well as everything back to ancient Egypt.

So thanks for attending this talk.

[APPLAUSE]