Substitution versus Environment-based Semantics

Let's step back and consider the semantics we've assumed so far for our lambda calculi. We've been doing this:

(\x. M) N ~~> M {x <- N}

where M {x <- N} is the result of substituting N in for free occurrences of x in M.

What do I mean by calling this a "semantics"? Wasn't this instead part of our proof-theory? Haven't we neglected to discuss any model theory for the lambda calculus?

Well, yes, that's right, we haven't much discussed what computer scientists call denotational semantics for the lambda calculus. That's what philosophers and linguists tend to think of as "semantics."

But computer scientists also recognize what they call operational semantics and axiomatic semantics. The former consists in the specification of an "abstract machine" for evaluating complex expressions of the language. The latter we won't discuss.

When it comes down to it, I'm not sure what difference there is between an operational semantics and a proof theory; though it should be noted that operational semantics generally don't confine themselves to abstract machines whose transitions exactly match the ones licensed in the formal system being interpreted. Instead, any viable algorithm for reducing expressions to some sort of normal form (when they can be so reduced) would be entertained.

If we think of the denotations of lambda terms as sets of inter-convertible terms, and let the sets which have normal forms use that normal form as their representative, then operational semantics can be thought of as algorithms for deriving what a formula's denotation is. But it's not necessary to think of the denotations of lambda terms in that way; and even if we do, matters are complicated by the presence of non-normalizing terms.

In any case, the lambda evaluator we use on our website does evaluate expressions using the kind of operational semantics described above. We can call that a "substitution-based" semantics.

Let's consider a different kind of operational semantics. Instead of substituting N in for x, why don't we keep some extra data-structure on the side, where we note that x should now be understood to evaluate to whatever N does? In computer science jargon, such a data-structure is called an environment. Philosophers and linguists would tend to call it an assignment (though there are some subtleties about whether these are the same thing, which we'll discuss).

[Often in computer science settings, the lambda expression to be evaluated is first translated into de Bruijn notation, which we discussed in the first week of term. That is, instead of:

\x. x (\y. y x)

we have:

\. 1 (\. 1 2)

where each variable-occurrence records the distance between that occurrence and the \ which binds it. This has the advantage that the environment can then just be a stack, with the top element of the stack being what goes in for the most recently bound variable, and so on.

Some students have asked: why do we need de Bruijn notation to represent environments that way? Can't we just assume our variables are of the form x1, x2, x3, and so on? And then the values that get assigned to those variables could just be kept in a stack, with the first value understood to be assigned to x1, and so on. Yes, we could do that. But the advantage of the de Bruijn notation is that we don't have to keep track of which variable is associated with which lambda binder. In the de Bruijn framework, an occurrence's index tells you exactly how far down the stack to look: the innermost binder always goes with the top-most member of the stack, the next binder out with the next member, and so on. In the proposed alternative, that's not so: we could have:

\x2. x2 (\x8. x8 x2)

In any event, though, we won't make use of de Bruijn notation: though it makes the semantic algorithms more elegant, it also adds an extra level of conceptual complexity, which we'd rather avoid.]

Now with the environment-based semantics, instead of evaluating terms using substitution, like this:

(\x. M) N ~~> M {x <- N}

we'll instead evaluate them by manipulating their environment. To interpret (\x. M) N in environment e, we'll interpret M in the environment e {x:=N}, which is just like e except that x is now assigned to N.

A few comments. First, what the environment associates with the variable x will be expressions of the lambda calculus we're evaluating. If we understand the evaluation to be call-by-name, these expressions may be complexes of the form N P. If, on the other hand, we understand the evaluation to be call-by-value, then these expressions will always be fully evaluated before being inserted into the environment. That means they'll never be of the form N P, but only of the form y or \y. P. The latter kinds of expressions are called "values." But "values" here are just certain kinds of expressions. (They could be the denotations of lambda expressions, if one thinks of the lambda expressions as all having normal-form lambda terms as their denotations, when possible.)

Second, there is a subtlety here we haven't yet discussed. Consider what should be the result of this:

let x = 1
in let f = \y. x
in let x = 2
in f 0

If we interpret the x in \y. x by the value it had where that function was declared and bound to f, then this complex expression should return 1. If on the other hand we interpret the x to have the value it has where that function is given an argument and reduced, then the complex should return 2.

There is no right or wrong behavior here. Both are reasonable. The former behavior is called lexical scoping, and it's what we get in the untyped lambda calculus, for example. It's also what's most common in functional programming languages. It's easier for programmers to reason about.

The latter behavior is called dynamic scoping. Often it's easier for language-designers to implement. It is also useful in certain settings. But we will assume we're dealing with lexical scoping.
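
OCaml itself is lexically scoped, and you can check this by transliterating the complex expression above into ordinary OCaml (writing \y. x as fun y -> x):

let x = 1 in
let f = fun y -> x in
let x = 2 in
f 0   (* evaluates to 1, not 2 *)

The inner binding of x to 2 doesn't affect the x inside f's body, which was fixed where f was declared.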

To implement lexical scoping, you can't just associate the bare formula \y. x to the variable f in an environment. You also have to keep track of what values all the free variables in the formula have in that context. This combination of a function expression together with the values of its free variables is called a function closure. There are different techniques for handling these. The technique we'll use here is conceptually simple: it just stashes away both the formula \y. x and a copy of the current environment. So even if we're later asked to evaluate f in a different environment, we have the original environment to use to look up values for f's free variables. This is conceptually simple---but inefficient, because it stashes a copy of the entire current environment, when really we only need the part of the environment relevant to a few free variables. If you care to read up more about operational semantics for the lambda calculus, or the underpinnings of how Scheme or OCaml interpreters work under the hood, you can learn about other more elegant techniques. Here we'll keep things conceptually simple.

With these ideas settled, then, we can present an environment-based operational semantics for the untyped lambda calculus. Here is a call-by-value version, which assumes that expressions are always fully evaluated before being inserted into the environment.

  1. When e assigns some value v to x, then x fully (that is, terminally) reduces in environment e to v. We write that as: (e |- x) ==> v.

  2. (e |- \x. M) ==> closure(e, \x. M), where a closure is some data-structure (it might just be a pair of the environment e and the formula \x. M).

  3. If (e |- M) ==> closure(e2, \x. M2) and (e |- N) ==> v and (e2 {x:=v} |- M2) ==> u, then (e |- M N) ==> u. Here e2 {x:=v} is the environment which is just like e2 except it assigns v to x.
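
Here, for concreteness, is a minimal OCaml sketch of these three rules. The type and constructor names (term, value, Closure, reduce) are our own choices for illustration:

type term = Var of char | Lam of char * term | App of term * term

(* a value is a closure: a lambda abstract plus the environment
   it was formed in *)
type value = Closure of env * char * term
and env = (char * value) list

let rec reduce (e : env) (t : term) : value =
    match t with
      Var x -> List.assoc x e                (* rule 1 *)
    | Lam (x, m) -> Closure (e, x, m)        (* rule 2 *)
    | App (m, n) ->                          (* rule 3 *)
        let Closure (e2, x, m2) = reduce e m in
        let v = reduce e n in
        reduce ((x, v) :: e2) m2

For example, reduce [] (App (Lam ('x', Var 'x'), Lam ('y', Var 'y'))) evaluates to Closure ([], 'y', Var 'y').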

Explicitly manipulating the environment

In the machinery we just discussed, the environment handling was not part of the language being interpreted. It all took place in the interpreting algorithm. Sometimes, though, it's convenient to explicitly manipulate the environment in your program; or at least, some special portion of the environment, set aside to be manipulated in that way.

For example, a common programming exercise when students are learning languages like OCaml is to implement a simple arithmetic calculator. You'll suppose you're given some expressions of the following type:

type term = Constant of int
    | Multiplication of (term * term)
    | Addition of (term * term)
    | Variable of char
    | Let of (char * term * term);;

and then you'd evaluate it something like this:

let rec eval (t : term) = match t with
      Constant x -> x
    | Multiplication (t1,t2) -> (eval t1) * (eval t2)
    | Addition (t1,t2) -> (eval t1) + (eval t2)
    | Variable c -> ... (* we'll come back to this *)
    | Let (c,t1,t2) -> ... (* this too *)

With that in hand, you could then evaluate complex terms like Addition(Constant 1, Multiplication(Constant 2, Constant 3)).

But then how should you evaluate terms like Let('x',Constant 1,Addition(Variable 'x', Constant 2))? We'd want to carry along an environment that recorded that 'x' had been associated with the term Constant 1, so that we could retrieve that value when evaluating Addition(Variable 'x', Constant 2).

Notice that here our environments associate variables with (what from the perspective of our calculator language are) real values, like 2, not just value-denoting terms like Constant 2.

We'll work with a simple model of environments. They'll just be lists. So the empty environment is []. To modify an environment e like this: e {x:=1}, we'll use:

('x', 1) :: e

that is, e is a list of pairs, whose first members are chars, and whose second members are evaluation results. To look up the value of 'x' in e, we can use the List.assoc function, which behaves like this:

# List.assoc 'x' [('x',1); ('y',2)];;
- : int = 1
# List.assoc 'z' [('x',1); ('y',2)];;
Exception: Not_found.

Then we should give our eval function an extra argument for the environment:

let rec eval (t : term) (e: (char * int) list) = match t with
      Constant x -> x
    | Multiplication (t1,t2) -> (eval t1 e) * (eval t2 e)
    | Addition (t1,t2) -> (eval t1 e) + (eval t2 e)
    | Variable c ->
        (* lookup the value of c in the current environment
           This will fail if c isn't assigned anything by e *)
      List.assoc c e
    | Let (c,t1,t2) -> 
        (* evaluate t2 in a new environment where c has been associated
           with the result of evaluating t1 in the current environment *)
      eval t2 ((c, eval t1 e) :: e)
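
For instance, feeding these definitions to the OCaml toplevel, we can evaluate the Let term from above in the empty environment:

# eval (Let('x',Constant 1,Addition(Variable 'x',Constant 2))) [];;
- : int = 3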

Great! Now we've built a simple calculator with let-expressions.

Monadizing it

As the calculator gets more complex, though, it will become more tedious and unsatisfying to have all the clauses like:

    | Addition (t1,t2) -> (eval t1 e) + (eval t2 e)

have to explicitly pass around an environment that they're not themselves making any use of. Would there be any way to hide that bit of plumbing behind the drywall?

Yes! You can do so with a monad, in much the same way we did with our checks for divide-by-zero failures.

Here we'll use a different monad. It's called the Reader monad. We define it like this:

(* we assume we've settled on some implementation of the environment *)
type env = (char * int) list;;

(* here's the type of our Reader monad *)
type 'a reader = env -> 'a;;

(* here's our unit operation; it creates a reader-monad value that
   ignores the environment and returns x *)
let unit x = fun (e : env) -> x;;

(* here's our bind operation; how does it work? *)
let bind (u : 'a reader) (f: 'a -> 'b reader) : 'b reader =
    (* this can be written more compactly, but having it spelled out
       like this will be useful down the road *)
    fun (e : env) -> let a = u e in let u' = f a in u' e;;


(* we also define two special-purpose operations on our reader-monad values *)

(* evaluate a reader-monad value in a shifted environment; how does this work? *)
let shift (c : char) (v : int reader) (u : 'a reader) =
    fun (e : env) -> u ((c, v e) :: e);;

(* lookup the value of c in the current environment
   this will fail if c isn't assigned anything by that environment
   a fuller solution would return an int option instead of
   returning an int or failing *)
let lookup (c : char) : int reader =
    fun (e : env) -> List.assoc c e;;
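
We'll also help ourselves to a lift2 operation, which applies a two-place function to two reader-monad values. It's definable from unit and bind in the standard way (the derivation further below relies on exactly this definition):

(* lift a two-place function into the reader monad *)
let lift2 (f : 'a -> 'b -> 'c) (u : 'a reader) (v : 'b reader) : 'c reader =
    bind u (fun x -> bind v (fun y -> unit (f x y)));;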

With this in hand, we can then do our calculator like this. Instead of ints, evaluating a term now returns an int reader, a monadic value of our new reader-monad type:

let rec eval (t : term) = match t with
      Constant x -> unit x
    | Multiplication (t1,t2) -> lift2 ( * ) (eval t1) (eval t2)
    | Addition (t1,t2) -> lift2 ( + ) (eval t1) (eval t2)
    | Variable c -> lookup c
    | Let (c,t1,t2) -> shift c (eval t1) (eval t2);;

Now if we try:

# let result = eval (Let('x',Constant 1,Addition(Variable 'x',Constant 2)));;
val result : int reader = <fun>

How do we see what integer that evaluates to? Well, result is of type int reader, which is a function from an env to an int, so we need to give it some env to operate on. We can give it the empty environment:

# result [];;
- : int = 3

Great! Now our calculator uses a monad, so it's much higher-tech.

Ummm...and why is this useful?

I guess you haven't been paying close enough attention, or you don't have much practical experience in linguistics yet.

In Heim and Kratzer's textbook Semantics in Generative Grammar, the interpretation of complex phrases like [[interprets complex phrases]] is computed over trees that look like this:

                VP
              /    \
             /      \
            /        \
           /          \
          /            \
         /              NP
        /              /  \
       /              /    \
       V             /      \
       |            /        \
[[interprets]]    AP         N
                  / \         |
              [[complex]] [[phrases]]

Now the normal way in which the nodes of such trees are related to each other is that the semantic value of a parent node is the result of applying the functional value of one of the daughter nodes to the value of the other daughter node. (The types determine which side is the function and which side is the argument.) One exception to this general rule concerns intersective adjectives. (How does [[complex]] combine with [[phrases]]?) We'll ignore that though.

Another exception is that Heim and Kratzer have to postulate a special rule to handle lambda abstraction. (This is their "Predicate Abstraction Rule.") Not only is it a special rule, but it's arguably not even a compositional rule. The basic idea is this. The semantic value of:

[[man who(i): Alice spurned i]]

is the result of combining [[man]], an e->t type predicate value, with the adjective-type value of [[who(i): Alice spurned i]]. As I said, we'll ignore complexities about their treatment of adjectives. But how is [[who(i): Alice spurned i]] derived? Heim and Kratzer say this is defined only relative to an assignment g, and it's defined to be a function from objects x in the domain to the value that [[Alice spurned i]] has relative to the shifted assignment g {i:=x}, which is like g except for assigning object x to variable i. So this is not the result of taking some value [[who(i)]] and some separate value [[Alice spurned i]], and supplying one of them as argument to the other as function.

Getting any ideas?

Yes! We can in fact implement this using a Reader monad. And in doing so, we will get a value [[who(i)]] which is a function, and another value [[Alice spurned i]], to be its argument. So, first, the semantics again becomes compositional; and second, it doesn't need any special rule for how [[who(i): Alice spurned i]] is derived. It uses the same composition rule as other complex expressions.

How does this work?

We set [[i]] = the reader-monad value lookup i.

We set [[Alice]] = the reader-monad value unit Alice.

We have to lift the semantic values of predicates into the Reader monad. So if before we were taking the semantic value of "spurned" to be a function S of type e -> e -> t, now we set [[spurned]] = lift2 S.

Finally, we set [[who(i)]] to be:

fun (u : bool reader) (v : entity reader) ->
    fun (g : env) -> u (g {i:=v g})

That is, it takes as arguments a clause-type reader-monad u, and an entity-type reader-monad v, and returns a reader-monad that evaluates u in an environment that's modified to assign v's value in that environment to the variable i. In other words, this is:

fun (u : bool reader) (v : entity reader) -> shift i v u

You can trace through what happens then if we apply [[who(i)]] to [[Alice spurned i]]:

[[Alice spurned i]] = [[spurned]] [[i]] [[Alice]]
    = (lift2 S) (lookup i) (unit Alice)
    = bind (lookup i) (fun x -> bind (unit Alice) (fun y -> unit (S x y)))

Because of the left-identity rule for unit, this is the same as:

    = bind (lookup i) (fun x -> unit (S x Alice))

Substituting in the definition of bind for the reader-monad, this is:

    = fun e -> (fun x -> unit (S x Alice)) (lookup i e) e
    = fun e -> unit (S (lookup i e) Alice) e

Substituting in the definition of unit, this is:

    = fun e -> S (lookup i e) Alice

And now supplying [[Alice spurned i]] as an argument to [[who(i)]], we get:

[[who(i): Alice spurned i]] = [[who(i)]] [[Alice spurned i]]
    = (fun u v -> shift i v u) (fun e -> S (lookup i e) Alice)
    = fun v -> shift i v (fun e -> S (lookup i e) Alice)

Substituting in the definition of shift, this is:

    = fun v -> (fun c v u e -> u ((c, v e) :: e)) i v (fun e -> S (lookup i e) Alice)
    = fun v -> (fun u e -> u ((i, v e) :: e)) (fun e -> S (lookup i e) Alice)
    = fun v -> (fun e -> (fun e -> S (lookup i e) Alice) ((i, v e) :: e))
    = fun v -> (fun e -> S (lookup i ((i, v e) :: e)) Alice)
    = fun v -> (fun e -> S (v e) Alice)

In other words, it's a function from entity-Reader monads to a function from assignment functions to the result of applying S to the value of that entity reader-monad under that assignment function, and to Alice. Essentially the same as Heim and Kratzer's final value, except here we work with monadic values, such as functions from assignments to entities, rather than bare entities. And our derivation is completely compositional and uses the same composition rules for joining [[who(i)]] to [[Alice spurned i]] as it uses for joining [[spurned]] to [[i]] and [[Alice]].
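
If you'd like to check this derivation mechanically, here is a toy OCaml rendering of the fragment. Everything in it is illustrative: we model entities as strings, re-declare the environment to map variables to entities rather than ints, and make up an arbitrary extension for S:

type entity = string;;
type env = (char * entity) list;;
type 'a reader = env -> 'a;;

let unit x = fun (e : env) -> x;;
let bind (u : 'a reader) (f : 'a -> 'b reader) : 'b reader =
    fun (e : env) -> let a = u e in let u' = f a in u' e;;
let lift2 f u v = bind u (fun x -> bind v (fun y -> unit (f x y)));;
let shift (c : char) (v : entity reader) (u : 'a reader) =
    fun (e : env) -> u ((c, v e) :: e);;
let lookup (c : char) : entity reader =
    fun (e : env) -> List.assoc c e;;

(* a made-up extension for S; following the word order in the
   derivation above, the first argument is the one spurned and the
   second is the spurner *)
let s (x : entity) (y : entity) : bool = (y = "Alice" && x = "Bob");;

let alice = unit "Alice";;        (* [[Alice]] *)
let i = lookup 'i';;              (* [[i]] *)
let spurned = lift2 s;;           (* [[spurned]] *)
let who c (u : bool reader) (v : entity reader) = shift c v u;;

let clause = spurned i alice;;    (* [[Alice spurned i]] *)
let pred = who 'i' clause;;       (* [[who(i): Alice spurned i]] *)

Then pred (unit "Bob") [] evaluates to true, just as the reduced form fun v -> (fun e -> S (v e) Alice) predicts: it comes out to s "Bob" "Alice".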

What's not to like?

Well, some of our semantic values here are assignment-shifters:

[[who(i)]] = fun u v -> shift i v u

and some philosophers count assignment-shifters as "monsters" and think there can't be any such thing. Well, everyone has their own issues they need to work through.

Later, techniques parallel to what we do here can be used to implement semantics for mutation and dynamic predicate logic. And then again, parallel techniques can be used to implement the "coordinated" semantics that Kit Fine and Jim Pryor favor. We just need different monads in each case.

Want more right now? Then let's look at doing the same thing for Intensionality.