There's a lot more trivia and links here than anyone needs to know for this seminar. It's there for anyone who may be interested.

Others (and ourselves) will often talk about "functional programming languages." But it would be more appropriate to talk of functional paradigms or programming patterns. Most programming languages are hybrids that allow programmers to use any of several programming paradigms. The ones that get called "functional languages" are just ones that give functional paradigms a central place in their design, and make them very easy to use.

We can divide functional languages into two classes: those that are dynamically typed and those that are statically typed.

The dynamically typed languages give types more of a background role in the program. They include the Lisp family (which in turn includes all the variants of Scheme, as well as Common Lisp and Clojure). They also include Erlang, Joy, and Pure, among others.

Although these languages are hospitable to functional programming, some of them also permit you to write imperatival code (that is, code with side-effects). In Scheme, by convention, the names of imperatival functions end with a "!". So (set-car! p 1) is a Scheme expression that, when evaluated, mutates the pair p so that its first member changes to 1. For our purposes, though, we'll mostly be working with the parts of Scheme that are purely functional. We'll be discussing the difference between functional and imperatival programming a lot during the seminar.

We're using Scheme in parallel with our discussion of untyped lambda calculi. Scheme isn't really untyped. If you assign a string to the variable x and then try to add x to 1, Scheme will realize that strings aren't the right type of value to add to integers, and will complain about it. However, Scheme will complain about it at runtime: that is, at the point when that particular instruction is about to be executed. This is what's meant by calling these languages "dynamically typed."

In practice, dynamically typed languages allow the programmer to be more relaxed about the types of the values they're manipulating. For instance, it's trivial to create a list whose first member is a string, whose second member is an integer, and so on. You just have to keep track somehow so that you don't try doing anything with values of the wrong type, else you'll get an error at runtime.
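By contrast, in a statically typed language such a mixed list has to be given an explicit type up front. Here's a minimal sketch in OCaml (the type item and the function describe are illustrative names of our own, not anything standard):

```ocaml
(* A variant type wrapping the kinds of value the list may hold. *)
type item = Str of string | Int of int

(* A "mixed" list: every element has the single type item. *)
let mixed : item list = [Str "one"; Int 2; Str "three"]

(* Pattern matching forces us to handle each case, so using a value
   at the wrong type is caught at compile time, not at runtime. *)
let describe = function
  | Str s -> "string: " ^ s
  | Int n -> "int: " ^ string_of_int n

let () = List.iter (fun x -> print_endline (describe x)) mixed
```

The bookkeeping the dynamically typed programmer does "somehow, in her head" here becomes an explicit datatype that the compiler checks for her.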

The other large class of languages are statically typed. This means that typing information is checked at compile time: that is, when you're converting your source code into a file that your computer knows how to directly execute. If you make type mistakes---for instance, you try to add a string to an integer---the compiler will choke on this so you never get to the point of even trying to run the program. Once you finally do get the program to compile, you can be more confident that errors of that sort have all been eliminated. They can't sneak up to bite you unawares while the program is running.
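For instance, here is how that plays out in OCaml, one of the statically typed languages we'll mention later; this is just an illustrative sketch:

```ocaml
(* The following line, if uncommented, would be rejected at
   compile time, because (+) expects two ints:

     let bad = "one" + 1

   The compiler refuses to build the program, so that type error
   can never surprise you while the program is running. *)
let good = 1 + 1
let () = assert (good = 2)
```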

Formerly, static typing required the programmer to add lots of annotations to her source code, explicitly specifying the type of each function argument, the type of the function's return value, and so on. This is tedious, and partly for this reason dynamically typed languages have become popular and are thought of as easier to work with. However, statically typed languages nowadays tend to use "type inference": that is, you can let the computer do most of the work of figuring out what the types of everything are. For example, if you define a function like this:

let foo x y = (1 + x, sqrt y)

the compiler will see that its first argument x is added to an integer, so x also needs to be an integer; and (supposing sqrt is known independently to take floating-point arguments) foo's second argument y needs to be a floating-point value. What's more, foo returns a pair whose first member is of type int and whose second member is of type float. The compiler can figure all this out on its own. The programmer doesn't have to spell it out explicitly, though she could, as follows:

let foo (x:int) (y:float) : int*float = (1 + x, sqrt y)

This just says explicitly that foo takes an argument x of type int, an argument y of type float, and returns a pair of type int*float (that is, a pair whose first member is of type int and whose second member is of type float).
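To see the inference at work, here's a small OCaml sketch that applies foo; the asserted values are just what this particular call computes:

```ocaml
(* No annotations: the compiler infers
   foo : int -> float -> int * float all on its own. *)
let foo x y = (1 + x, sqrt y)

let (a, b) = foo 3 4.0
(* 1 + 3 = 4, and sqrt 4.0 = 2.0 *)
let () = assert (a = 4 && b = 2.0)
```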

Type inference allows programmers to enjoy the benefits of strict compile-time type-checking, which, as we said, helps eliminate a large class of errors at a more convenient time (when the program is being developed, rather than after it's been deployed and is running), while also making life easier for the programmer, who doesn't have to explicitly write out the types of everything. (However, if a programmer doesn't keep track of types at least in her head, she's going to get into trouble and will face compile-time errors until she gets everything sorted out.)

Though, as we said, dynamically typed languages have become popular, programmers who get used to modern statically typed languages find them productive. Sometimes they become zealots for working this way; in any case, they say that the popular dim opinion of static typing is based on out-of-date experience with older languages like C and Java.

Most functional programming languages these days are statically typed.

We can further divide these languages based on whether they use lazy or strict/eager evaluation. We'll discuss the difference between these during the seminar.

Most programming languages, functional or not, use strict/eager evaluation. For instance, the languages of the ML family are all statically typed functional languages with strict/eager evaluation. These include SML, Caml, and Nemerle. SML in turn has several variants or implementations: MLton, SML/NJ, Moscow ML, and Mythryl. Microsoft's F# is derived from Caml.

Other statically-typed functional languages with strict/eager evaluation are Scala and Coq. Like Scheme, many of these languages permit imperatival as well as functional coding; but they are regarded as functional programming languages because they are so hospitable to functional programming, and give it a central place in their design.

A few languages, such as Miranda and Haskell, are statically typed languages that instead mostly use lazy evaluation. More strictly, though, Haskell is lazy by default: you can also make Haskell evaluate some expressions strictly/eagerly; you just have to ask for that explicitly. (I don't know about Miranda.) Languages like OCaml are the reverse: they're strict/eager by default, but you can get lazy evaluation where it's needed; you just have to ask for it explicitly.
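In OCaml, for instance, you opt into laziness with the lazy keyword and the standard Lazy module. A minimal sketch:

```ocaml
(* OCaml is strict by default; wrapping an expression in `lazy`
   suspends it as an unevaluated thunk. *)
let expensive = lazy (print_endline "computing..."; 6 * 7)

(* Nothing has been computed yet. Lazy.force evaluates the thunk
   the first time and caches the result: *)
let v1 = Lazy.force expensive   (* prints "computing..." *)
let v2 = Lazy.force expensive   (* cached: prints nothing *)
let () = assert (v1 = 42 && v2 = 42)
```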

Haskell is much more extreme than OCaml about being non-imperatival. It's possible to write imperatival code in Haskell too, but one has to encapsulate it in an "ST" or "IO" monad. We'll be discussing in the seminar how monads can be used to create purely functional representations of imperatival algorithms. (You can do the same in languages like Scheme and OCaml, too. What's different is that in Haskell, monads are the only way to deal with imperatival code.)
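To preview the idea, here is a tiny sketch in OCaml (our own illustrative definitions, not Haskell's actual ST or IO machinery): a computation that "mutates" a counter can be represented, purely functionally, as a function from an input state to a result paired with an output state.

```ocaml
(* A stateful computation, represented without any mutation:
   a function from a state to a result and a new state. *)
type ('a, 's) state = 's -> 'a * 's

let return (x : 'a) : ('a, 's) state = fun s -> (x, s)

(* bind sequences two stateful computations, threading the state. *)
let bind (m : ('a, 's) state) (f : 'a -> ('b, 's) state) : ('b, 's) state =
  fun s -> let (x, s') = m s in f x s'

(* Read and "write" an int-valued state: *)
let get : (int, int) state = fun s -> (s, s)
let put (s' : int) : (unit, int) state = fun _ -> ((), s')

(* "Increment the counter," with no side-effects anywhere: *)
let incr = bind get (fun n -> put (n + 1))

let ((), final) = incr 0
let () = assert (final = 1)
```

Running incr just builds and applies ordinary functions; the "mutation" is nothing but the difference between the input state and the output state.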

We'll talk much more about monads, lazy vs strict evaluation, and functional vs imperatival code as we proceed.

We won't much discuss static vs dynamic typing; this has to do with lower-level implementation details than we'll be concerned with. However, you'll encounter the difference in practice as you work with Scheme and OCaml, respectively; and you'll see it referred to as you read around. So it's good for you to have placed it in your mental map.