# Functional Programming

Many people bemoan the difficulties of programming in the multi-core era. What many programmers do not realize is that a way out of deadlock/race-condition hell is known — its name is “Functional Programming.” This term should not be contrasted with “Object-Oriented Programming”; rather, it should be contrasted with “Imperative Programming.” The central idea in functional programming is that variables cannot change value: they are set when defined and can never receive an update. Languages like C, Java, and Python are “imperative” because the contents of variables can be modified.

A method written in a functional program is said to be “referentially transparent” — in other words, if you call the method twice with the same arguments, it will produce the same value both times. In fact, the second call can simply be optimized away by reusing the first value. Thus:

```
x = sin(3.0)
y = sin(3.0)
```

is the same as

```
x = sin(3.0)
y = x
```

if the function “sin” is referentially transparent (and it is). If I have a functional program, then, I can take any two method calls that do not depend on each other and run them in separate threads:

```
x = call1(3, 4, "x")
y = call2(5, 9, 8)
z = x + y
```

It would be perfectly safe, in a functional world, for call1 and call2 to execute simultaneously as they could not possibly affect each other. The computation of z could block until both calls finished, and no deadlock or race condition could occur.
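This scheme can be sketched in Python. Here `call1` and `call2` are hypothetical pure functions standing in for the calls above; because they depend only on their arguments and touch no shared state, submitting them to separate threads is safe:

```python
from concurrent.futures import ThreadPoolExecutor

def call1(a, b, s):
    # Pure: depends only on its arguments, modifies nothing.
    return a * b + len(s)

def call2(a, b, c):
    return a + b * c

with ThreadPoolExecutor() as pool:
    fx = pool.submit(call1, 3, 4, "x")  # runs concurrently...
    fy = pool.submit(call2, 5, 9, 8)    # ...with this call
    z = fx.result() + fy.result()       # blocks until both finish

print(z)  # 13 + 77 = 90
```

The computation of `z` simply waits on both futures; there is nothing for the two threads to fight over, so no lock is ever needed.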

But, as you have probably guessed, there are some difficulties programming in a functional language. We’ve come to count on loops, incrementing values, etc. How can we get things done if we don’t have them?

The answer is that there’s a simple way to transform any imperative code to functional code — just introduce more functions and more recursion.
```
func() {
  x = 3
  x = x + 4
  return x
}
```
could be re-written as
```
func_1(x) {
  return x + 4
}

func() {
  x = 3
  return func_1(x)
}
```
Of course, these two methods cannot be called in parallel, since one depends on the other. In some sense, however, that’s not the point. The point is that if we program in this fashion, we can easily identify regions of code that can execute safely in parallel (i.e. without deadlocks or race conditions).
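The same transformation handles loops: each update of a loop variable becomes a recursive call with fresh arguments. A Python sketch (the function names are mine) of summing 1 through n both ways:

```python
# Imperative version: total and i are repeatedly reassigned.
def sum_to_imperative(n):
    total = 0
    i = 1
    while i <= n:
        total = total + i
        i = i + 1
    return total

# Functional version: no variable is ever reassigned; the "update"
# is expressed as a recursive call carrying the new values.
def sum_to_functional(n, i=1, total=0):
    if i > n:
        return total
    return sum_to_functional(n, i + 1, total + i)

print(sum_to_imperative(100))   # 5050
print(sum_to_functional(100))   # 5050
```

(Python itself does not optimize this recursion, so very large `n` would overflow the stack; a language designed for this style would compile the tail call into a loop.)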

One fly in the ointment is that methods which do IO (e.g. print to the screen, read or write a file) cannot safely be run in parallel with each other. If two threads are modifying the state of a single file, then we have brought back race conditions. The simplest answer is to put all IO operations on a single thread. This is somewhat limiting, as many scientific applications rely on the speed they can get from doing parallel reads or writes of large quantities of data.
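The single-IO-thread rule can be sketched with a queue: worker threads do pure computation and ship their results to one writer thread, so only that thread ever touches the output. The names here are mine, and a plain list stands in for a file or the screen:

```python
import queue
import threading

io_queue = queue.Queue()
lines = []  # stands in for a file or the screen

def io_worker():
    # The only thread that performs "IO" (appending to lines).
    while True:
        item = io_queue.get()
        if item is None:   # sentinel: shut down
            break
        lines.append(item)

def compute(n):
    # Pure computation; the result is handed to the IO thread.
    io_queue.put(f"square({n}) = {n * n}")

writer = threading.Thread(target=io_worker)
writer.start()

workers = [threading.Thread(target=compute, args=(n,)) for n in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

io_queue.put(None)  # tell the writer to finish
writer.join()
print(len(lines))   # 4 lines written, race-free
```

The queue serializes the writes, so the workers never contend over the output, at the cost of funneling all IO through one thread.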

An alternative would be to segment IO operations into a parallel read phase and a parallel write phase on sets of distinct files. This would address the needs of certain scientific applications, but it might not be so great for applications like web servers which want to continuously read and write to many clients. A web server, however, might be able to launch numerous methods that read from a socket and write a response back. They could be viewed as autonomous small programs and could be run safely in parallel so long as they did not try to communicate data back to their parent thread.
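That web-server picture can be sketched as a pool of self-contained handlers. Here `handle_request` and the request strings are hypothetical stand-ins for reading from and writing to a real socket; the key property is that each handler shares nothing with its siblings or its parent:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request):
    # Reads its "socket" (the argument) and writes its "response"
    # (the return value), touching no state outside itself.
    return f"HTTP/1.1 200 OK\r\n\r\nyou asked for {request}"

requests = ["/index.html", "/about.html", "/contact.html"]
with ThreadPoolExecutor() as pool:
    responses = list(pool.map(handle_request, requests))

print(len(responses))  # 3 independent responses, no shared state
```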

With these few simple rules, then, it should be possible to construct a functional language that would allow the “average programmer” to take great advantage of multi-core computers — without the pain of debugging race conditions and deadlocks. Many functional languages do exist, but what is lacking (as far as I can see) is a simple language with syntax similar to Python or Java to bring this discovery of computer science to the masses.
