Edit: Apparently I missed the mark on what the question was really about, but I think my answer is pretty good, so I'm keeping it :-) (see below).
I'd suggest that a more concise way of phrasing the question might be: can a purely functional language compute everything an imperative language can?
First of all, suppose you take an imperative language like C and make it so variables can't be changed after they are defined. For example:
    int i;
    for (i = 0; i < 10; i++)   /* i++ is now illegal: i can never change */
Well, there goes your for loop. Do we get to keep while?
    while (i < 10)
Sure, but it's not very useful: i can't change, so the loop will either run forever or never run at all.
How about recursion? Yes, you get to keep recursion, and it's still useful:
    int sum(int *items, unsigned int count)
    {
        if (count) {
            return items[0] + sum(items + 1, count - 1);
        } else {
            return 0;
        }
    }
Now, with functions we don't modify state, but variables can, well, vary. Once a variable enters our function, it's locked in place. However, we can call the function again (recursion), and that's like getting a brand-new set of variables (the old ones stay unchanged). Even though there are multiple instances of items and count, sum((int[]){1,2,3}, 3) will always evaluate to 6, so you could replace that expression with 6 if you wanted.
Can we do anything we want this way? I'm not 100% sure, but I think the answer is yes, at least once you have closures as well.
Anyway, you get the idea: once a variable is defined, it can never be redefined. A referentially transparent expression, given the same variables, always yields the same value; for example, if x is defined as 5, then x + 1 is always 6, and any occurrence of x + 1 can be replaced by 6.
I recommend looking at Haskell, a purely functional language. Strictly speaking, Haskell does not have an "assignment" operator. For example:
    my_sum numbers = ???
        where
            i     = 0
            total = 0
Here you can't write a "for" loop that increments i and total as it goes. However, all is not lost: just use recursion to get new i's and total's:
    my_sum numbers = f 0 0
        where
            f i total =
                if i < length numbers
                    then f i' total'
                    else total
                where
                    i'     = i + 1
                    total' = total + (numbers !! i)
(Note that this is a dumb way to sum a list in Haskell, but it demonstrates the technique.)
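For comparison, a more idiomatic version would just lean on the standard library. Here is a small sketch of my own (mySum1 and mySum2 are made-up names, not part of the original answer):

    import Data.List (foldl')

    -- Both evaluate to 6 on [1,2,3], just like my_sum above.
    mySum1 :: [Int] -> Int
    mySum1 = sum              -- the standard library function

    mySum2 :: [Int] -> Int
    mySum2 = foldl' (+) 0     -- an explicit strict left fold

Either one replaces the hand-rolled recursion without any mutation in sight.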
Now consider this extremely imperative-looking code:
    main = do
        a <- readLn
        b <- readLn
        print (a + b)
It is really syntactic sugar for:
    main = readLn >>= (\a -> readLn >>= (\b -> print (a + b)))
The idea is that, rather than main being a function consisting of a list of statements, main is an action that Haskell executes, and actions are built up and chained together with the bind operator, >>=. In addition, an action that does nothing but yield a given value can be constructed with the return function.
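For instance, here is a small sketch of my own showing bind and return working together (addOne is a made-up name for illustration):

    -- Read an Int and yield it plus one, without ever mutating anything.
    addOne :: IO Int
    addOne = readLn >>= (\n -> return (n + 1))

    main :: IO ()
    main = addOne >>= print   -- run the action, then print its result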
Note that bind and return aren't limited to actions; they can be used with any type that calls itself a Monad to do all kinds of funky things.
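To make that concrete, here is a sketch of mine using Maybe instead of IO (safeDiv is a made-up helper, not part of the original answer):

    -- Division that yields Nothing instead of crashing on zero.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    example :: Maybe Int
    example = safeDiv 10 2 >>= (\q -> return (q + 1))   -- Just 6

    failed :: Maybe Int
    failed = safeDiv 10 0 >>= (\q -> return (q + 1))    -- Nothing

The same bind machinery that sequences I/O here short-circuits a failed computation.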
To clarify, consider readLn. readLn is an action that, when executed, will read a line from standard input and yield its parsed value. To do something with that value, we can't simply store it in a variable, as that would violate referential transparency:
    a = readLn
If this were allowed, the value would depend on the state of the world and would be different each time we used readLn; that is, readLn would not be referentially transparent.
Instead, we bind the readLn action to a function that handles the resulting value, giving a new action, for example:
    readLn >>= (\x -> print (x + 1))
The value of this expression is an action. If Haskell ever got off the couch and performed this action, it would read an integer, increment it, and print it. By binding the result of an action to a function that does something with that result, we get to keep referential transparency while playing around in state-land.
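To tie the two halves of this answer together, here is one last sketch of my own (sumN is a made-up name): recursion instead of a mutating loop, and bind instead of assignment, summing three numbers read from standard input:

    -- Read n integers, accumulating a running total through recursion.
    sumN :: Int -> Int -> IO Int
    sumN 0 total = return total
    sumN n total = readLn >>= (\x -> sumN (n - 1) (total + x))

    main :: IO ()
    main = sumN 3 0 >>= print

No variable is ever changed; each recursive call simply gets a fresh total.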