Why do functional programming languages support automatic memoization, but imperative languages do not?

This is a question I came across in some lecture notes on dynamic programming that I happened to find on the Internet. (I have just graduated from high school, and I already know basic dynamic programming.)

The notes first explain why memoization is needed, i.e.

 // pseudocode
 int F[100000] = {0};
 int fibonacci(int x) {
     if (x <= 1) return x;
     if (F[x] > 0) return F[x];
     return F[x] = fibonacci(x - 1) + fibonacci(x - 2);
 }

If memoization is not used, many subproblems are recomputed over and over, which makes the complexity exponential.


Then one page of the notes poses an unanswered question, which is exactly what I want to ask here. I quote the exact wording and examples it gives:

Automatic memoization: many functional programming languages (e.g. Lisp) have built-in support for memoization.

Why not in imperative languages (such as Java)?

The Lisp example the notes give (which they claim is efficient):

 (defun F (n)
   (if (<= n 1)
       n
       (+ (F (- n 1)) (F (- n 2)))))

The Java example they provide (which, they claim, is exponential):

 static int F(int n) {
     if (n <= 1) return n;
     else return F(n - 1) + F(n - 2);
 }

Before reading this, I didn't even know that some programming languages have built-in memoization support.

Is the claim in the notes true? If so, why don't imperative languages support it?

+7
java imperative-programming theory functional-programming lisp
3 answers

The claims about Lisp are very vague; they don't even say which dialect or implementation of Lisp they mean. None of the Lisp dialects I'm familiar with performs automatic memoization, but Lisp makes it easy to write a wrapper function that turns any existing function into a memoized one.
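In Lisp such a wrapper is just a higher-order function. The same idea can be sketched in any language with first-class functions; here is a minimal version in Python (the names `memoized` and `fib` are my own, not from the question):

```python
import functools

def memoized(fn):
    """Return a version of fn that caches results keyed on its argument tuple."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]

    return wrapper

@memoized
def fib(n):
    # Recursive calls look up the global name `fib`, which is now the
    # wrapper, so they are memoized too.
    return n if n <= 1 else fib(n - 1) + fib(n - 2)
```

With the wrapper in place, `fib(30)` runs in linear time instead of exponential.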

Fully automatic, unconditional memoization would be a dangerous practice and could exhaust memory. In imperative languages it would be even worse, because return values are often mutable and therefore cannot safely be reused. Imperative languages also usually do not support tail-call optimization, which further reduces the applicability of memoization.

+5

Memoization support is little more than having first-class functions.

If you want to memoize the Java version for one specific case, you can write it explicitly: create a hash table, check it for existing values, and so on. Unfortunately, you cannot easily generalize this to memoize any function. Languages with first-class functions make writing a function and memoizing it almost orthogonal tasks.

The basic case is simple, but you have to account for recursive calls. In statically typed functional languages such as OCaml, a memoized function cannot simply call itself recursively, because that would invoke the non-memoized version. However, the only change needed to your existing function is to make it accept a function as an argument, named, say, self , which it should call whenever it wants to recurse. The generic memoization facility then supplies the appropriate function. A full example of this can be found in this answer .
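This "open recursion" technique can be sketched in Python as well (transplanted from OCaml for brevity; `memoize_open` and `fib_open` are invented names):

```python
def memoize_open(fn):
    """Memoize a function written in 'open recursion' style:
    fn takes the function to recurse through as its first argument."""
    cache = {}

    def memo(*args):
        if args not in cache:
            cache[args] = fn(memo, *args)
        return cache[args]

    return memo

def fib_open(self, n):
    # Recurse only through `self`, never by name, so the memoizer
    # can intercept every recursive call.
    return n if n <= 1 else self(n - 1) + self(n - 2)

fib = memoize_open(fib_open)
```

Because every recursive call goes through `self`, which the memoizer binds to its own caching function, no call ever bypasses the cache.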

The Lisp version has two features that make memoizing an existing function even simpler:

  • You can manipulate functions like any other value
  • You can override functions at runtime

So, for example, in Common Lisp you define F :

 (defun F (n)
   (if (<= n 1)
       n
       (+ (F (- n 1)) (F (- n 2)))))

Later you realize that you need to memoize this function, so you load a library:

 (ql:quickload :memoize) 

... and you memoize F:

 (org.tfeb.hax.memoize:memoize-function 'F) 

It accepts arguments that indicate which inputs should be cached and which equality test to use. The function F is then replaced by a new one that adds the code needed to consult an internal hash table. Recursive calls to F inside F now go through the wrapping function, not the original one (you don't even recompile F ). The only potential problem is if the original F was subject to tail-call optimization; you should probably declare it notinline , or use DEF-MEMOIZED-FUNCTION instead.

+3

Although I'm not sure any widely used Lisps have supported automatic memoization, I think there are two reasons why memoization is more common in functional languages, plus one more specific to Lisp-family languages.

First of all, in functional languages people write functions in the mathematical sense: computations whose results depend only on their arguments and which do not affect their environment. Anything that does not meet this requirement is generally not amenable to memoization. And, well, imperative languages are precisely those languages in which these requirements are not met, or cannot be met, because otherwise they would not be imperative!

Of course, even in merely mostly-functional languages such as (most) Lisps, you have to be careful: you probably should not memoize the following:

 (defvar *p* 1)

 (defun foo (n)
   (if (<= n 0)
       *p*
       (+ (foo (1- n)) (foo (- n *p*)))))
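The hazard is easy to reproduce in any language. Here is a Python analogue of that function (with a hypothetical hand-rolled cache): once a result that depends on the global is cached, later changes to the global are silently ignored.

```python
p = 1          # a mutable global, like *p* in the Lisp example
cache = {}

def foo(n):
    # A hand-rolled memoized version of the Lisp foo above.
    if n in cache:
        return cache[n]
    result = p if n <= 0 else foo(n - 1) + foo(n - p)
    cache[n] = result
    return result

print(foo(3))  # 8 with p = 1
p = 2
print(foo(3))  # still 8: the cache never noticed that p changed
```

A fresh computation with `p = 2` would give 10, but the stale cached value wins, which is exactly why impure functions should not be memoized.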

Secondly, functional languages usually favor immutable data structures. This means two things:

  • It is actually safe to memoize a function that returns a large data structure
  • Functions that build very large data structures often generate a huge amount of garbage, because they cannot mutate intermediate structures.

(2) is somewhat controversial: the received wisdom is that GCs are now so good that this is not a problem, copying is very cheap, compilers can perform magic, and so on. Well, people who have written such functions know that this is only partly true: GCs are good and copying is cheap (although chasing pointers through large structures to copy them is often very cache-hostile), but it is not enough (and compilers almost never do the magic they are claimed to). So you either cheat by dropping into non-functional code where it matters, or you memoize. If you memoize such functions, you build all the temporary structures only once, and everything becomes cheap (except in memory, which suitably weak hash tables in the memoization can deal with).

Thirdly: if your language does not support lightweight metalinguistic abstraction, implementing memoization is a serious pain. Or, to put it another way: you want Lisp macros.

To memoize a function, you need to do at least two things:

  • You need to control which arguments are the keys for memoization: not all functions take just one argument, and not all multi-argument functions should be memoized on the first one;
  • You need to get inside the function to disable any self-call optimization that would bypass the memoized version and thus defeat memoization entirely.
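A minimal sketch of the first point, in Python with invented names: a memoizer that is told which positional arguments form the cache key.

```python
def memoized_on(*key_indices):
    """Decorator factory (hypothetical): memoize on a chosen subset
    of positional arguments rather than on all of them."""
    def decorate(fn):
        cache = {}
        def wrapper(*args):
            key = tuple(args[i] for i in key_indices)
            if key not in cache:
                cache[key] = fn(*args)
            return cache[key]
        return wrapper
    return decorate

@memoized_on(0)           # cache on n only; `verbose` is not part of the key
def fib(n, verbose=False):
    if verbose:
        print("computing", n)
    return n if n <= 1 else fib(n - 1) + fib(n - 2)
```

Note what this does not solve: the wrapper has no way to reach inside `fib` to control how its recursive calls are resolved, which is the second point above.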

Just to show how serious this is, and although it's a bit cruel because it's such an easy target, I'll pick on Python.

You might think that decorators are exactly what you need to memoize functions in Python. And indeed, you can write memoizing tools using decorators (I have written some). And they even sort of work, although they do so mostly by accident.

To begin with, a decorator can't easily know anything about the function it decorates. So you can either memoize based on a tuple of all the function's arguments, or specify in the decorator which arguments to memoize on, or something equally grotty.

Secondly, the decorator receives the function it decorates as an argument: it can't poke around inside it. That is actually OK, because Python, as part of its "no concepts invented after 1956" approach, certainly does not assume that calls to f occurring lexically inside the definition of f (with no intervening bindings) are actually calls to that same f . But perhaps one day it will, and then all your memoized functions will quietly break.
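To illustrate why decorators work "by accident" today: the recursive call inside f looks up the global name f at call time, so rebinding the name, which is all a decorator does, redirects the recursion. A small experiment, with names of my own choosing:

```python
def fib(n):
    return n if n <= 1 else fib(n - 1) + fib(n - 2)

plain = fib   # keep a reference to the original
calls = 0

def counting(n):
    # Rebinding the global name `fib` redirects the "recursive"
    # calls inside plain, because they look up `fib` at call time.
    global calls
    calls += 1
    return plain(n)

fib = counting
fib(5)
print(calls)  # one call per node of the naive call tree: 15
```

If a future Python compiler resolved self-calls statically, `counting` would be entered only once, and a memoizing decorator relying on this rebinding trick would silently stop caching the recursive calls.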

So, in short: for a robust memoization facility you need Lisp-style macros. Arguably the only imperative languages that have those are Lisps.

+1
