Optimizing recursive brute force into a more mathematical / linear solution

I wrote this Haskell program to solve Project Euler problem 15 (it uses some very simple dynamic programming so it runs fast enough for me to actually run it, but with that removed you would expect it to run in O(2^n) time).

    -- Starting in the top left corner of a 2×2 grid, and only being able to move to
    -- the right and down, there are exactly 6 routes to the bottom right corner.
    --
    -- How many such routes are there through a 20×20 grid?
    calcPaths :: Int -> Integer
    calcPaths s =
      let go x y
            | x == 0 || y == 0 = 1
            | x == y           = 2 * go x (y - 1)
            | otherwise        = go (x - 1) y + go x (y - 1)
       in go s s

Later I realized that this can be done in O(n) by translating it into an equation. Thinking about it a little longer, I realized that the equation actually looks like my solution above, except that the recursion (which is slow on our hardware) has been replaced by mathematics representing the same thing (which is fast on our hardware).

The equivalent equation: for an n×n grid the count is the central binomial coefficient, C(2n, n) = (2n)! / (n! · n!), which can be evaluated in O(n) as the product ∏_{i=1..n} (n+i)/i.
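A minimal Haskell transcription of that formula (my sketch, not the original poster's code; calcPaths' is a hypothetical name chosen to avoid clashing with the definition above):

    -- O(n) closed form: C(2n, n) = (n+1) * (n+2) * ... * 2n / n!
    calcPaths' :: Int -> Integer
    calcPaths' s = product [n + 1 .. 2 * n] `div` product [1 .. n]
      where n = toInteger s

For s = 2 this yields 6, matching the problem statement, and for s = 20 it yields the answer to Euler 15.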

Is there a systematic way to perform such optimization (to produce and prove an equation to fit recursion) on recursive sets of expressions, in particular one that could be realistically β€œtaught” to the compiler so this reduction is performed automatically?

Tags: compiler-optimization, algorithm, haskell, dynamic-programming, mathematical-optimization
3 answers

You can use dynamic programming whenever the problem divides into smaller subproblems, for example

F(x, y) = F(x-1, y) + F(x, y-1)

Here F(x, y) is divided into smaller subproblems, so DP can be used:

    int arr[xmax+1][ymax+1];

    // base conditions
    for (int i = 0; i <= xmax; i++)
        arr[i][0] = 1;
    for (int j = 0; j <= ymax; j++)
        arr[0][j] = 1;

    // main equation
    for (int i = 1; i <= xmax; i++) {
        for (int j = 1; j <= ymax; j++) {
            arr[i][j] = arr[i-1][j] + arr[i][j-1];
        }
    }

As you mentioned, a DP optimizer could be built into the compiler: you would need to teach the compiler to check whether a recursive solution divides into smaller subproblems and, if it does, to replace the recursion with a simple DP loop like the one above. The hardest part is doing the optimization automatically; for example, the DP above needs O(xmax*ymax) space, but it can easily be optimized down to O(xmax+ymax) space.
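To illustrate that space optimization (a sketch of mine in Haskell rather than the answer's C; pathsDP is a hypothetical name): since row x of the table depends only on row x-1, it suffices to keep a single row, and each new row is just a prefix sum of the previous one.

    -- One-row DP in O(ymax) space: arr[x][0] = 1 and
    -- arr[x][j] = arr[x][j-1] + arr[x-1][j], i.e. each row is the
    -- prefix sum (scanl1 (+)) of the previous row.
    pathsDP :: Int -> Int -> Integer
    pathsDP xmax ymax = last (iterate (scanl1 (+)) firstRow !! xmax)
      where
        firstRow = replicate (ymax + 1) 1  -- arr[0][j] = 1 for all j

pathsDP 2 2 evaluates to 6, matching the grid example in the question.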

Solver example: PURRS, the Parma University's Recurrence Relation Solver: http://www.cs.unipr.it/purrs/


Unfortunately, I cannot say much about analytical algorithmic optimizations, but in practice there is a useful dynamic programming technique called memoization. For example, using the memoize library your code can be rewritten as

    import Data.Function.Memoize

    calcPaths :: Int -> Integer
    calcPaths s =
      let go f x y
            | x == 0 || y == 0 = 1
            | x == y           = 2 * f x (y - 1)
            | otherwise        = f (x - 1) y + f x (y - 1)
       in memoFix2 go s s

so the go function is evaluated only once for each combination of arguments.
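For instance (my check, not part of the original answer), in GHCi the memoized version returns the Euler 15 answer quickly:

    ghci> calcPaths 20
    137846528820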


This is also something of a philosophical question. You seem to be asking the compiler to recognize that you want a more efficient (faster? using fewer resources?) process for returning the value of the function call, rather than merely the most efficient way of executing the code you wrote.

Continuing that idea, a compiler could offer suggestions, in this case mathematical formulas that express the code more clearly or efficiently; alternatively, the compiler could open an Internet connection and delegate the computation elsewhere (to Google or Wolfram, say); taken to the extreme, the compiler might decide that the best thing to do right now is not to answer Project Euler 15 at all, but to give you a chocolate cake recipe or instructions for fixing your home heating.

The question makes me think about artificial intelligence and the role of the computer (how much mathematics should your computer do for you, and how closely should it stick to your code?). Either way, such an optimization would be an interesting project to think about.

