Why are pure functions faster in Mathematica code?

Possible duplicate:
Performance difference between functions and pattern matching in Mathematica

I often see heavy use of pure functions in many of the answers posted here, and these solutions are often much faster than ones using pattern-based (named) definitions, etc. Why is that? Why are pure functions faster than the others? Is it because Mathematica has to do less work?

+8
wolfram-mathematica
4 answers

First, consider some sample tests:

    In[100]:= f[x_] := x^2;

    In[102]:= Do[#^2 &[i], {i, 300000}] // Timing
    Out[102]= {0.406, Null}

    In[103]:= Do[f[i], {i, 300000}] // Timing
    Out[103]= {0.484, Null}

    In[104]:= Do[Function[x, x^2][i], {i, 300000}] // Timing
    Out[104]= {0.578, Null}

Pure functions are often (much) faster for two reasons. First, anonymous pure functions (those defined with slots, # and &) do not need to resolve name conflicts for variable names, so they are somewhat faster than pattern-defined functions, where some name resolution takes place. But you can see that pure functions with named variables are actually slower, not faster, than pattern-defined ones. I can only guess that this is because they must also resolve possible name conflicts inside their own body, whereas rule-based definitions simply ignore such conflicts. In any case, this difference in speed is only about 10-20%.
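
To make the renaming work visible, here is a small sketch I am adding (the y$ name is simply how Mathematica displays a renamed bound variable):

    (* Function is a scoping construct: if the inserted argument would clash with an
       inner bound variable, the inner variable is renamed to avoid capture *)
    Function[x, Function[y, x + y]][y]
    (* --> Function[y$, y + y$] *)

Pattern-based definitions, by contrast, insert the matched value literally, without this renaming step.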

The second, and much more dramatic, difference shows up when they are used inside functions such as Map, Scan, Table, etc., because the latter auto-compile on large numerical (packed) lists. While pure functions can often be compiled, pattern-defined ones fundamentally cannot, so this speed-up is not available to them. For example:

    In[117]:= ff[x_] := Sqrt[x];

    In[116]:= Map[Sqrt[#] &, N@Range[100000]]; // Timing
    Out[116]= {0.015, Null}

    In[114]:= Map[ff, N@Range[100000]]; // Timing
    Out[114]= {0.094, Null}
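
One way to see the compilation/packing difference directly (a rough sketch; exact behavior and thresholds vary between versions) is to check whether the result stays a packed array:

    data = N@Range[100000];
    Developer`PackedArrayQ[data]                  (* True: N@Range returns a packed array *)
    Developer`PackedArrayQ[Map[Sqrt[#] &, data]]  (* typically True: Map auto-compiles the pure function *)
    Developer`PackedArrayQ[Map[ff, data]]         (* typically False: the pattern-defined ff defeats auto-compilation *)
    SystemOptions["CompileOptions"]               (* the "MapCompileLength" suboption controls when Map auto-compiles *)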
+8

A pure function has several advantages:

- The result can be cached.
- The computation can be safely parallelized.
- In some cases the result can be computed at compile time (CTFE), so the function never runs at all.
- Since the outer scope is not modified, you do not need to pass all the arguments by copy.

So if the compiler is able to perform these optimizations on such functions, your program will be faster, whatever the language.
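
As a concrete Mathematica illustration of the caching point (my own sketch, not part of the answer above): a side-effect-free function can safely memoize its results, because a cached value can never become stale:

    (* Memoization: store each computed value as a new definition *)
    fib[0] = 0; fib[1] = 1;
    fib[n_Integer /; n > 1] := fib[n] = fib[n - 1] + fib[n - 2];

    fib[100]   (* later calls to fib[100], or any smaller argument, are just lookups *)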

0

In fact, pattern matching is usually faster than Function[{u}, ...] constructs and about as fast as #& constructs (ignoring the possibility of compilation, which has become much more interesting in Mathematica 8).

To see this, define a function that times short code snippets:

    SetAttributes[timeshort, HoldAll];
    timeshort[expr_] := Module[{t = Timing[expr;][[1]], tries = 1},
      While[t < 1., tries *= 2; t = Timing[Do[expr, {tries}];][[1]]];
      Return[t/tries]]

Then try the following:

    ClearAll[f];
    f[x_] := Cos[x]
    Trace[f[5.]]
    f[5] // timeshort

    ClearAll[f];
    f = Function[x, Cos[x]]
    Trace[f[5.]]
    f[5] // timeshort

    ClearAll[f];
    f = Cos[#] &
    Trace[f[5.]]
    f[5] // timeshort

which give

    {f[5.], Cos[5.], 0.283662}
    8.45641×10^-7

    Function[x, Cos[x]]
    {{f, Function[x, Cos[x]]}, Function[x, Cos[x]][5.], Cos[5.], 0.283662}
    1.51906×10^-6

    Cos[#1] &
    {{f, Cos[#1] &}, (Cos[#1] &)[5.], Cos[5.], 0.283662}
    8.04602×10^-7

That is, pattern matching and #& are faster than Function. I have no idea why.

EDIT: I suppose I should have checked the suggested duplicate question that was asked earlier... See there essentially the same answer as mine, and read the comments too for some interesting discussion.

0

Yes. It means Mathematica never has to copy much, because a pure function cannot modify its arguments. Processing lists purely can require a certain amount of copying, but functional algorithms are usually designed to be efficient about it. Once things are compiled, imperative code will (almost always) be just as fast, but for interpreted Mathematica byte code, pure is usually fast.
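
A rough illustration of the copying point (my sketch; timings are machine-dependent): growing a list imperatively copies it on every AppendTo, while the functional version produces the result in one pass:

    imperativeSquares[n_] := Module[{res = {}}, Do[AppendTo[res, i^2], {i, n}]; res];
    functionalSquares[n_] := Map[#^2 &, Range[n]];

    Timing[imperativeSquares[10000];]   (* slow: repeated copying as the list grows *)
    Timing[functionalSquares[10000];]   (* fast: single pass, and typically auto-compiled *)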

-1
