Higher order functions versus loops - runtime and memory efficiency?

Are higher-order functions and lambdas better or worse in terms of runtime and memory efficiency? For example, to multiply all the numbers in a list:

    nums = [1, 2, 3, 4, 5]
    prod = 1
    for n in nums:
        prod *= n

against

    prod2 = reduce(lambda x, y: x * y, nums)

Does the HOF version have any advantage over the loop version, other than fewer lines of code and a functional style?
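
A quick note for anyone running the reduce snippet on Python 3: reduce is a builtin only on Python 2, which the code above appears to assume; on Python 3 it has to be imported from functools. A minimal sketch:

    # Python 3: reduce lives in functools (it is a builtin on Python 2).
    from functools import reduce

    nums = [1, 2, 3, 4, 5]
    prod2 = reduce(lambda x, y: x * y, nums)  # 120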

EDIT:

I cannot add this as an answer since I do not have the necessary reputation. I timed the loop and HOF approaches using timeit, as suggested by @DSM:

    import timeit

    def test1():
        s = """
    nums = [a for a in range(1,1001)]
    prod = 1
    for n in nums:
        prod *= n
    """
        t = timeit.Timer(stmt=s)
        return t.repeat(repeat=10, number=100)

    def test2():
        s = """
    nums = [a for a in range(1,1001)]
    prod2 = reduce(lambda x, y: x*y, nums)
    """
        t = timeit.Timer(stmt=s)
        return t.repeat(repeat=10, number=100)
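
The averages quoted below were presumably produced by running the two functions and aggregating the repeat() lists; a minimal driver along those lines (my reconstruction, assuming Python 2 like the rest of the post, not the original poster's code) might look like:

    # Hypothetical driver -- a reconstruction, not the original poster's code.
    # Assumes Python 2 and the test1/test2 functions defined above.
    if __name__ == "__main__":
        loop_times = test1()
        hof_times = test2()
        print "Loop:", loop_times
        print "test1 average:", sum(loop_times) / len(loop_times)
        print "HOF:", hof_times
        print "test2 average:", sum(hof_times) / len(hof_times)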

And this is my result:

    Loop:
    [0.08340786340144211, 0.07211491653462579, 0.07162720686361926, 0.06593182661083438,
     0.06399049758613146, 0.06605228229559557, 0.06419744588664211, 0.0671893658461038,
     0.06477527090075941, 0.06418023793167627]
    test1 average: 0.0644778902685

    HOF:
    [0.0759414223099324, 0.07616920129277016, 0.07570730355421262, 0.07604965128984942,
     0.07547092059389193, 0.07544737286604364, 0.075532959799953, 0.0755039779810629,
     0.07567424616704144, 0.07542563650187661]
    test2 average: 0.0754917512762

On average, the loop approach is faster than the HOF approach (likely because the lambda adds a Python-level function call on every iteration, which the plain loop avoids).

2 answers

Higher order functions can be very fast.

For example, map(ord, somebigstring) is much faster than the equivalent list comprehension [ord(c) for c in somebigstring]. The former wins for three reasons:

  • map() pre-sizes the result list to the length of somebigstring. In contrast, the list comprehension has to make many realloc() calls as it grows.

  • map() only has to do one lookup for ord: it first checks globals, then checks and finds it in builtins. The list comprehension has to repeat this work on every iteration.

  • The inner loop for map runs at C speed. The loop body of the list comprehension is a series of pure Python steps that each need to be dispatched or handled by the eval loop.

Here are some timings to confirm the prediction:

    >>> from timeit import Timer
    >>> print min(Timer('map(ord, s)', 's="x"*10000').repeat(7, 1000))
    0.808364152908
    >>> print min(Timer('[ord(c) for c in s]', 's="x"*10000').repeat(7, 1000))
    1.2946639061
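
For anyone repeating this on Python 3 (an adaptation of mine, not part of the original answer): print is a function there, and map() returns a lazy iterator, so it needs to be wrapped in list() to do comparable work:

    # Python 3 adaptation of the timing above (not from the original answer).
    # list(map(...)) forces the lazy map object so the comparison stays fair.
    from timeit import Timer

    print(min(Timer('list(map(ord, s))', 's="x"*10000').repeat(7, 1000)))
    print(min(Timer('[ord(c) for c in s]', 's="x"*10000').repeat(7, 1000)))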

In my experience, loops can be very fast, provided they are not too deeply nested and do not involve complex higher-math operations. For simple operations and a single level of looping, a loop can be as fast as any other method, perhaps as long as integers are used as the loop index. It will really depend on what you are doing, too.

It may well be that a higher-order function produces just as many loops as the loop version of the program, and it may even be a little slower. You will have to time them both... just to be sure.
