Take a look at timeit, the Python profiler, and pycallgraph.
timeit
def test(): """Stupid test function""" lst = [] for i in range(100): lst.append(i) if __name__ == '__main__': import timeit print(timeit.timeit("test()", setup="from __main__ import test"))
Essentially, you can pass it Python code as a string parameter; it will run that code the specified number of times and print the total runtime. Important bits from the docs:
timeit.timeit(stmt='pass', setup='pass', timer=<default timer>, number=1000000)
Create a Timer instance with the given statement, setup code and timer function and run its timeit method with number executions.
... and:
Timer.timeit(number=1000000)
Time number executions of the main statement. This executes the setup statement once, and then returns the time it takes to execute the main statement a number of times, measured in seconds as a float. The argument is the number of times through the loop, defaulting to one million. The main statement, the setup statement and the timer function to be used are passed to the constructor.
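As a sketch of the Timer usage described above (the statement and setup strings here are arbitrary examples, not anything from the docs):

```python
import timeit

# Construct a Timer with a main statement and a setup statement,
# then run the main statement 1000 times.
t = timeit.Timer("lst.append(1)", setup="lst = []")

# timeit() returns the total elapsed time in seconds as a float.
elapsed = t.timeit(number=1000)
print(elapsed)
```

Lowering number from the default one million is handy when the statement being timed is slow.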
Note
By default, timeit temporarily turns off garbage collection during the timing. The advantage of this approach is that it makes independent timings more comparable. The disadvantage is that GC may be an important component of the performance of the function being measured. If so, GC can be re-enabled as the first statement in the setup string. For example:
timeit.Timer('for i in range(10): oct(i)', 'gc.enable()').timeit()
Profiling
Profiling will give you a much more detailed idea of what is going on. Here's the "instant example" from the official docs:
import cProfile
import re
cProfile.run('re.compile("foo|bar")')
Which will give you:
      197 function calls (192 primitive calls) in 0.002 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.000    0.000    0.001    0.001 <string>:1(<module>)
     1    0.000    0.000    0.001    0.001 re.py:212(compile)
     1    0.000    0.000    0.001    0.001 re.py:268(_compile)
     1    0.000    0.000    0.000    0.000 sre_compile.py:172(_compile_charset)
     1    0.000    0.000    0.000    0.000 sre_compile.py:201(_optimize_charset)
     4    0.000    0.000    0.000    0.000 sre_compile.py:25(_identityfunction)
   3/1    0.000    0.000    0.000    0.000 sre_compile.py:33(_compile)
Both of these modules should give you an idea of where to look for bottlenecks.
Also, for more on working with profile output, check out this post.
pycallgraph
This module uses graphviz to create callgraphs, such as:
[example callgraph image]
You can easily see by colour which paths used the most time. You can create a graph either with the pycallgraph API or with the bundled script:
pycallgraph graphviz -- ./mypythonscript.py
The overhead is quite significant, though, so for processes that already run for a long time, generating the graph can take a while.