How can I profile Python code line by line?

I've used cProfile to profile my code, and it's been working great. I also use gprof2dot.py to visualize the results (makes them a bit clearer).

However, cProfile (and most other Python profilers I've seen so far) seems to only profile at the function-call level. This is confusing when certain functions are called from different places - I have no idea whether call #1 or call #2 takes up most of the time. This gets even worse when the function in question is six levels deep, called from seven other places.

How do I get line profiling?

Instead of this:

function #12, total time: 2.0s 

I would like to see something like this:

    function #12 (called from somefile.py:102)    0.5s
    function #12 (called from main.py:12)         1.5s

cProfile does show how much of the total time gets attributed to the parent, but again this connection is lost when you have several layers of interconnected calls.
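For what it's worth, the stdlib can at least split a function's time by call site (still function-level, not line-level) via pstats' print_callers. A minimal sketch, with the work/caller_a/caller_b names purely illustrative:

```python
import cProfile
import io
import pstats

def work(n):
    # Burn some CPU proportional to n.
    return sum(i * i for i in range(n))

def caller_a():
    return work(10_000)

def caller_b():
    return work(200_000)

profiler = cProfile.Profile()
profiler.enable()
caller_a()
caller_b()
profiler.disable()

out = io.StringIO()
stats = pstats.Stats(profiler, stream=out)
# For each matching function, list who called it and how much time
# each call site accounted for.
stats.print_callers("work")
print(out.getvalue())
```

This disambiguates caller_a vs caller_b for a single level, but as noted it doesn't compose across several layers, and it says nothing about individual lines.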

Ideally, I would like a graphical interface that analyzes the data and then shows me my source file with the total time given for each line. Something like this:

    main.py:

    a = 1             # 0.0s
    result = func(a)  # 0.4s
    c = 1000          # 0.0s
    result = func(c)  # 5.0s

Then I could click on the second func(c) call to see what takes time in that call, separately from the func(a) call.

Does this make sense? Is there a profiling library that collects this type of information? Is there some awesome tool I missed?

+104
python profiling
Oct 13 '10 at 20:12
4 answers

I believe that's what Robert Kern's line_profiler is intended for. Sample output:

    File: pystone.py
    Function: Proc2 at line 149
    Total time: 0.606656 s

    Line #      Hits         Time  Per Hit   % Time  Line Contents
    ==============================================================
       149                                           @profile
       150                                           def Proc2(IntParIO):
       151     50000        82003      1.6     13.5      IntLoc = IntParIO + 10
       152     50000        63162      1.3     10.4      while 1:
       153     50000        69065      1.4     11.4          if Char1Glob == 'A':
       154     50000        66354      1.3     10.9              IntLoc = IntLoc - 1
       155     50000        67263      1.3     11.1              IntParIO = IntLoc - IntGlob
       156     50000        65494      1.3     10.8              EnumLoc = Ident1
       157     50000        68001      1.4     11.2          if EnumLoc == Ident1:
       158     50000        63739      1.3     10.5              break
       159     50000        61575      1.2     10.1      return IntParIO

Hope this helps!

+110
Oct 13 '10 at 20:19

You can also use pprofile (pypi). If you want to profile the entire execution, it does not require source code modification. You can also profile a subset of a larger program, in two ways:

  • toggle profiling when execution reaches a certain point in the code, for example:

        import pprofile

        profiler = pprofile.Profile()
        with profiler:
            some_code
        # Process profile content: generate a cachegrind file and send it to user.

        # You can also write the result to the console:
        profiler.print_stats()

        # Or to a file:
        profiler.dump_stats("/tmp/profiler_stats.txt")
  • trigger profiling asynchronously from the call stack (this requires a way to run the code below in the application in question, for example a signal handler or an idle worker thread), using statistical profiling:

        import pprofile
        from time import sleep

        profiler = pprofile.StatisticalProfile()
        statistical_profiler_thread = pprofile.StatisticalThread(
            profiler=profiler,
        )
        with statistical_profiler_thread:
            sleep(n)
        # Likewise, process profile content

The code annotation output format is very similar to line_profiler's:

    $ pprofile --threads 0 demo/threads.py
    Command line: ['demo/threads.py']
    Total duration: 1.00573s
    File: demo/threads.py
    File duration: 1.00168s (99.60%)
    Line #|      Hits|         Time| Time per hit|      %|Source code
    ------+----------+-------------+-------------+-------+-----------
         1|         2|  3.21865e-05|  1.60933e-05|  0.00%|import threading
         2|         1|  5.96046e-06|  5.96046e-06|  0.00%|import time
         3|         0|            0|            0|  0.00%|
         4|         2|   1.5974e-05|  7.98702e-06|  0.00%|def func():
         5|         1|      1.00111|      1.00111| 99.54%|  time.sleep(1)
         6|         0|            0|            0|  0.00%|
         7|         2|  2.00272e-05|  1.00136e-05|  0.00%|def func2():
         8|         1|  1.69277e-05|  1.69277e-05|  0.00%|  pass
         9|         0|            0|            0|  0.00%|
        10|         1|  1.81198e-05|  1.81198e-05|  0.00%|t1 = threading.Thread(target=func)
    (call)|         1|  0.000610828|  0.000610828|  0.06%|# /usr/lib/python2.7/threading.py:436 __init__
        11|         1|  1.52588e-05|  1.52588e-05|  0.00%|t2 = threading.Thread(target=func)
    (call)|         1|  0.000438929|  0.000438929|  0.04%|# /usr/lib/python2.7/threading.py:436 __init__
        12|         1|  4.79221e-05|  4.79221e-05|  0.00%|t1.start()
    (call)|         1|  0.000843048|  0.000843048|  0.08%|# /usr/lib/python2.7/threading.py:485 start
        13|         1|  6.48499e-05|  6.48499e-05|  0.01%|t2.start()
    (call)|         1|   0.00115609|   0.00115609|  0.11%|# /usr/lib/python2.7/threading.py:485 start
        14|         1|  0.000205994|  0.000205994|  0.02%|(func(), func2())
    (call)|         1|      1.00112|      1.00112| 99.54%|# demo/threads.py:4 func
    (call)|         1|  3.09944e-05|  3.09944e-05|  0.00%|# demo/threads.py:7 func2
        15|         1|  7.62939e-05|  7.62939e-05|  0.01%|t1.join()
    (call)|         1|  0.000423908|  0.000423908|  0.04%|# /usr/lib/python2.7/threading.py:653 join
        16|         1|  5.26905e-05|  5.26905e-05|  0.01%|t2.join()
    (call)|         1|  0.000320196|  0.000320196|  0.03%|# /usr/lib/python2.7/threading.py:653 join

Note that since pprofile does not rely on code modification, it can profile top-level module statements, allowing you to profile a program's startup time (how long it takes to import modules, initialize globals, ...).

It can generate cachegrind output, so you can use kcachegrind to conveniently view large results.
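As a sketch of that workflow (command-line flags as I recall them from pprofile's documentation - check pprofile --help for your installed version; file names are illustrative):

```shell
# Profile a script and write Callgrind-format output instead of the
# annotated-source listing:
pprofile --format callgrind --out cachegrind.out.demo demo/threads.py

# Then browse the result interactively:
kcachegrind cachegrind.out.demo
```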

Disclosure: I am the author of pprofile.

+38
Feb 02 '15 at 8:10

PyVmMonitor has a live view that can help you there (you can connect to a running program and get statistics from it).

See: http://www.pyvmmonitor.com/

+1
Apr 28 '15 at 23:12

You can use the line_profiler package for this:

1. First install the package:

  pip install line_profiler 

2. Use the magic command to load the extension into your IPython / notebook environment:

  %load_ext line_profiler 

3. To profile a function, run it with %lprun -f function_name followed by the call you want to measure:

  %lprun -f function_defined_by_you function_defined_by_you(arg1, arg2) 

If you follow the steps above, you will get nicely formatted output with all the per-line details.

+1
Jun 28 '19 at 6:03


