Well, it wasn’t easy, but I think I got at least part of the way there :) I went through a bunch of unsuccessful attempts (published here); the corresponding code is below.
Basically, the problem with “next/step until breakpoint” is how to determine whether you are “at” a breakpoint or not when the debugger has stopped after a step. Note that I am using GDB 7.2-1ubuntu11 (current for Ubuntu 11.04). So it went like this:
- First, I found out about "Convenience Variables", and since things like the program counter are available there, I figured there must be some convenience GDB variable that gives the breakpoint status and can be used directly in a GDB script. Going through the GDB documentation index, I simply could not find any such variable (my attempts are in nub.gdb).
- In the absence of such an internal "breakpoint status" variable, the only thing left is to capture GDB's command-line output ("stdout", in response to commands) as a string and parse it (looking for "Breakpoint").
- Then I learned about the Python API for GDB, and the gdb.execute("CMDSTR", to_string=True) command, which apparently is exactly what is needed to capture the output: "By default, any output produced by command is sent to gdb's standard output. If the to_string parameter is True, then output will be collected by gdb.execute and returned as a string [1]"!
- So, first I tried to make a script (pygdb-nub.py, gdbwrap) that would use gdb.execute in that recommended fashion; that failed, apparently because of this.
- Then I thought I would use a Python script to subprocess.Popen the GDB program, taking over its stdin and stdout, and keep controlling GDB from there (pygdb-sub.py); that did not work either (apparently because I did not redirect stdin/stdout right). A rough sketch of that idea appears after this list.
- Then I thought I would use a Python script called from within GDB (via source), which would internally fork into a pty whenever gdb.execute had to be called, so its output could be captured (pygdb-fork.gdb, pygdb-fork.py). That almost worked, in that strings are returned; however, GDB notices that something is wrong: "[tcsetpgrp failed in terminal_inferior: Operation not permitted]", and the subsequently returned strings do not seem to change.
- And finally, the approach that works: temporarily redirecting GDB's output for gdb.execute to a log file in RAM (Linux: /dev/shm), then reading it back, storing it as a string, and printing it from Python; Python also handles a simple while loop, stepping until a breakpoint is reached.
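For illustration, here is roughly what I mean by the subprocess.Popen idea above. This is only a hypothetical sketch (not the actual pygdb-sub.py), and it deliberately uses GDB's MI interpreter (--interpreter=mi), since MI terminates every reply with a "(gdb)" line, which makes reading replies back from a pipe tractable; with the plain console interpreter, exactly that prompt/buffering handling is where the approach breaks down for me:

# hypothetical sketch of driving GDB via subprocess.Popen (not the actual pygdb-sub.py);
#  uses the MI interpreter, whose replies are terminated by a "(gdb)" line
import subprocess

gdbproc = subprocess.Popen(
  ["gdb", "--interpreter=mi", "-q", "-se", "test.exe"],
  stdin=subprocess.PIPE, stdout=subprocess.PIPE)

def readReply():
  # collect output lines until the MI "(gdb)" terminator (or EOF)
  lines = []
  while True:
    line = gdbproc.stdout.readline()
    if not line or line.startswith("(gdb)"):
      break
    lines.append(line)
  return "".join(lines)

def sendCommand(cmd):
  gdbproc.stdin.write(cmd + "\n")
  gdbproc.stdin.flush()
  return readReply()

readReply()                              # consume initial output up to the first "(gdb)"
print sendCommand("-break-insert main")  # e.g. "^done,bkpt={...}"
sendCommand("-gdb-exit")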
The irony is that most of the bugs that forced this log-file redirection workaround have apparently been fixed recently in SVN; so they should reach distributions in the near future, and one will be able to use gdb.execute("CMDSTR", to_string=True) directly :/ Still, since I cannot risk building GDB from source right now (and possibly running into new incompatibilities), this is good enough for me as well :)
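For reference, with such a fixed GDB build the whole log-file dance should reduce to something like the sketch below. This is untested on my side (it assumes the to_string capture actually works in that build), and the function name is just mine for illustration:

# hypothetical variant of nextUntilBreakpoint(), relying on
#  gdb.execute(..., to_string=True) working (needs a fixed GDB build);
#  no /dev/shm log file involved
import gdb

def nextUntilBreakpointDirect():
  isInBreakpoint = -1
  # step until the captured output of "n" mentions "Breakpoint":
  while isInBreakpoint == -1:
    REP = gdb.execute("n", to_string=True)
    isInBreakpoint = REP.find("Breakpoint")
    print "LOOP:: ", isInBreakpoint, "\n", REP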
Here are the relevant files (parts of which are also in pygdb-fork.gdb, pygdb-fork.py):
pygdb-logg.gdb :
# gdb script: pygdb-logg.gdb
# easier interface for pygdb-logg.py stuff
# from within gdb: (gdb) source -v pygdb-logg.gdb
# from cmdline: gdb -x pygdb-logg.gdb -se test.exe

# first, "include" the python file:
source -v pygdb-logg.py

# define shorthand for nextUntilBreakpoint():
define nub
  python nextUntilBreakpoint()
end

# set up breakpoints for test.exe:
b main
b doFunction

# go to main breakpoint
run
pygdb-logg.py :
# gdb will 'recognize' this as python
#  upon 'source pygdb-logg.py'
# however, from gdb functions still have
#  to be called like:
#  (gdb) python print logExecCapture("bt")

import sys
import gdb
import os

def logExecCapture(instr):
  # /dev/shm - save file in RAM
  ltxname="/dev/shm/c.log"

  gdb.execute("set logging file "+ltxname) # lpfname
  gdb.execute("set logging redirect on")
  gdb.execute("set logging overwrite on")
  gdb.execute("set logging on")
  gdb.execute(instr)
  gdb.execute("set logging off")

  replyContents = open(ltxname, 'r').read() # read entire file
  return replyContents

# next until breakpoint
def nextUntilBreakpoint():
  isInBreakpoint = -1
  # as long as we don't find "Breakpoint" in report:
  while isInBreakpoint == -1:
    REP = logExecCapture("n")
    isInBreakpoint = REP.find("Breakpoint")
    print "LOOP:: ", isInBreakpoint, "\n", REP
Basically, pygdb-logg.gdb loads the pygdb-logg.py Python script, sets up the alias nub for nextUntilBreakpoint, and initializes the session; everything else is handled by the Python script. And here is an example session, for the test source in the OP:
$ gdb -x pygdb-logg.gdb -se test.exe
...
Reading symbols from /path/to/test.exe...done.
Breakpoint 1 at 0x80483ec: file test.c, line 14.
Breakpoint 2 at 0x80483c7: file test.c, line 7.
Breakpoint 1, main () at test.c:14
14        count = 1;
(gdb) nub
LOOP::  -1
15        count += 2;
LOOP::  -1
16        count = 0;
LOOP::  -1
19        doFunction();
LOOP::  1
Breakpoint 2, doFunction () at test.c:7
7         count += 2;
(gdb) nub
LOOP::  -1
9         count--;
LOOP::  -1
10      }
LOOP::  -1
main () at test.c:20
20        printf("%d\n", count);
1
LOOP::  -1
21      }
LOOP::  -1
19        doFunction();
LOOP::  1
Breakpoint 2, doFunction () at test.c:7
7         count += 2;
(gdb)
... which is just what I wanted :P I just don't know how reliable this is (and whether it can be used with avr-gdb, which is what I need it for :) EDIT: the version of avr-gdb in Ubuntu 11.04 is currently 6.4, which does not recognize the python command :(
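(A quick way to check whether a given gdb/avr-gdb build has Python scripting at all is simply to try the python command at its prompt; a build without Python support rejects the command with an error, whose exact wording depends on the version:)

$ avr-gdb
...
(gdb) python print("python ok")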
Ok, hope this helps someone
Hurrah!
Here are some links: