How do you automate the launch / debugging of large-scale projects?

Scenario:

There is a complicated piece of software whose manual startup is annoying. What I did was create a Python script to run the executable and attach gdb for debugging.

When run, the script:

  • sets environment variables.
  • prepends a local build directory to the environment variable LD_LIBRARY_PATH.
  • changes the current working directory to the one the executable expects (not my design)
  • runs the executable with a configuration file as its only command-line parameter
  • pipes the executable's output to a second logging process
  • remembers the PID of the executable, then launches gdb and attaches it to the running executable.
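The steps above can be sketched roughly as follows. Every path, command, and name here ('/opt/app/...', the 'logger' tag, the function names) is a hypothetical placeholder, not the real project's:

```python
import os
import subprocess

def build_env(lib_dir):
    """Return a copy of the environment with lib_dir prepended to LD_LIBRARY_PATH."""
    env = dict(os.environ)
    env['LD_LIBRARY_PATH'] = lib_dir + os.pathsep + env.get('LD_LIBRARY_PATH', '')
    return env

def launch_and_debug(exe, config, lib_dir, work_dir):
    # Start the executable with its config file as the only argument,
    # in the working directory it expects, with the extended environment.
    child = subprocess.Popen([exe, config], cwd=work_dir, env=build_env(lib_dir),
                             stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    # Feed its output to a second logging process (here, logger(1) as an example).
    subprocess.Popen(['logger', '-t', 'myapp'], stdin=child.stdout)
    # Remember the PID and attach gdb to it.
    subprocess.call(['gdb', '--pid=%d' % child.pid])
    return child

# Example call (hypothetical paths):
# launch_and_debug('/opt/app/bin/server', '/opt/app/etc/server.cfg',
#                  '/opt/app/lib', '/opt/app/run')
```

The environment is copied rather than modified in place so the change is scoped to the child process.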

The script works, with one caveat: ctrl-c does not interrupt the debuggee and return control to gdb. So if I "continue" with no active breakpoints, I can never stop the process again; it has to be killed/aborted from another shell. BTW, running "kill -s SIGINT <pid>" from another shell, where <pid> is the debuggee's pid, does bring me back to the gdb prompt... but it is really annoying to have to do it this way.

At first I thought Python was capturing the SIGINT signal, but that doesn't seem to be the case, since I set up signal handlers that pass the signal on to the debuggee, and that does not fix the problem.

I tried various configurations of the Python script (calling os.spawn* instead of subprocess, etc.). It seems that no matter how I do it, if Python starts the child processes, SIGINT (ctrl-c) does NOT get delivered to gdb or the child process.

Current line of thinking

  • This may be due to the debuggee and gdb needing separate process group IDs... any truth to this?
  • A possible SELinux bug?
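The process-group theory is easy to test in isolation: a terminal ^C delivers SIGINT to the whole foreground process group, so a child started in its own process group will not see it. A minimal sketch, using os.setpgrp as a preexec_fn (cat stands in for the real debuggee):

```python
import os
import signal
import subprocess

# Start the child in its own process group; a terminal ^C goes to the
# foreground process group, so this child would no longer receive it.
child = subprocess.Popen(['cat'],
                         preexec_fn=os.setpgrp,  # runs in the child just before exec
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)

child_pgid = os.getpgid(child.pid)
print(child_pgid != os.getpgrp())  # True: the child leads its own group

os.kill(child.pid, signal.SIGTERM)
child.wait()
```

Whether this helps or hurts depends on which process you want ^C to reach; it only demonstrates that the script can control group membership.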

Information:

  • gdb 6.8
  • Python 2.5.2 (the problem is present with Python 2.6.1)
  • SELinux environment (a bug in delivering signals to processes?)

Alternatives I reviewed:

  • Setting up a .gdbinit file to do the same things the script does; environment variables and the current working directory are a problem with this approach.
  • Running the executable and attaching gdb manually (yuck)

Question: How to automate the launch / debugging of large-scale projects?

Update: I tried Nicholas Riley's examples below. On my Macintosh at home they all allow ctrl-c to work to varying degrees; on the production boxen (where I suspect SELinux is at work), they don't...

+4
3 answers

Instead of forwarding the signal to the debuggee from Python, you can simply ignore it. The following worked for me:

    import signal
    signal.signal(signal.SIGINT, signal.SIG_IGN)

    import subprocess
    cat = subprocess.Popen(['cat'])
    subprocess.call(['gdb', '--pid=%d' % cat.pid])

With this, I was able to ^C repeatedly inside gdb and interrupt the debuggee without any problems; however, I did see some strange behavior.

By the way, I also had no problems forwarding the signal to the target process:

    import subprocess
    cat = subprocess.Popen(['cat'])

    import signal, os
    signal.signal(signal.SIGINT,
                  lambda signum, frame: os.kill(cat.pid, signum))
    subprocess.call(['gdb', '--pid=%d' % cat.pid])

So maybe something else is going on in your case? It might help if you posted the code that breaks.

+3

Your comment notes that you are sshing in with PuTTY... do you have a controlling tty? With OpenSSH you would want to add the -T option; I don't know how/whether PuTTY does that the way you are using it.

Also: you could try using Cygwin ssh instead of PuTTY.

0

If you already have a script set up for this but are having trouble automating part of it, perhaps you could just grab expect and use it to drive the setup, then drop back into interactive mode once the process starts. That way ctrl-c would still be hooked up.

0
