You can build a massive hack for this with longjmp and on_exit, although I strongly recommend avoiding it in favor of a multi-process solution, which I will discuss later in this answer.
Suppose we have the following (broken by design) header file:
```c
#ifndef TEST_H
#define TEST_H

#include <stdlib.h>

inline void fail_test(int fail) {
  if (fail) exit(fail);
}

#endif//TEST_H
```
We want to wrap it and convert the exit() call to a Python exception. One way to achieve this is with something like the following interface, which uses %exception to insert C code around the call into each C function from your Python interface:
```
%module test

%{
#include "test.h"
#include <setjmp.h>

static __thread int infunc = 0;
static __thread jmp_buf buf;

static void exithack(int code, void *data) {
  if (!infunc) return;
  (void)data;
  longjmp(buf, code);
}
%}

%init %{
  on_exit(exithack, NULL);
%}

%exception {
  infunc = 1;
  int err = 0;
  if (!(err = setjmp(buf))) {
    $action
  }
  else {
    // Raise exception, code=err
    PyErr_Format(PyExc_Exception, "%d", err);
    infunc = 0;
    on_exit(exithack, NULL);
    SWIG_fail;
  }
  infunc = 0;
}

%include "test.h"
```
This "works" when compiled with:
```
swig3.0 -python -py3 -Wall test.i
gcc -shared test_wrap.c -o _test.so -I/usr/include/python3.4 -Wall -Wextra -lpython3.4m
```
And we can demonstrate it with
```
Python 3.4.2 (default, Oct  8 2014, 13:14:40)
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import test
>>> test.fail_test(0)
>>> test.fail_test(123)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
Exception: 123
>>> test.fail_test(0)
>>> test.fail_test(999)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
Exception: 999
>>>
```
This is very ugly, certainly not portable, and most likely undefined behavior.
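To see the mechanism (and its fragility) in isolation, here is a sketch of the same trick in pure C, without SWIG; `run_guarded` and `library_call` are names invented here for illustration. It relies on glibc behavior: handlers registered with `on_exit` are consumed once they have run, which is why the handler must be re-armed after every rescue, exactly as the `%exception` block above re-registers it.

```c
#include <setjmp.h>
#include <stdlib.h>

static jmp_buf buf;
static int active = 0;

/* on_exit handler: if a guarded call is in progress, jump back out of
 * exit() instead of letting the process die. longjmp'ing out of an exit
 * handler is undefined behavior by the letter of the standard - but it
 * is exactly what the %exception block above relies on. */
static void rescue(int code, void *data) {
    (void)data;
    if (active)
        longjmp(buf, code);
}

/* Stand-in for the badly behaved library function. */
static void library_call(int fail) {
    if (fail)
        exit(fail);
}

/* Run library_call(), returning 0 on a normal return or the code it
 * tried to exit() with. */
static int run_guarded(int fail) {
    int err;
    on_exit(rescue, NULL);  /* re-arm every time: glibc removes a handler
                               from its list once it has been run */
    active = 1;
    if ((err = setjmp(buf)) == 0)
        library_call(fail);
    active = 0;
    return err;
}
```

On glibc/Linux, `run_guarded(42)` returns 42 instead of terminating the process; on other C libraries, all bets are off.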
My advice would be not to do this, and to use a two-process solution instead. We can still use SWIG to build a nice module, but we can rely on some higher-level Python constructs to make it work. A complete example looks like this:
```
%module test

%{
#include "test.h"

static void exit_handler(int code, void *fd) {
  FILE *f = fdopen((int)fd, "w");
  fprintf(stderr, "In exit handler: %d\n", code);
  fprintf(f, "(dp0\nVexited\np1\nL%dL\ns.", code);
  fclose(f);
}
%}

%typemap(in) int fd %{
  $1 = PyObject_AsFileDescriptor($input);
%}

%inline %{
  void enter_work_loop(int fd) {
    on_exit(exit_handler, (void*)fd);
  }
%}

%pythoncode %{
import os
import pickle

serialize=pickle.dump
deserialize=pickle.load

def do_work(wrapped, args_pipe, results_pipe):
    wrapped.enter_work_loop(results_pipe)
    while True:
        try:
            args = deserialize(args_pipe)
            f = getattr(wrapped, args['name'])
            result = f(*args['args'], **args['kwargs'])
            serialize({'value':result},results_pipe)
            results_pipe.flush()
        except Exception as e:
            serialize({'exception': e},results_pipe)
            results_pipe.flush()

class ProxyModule():
    def __init__(self, wrapped):
        self.wrapped = wrapped
        self.prefix = "_worker_"

    def __dir__(self):
        # slice off the prefix (str.strip() would remove *characters*,
        # not a prefix)
        return [x[len(self.prefix):] for x in dir(self.wrapped)
                if x.startswith(self.prefix)]

    def __getattr__(self, name):
        def proxy_call(*args, **kwargs):
            serialize({
                'name': '%s%s' % (self.prefix, name),
                'args': args,
                'kwargs': kwargs
            }, self.args[1])
            self.args[1].flush()
            result = deserialize(self.results[0])
            if 'exception' in result: raise result['exception']
            if 'exited' in result: raise Exception('Library exited with code: %d' % result['exited'])
            return result['value']
        return proxy_call

    def init_library(self):
        def pipes():
            r,w=os.pipe()
            return os.fdopen(r,'rb',0), os.fdopen(w,'wb',0)
        self.args = pipes()
        self.results = pipes()
        self.worker = os.fork()
        if 0==self.worker:
            do_work(self.wrapped, self.args[0], self.results[1])
%}

// Rename all our wrapped functions to _worker_FUNCNAME to hide them from
// module users - we'll call them from within the worker process instead.
%rename("_worker_%s") "";

%include "test.h"

%pythoncode %{
import sys
sys.modules[__name__] = ProxyModule(sys.modules[__name__])
%}
```
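The magic string that `exit_handler` writes down the results pipe is a hand-built protocol-0 pickle of a `{'exited': code}` dictionary, which is why the Python side can read it with the same `pickle.load` it uses for ordinary results. A quick sanity check of that encoding (with the exit code hard-coded to 2 for illustration):

```python
import pickle

# The exact bytes exit_handler() fprintf()s when the library calls exit(2):
# MARK, DICT, the string 'exited', the long 2, SETITEM, STOP - a complete
# protocol-0 pickle stream.
payload = b"(dp0\nVexited\np1\nL2L\ns."

print(pickle.loads(payload))  # {'exited': 2}
```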
This uses the following ideas:

- pickle to serialize data before writing it across a pipe to the worker process
- os.fork to create the worker process, and os.fdopen to wrap the raw pipe file descriptors in more convenient Python file objects
- advanced SWIG renaming to hide the real wrapped functions from module users, while still wrapping them
- a trick that replaces the module with a Python object, which implements:
  - __getattr__ to return proxy functions that forward calls into the worker process
  - __dir__ so that TAB completion still works inside IPython
- on_exit to intercept the exit (but not prevent it) and report the exit code back through a pre-written, ASCII-pickled object
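Stripped of SWIG, the core request/response protocol sketched in those ideas looks like this; a minimal, self-contained version where a plain Python function (`add`, a name invented here) stands in for the wrapped library:

```python
import os
import pickle

def worker_loop(funcs, args_pipe, results_pipe):
    """Child process: read {'name', 'args', 'kwargs'} requests and reply
    with {'value': ...} on success or {'exception': ...} on failure."""
    while True:
        try:
            req = pickle.load(args_pipe)
        except EOFError:           # parent closed its end: shut down
            os._exit(0)
        try:
            result = funcs[req['name']](*req['args'], **req['kwargs'])
            pickle.dump({'value': result}, results_pipe)
        except Exception as e:
            pickle.dump({'exception': e}, results_pipe)
        results_pipe.flush()

def start_worker(funcs):
    """Fork a worker and return a call(name, *args, **kwargs) proxy."""
    def pipes():
        r, w = os.pipe()
        return os.fdopen(r, 'rb', 0), os.fdopen(w, 'wb', 0)
    args_r, args_w = pipes()
    res_r, res_w = pipes()
    if os.fork() == 0:
        args_w.close()             # close unused ends in the child so the
        res_r.close()              # parent's close is seen as EOF
        worker_loop(funcs, args_r, res_w)   # never returns
    args_r.close()
    res_w.close()

    def call(name, *args, **kwargs):
        pickle.dump({'name': name, 'args': args, 'kwargs': kwargs}, args_w)
        args_w.flush()
        result = pickle.load(res_r)
        if 'exception' in result:
            raise result['exception']
        return result['value']
    return call

call = start_worker({'add': lambda a, b: a + b})
print(call('add', 2, 3))           # 5, computed in the child process
```

The SWIG version layers the same loop behind `__getattr__`, adding the `'exited'` case for when the child dies inside the library.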
You could make the init_library call transparent and automatic if you wanted to. You would also want to handle the case where the worker has not been started, or has already exited (as it stands, my example will simply hang), and you would need to make sure the worker gets cleaned up properly when the module exits. But it now lets you run:
```
Python 3.4.2 (default, Oct  8 2014, 13:14:40)
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import test
>>> test.init_library()
>>> test.fail_test(2)
In exit handler: 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/mnt/lislan/ajw/code/scratch/swig/pyatexit/test.py", line 117, in proxy_call
    if 'exited' in result: raise Exception('Library exited with code: %d' % result['exited'])
Exception: Library exited with code: 2
>>>
```
and still be (somewhat) portable and, unlike the first approach, well defined.
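For the "worker not started or already exited" case mentioned above, one option is a non-blocking os.waitpid check before each proxied call; `worker_alive` is a name invented here, not part of the answer's code:

```python
import os

def worker_alive(pid):
    """Return True while the forked worker is still running.

    os.waitpid with WNOHANG returns (0, 0) for a live child and
    (pid, status) once it has exited (which also reaps the zombie).
    """
    if pid is None:                # init_library() was never called
        return False
    try:
        waited, _status = os.waitpid(pid, os.WNOHANG)
    except ChildProcessError:      # no such child: already reaped
        return False
    return waited == 0
```

ProxyModule.proxy_call could then raise immediately instead of blocking forever on a pipe that will never be written to.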