First, a prototype; then some important caveats.
    # process.py
    import sys
    import pdb
    import handlers

    def process_unit(data_unit):
        global handlers
        while True:
            try:
                data_type = type(data_unit)
                handler = handlers.handler[data_type]
                handler(data_unit)
                return
            except KeyError:
                print "UNUSUAL DATA: {0!r}".format(data_unit)
                print "\n--- INVOKING DEBUGGER ---\n"
                pdb.set_trace()
                print
                print "--- RETURNING FROM DEBUGGER ---\n"
                # force a fresh import so edits to handlers.py take effect
                del sys.modules['handlers']
                import handlers
                print "retrying"

    process_unit("this")
    process_unit(100)
    process_unit(1.04)
    process_unit(200)
    process_unit(1.05)
    process_unit(300)
    process_unit(4+3j)
    sys.exit(0)
and
    # handlers.py
    def handle_default(x):
        print "handle_default: {0!r}".format(x)

    handler = {
        int: handle_default,
        str: handle_default,
    }
In Python 2.7, this gives you a dictionary mapping expected / known types to the functions that handle each type. If no handler is available for a type, the user is dropped into the debugger, giving them the opportunity to edit the handlers.py file and add the appropriate handlers. In the example above, there is no handler for float or complex values. When those appear, the user needs to supply one. For example, they could add:
    def handle_float(x):
        print "FIXED FLOAT {0!r}".format(x)

    handler[float] = handle_float
And then:
    def handle_complex(x):
        print "FIXED COMPLEX {0!r}".format(x)

    handler[complex] = handle_complex
Here is what it will look like:
    $ python process.py
    handle_default: 'this'
    handle_default: 100
    UNUSUAL DATA: 1.04

    --- INVOKING DEBUGGER ---

    > /Users/jeunice/pytest/testing/sfix/process.py(18)process_unit()
    -> print
    (Pdb) continue

    --- RETURNING FROM DEBUGGER ---

    retrying
    FIXED FLOAT 1.04
    handle_default: 200
    FIXED FLOAT 1.05
    handle_default: 300
    UNUSUAL DATA: (4+3j)

    --- INVOKING DEBUGGER ---

    > /Users/jeunice/pytest/testing/sfix/process.py(18)process_unit()
    -> print
    (Pdb) continue

    --- RETURNING FROM DEBUGGER ---

    retrying
    FIXED COMPLEX (4+3j)
Good, so it basically works. You could refine this into a more production-ready form, make it compatible with both Python 2 and 3, and so on.
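As one illustration, here is a minimal sketch of what the retry loop might look like on Python 3, using importlib.reload instead of deleting the module from sys.modules. It keeps the module and function names from the prototype above, and it assumes handlers.py itself is also written for Python 3; treat it as a sketch rather than a drop-in replacement:

    # process3.py -- sketch of a Python 3 variant of the retry loop
    import pdb
    import importlib

    import handlers

    def process_unit(data_unit):
        global handlers
        while True:
            try:
                handler = handlers.handler[type(data_unit)]
                handler(data_unit)
                return
            except KeyError:
                print("UNUSUAL DATA: {0!r}".format(data_unit))
                print("\n--- INVOKING DEBUGGER ---\n")
                pdb.set_trace()
                print("\n--- RETURNING FROM DEBUGGER ---\n")
                # importlib.reload re-executes handlers.py in place,
                # replacing the del sys.modules / re-import dance
                handlers = importlib.reload(handlers)
                print("retrying")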
Please think long and hard before doing it this way.
This "edit the code live" approach is incredibly fragile and error-prone. It encourages you to make hot fixes on the spot, at the very last moment, and those fixes will probably not be well tested. Almost by definition, you have just discovered that you are dealing with a new type T; you still know little about T, why it showed up, or what its edge cases and failure modes might be. And if your hot-fix code does not work, then what? Sure, you can add a few more exception handlers, catch more exception classes, and possibly soldier on.
Web frameworks like Flask have debug modes that work much like this. But those are debug modes, and they are generally unsuitable for production. Moreover, what if you type the wrong command in the debugger? Accidentally type "quit" instead of "continue" and the whole program ends, and with it your goal of keeping the processing alive. If this is for use while debugging (perhaps to learn what types a new data stream contains), go ahead.
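One partial mitigation if you do use this while debugging: quitting a pdb session started by set_trace raises bdb.BdbQuit, which you can catch at the call site so a slip of the fingers skips the item rather than killing the run. A rough sketch against the process.py above (the driver loop and its data are illustrative):

    # sketch: shield the driver loop from an accidental "quit" in pdb
    import bdb

    for unit in ["this", 100, 1.04, 200, 1.05, 300, 4+3j]:
        try:
            process_unit(unit)
        except bdb.BdbQuit:
            # "quit" in pdb raises BdbQuit; log it and move on
            print "debugger quit; skipping {0!r}".format(unit)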
If this is for production, consider instead a strategy that sets aside unrecognized data for asynchronous, out-of-band examination and correction, rather than putting a developer / operator in the middle of the live processing flow.
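A minimal sketch of that out-of-band idea, reusing the handler dictionary from handlers.py but with otherwise illustrative names (unhandled, dump_unhandled, the file path): unknown items are quarantined and persisted for later inspection, and the pipeline keeps moving.

    # quarantine.py -- sketch of an out-of-band alternative to live debugging
    import json
    import handlers

    unhandled = []   # in a real system: a persistent queue, file, or table

    def process_unit(data_unit):
        handler = handlers.handler.get(type(data_unit))
        if handler is None:
            # set the item aside instead of stopping the pipeline
            # and waiting on a human in the loop
            unhandled.append(data_unit)
            return
        handler(data_unit)

    def dump_unhandled(path='unhandled.json'):
        # persist what could not be processed, so new handlers can be
        # written and tested, and the items replayed later
        with open(path, 'w') as f:
            json.dump([repr(u) for u in unhandled], f, indent=2)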