The general idea is sound, but the Python / GUI session may not be as responsive as you hope while the background import thread is running; unfortunately, importing inherently and unavoidably "blocks" the rest of Python substantially (it's not just the GIL: there is a dedicated extra lock just for imports).
It's still worth a try, as it may make things a bit better, and it's also very simple, since Queues are thread-safe and, apart from a Queue's put and get, all you really need is basically __import__. Still, don't be surprised if it doesn't help enough and you need further measures.
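A minimal sketch of that pattern, assuming the GUI polls for results from an idle/timer callback (the queue names and the module being imported are just illustrative):

    import queue
    import threading

    request_q = queue.Queue()   # module names the GUI wants imported
    result_q = queue.Queue()    # (name, module-or-error) pairs back to the GUI

    def importer():
        while True:
            name = request_q.get()
            if name is None:            # sentinel: stop the worker
                break
            try:
                # note: for dotted names, __import__ returns the top-level package
                result_q.put((name, __import__(name)))
            except ImportError as exc:
                result_q.put((name, exc))

    threading.Thread(target=importer, daemon=True).start()

    request_q.put('json')               # stand-in for a slow import

    # later, in the GUI's idle/timer callback:
    try:
        name, mod = result_q.get_nowait()
    except queue.Empty:
        pass                            # not ready yet; check again next tick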
If you have a disk that's intrinsically very fast but has limited space, such as a "RAM disk" or a particularly zippy solid-state one, it may be worth keeping the needed packages in a .tar.bz2 (or other archive form) and unpacking them onto the fast disk at program start (that's essentially just I/O, so it won't block things badly, since I/O operations release the GIL promptly; plus, it's especially easy to delegate to a subprocess running tar xjf or the like).
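A hedged sketch of that startup-unpacking idea; the mount point and archive path here are assumptions for illustration:

    import subprocess
    import sys

    FAST_DIR = '/mnt/ramdisk'                 # hypothetical fast-disk mount point
    ARCHIVE = '/opt/myapp/packages.tar.bz2'   # hypothetical pre-built archive

    # the child process does the unpacking, leaving this process free
    proc = subprocess.Popen(['tar', 'xjf', ARCHIVE, '-C', FAST_DIR])

    # ... build the GUI, show a splash screen, etc. ...

    proc.wait()                               # make sure extraction has finished
    sys.path.insert(0, FAST_DIR)              # prefer the fast copies from now on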
If some of the import slowness is due to a huge number of .py/.pyc/.pyo files, it's worth trying to keep all the needed ones (in .pyc form only, not as .py) in a single zipfile and importing from there (but that only helps with the I/O overheads, depending on your OS, filesystem and disk: it doesn't help with delays due to loading huge DLLs or executing initialization code in packages at load time, which I suspect are the more likely culprits for the slowness).
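The import side of the zipfile route is just a sys.path entry; here /opt/myapp/deps.zip is a hypothetical archive of .pyc files, built once ahead of time and laid out like a normal package tree (e.g. somepackage/__init__.pyc):

    import sys

    # zip archives act like directories when placed on sys.path
    sys.path.insert(0, '/opt/myapp/deps.zip')

    import somepackage   # hypothetical: now loaded from inside the single zip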
You could also consider splitting the application up with multiprocessing, again using Queues (but of the multiprocessing kind, not the threading kind) to communicate, so that both the imports and some heavyweight computations are delegated to a few auxiliary processes and thus made asynchronous (this may also help fully exploit multiple cores at once). I suspect this may unfortunately be hard to arrange properly for visualization tasks (such as those you're presumably doing with Mayavi), but it might help if you also have some "pure heavy computation" packages and tasks.
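A small sketch of that split, using numpy as a stand-in for a heavy "pure computation" package (the function and module choices are illustrative, not your actual workload):

    import multiprocessing as mp

    def worker(task_q, result_q):
        import numpy as np               # heavy import paid in the helper, not the GUI process
        while True:
            task = task_q.get()
            if task is None:             # sentinel: shut down cleanly
                break
            result_q.put(np.asarray(task).sum())   # stand-in for real number crunching

    if __name__ == '__main__':
        task_q, result_q = mp.Queue(), mp.Queue()
        proc = mp.Process(target=worker, args=(task_q, result_q), daemon=True)
        proc.start()

        task_q.put([1.0, 2.0, 3.0])
        print(result_q.get())            # 6.0, computed on (potentially) another core

        task_q.put(None)
        proc.join()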