Within a single process, the heap is shared at the OS level, so there is no easy way to tell which task or thread a particular object belongs to. There is also no easy way to add a custom bookkeeping layer that tracks which task or thread is allocating memory and prevents it from allocating too much; you would additionally have to hook into the garbage collector to account for objects as they are freed.
If you do not want to write your own Python interpreter, it is best to use a process per task.
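A minimal sketch of the process-per-task idea using the standard library; the task script names are hypothetical, and each task's memory is reclaimed by the OS when its interpreter exits:

```python
import subprocess
import sys

# Each task runs in its own interpreter process; when that process
# exits, the OS reclaims all of its memory at once.
for script in ["task_a.py", "task_b.py"]:  # hypothetical task scripts
    subprocess.run([sys.executable, script], check=True)
```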
You don't even need to kill and restart an interpreter every time you run another script. Keep a pool of interpreters and only kill the ones that grow past a certain memory threshold after running a script. If you need to, limit each interpreter's memory usage with the tools the OS provides.
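One way to apply an OS-level cap, sketched with the Unix-only `resource` module; the 512 MiB limit and the script name are arbitrary examples, not values from the question:

```python
import resource
import subprocess
import sys

LIMIT_BYTES = 512 * 1024 * 1024  # example threshold, tune per workload

def cap_address_space():
    # Runs in the child before exec: allocations beyond the limit fail
    # with MemoryError / ENOMEM instead of growing without bound.
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))

subprocess.run([sys.executable, "task_a.py"],  # hypothetical script
               preexec_fn=cap_address_space, check=True)
```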
If you need to share large amounts of data between tasks, use shared memory; for smaller interactions, use sockets (with a messaging layer on top of them, if necessary).
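A sketch of the shared-memory route with `multiprocessing.shared_memory` (Python 3.8+); the buffer size and payload are purely illustrative:

```python
from multiprocessing import Process, shared_memory

def consumer(name):
    # Attach to the existing block by name; no data is copied.
    shm = shared_memory.SharedMemory(name=name)
    print(bytes(shm.buf[:8]))  # read the first few bytes
    shm.close()

if __name__ == "__main__":
    # Producer creates a shared block and fills it in place.
    shm = shared_memory.SharedMemory(create=True, size=1024 * 1024)
    shm.buf[:8] = b"payload!"
    p = Process(target=consumer, args=(shm.name,))
    p.start()
    p.join()
    shm.close()
    shm.unlink()  # release the OS-level segment
```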
Yes, it may be slower than your current setup. But judging from the fact that you are using Python, you are probably not doing performance-critical computation in these scenarios anyway.