I am wondering whether it is possible to limit the amount of memory a thread uses. I am looking at building a server that accepts and runs untrusted user code. I can use SafeHaskell to ensure the code does not perform unauthorized I/O, but I also need to make sure it cannot take down the entire server, e.g. by causing a stack overflow or heap exhaustion.
Is there a way to limit the amount of memory each individual thread can allocate, or to somehow guarantee that if one thread consumes a huge amount of memory, only that thread is terminated?
Perhaps there is a way, when a thread hits an out-of-memory condition, to catch the resulting exception and choose which thread dies?
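To make that last idea concrete, here is roughly what I am hoping for: a minimal sketch in which the wrapper function (`runUserCode`, my own name) and the assumption that memory exhaustion shows up as a catchable exception inside the offending thread are both hypothetical, not something I know GHC guarantees today.

    import Control.Concurrent (forkIO, ThreadId)
    import Control.Exception (SomeException, try)

    -- Hypothetical sketch: run untrusted user code on its own thread and,
    -- if that thread blows up (ideally including some out-of-memory-style
    -- exception), catch it there so only that thread dies, not the server.
    runUserCode :: IO () -> IO ThreadId
    runUserCode userAction = forkIO $ do
      result <- try userAction :: IO (Either SomeException ())
      case result of
        Left err -> putStrLn ("user thread died: " ++ show err)
        Right () -> putStrLn "user thread finished normally"

The open question is whether anything like the per-thread memory failure assumed above actually exists, since in my understanding heap exhaustion normally kills the whole process.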
I am talking about concurrency in the sense of forkIO and STM threads, not about parallelism with par and seq.
Note: this is very similar to this question, but that one never received an answer to the general problem; the answers addressed only the specific scenario asked about there. Also, that question dates from 2011, so has anything changed for GHC 7.8, perhaps with the new I/O manager?