Consider the following application: a web search server that, at startup, builds a large in-memory index of web pages from data read from disk. After initialization the index never changes, and several threads are launched to serve user requests. Assume the server is compiled to native code and uses OS threads.
This threading model provides no isolation between threads. A buggy thread, or any unsafe code not tied to a particular thread, can corrupt the index or memory that was allocated by, and logically belongs to, another thread. Such problems are difficult to detect and debug.
In theory, Linux allows for better isolation. After the index is initialized, the memory it occupies can be made read-only. The threads can be replaced by processes that share the index through shared memory but have separate heaps, so they cannot corrupt each other's data. An illegal access is detected automatically by the hardware and the operating system. No mutexes or other synchronization primitives are needed, and data races on the index are eliminated entirely.
Is such a model feasible in practice? Do you know of any real application that works this way? Or are there fundamental difficulties that make this model impractical? Do you think this approach could improve performance compared to traditional threads? In theory the memory usage is the same, but are there implementation issues that would slow things down?