Without some deep and dark rewriting of the core Python runtime (to force an allocator that draws from a given segment of shared memory and hands out compatible addresses across disparate processes), there is no way to "share objects in memory" in any general sense. A list of a million tuples holds a million addresses of tuples; each tuple in turn holds the addresses of all of its items; and every one of those addresses will have been assigned by pymalloc in a way that inevitably varies between processes and spreads all over the heap.
On any system other than Windows, you can spawn a subprocess that has read-only access to the objects in the parent process's space... as long as the parent doesn't alter those objects, either. That's what you get by calling os.fork(), which in practice snapshots the entire memory space of the current process and starts another process running on that snapshot. On all modern operating systems this is very fast thanks to the "copy-on-write" approach: pages of virtual memory that are not altered by either process after the fork are not actually copied (access to the same pages is shared instead); as soon as either process modifies any bit on a previously shared page, poof, that page gets copied and the page table adjusted, so the modifying process now has its own copy while the other process still sees the original.
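A minimal sketch of this read-only sharing, on a POSIX system (the data structure here is made up for the demo; any structure built before the fork would do):

```python
import os

# Hypothetical data built in the parent BEFORE the fork.
big_list = [(i, float(i)) for i in range(100_000)]

pid = os.fork()  # POSIX only; not available on Windows
if pid == 0:
    # Child: sees a copy-on-write snapshot of the parent's memory,
    # so it can read big_list without any pickling or copying up front
    # (though in CPython even reads bump refcounts and can trigger copies).
    ok = 0 if len(big_list) == 100_000 else 1
    os._exit(ok)
else:
    # Parent: wait for the child and record whether it saw the data.
    _, status = os.waitpid(pid, 0)
    child_ok = (os.waitstatus_to_exitcode(status) == 0)
```

Note that nothing flows back from child to parent here except the exit code; that one-way, snapshot-at-fork-time semantics is exactly the limitation described below.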
This extremely limited form of sharing can still be a lifesaver in some cases (although it is extremely limited indeed: remember, for example, that adding a reference to a shared object counts as "altering" that object, because of reference counting, and so will force a page copy!)... except on Windows, of course, where it is not available. With this single exception (which I doubt will cover your use case), sharing object graphs that include references/pointers to other objects is basically unfeasible — and just about any set of objects of interest in modern languages (including Python) falls under this classification.
In extreme (but sufficiently simple) cases you can obtain sharing by renouncing the native memory representation of such object graphs. For example, a list of a million tuples of sixteen floats each could actually be represented as a single block of 128 MB of shared memory — all 16M floats in IEEE double precision laid end to end — with a little shim on top to "make it look like" you're addressing things in the normal way (and, of course, the not-so-little-after-all shim would also have to take care of the extremely hairy inter-process synchronization problems that are certain to arise ;-). It only gets hairier and more complicated from there.
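A scaled-down sketch of that "one flat block plus a shim" idea, using the standard library's multiprocessing.shared_memory (available since Python 3.8) and struct; the FlatTuples class and the sizes are illustrative inventions, and none of the synchronization problems are handled:

```python
import struct
from multiprocessing import shared_memory

ROWS, COLS = 1000, 16          # scaled down from the million-tuple example
ITEM = struct.Struct("d")      # one IEEE double-precision float, 8 bytes

# One flat block holds every float end to end; no per-object pointers.
shm = shared_memory.SharedMemory(create=True, size=ROWS * COLS * ITEM.size)

class FlatTuples:
    """Shim that makes the flat block look like a list of 16-float tuples."""
    def __init__(self, buf):
        self.buf = buf
    def __getitem__(self, row):
        off = row * COLS * ITEM.size
        return tuple(ITEM.unpack_from(self.buf, off + i * ITEM.size)[0]
                     for i in range(COLS))
    def __setitem__(self, row, values):
        off = row * COLS * ITEM.size
        for i, v in enumerate(values):
            ITEM.pack_into(self.buf, off + i * ITEM.size, v)

view = FlatTuples(shm.buf)
view[3] = tuple(float(i) for i in range(COLS))
row = view[3]                  # reads back as an ordinary tuple of floats

# Another process could attach via SharedMemory(name=shm.name) and wrap
# the same block in its own shim -- with no locking whatsoever here.
shm.close()
shm.unlink()
```

The shim only works because the layout is fixed-size and pointer-free; the moment the elements are real Python objects again, you are back to the addressing problem above.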
Modern approaches to concurrency increasingly disparage "share anything" designs in favor of "share nothing" ones, where tasks communicate by message passing (even in multi-core systems using threads and shared address spaces, the synchronization problems, and the performance hits the HW takes in terms of caching, pipelines, etc., when large areas of memory are actively modified by several cores at once, are pushing people away).
For example, the multiprocessing module in Python's standard library relies mostly on pickling and sending objects back and forth, not on sharing memory (surely not in a R/W way!).
I realize this is not welcome news for the OP, but if he needs to put multiple processors to work, he'd better think in terms of having anything they must share reside in places where it can be accessed and modified by message passing — a database, a memcache cluster, a dedicated process that does nothing but keep that data in memory and send and receive it on request, and other such message-passing-centric architectures.