Implementing Fast Lock-Free .NET Inter-Process Communication Using a Shared-Memory MMF

I am new to multithreading and IPC, and I am trying to build a mechanism for fast interaction between processes using shared memory. (While researching IPC I first looked at WCF, sockets and named pipes, before eventually discovering memory-mapped files, MMF.)

Having successfully completed a small test of shared memory between two processes using a lock and an EventWaitHandle, I came up with an approach that implements a lock-free / wait-free pattern. I am now trying to combine Thread.MemoryBarrier() with reading a signalling sector from a MemoryMappedFile.

The problem is hard to pin down: the first round succeeds, but the second vanishes into the Bermuda Triangle, out of reach of the debugger.

Say process A sends a request packet for showMsg() to process B.

 // offset positions in the MMF
 MemoryMappedViewAccessor MmfAcc;
 const int opReady = 0, opCompleteRead = 4, .....

 ReadTrd()
 {
     // [0,3]   - reader is stationed
     // [4,7]   - read completed successfully
     // [8,11]  - data size
     // [12,15] - reader exiting
     // "format" the signals section (write zeroes)
     for (;;) { if (WrTrd StepMMF1 confirmed) break; }
     MmfAcc: read DataSize value @ offset[8]
     MmfAcc: read Data value @ offset[50]
     MmfAcc: write exit to offset ....
     // ... heavy use of Thread.MemoryBarrier(), all over the place, on every shared variable ...
 }

 WriteTrd()
 {
     // heavy use of Thread.MemoryBarrier()
     // [16,19] - writer is stationed
     // [20,23] - write completed successfully
     // [24,27] - writer exiting
     // "format" the signals section
     for (;;) { if (Reader StepMMF1 confirmed) break; }
     MmfAcc: write DataSize to offset[8]
     write Data to offset[50] using the method below
     for (;;) { if (Reader StepMMF2 confirmed) break; }
 }
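The handshake described in the pseudocode above can be sketched as a minimal single-producer/single-consumer exchange over a MemoryMappedFile. This is only an illustration under my own assumptions: the offsets and flag names loosely mirror the question's layout, and both sides run as threads in one process for brevity (with a real MMF the same logic runs in two processes). The detail most relevant to a failing second round is at the end: both flags must be reset before the segment is reused.

```csharp
using System;
using System.IO.MemoryMappedFiles;
using System.Text;
using System.Threading;

class MmfHandshakeDemo
{
    // Illustrative offsets, loosely mirroring the question's layout.
    const int ReqFlag = 0;    // writer sets 1 when data is published
    const int AckFlag = 4;    // reader sets 1 when it has consumed the data
    const int DataSize = 8;
    const int Data = 50;

    static void Main()
    {
        using var mmf = MemoryMappedFile.CreateNew(null, 4096);
        using var view = mmf.CreateViewAccessor();

        var reader = new Thread(() =>
        {
            while (view.ReadInt32(ReqFlag) == 0) Thread.SpinWait(50); // wait for request
            Thread.MemoryBarrier();                 // acquire: flag read before data reads
            int size = view.ReadInt32(DataSize);
            var buf = new byte[size];
            view.ReadArray(Data, buf, 0, size);
            Console.WriteLine($"reader got {size} bytes: {Encoding.UTF8.GetString(buf)}");
            Thread.MemoryBarrier();                 // release: work done before signalling
            view.Write(AckFlag, 1);
        });
        reader.Start();

        byte[] payload = Encoding.UTF8.GetBytes("hello");
        view.WriteArray(Data, payload, 0, payload.Length);
        view.Write(DataSize, payload.Length);
        Thread.MemoryBarrier();                     // release: publish data before the flag
        view.Write(ReqFlag, 1);

        while (view.ReadInt32(AckFlag) == 0) Thread.SpinWait(50); // wait for ack
        reader.Join();

        // Crucial for round two: reset BOTH flags before reusing the segment,
        // otherwise the next iteration sees stale signals and spins forever.
        view.Write(ReqFlag, 0);
        view.Write(AckFlag, 0);
        Console.WriteLine("round complete; flags reset");
    }
}
```

A stale flag from round one is exactly the kind of bug that makes the second iteration hang outside the debugger's view, so it is worth auditing which side zeroes which offset between rounds.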

Since I had first used the named-pipe solution, the MMF approach (even with the lock and EventWaitHandle) was already a big performance gain compared to named pipes, but could I go further than the approach above somehow?

I could simply clone this pattern, like a striped RAID:

 Reader1 + Reader2 & WriterThread1 + WriterThread2

I tried that and got stuck at this point.

Is this a valid approach, using a full memory fence and shared memory for signalling?

If so, it remains to understand why the second iteration fails, and what the performance difference would be.

EDIT - added the logic used for testing the additional threads.

This is the "bridge" I use to control the flow of writers (the same approach is used for readers):

 public void Write(byte[] parCurData)
 {
     if (ReadPosition < 0 || WritePosition < 0)
         throw new ArgumentException();
     this.statusSet.Add("ReadWrite:-> " + ReadPosition + "-" + WritePosition);
     // var s = (FsMomitorIPCCrier)data;
     ////lock (this.dataToSend)
     ////{
     Thread.MemoryBarrier();
     LiveDataCount_CurIndex = dataQue.Where(i => i != null).Count();
     this.dataQue[LiveDataCount_CurIndex] = parCurData;
     Console.WriteLine("^^^^^" + Thread.CurrentThread.Name + " has Entered WritingThreads BRIDGE");
     Console.WriteLine("^^^^^[transactionsQue] = {1}{0}^^^^^[dataQue.LiveDataASIndex = {2}{0}^^^^^[Current Requests Count = {3}{0}",
         "\r\n", Wtransactions, LiveDataCount_CurIndex, ++dataDelReqCount);
     //this.itsTimeForWTrd2 = false;

     if (Wtransactions != 0 && Wtransactions > ThrededSafeQ_Initial_Capcity - 1)
         if (this.dataQueISFluded)
             this.DataQXpand();

     if (itsTimeForWTrd2)
     {
         bool firstWt = true;
         while (writerThread2Running)
         {
             if (!firstWt) continue;
             Console.WriteLine("SECOND WRITERThread [2] is In The CoffeeCorner");
             firstWt = false;
         }
         this.dataDelivery2 = this.dataQue[LiveDataCount_CurIndex];
         Console.WriteLine("Activating SECOND WRITERThread [2]");
         itsTimeForWTrd2 = false;
         writerThread2Running = true;
         //writerThread1Running = true;
         writerThread2 = new System.Threading.Thread(WriterThread2);
         writerThread2.IsBackground = true;
         writerThread2.Name = this.DepoThreadName + "=[WRITER2]";
         writerThread2.Start();
     }
     else
     {
         bool firstWt = true;
         while (writerThread1Running)
         {
             if (!firstWt) continue;
             Console.WriteLine("WRITERThread [1] is In The CoffeeCorner");
             firstWt = false;
         }
         Console.WriteLine("Activating WRITERThread [1]");
         this.dataDelivery1 = this.dataQue[LiveDataCount_CurIndex];
         writerThread1Running = true;
         writerThread1 = new System.Threading.Thread(WriterThread1);
         writerThread1.IsBackground = true;
         writerThread1.Name = this.DepoThreadName + "=[WRITER1]";
         writerThread1.Start();
         itsTimeForWTrd2 = true;
     }
     Thread.MemoryBarrier();
 }

Using the view accessor to read and write the actual data (the write code is similar):

 public unsafe byte[] UsReadBytes(int offset, int num)
 {
     byte[] arr = new byte[num];
     byte* ptr = (byte*)0;
     this.accessor.SafeMemoryMappedViewHandle.AcquirePointer(ref ptr);
     Marshal.Copy(IntPtr.Add(new IntPtr(ptr), offset), arr, 0, num);
     this.accessor.SafeMemoryMappedViewHandle.ReleasePointer();
     return arr;
 }
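For completeness, here is a sketch of what the matching write-side helper could look like, plus a tiny round-trip check. The class name UnsafeMmfIo and the method name UsWriteBytes are my own invention; the pointer handling mirrors UsReadBytes, with try/finally added so the pointer is always released even if Marshal.Copy throws. Compile with unsafe blocks enabled.

```csharp
using System;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;

// Hypothetical write-side counterpart to UsReadBytes (names are assumptions).
class UnsafeMmfIo
{
    readonly MemoryMappedViewAccessor accessor;
    public UnsafeMmfIo(MemoryMappedViewAccessor acc) { accessor = acc; }

    public unsafe void UsWriteBytes(int offset, byte[] data)
    {
        byte* ptr = (byte*)0;
        accessor.SafeMemoryMappedViewHandle.AcquirePointer(ref ptr);
        try { Marshal.Copy(data, 0, IntPtr.Add(new IntPtr(ptr), offset), data.Length); }
        finally { accessor.SafeMemoryMappedViewHandle.ReleasePointer(); }
    }

    public unsafe byte[] UsReadBytes(int offset, int num)
    {
        byte[] arr = new byte[num];
        byte* ptr = (byte*)0;
        accessor.SafeMemoryMappedViewHandle.AcquirePointer(ref ptr);
        try { Marshal.Copy(IntPtr.Add(new IntPtr(ptr), offset), arr, 0, num); }
        finally { accessor.SafeMemoryMappedViewHandle.ReleasePointer(); }
        return arr;
    }

    static void Main()
    {
        using var mmf = MemoryMappedFile.CreateNew(null, 1024);
        using var acc = mmf.CreateViewAccessor();
        var io = new UnsafeMmfIo(acc);
        io.UsWriteBytes(50, new byte[] { 1, 2, 3 });      // write at the data offset
        Console.WriteLine(string.Join(",", io.UsReadBytes(50, 3))); // round-trip check
    }
}
```

Note that raw pointer copies give no ordering guarantees by themselves; the Thread.MemoryBarrier() calls around the flag reads and writes are still what publishes the data to the other side.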

As I said, I investigated this problem of synchronizing data in shared memory without blocking, waiting, locks or semaphores, because I am trying to remove all such overhead from each data transaction into the shared-memory file. So my question is: what could be the problem with dropping the lock and the EventWaitHandle and replacing them with memory fences and signalling logic via the MMF?

1 answer

If you plan to use this for anything other than R&D, the easiest route is a library that already provides this. One way to think about it is: lock-free = message passing. Two approaches come to mind: for minimal IPC messaging, ZeroMQ, which has excellent .NET support and excellent IPC performance ( http://www.codeproject.com/Articles/488207/ZeroMQ-via-Csharp-Introduction ); and for a more complete implementation of the Actor model (including IPC support), look at Akka.NET ( http://getakka.net/docs/#networking ).
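The "lock-free = message passing" idea can be illustrated with nothing but the BCL (no NetMQ or Akka.NET packages needed). In this sketch a single consumer owns all mutable state and producers only post messages, so no locks are needed around the data itself; note that BlockingCollection does use synchronization internally, so this demonstrates the message-passing design style rather than a lock-free queue.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Minimal in-process illustration of message passing:
// the consumer owns the state; producers only post immutable messages.
class MessagePassingDemo
{
    static void Main()
    {
        var inbox = new BlockingCollection<string>();

        var consumer = Task.Run(() =>
        {
            // Drains the inbox until CompleteAdding() is called.
            foreach (var msg in inbox.GetConsumingEnumerable())
                Console.WriteLine("handled: " + msg);
        });

        inbox.Add("request-1");
        inbox.Add("request-2");
        inbox.CompleteAdding();   // signal that no more messages will arrive
        consumer.Wait();
    }
}
```

The same shape scales up to the cross-process case: the MMF signalling sector in the question is effectively a hand-rolled one-slot inbox.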

If the goal is more R&D in nature (that is, you want to write your own implementation, which is great), I would still suggest looking at the source of these products (especially Akka.NET, since it is written in C#) for ideas on actor-based messaging and IPC implementations.
