First you need to understand some internals. For each application pool, ASP.NET uses one worker process, and one worker process contains multiple threads. If you run multiple instances in the cloud, this gets even worse, because then you also have multiple instances of the server (I'll assume that's not the case here).
A few issues here:
- You have multiple users and therefore multiple threads.
- Multiple threads can block each other while writing files.
- You may have several application pools and, therefore, several processes.
- Several processes can block each other.
Opening and locking files
File.Open has several flags for locking. Basically, you can lock a file exclusively per process, which is a good idea in this case. A two-step approach with Exists followed by Open won't help, because another worker process can grab the file between the two calls. Instead, the idea is to call Open requesting exclusive write access, and if that fails, retry with a different file name.
This basically solves the problem with multiple processes.
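A minimal sketch of that open-exclusively-or-retry idea. The helper name `OpenExclusive` and the numbered fallback naming scheme (`log.txt`, `log.1.txt`, …) are my own illustration, not from the answer; the key part is `FileShare.None`, which makes the open exclusive across processes:

```csharp
using System;
using System.IO;

static class ExclusiveLog
{
    // Try to open basePath exclusively; if another process already holds it,
    // fall back to a numbered alternative file name.
    public static FileStream OpenExclusive(string basePath, int maxAttempts = 5)
    {
        for (int i = 0; i < maxAttempts; i++)
        {
            // "log.txt", "log.1.txt", "log.2.txt", ...
            string path = i == 0
                ? basePath
                : Path.ChangeExtension(basePath, i + Path.GetExtension(basePath));
            try
            {
                // FileShare.None = exclusive: no other process can open this file
                // while the returned stream is alive.
                return new FileStream(path, FileMode.Append,
                                      FileAccess.Write, FileShare.None);
            }
            catch (IOException)
            {
                // Locked by another process; try the next name.
            }
        }
        throw new IOException("No free log file found.");
    }
}
```

Because the check and the open are a single OS call, there is no window for another process to sneak in, unlike the Exists-then-Open approach.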
Writing from multiple threads
File access should be single-threaded. Instead of writing to the file from every thread, use one dedicated thread that owns access to the file, and several threads that tell it what to write.
If you get more log requests than you can handle, you're in trouble anyway. In that case, the best way to handle it for logging, IMO, is to simply drop the data. In other words, make the logger slightly lossy to make life better for your users. You can use a queue for that.
I usually use a ConcurrentQueue for this, with a separate thread that drains all the logged data.
This is basically how to do it:
```csharp
// Starts the worker thread that drains the queue:
internal void Start()
{
    loggingWorker = new Thread(LogHandler)
    {
        Name = "Logging worker thread",
        IsBackground = true,
        Priority = ThreadPriority.BelowNormal
    };
    loggingWorker.Start();
}
```
We also need the actual worker and some shared variables:
```csharp
private Thread loggingWorker = null;
private int loggingWorkerState = 0;
private ManualResetEventSlim waiter = new ManualResetEventSlim();
private ConcurrentQueue<Tuple<LogMessageHandler, string>> queue =
    new ConcurrentQueue<Tuple<LogMessageHandler, string>>();

private void LogHandler(object o)
{
    Interlocked.Exchange(ref loggingWorkerState, 1);
    while (Interlocked.CompareExchange(ref loggingWorkerState, 1, 1) == 1)
    {
        waiter.Wait(TimeSpan.FromSeconds(10.0));
        waiter.Reset();
        Tuple<LogMessageHandler, string> item;
        while (queue.TryDequeue(out item))
        {
            writeToFile(item.Item1, item.Item2);
        }
    }
}
```
Basically, this code drains the queue, which is shared between threads, from a single thread. Note that ConcurrentQueue does not use locks for TryDequeue, so the producing threads won't feel any pain from this.
The last thing you need is to add items to the queue. That's the easy part:
```csharp
public void Add(LogMessageHandler l, string msg)
{
    if (queue.Count < MaxLogQueueSize)
    {
        queue.Enqueue(new Tuple<LogMessageHandler, string>(l, msg));
        waiter.Set();
    }
}
```
This code will be called from multiple threads. It's not 100% correct, because Count and Enqueue are not executed as one atomic step, but for our purposes it's good enough. It also doesn't block in Enqueue, and waiter ensures that the items are drained by the other thread.
Wrap it all in a singleton pattern, add some more logic to it, and your problem should be solved.
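For completeness, a sketch of what that singleton wrapper could look like, combining the pieces above. The class name `Logger`, the `Lazy<T>` initialization, the `LogMessageHandler` delegate shape, and invoking the handler in place of `writeToFile` are all my assumptions for a self-contained example:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Assumed shape of the handler type used in the answer.
public delegate void LogMessageHandler(string msg);

public sealed class Logger
{
    // Lazy<T> gives thread-safe, on-demand singleton construction.
    private static readonly Lazy<Logger> instance =
        new Lazy<Logger>(() => { var l = new Logger(); l.Start(); return l; });
    public static Logger Instance => instance.Value;

    private const int MaxLogQueueSize = 10000;
    private Thread loggingWorker;
    private int loggingWorkerState;
    private readonly ManualResetEventSlim waiter = new ManualResetEventSlim();
    private readonly ConcurrentQueue<Tuple<LogMessageHandler, string>> queue =
        new ConcurrentQueue<Tuple<LogMessageHandler, string>>();

    private Logger() { }

    private void Start()
    {
        loggingWorker = new Thread(LogHandler)
        {
            Name = "Logging worker thread",
            IsBackground = true,
            Priority = ThreadPriority.BelowNormal
        };
        loggingWorker.Start();
    }

    // Lossy add: silently drops messages once the queue is full.
    public void Add(LogMessageHandler l, string msg)
    {
        if (queue.Count < MaxLogQueueSize)
        {
            queue.Enqueue(Tuple.Create(l, msg));
            waiter.Set();
        }
    }

    private void LogHandler(object o)
    {
        Interlocked.Exchange(ref loggingWorkerState, 1);
        while (Interlocked.CompareExchange(ref loggingWorkerState, 1, 1) == 1)
        {
            waiter.Wait(TimeSpan.FromSeconds(10.0));
            waiter.Reset();
            Tuple<LogMessageHandler, string> item;
            while (queue.TryDequeue(out item))
            {
                item.Item1(item.Item2); // stand-in for writeToFile
            }
        }
    }
}
```

Usage from any request thread would then be a single call, e.g. `Logger.Instance.Add(Console.WriteLine, "hello");`.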