I'm currently designing a reference architecture for an event-based distributed system in which events are stored in a SQL Azure database using plain old tables (no SQL Server Service Broker).
Events will be processed by worker roles that poll the queue for new event messages.
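For concreteness, this is roughly the kind of table I have in mind as the queue store (the table and column names below are just placeholders, nothing is decided yet):

    -- Illustrative event table used as a polling queue.
    CREATE TABLE dbo.EventQueue
    (
        EventId     BIGINT IDENTITY(1,1) PRIMARY KEY,
        Payload     NVARCHAR(MAX) NOT NULL,
        EnqueuedUtc DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );

    -- A worker role would poll on a timer, e.g. grab a batch of the oldest messages:
    SELECT TOP (10) EventId, Payload
    FROM dbo.EventQueue
    ORDER BY EventId;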
In my research I've found a number of solutions that allow multiple processors to consume messages from the same queue. The problem I keep running into with many of these patterns is the difficulty of managing locking and related concurrency concerns when several processes access the same message queue table.
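To be concrete about the plumbing I mean: the usual shared-queue pattern on plain tables seems to be a destructive read with locking hints so that two processors never claim the same row, something like the following (reusing the EventQueue table sketched above):

    -- Competing consumers on one table: atomically claim and delete the oldest
    -- unlocked row; READPAST skips rows already locked by another processor.
    WITH NextMessage AS
    (
        SELECT TOP (1) EventId, Payload
        FROM dbo.EventQueue WITH (ROWLOCK, UPDLOCK, READPAST)
        ORDER BY EventId
    )
    DELETE FROM NextMessage
    OUTPUT deleted.EventId, deleted.Payload;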
I understand that the traditional queue pattern is that multiple processors are pulled from the same queue. However, assuming that event messages can be processed in any order, is there a reason for not just creating a one-to-one relationship between the queue and its queue processor and simply balancing the load between different queues?
queue_1 => processor_1
queue_2 => processor_2
This implementation avoids all the plumbing needed to coordinate concurrent access to a single queue from multiple processors. The event publisher can use any load-balancing algorithm to decide which queue to send each message to.
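As a sketch of the alternative I'm describing (the names and the round-robin rule are just placeholders for illustration), a QueueId column could partition the table into per-processor queues, with the publisher deciding which queue each message goes to:

    -- One-to-one variant: each processor owns a QueueId, so no two workers
    -- ever compete for the same rows and no locking hints are needed.
    CREATE TABLE dbo.EventQueuePerProcessor
    (
        EventId     BIGINT IDENTITY(1,1) PRIMARY KEY,
        QueueId     INT NOT NULL,            -- which processor this message belongs to
        Payload     NVARCHAR(MAX) NOT NULL,
        EnqueuedUtc DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );

    -- Publisher: any load-balancing rule works; here a simple round robin over
    -- @QueueCount queues using a running message counter kept by the publisher.
    DECLARE @QueueCount INT = 2, @MessageNumber BIGINT = 42;
    INSERT INTO dbo.EventQueuePerProcessor (QueueId, Payload)
    VALUES (@MessageNumber % @QueueCount + 1, N'{"type":"OrderPlaced"}');

    -- Processor 1: reads and removes only its own rows (QueueId = 1).
    DELETE TOP (10)
    FROM dbo.EventQueuePerProcessor
    OUTPUT deleted.EventId, deleted.Payload
    WHERE QueueId = 1;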
The fact that I don't see this kind of implementation in any of my searches makes me think I'm missing a major flaw in this design.
Edit
I'm aware of alternatives such as MSMQ, Azure Queues, and the Durable Message Buffers in Azure AppFabric. However, this solution needs to use SQL Azure to store the events.