SQL Server Service Broker - a detailed example of using an external console application

I need guidance from anyone who has deployed a real-world production application that uses the external SQL Server Service Broker activation mechanism (via the external Service Broker Activator from the Feature Pack).

Current thinking:

My requirements are pretty simple (or at least I think so), so I have the following basic flow in mind:

  • an order row is inserted into Table_Orders with state "confirmed"

  • SP_BeginOrder is launched and performs the following actions:

    • begins a transaction
    • starts a DIALOG from Service_HandleOrderState to Service_PreprocessOrder
    • stores the conversation handle (from now on, PreprocessingHandle) in a dedicated column of Table_Orders
    • sends a MESSAGE of type Message_PreprocessOrder containing the order identifier on PreprocessingHandle
    • commits the transaction

    Note that I am not ending the conversation here; I do not want fire-and-forget.
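
    For reference, the SP_BeginOrder step above could be sketched in T-SQL roughly like this (the contract name Contract_PreprocessOrder and the PreprocessingHandle column are illustrative assumptions, not fixed names):

```sql
-- Sketch only: Contract_PreprocessOrder and the PreprocessingHandle column are assumed names.
CREATE PROCEDURE SP_BeginOrder
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Handle UNIQUEIDENTIFIER;
    DECLARE @Body XML;

    BEGIN TRANSACTION;

    BEGIN DIALOG CONVERSATION @Handle
        FROM SERVICE [Service_HandleOrderState]
        TO SERVICE 'Service_PreprocessOrder'
        ON CONTRACT [Contract_PreprocessOrder]
        WITH ENCRYPTION = OFF;      -- deliberately NOT ended here: no fire-and-forget

    -- remember the handle so later steps can reply on the same conversation
    UPDATE Table_Orders
    SET PreprocessingHandle = @Handle
    WHERE OrderId = @OrderId;

    SET @Body = (SELECT @OrderId AS OrderId FOR XML PATH('Order'));

    SEND ON CONVERSATION @Handle
        MESSAGE TYPE [Message_PreprocessOrder] (@Body);

    COMMIT TRANSACTION;
END
```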

  • event notification on Queue_PreprocessOrder activates an instance of PreprocessOrder.exe (maximum of 1 concurrent instance), which performs the following actions:

    • begins a SqlTransaction
    • receives TOP (1) message from Queue_PreprocessOrder
    • if the message type is Message_PreprocessOrder (XML content):
      • sets the order state to "preprocessing" in Table_Orders using the order identifier from the message body
      • loads n collections of data from which the n-ary Cartesian product is computed (via LINQ; AFAIK this is not possible in T-SQL) to define the collection of order items
      • inserts the order item rows into Table_OrderItems
      • sends a MESSAGE of type Message_PreprocessingDone containing the same order identifier on PreprocessingHandle
      • ends the conversation for PreprocessingHandle
    • commits the SqlTransaction
    • terminates with Environment.Exit(0)
  • internal activation on Queue_HandleOrderState runs an SP (MAX_QUEUE_READERS = 1), which:
    • begins a transaction
    • receives TOP (1) message from Queue_InitiatePreprocessOrder
    • if the message type is Message_PreprocessingDone:
      • sets the order state to "processing" in Table_Orders using the order identifier from the message body
      • starts a DIALOG from Service_HandleOrderState to Service_ProcessOrderItem
      • stores the conversation handle (from now on, ProcessOrderItemsHandle) in a dedicated column of Table_Orders
      • opens a cursor over the rows in Table_OrderItems for the current order identifier and, for each row:
        • sends a MESSAGE of type Message_ProcessOrderItem containing the order item identifier on ProcessOrderItemsHandle
    • if the message type is Message_ProcessingDone:
      • sets the order state to "processed" in Table_Orders using the order identifier from the message body
    • if the message type is http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog (END DIALOG):
      • ends the conversation for the message's conversation handle
    • commits the transaction
  • event notification on Queue_ProcessOrderItem activates an instance of ProcessOrderItem.exe (maximum of 1 concurrent instance), which performs the following actions:
    • begins a SqlTransaction
    • receives TOP (1) message from Queue_ProcessOrderItem
    • if the message type is Message_ProcessOrderItem (XML content):
      • sets the order item state to "processing" in Table_OrderItems using the order item identifier from the message body, then:
        • loads a collection of order item parameters
        • makes an HttpRequest to a URL built from those parameters
        • stores the HttpResponse as a PDF in the file system
      • if any of the substeps above failed, sets the order item state to "error", otherwise to "ok"
      • queries Table_OrderItems to determine whether all order items are processed (state "ok" or "error")
      • if all order items are processed:
        • sends a MESSAGE of type Message_ProcessingDone containing the order identifier on ProcessOrderItemsHandle
        • ends the conversation for ProcessOrderItemsHandle
    • commits the SqlTransaction
    • terminates with Environment.Exit(0)
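
The internally activated procedure on Queue_HandleOrderState in the flow above typically follows the WAITFOR(RECEIVE)-plus-dispatch pattern. A minimal sketch, with the message-specific work elided:

```sql
-- Sketch of the activated SP: dispatch on message type, all inside one transaction.
CREATE PROCEDURE SP_HandleOrderState
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Handle UNIQUEIDENTIFIER;
    DECLARE @MessageType SYSNAME;
    DECLARE @Body XML;

    BEGIN TRANSACTION;

    WAITFOR (
        RECEIVE TOP (1)
            @Handle      = conversation_handle,
            @MessageType = message_type_name,
            @Body        = CAST(message_body AS XML)
        FROM Queue_HandleOrderState
    ), TIMEOUT 1000;

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;   -- nothing to do; activation fires again on new messages
        RETURN;
    END

    IF @MessageType = N'Message_PreprocessingDone'
    BEGIN
        -- set the order state, BEGIN DIALOG to Service_ProcessOrderItem,
        -- SEND one Message_ProcessOrderItem per order item (elided)
        PRINT 'handle preprocessing-done';
    END
    ELSE IF @MessageType = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
        END CONVERSATION @Handle;
    ELSE IF @MessageType = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
        END CONVERSATION @Handle;   -- in real code, log @Body to the error table first

    COMMIT TRANSACTION;
END
```

Handling the Error message type is not in the flow above, but any activated reader should expect it alongside EndDialog.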

Notes:

  • The specification requires MSSQL compatibility from 2005 through 2012, therefore:
    • no UNLOADING GROUP
    • no conversation PRIORITIES
    • no POISON_MESSAGE_HANDLING (STATUS = OFF)
  • I am aiming for integrity and continuity of the flow, not for speed.
  • given that the tables and SPs live in DB1 while the Service Broker objects (messages, contracts, queues, services) live in DB2, DB2 is set TRUSTWORTHY

Questions:

  • Are there significant design flaws in the described architecture?
  • Tracking order completion status does not seem correct. Is there a better method? Perhaps using QUEUE RETENTION?
  • My intuition tells me that an activated external exe must never exit with a code other than 0, so Main should have try { .. } catch (Exception e) { .. } finally { Environment.Exit(0); }. Is this assumption correct?
  • How would you organize error handling in DB code? Is the error log table sufficient?
  • How would you organize error handling in the external C# exe code? The same error logging table?
  • I have seen the SQL Server Service Broker Product Samples, but the Service Broker interface seems overkill for my seemingly simpler case. Are there alternatives with a simpler Service Broker object model?
  • Any "portable" admin tool for Service Broker that can at least drain poison messages?
  • Do you have suitable code samples for any of the above?
1 answer

Q: Are there significant design flaws in the described architecture?
A: A couple of points stand out:
- Waiting for the HTTP request to complete while holding a transaction open. You cannot achieve transactional consistency between the database and HTTP anyway, so do not risk the transaction stretching out for minutes when HTTP happens to be slow. The typical pattern is: {begin tran / RECEIVE / begin conversation timer / commit}, then issue the HTTP call with no DB transaction open at all. If the HTTP call succeeds, then {begin tran / SEND response / END CONVERSATION / commit}. If HTTP fails (or the client crashes), let the conversation timer activate you again; you will receive a timer message (which has no body) and must look up the item identifier associated with the conversation handle from your own tables.
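
The {receive / timer / commit} pattern described above, sketched as the T-SQL the external exe would issue (the 120-second timeout is an arbitrary choice):

```sql
-- Phase 1: take the message and arm a retry timer, then release the transaction.
DECLARE @Handle UNIQUEIDENTIFIER;
DECLARE @Body XML;

BEGIN TRANSACTION;
RECEIVE TOP (1)
    @Handle = conversation_handle,
    @Body   = CAST(message_body AS XML)
FROM Queue_ProcessOrderItem;
BEGIN CONVERSATION TIMER (@Handle) TIMEOUT = 120;  -- retry window, in seconds
COMMIT TRANSACTION;

-- ... perform the HTTP request / PDF write here, with NO open DB transaction ...

-- Phase 2: only on success, reply and end the conversation.
DECLARE @ReplyBody XML;
SET @ReplyBody = @Body;  -- e.g. echo the order identifier back

BEGIN TRANSACTION;
SEND ON CONVERSATION @Handle
    MESSAGE TYPE [Message_ProcessingDone] (@ReplyBody);
END CONVERSATION @Handle;
COMMIT TRANSACTION;
```

If phase 2 never runs, the timer expires and a bodyless DialogTimer message arrives on the local queue, re-activating the processor for that conversation.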

Q: Tracking order completion status does not seem correct. Is there a better method? Is it possible to use QUEUE RETENTION?
A: My criticism of the state tracking is the reliance on scanning the order items to determine that the item currently being processed is the last one (the "all order items processed" query in the ProcessOrderItem step). You could instead, for example, record on the item itself that it is the "last" one to be processed, so that while processing it you know you need to report completion. RETENTION is only useful for debugging, or when you have logic that needs to run a "logical rollback" and compensate for a conversation's actions when it ends in error.
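
One way to implement the "last item" idea (IsLastItem is an illustrative column, and this assumes the item set is fixed once preprocessing completes):

```sql
-- At preprocessing time, flag the item whose completion should trigger the report.
UPDATE oi
SET IsLastItem = 1
FROM Table_OrderItems AS oi
WHERE oi.OrderId = @OrderId
  AND oi.OrderItemId = (SELECT MAX(OrderItemId)
                        FROM Table_OrderItems
                        WHERE OrderId = @OrderId);
```

The processor then sends Message_ProcessingDone when the item it just finished has IsLastItem = 1, instead of re-scanning all items after every one.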

Q: My intuition tells me that an activated external exe must never exit with a code other than 0, so Main needs try { .. } catch (Exception e) { .. } finally { Environment.Exit(0); }. Is this assumption correct?
A: The most important thing is that the activated process issues a RECEIVE statement against the queue; if it fails to do so, the queue monitor may remain stuck in the NOTIFIED state permanently. The exit code, if I remember correctly, does not matter. As with any background process, it is important to catch and log exceptions, otherwise you will not even know it has a problem when it starts crashing. In addition to disciplined try/catch blocks, hook up Application.ThreadException for UI applications and AppDomain.UnhandledException for applications both with and without a UI.
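
A minimal shape for Main along those lines; ProcessQueueMessages is a placeholder for your own connect-and-RECEIVE loop, and real code would log to the error table rather than stderr:

```csharp
using System;

static class Program
{
    static void Main()
    {
        // last-resort hook for exceptions thrown on background threads
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            Console.Error.WriteLine(e.ExceptionObject);

        try
        {
            ProcessQueueMessages();
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine(ex);   // log it; never let it escape Main
        }
        finally
        {
            Environment.Exit(0);           // always exit 0, per the question
        }
    }

    // placeholder: open a SqlConnection, issue RECEIVE TOP (1), process the message
    static void ProcessQueueMessages() { }
}
```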

Q: How would you organize error handling in the DB code? Is an error log table sufficient?
A: I would go with that; an error log table is IMHO sufficient.

Q: How would you organize error handling in the external C# exe code? The same error log table?
A: I created bugcollect.com precisely because I had to deal with these kinds of problems in my own applications. The problem is not only logging, but also collecting and aggregating the reports (at least to detect duplicates) and controlling error floods, without having to push configuration changes out in the field. There are more options nowadays, e.g. exceptron.com, and I believe FogBugz has error-collecting capabilities too.

Q: I have seen the SQL Server Service Broker product samples, but the Service Broker interface seems overkill for my seemingly simpler case. Are there alternatives with a simpler Service Broker object model?
A: Finally, an easy question: yes, it is too much, but there is no simpler model.

Q: Any "portable" admin tool for Service Broker that can at least drain poison messages?
A: The problem with poison messages is that the definition of a poison message changes with your code: a poison message is any message that defeats the safeguards currently in place to detect it.
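
That said, a manual drain is just RECEIVE, log, END CONVERSATION with an error; a minimal sketch (PoisonLog is an assumed table name):

```sql
-- Pull one message off the queue, keep a copy, and kill its conversation.
DECLARE @Handle UNIQUEIDENTIFIER;
DECLARE @Type SYSNAME;
DECLARE @Body VARBINARY(MAX);

BEGIN TRANSACTION;
RECEIVE TOP (1)
    @Handle = conversation_handle,
    @Type   = message_type_name,
    @Body   = message_body
FROM Queue_ProcessOrderItem;

IF @@ROWCOUNT > 0
BEGIN
    INSERT INTO PoisonLog (ReceivedAt, MessageType, Body)
    VALUES (GETUTCDATE(), @Type, @Body);

    END CONVERSATION @Handle
        WITH ERROR = 1 DESCRIPTION = N'Drained as poison message';
END
COMMIT TRANSACTION;
```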

Q: Do you have suitable code samples for any of the above?
A: No.

Another point: try to avoid links between DB1 and DB2 (for example, the activated code reading the order items table across the database boundary). That creates cross-database dependencies, which break when a) one of the databases is taken offline (for maintenance, say) or gets overloaded, or b) you add database mirroring for HA/DR and one of the databases fails over. Try to make the code work even if DB1 and DB2 are on different machines (and without linked servers); add extra information to the message payload if needed. If you architect it so that DB2 can live on another machine, you could even have several DB2 machines to scale out the HTTP/PDF-writing work.

And finally: this design will be very slow. I am talking low messages-per-second slow, with this many dialogs/messages and everything running with MAX_QUEUE_READERS = 1. That may or may not be acceptable to you.
