Controlled Application Shutdown Strategy

Our (native Windows C++) application consists of thread objects and manager objects. It is reasonably well written, with a design in which manager objects control the lifecycles of their minions. Various objects send and receive events; some events come from Windows, some are home-grown.

In general we have to be very conscious of thread interaction, so we use manual synchronization primitives such as Win32 critical sections, semaphores, and so on. However, we occasionally hit a deadlock during shutdown caused by things like event handlers being dispatched while objects are being torn down.

Now I wonder whether there is a decent application shutdown strategy we could implement to make this easier to develop against - something like every object registering for a shutdown event from a central controller and changing its run-time behaviour accordingly? Is that too naive or too fragile?
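Roughly, I have something like this in mind (a minimal sketch only; `ShutdownController`, `IShutdownListener` and the rest are illustrative names, not anything from our code base):

```cpp
#include <algorithm>
#include <mutex>
#include <vector>

// Objects register with a central controller and are told when shutdown starts,
// at which point they flip into a "shutting down" mode (stop spawning work, etc.).
class IShutdownListener {
public:
    virtual ~IShutdownListener() = default;
    virtual void OnShutdownRequested() = 0;
};

class ShutdownController {
public:
    void Register(IShutdownListener* l) {
        std::lock_guard<std::mutex> guard(m_lock);
        m_listeners.push_back(l);
    }
    void Unregister(IShutdownListener* l) {
        std::lock_guard<std::mutex> guard(m_lock);
        m_listeners.erase(std::remove(m_listeners.begin(), m_listeners.end(), l),
                          m_listeners.end());
    }
    void RequestShutdown() {
        std::vector<IShutdownListener*> snapshot;
        {
            std::lock_guard<std::mutex> guard(m_lock);
            snapshot = m_listeners;
        }
        // Notify outside the lock so listeners can safely call back into the controller.
        for (IShutdownListener* l : snapshot)
            l->OnShutdownRequested();
    }
private:
    std::mutex m_lock;
    std::vector<IShutdownListener*> m_listeners;
};
```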

I would prefer strategies that do not involve rewriting the entire application to use something like the Microsoft Parallel Patterns Library or similar. ;-)

Thanks.

EDIT:

I suppose I am asking for an approach to managing object lifecycles in a complex application where many threads and events are constantly in flight. Giovanni's suggestion is the obvious one (it is what we do by hand), but I am convinced there must be established strategies or frameworks for cleanly shutting down active objects in the correct order. For example, if you want to build your C++ application around the IoC paradigm you can use PocoCapsule instead of trying to develop your own container. Is there anything similar for managing object lifecycles in an application?

+4
3 answers

One possible general strategy would be to send an "I am shutting down" event to each manager, which would cause the managers to do one of three things (depending on how long your event handlers run and how much latency you are willing to accept between the user initiating shutdown and the application actually exiting).

1) Stop accepting new events and run the handlers for all events received before the "I am shutting down" event (a minimal sketch of this option follows after the list). To avoid deadlocks, you may still need to accept events that other event handlers depend on to complete. These can be marked by a flag in the event or by the event type (for example). If you have such events, you should also consider restructuring your code so that those actions are not performed through event handlers, since dependent events like that would be prone to deadlocks in normal operation as well.

2) Stop accepting new events and discard all events received after the one currently being handled. The same comments about dependent events apply here.

3) Interrupt the currently running event handler (with a facility similar to boost::thread::interrupt()) and fire no further events. This requires your handler code to be exception-safe (which it should already be if you care about resource leaks) and to pass interruption points at reasonably regular intervals, but it gives the lowest latency.

Of course, you can mix these three strategies, depending on the specific latency and data-loss requirements of each of your managers.
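For concreteness, here is a minimal sketch of option 1, built around a per-manager event queue (the class and member names are invented for illustration; your existing managers presumably already have their own queueing):

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

// Sketch of a manager whose worker thread drains its queue once shutdown starts.
class Manager {
public:
    // Returns false once shutdown has started, so producers stop posting.
    bool PostEvent(std::function<void()> event) {
        std::lock_guard<std::mutex> guard(m_lock);
        if (m_shuttingDown)
            return false;                 // options 1/2: stop accepting new events
        m_queue.push_back(std::move(event));
        m_wake.notify_one();
        return true;
    }

    void RequestShutdown() {
        std::lock_guard<std::mutex> guard(m_lock);
        m_shuttingDown = true;            // the "I am shutting down" signal
        m_wake.notify_all();
    }

    // Worker loop: runs handlers for everything queued before shutdown, then
    // exits once the queue is empty (option 1). For option 2 you would clear
    // m_queue in RequestShutdown() instead of draining it here.
    void WorkerLoop() {
        for (;;) {
            std::function<void()> event;
            {
                std::unique_lock<std::mutex> guard(m_lock);
                m_wake.wait(guard, [this] {
                    return m_shuttingDown || !m_queue.empty();
                });
                if (m_queue.empty())      // shutting down and nothing left to drain
                    return;
                event = std::move(m_queue.front());
                m_queue.pop_front();
            }
            event();                      // run the handler outside the lock
        }
    }

private:
    std::mutex m_lock;
    std::condition_variable m_wake;
    std::deque<std::function<void()>> m_queue;
    bool m_shuttingDown = false;
};
```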

+1

This seems like a special case of a more general question: "How do I avoid deadlocks in my multithreaded application?"

And the answer to that question is, as always: whenever your threads must hold more than one lock at a time, make sure they all acquire the locks in the same order, and make sure every thread releases each lock it holds within a finite amount of time. This rule applies just as much at shutdown as at any other time. Nothing less will do; nothing more is needed. (See here for a relevant discussion.)
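As a minimal sketch of the "same order everywhere" rule (the mutex names are invented for illustration):

```cpp
#include <mutex>

std::mutex g_accountsLock;   // rank 1: always taken first
std::mutex g_loggingLock;    // rank 2: always taken second

// Every code path that needs both locks takes them in the same rank order,
// so no two threads can each hold one lock while waiting for the other.
void TransferAndLog() {
    std::lock_guard<std::mutex> a(g_accountsLock);
    std::lock_guard<std::mutex> b(g_loggingLock);
    // ... work that needs both locks ...
}

// Alternative, if you cannot impose a fixed order by convention:
// std::lock() acquires both without deadlocking regardless of call order.
void TransferAndLogAlt() {
    std::unique_lock<std::mutex> a(g_accountsLock, std::defer_lock);
    std::unique_lock<std::mutex> b(g_loggingLock, std::defer_lock);
    std::lock(a, b);
    // ... work that needs both locks ...
}
```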

As for how best to do this... the best way (if possible) is to simplify your program as much as you can, and to avoid holding more than one lock at a time whenever you can help it.

If you absolutely must hold multiple locks at the same time, you should audit your program to make sure that every thread that holds multiple locks acquires them in the same order. Tools such as helgrind or Intel Thread Checker can help with this, but often it comes down to simply staring at the code until you can prove to yourself that it satisfies this constraint. Also, if you can reproduce the deadlock easily, you can examine (with a debugger) the stack trace of each deadlocked thread, which shows where the blocked threads are stuck forever; with that information you can start working out where the lock-ordering mismatches are in your code. Yes, it is a big pain, but I do not think there is a good way around it (other than avoiding holding multiple locks at once). :(

+3

As a general technique, use an atomic boolean flag to indicate "I am shutting down", and have each thread check that flag before acquiring each lock, processing each event, and so on. I cannot give a more detailed answer without a more detailed question.
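A minimal sketch of that idea (names are illustrative, and std::atomic<bool> is assumed in place of whatever flag mechanism you already use):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> g_shuttingDown{false};

// Worker thread: check the flag before taking locks / handling the next event.
void WorkerLoop() {
    while (!g_shuttingDown.load(std::memory_order_acquire)) {
        // ... acquire locks, pop and process one event, release locks ...
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // placeholder work
    }
    // Fell out of the loop: release resources and let the thread exit cleanly.
}

// Shutdown path: set the flag, then join the workers.
void BeginShutdown(std::thread& worker) {
    g_shuttingDown.store(true, std::memory_order_release);
    worker.join();
}
```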

+1
