As a rule, you should avoid implementing the Observer pattern inside the database.
Why? It relies on proprietary (non-standard) vendor technology, promotes database lock-in and vendor risk, and adds bloat. From an enterprise point of view, done in an uncontrolled way it can look like "skunkworks": introducing unusual behavior in a place normally covered by application and integration patterns and tools. Implemented at a fine-grained level, it tightly couples you to tiny data changes, producing a large volume of unpredictable messages and processing that hurts performance. The extra cog in the machine is also an extra point of failure: it can be sensitive to OS, network, and security settings, or even be a security vulnerability in the vendor's technology.
If you are observing transactional data managed by your own application:
- implement the Observer pattern in your application. For instance, in Java the CDI and JavaBeans specifications support this directly, and a custom OO design following the Gang of Four book is also a perfectly good solution.
- optionally, send messages to other applications. Filters/interceptors, MDB messages, CDI events, and web services are all useful for notification.
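As a minimal sketch of the in-application approach, the JavaBeans `PropertyChangeSupport` class gives you the Observer pattern out of the box; the `Account` domain object and its listener here are illustrative names, not from any particular framework:

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain object: observers register with it directly,
// instead of anything watching the database for changes.
class Account {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String status = "NEW";

    public void addListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    public void setStatus(String newStatus) {
        String old = this.status;
        this.status = newStatus;
        // Fires only if the value actually changed.
        pcs.firePropertyChange("status", old, newStatus);
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Account acct = new Account();
        // This listener just records the event; in practice it could
        // forward it as a JMS message, CDI event, or web-service call.
        acct.addListener(evt ->
            log.add(evt.getPropertyName() + ": " + evt.getOldValue()
                    + " -> " + evt.getNewValue()));
        acct.setStatus("ACTIVE");
        System.out.println(log); // [status: NEW -> ACTIVE]
    }
}
```

The same shape carries over to CDI, where `Event<T>.fire(...)` replaces `firePropertyChange` and `@Observes` methods replace the registered listeners.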
If users directly modify master data in the database, then either:
- provide a single admin page in your application to manage master data updates, OR
- provide a separate application for managing master data that sends messages to dependent applications, OR
- (best approach) treat master data changes like code changes, managed for quality (reviews, testing, etc.) and timing: promote them through environments, then deploy and load/refresh the data with an application restart on a managed schedule
If you are observing transactional data managed by another application (shared-database integration), OR you are using data-level integration such as ETL to feed data into your application:
- try to have each data object written by only one application (read-only for the others)
- add an ETL changelog / control table so you can tell what changed and when
- use proprietary JDBC/ODBC extensions for notification or polling, as mentioned in Alex Poole's answer
- refactor overlapping data operations from the two applications into a shared SOA service; this can either remove the observation requirement entirely, or lift it from the data level up to a higher-level SOA/application message
- use an ESB or a database adapter to call your application's notification endpoint, or a WS endpoint for bulk data transfer (e.g. Apache Camel, Apache ServiceMix, Mule ESB, OpenAdaptor)
- avoid using database extension infrastructure such as pipes or advanced queues
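To illustrate the control-table idea, the sketch below polls for rows newer than the last-seen change ID. The table is simulated with an in-memory list and every name is made up; in practice this would be a JDBC query against a real control table populated by the writing application or ETL job:

```java
import java.util.ArrayList;
import java.util.List;

// One row of a hypothetical ETL control table: the writing application
// inserts a row each time it changes a batch of data.
record ChangeLogEntry(long changeId, String tableName, String operation) {}

class ChangeLogPoller {
    private long lastSeenId = 0;

    // In a real system this would be a query such as:
    //   SELECT change_id, table_name, operation FROM etl_change_log
    //   WHERE change_id > ? ORDER BY change_id
    public List<ChangeLogEntry> pollNewChanges(List<ChangeLogEntry> controlTable) {
        List<ChangeLogEntry> fresh = new ArrayList<>();
        for (ChangeLogEntry e : controlTable) {
            if (e.changeId() > lastSeenId) {
                fresh.add(e);
                lastSeenId = e.changeId(); // remember the high-water mark
            }
        }
        return fresh;
    }
}

public class PollingDemo {
    public static void main(String[] args) {
        List<ChangeLogEntry> table = new ArrayList<>();
        ChangeLogPoller poller = new ChangeLogPoller();

        table.add(new ChangeLogEntry(1, "CUSTOMER", "UPDATE"));
        table.add(new ChangeLogEntry(2, "PRODUCT", "INSERT"));
        System.out.println(poller.pollNewChanges(table).size()); // 2

        // A second poll with no new rows returns nothing.
        System.out.println(poller.pollNewChanges(table).size()); // 0
    }
}
```

The high-water-mark column also answers the "what/when" question directly: the reading application never has to diff the data itself.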
If you use messaging (send or receive), do it from your application(s). Messaging from the database is a bit of an antipattern. As a last resort you can use triggers that invoke web services ( http://www.oracle.com/technetwork/developer-tools/jdev/dbcalloutws-howto-084195.html ), but this needs a lot of care and is quite crude: invoke a business (sub)process when a set of data changes, not a flurry of fine-grained operations such as CRUD. Preferably, launch a job and have the job call the web service outside the transaction.
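The "launch a job and call the web service outside the transaction" advice can be sketched in plain Java: the transaction only enqueues a notification task, and a separate worker thread makes the slow, failure-prone external call after commit. The queue, payload, and stubbed endpoint are illustrative:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class AsyncNotifyDemo {
    // Tasks enqueued during the transaction; drained by a worker afterwards.
    private static final BlockingQueue<String> jobQueue = new LinkedBlockingQueue<>();
    static final List<String> sentCalls = new CopyOnWriteArrayList<>();

    // Stand-in for the real web-service client (name is made up).
    static void callWebService(String payload) {
        sentCalls.add("POST /notify " + payload);
    }

    public static void main(String[] args) throws InterruptedException {
        // Worker thread: performs the external call outside any transaction,
        // so a slow or failing endpoint never blocks or rolls back a commit.
        Thread worker = new Thread(() -> {
            try {
                String payload;
                while ((payload = jobQueue.poll(1, TimeUnit.SECONDS)) != null) {
                    callWebService(payload);
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();

        // Inside the (simulated) transaction we only enqueue the task.
        jobQueue.put("orders-batch changed");

        worker.join();
        System.out.println(sentCalls); // [POST /notify orders-batch changed]
    }
}
```

In an Oracle context the same decoupling is what scheduling the call via a database job (rather than doing the callout directly in the trigger) buys you.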