I had to solve a similar problem many years ago, back in the days of PEAR DB. In that case, we needed to replicate data across multiple databases.
We did not have the added complication of different databases needing different mappings, though, so it was reasonably simple.
We built a facade over the DB class and overrode the query function (getResult, or whatever it was called). That function parsed the SQL: if it was a read, it sent the query to only one backend; if it was a write, it sent it to all of them.
This worked very well on a heavily used site.
With that background, I would suggest completely abstracting all of your save operations behind a facade. Once you have done that, the implementation details matter less and can be changed at any time.
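As a rough illustration of the facade idea, here is a minimal sketch. The class and method names are made up, the read/write detection is deliberately crude, and only the basic PEAR DB calls (DB::connect(), query()) are assumed:

```php
<?php
// Minimal sketch of a replicating facade, not production code.
require_once 'DB.php';

class ReplicatingDB
{
    private $primary;   // connection used for reads
    private $backends;  // every connection; all of them receive writes

    public function __construct($primaryDsn, array $replicaDsns)
    {
        $this->primary  = DB::connect($primaryDsn);
        $this->backends = array($this->primary);
        foreach ($replicaDsns as $dsn) {
            $this->backends[] = DB::connect($dsn);
        }
    }

    public function query($sql)
    {
        // Crude routing: SELECTs go to one backend, everything else to all.
        if (preg_match('/^\s*SELECT\b/i', $sql)) {
            return $this->primary->query($sql);
        }

        $result = null;
        foreach ($this->backends as $db) {
            // Error handling deliberately omitted; see the questions below.
            $result = $db->query($sql);
        }
        return $result;
    }
}
```

Application code then talks only to ReplicatingDB, so how (and where) the writes fan out can change later without touching the callers.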
From that point of view, any of your implementation ideas seems like a reasonable approach. There are a few things you will want to think about:
- What do you do if one of the backends returns an error?
- What is the performance impact of writing to three database servers?
- Can the writes be performed asynchronously? (If so, ask the first question again.)
There is a potentially different way to solve this: stored procedures. If you have a primary database server, you could write a trigger that, on commit (or thereabouts), connects to the other databases and synchronizes the data.
If the data does not have to be updated immediately, you could have the main database log the changes and run a separate script that regularly "feeds" that data to the other systems. Again, error handling needs to be considered.
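To make that second variant concrete, here is a rough sketch of such a feeding script, assuming the primary records each change in a hypothetical change_log table (the table, its columns, and the DSNs are all made up for illustration):

```php
<?php
// Rough sketch of a "log changes and ship them later" script.
require_once 'DB.php';

$source  = DB::connect('mysql://user:pass@primary/app');
$targets = array(
    DB::connect('mysql://user:pass@replica1/app'),
    DB::connect('mysql://user:pass@replica2/app'),
);

// Fetch statements that the primary has logged but not yet shipped.
$rows = $source->getAll(
    'SELECT id, sql_statement FROM change_log WHERE shipped = 0 ORDER BY id'
);

foreach ($rows as $row) {
    list($id, $sql) = $row;
    foreach ($targets as $db) {
        $db->query($sql);   // again, decide what to do on errors here
    }
    $source->query('UPDATE change_log SET shipped = 1 WHERE id = ' . (int) $id);
}
```

Run from cron, this keeps the replicas lagging by at most one polling interval, at the cost of having to decide how to handle a failed or partially applied batch.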
Hope this helps.