PHP DataMapper with Multiple Persistence Layers

I am writing a PHP system that needs to write to three persistence backends:

  • One web service
  • Two databases (one MySQL, one MSSQL)

The reason for this is legacy systems, and it cannot be changed.

I want to use the DataMapper pattern, and I am trying to establish the best way to achieve what I want. I have an interface like this, for example:

<?php $service = $factory->getService()->create($entity); ?> 

Below is some contrived code, shortened for brevity:

    <?php

    class Post extends AbstractService
    {
        protected $_mapper;

        public function create(Entity $post)
        {
            return $this->_mapper->create($post);
        }
    }

    class AbstractMapper
    {
        protected $_persistence;

        public function create(Entity $entity)
        {
            $data = $this->_prepareForPersistence($entity);
            return $this->_persistence->create($data);
        }
    }

My question is this: since there are three persistence backends, it is likely that three mappers will also be needed, one for each. I would like a clean, well-designed interface to make this work.

I see three options:

  • Inject three mappers into the service and call create() on each
  • Make $_mapper an array/collection and iterate through it, calling create() on each
  • Make $_mapper a container object that acts as a proxy, forwarding create() calls to each real mapper

Something feels wrong to me about each of these solutions, and I would appreciate any feedback or recognized design patterns that might fit. (A rough sketch of the third option follows, for concreteness.)
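
Here is a minimal sketch of that third option, a composite mapper that proxies create() calls to each real mapper. MapperInterface and CompositeMapper are names I have made up for illustration:

    <?php

    // Hypothetical sketch of option 3: a composite mapper that
    // proxies create() to every underlying mapper.
    interface MapperInterface
    {
        public function create(Entity $entity);
    }

    class CompositeMapper implements MapperInterface
    {
        /** @var MapperInterface[] */
        protected $_mappers;

        public function __construct(array $mappers)
        {
            $this->_mappers = $mappers;
        }

        public function create(Entity $entity)
        {
            $results = array();
            foreach ($this->_mappers as $mapper) {
                $results[] = $mapper->create($entity);
            }
            return $results;
        }
    }

    // The service keeps its single $_mapper and never knows
    // there are three backends behind it:
    // $mapper = new CompositeMapper(array(
    //     $webServiceMapper, $mysqlMapper, $mssqlMapper,
    // ));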

+8
oop php architecture datamapper
3 answers

I had to solve a similar problem, though many years ago, back in the days of PEAR DB. In that particular case, we needed to replicate data across multiple databases.

We did not have your complication of different databases needing different mappings, though, so our case was quite a bit simpler.

We built a facade over the DB class and overrode the getResult function (or whatever it was called). That function parsed the SQL: if it was a read, it sent the query to just one database; if it was a write, it sent it to all of them.

This worked really well on a very heavily used site.
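
The original PEAR DB code is long gone, but the idea would look roughly like this (class and method names are illustrative, not a real API):

    <?php

    // Hypothetical sketch of the read-one / write-all facade.
    // The wrapped backends are assumed to share a query() method.
    class ReplicatingDb
    {
        protected $_readDb;   // single backend used for reads
        protected $_writeDbs; // every backend, written in lockstep

        public function __construct($readDb, array $writeDbs)
        {
            $this->_readDb = $readDb;
            $this->_writeDbs = $writeDbs;
        }

        public function getResult($sql)
        {
            // Crude dispatch: reads go to one database...
            if (stripos(ltrim($sql), 'SELECT') === 0) {
                return $this->_readDb->query($sql);
            }
            // ...writes are replayed against all of them.
            $results = array();
            foreach ($this->_writeDbs as $db) {
                $results[] = $db->query($sql);
            }
            return $results;
        }
    }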

Coming from that background, I would suggest completely abstracting away all persistence operations behind a single interface. Once you have done that, the implementation details matter less and can be changed at any time.

From that point of view, any of your implementation ideas seems like a reasonable approach. There are various things you will want to think about (a rough error-handling sketch follows this list):

  • What do you do if one of the backends returns an error?
  • What is the performance impact of writing to three backends?
  • Can writes be performed asynchronously? (If so, ask the first question again.)
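
For the first of those questions, one possible shape (purely a sketch, with invented names) is to keep writing to the remaining backends and collect the failures, so the caller can decide whether to retry or compensate:

    <?php

    // Hypothetical sketch: attempt every backend and collect
    // failures instead of aborting on the first error.
    class CollectingCompositeMapper
    {
        protected $_mappers = array(); // name => mapper

        public function create(Entity $entity)
        {
            $errors = array();
            foreach ($this->_mappers as $name => $mapper) {
                try {
                    $mapper->create($entity);
                } catch (Exception $e) {
                    // Remember which backend failed; the caller
                    // decides whether to retry, compensate, or
                    // queue the write for later.
                    $errors[$name] = $e;
                }
            }
            if (!empty($errors)) {
                throw new RuntimeException(
                    'create() failed on: ' . implode(', ', array_keys($errors))
                );
            }
        }
    }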

There is a potentially different way to solve this problem: stored procedures and triggers. If you have a primary database server, you can write a trigger that, on commit (or thereabouts), connects to the other databases and synchronizes the data.

If the updates do not have to be immediate, you can have the main database log the changes and run another script that regularly “feeds” this data to the other systems. Again, error handling needs to be considered.
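
If you take that deferred route, the feeder could be as simple as a cron-driven script along these lines (the pending_changes table and the apply() method are made up for illustration):

    <?php

    // Hypothetical feeder, run from cron. Assumes the primary
    // database logs each write into a pending_changes table
    // (id, entity, payload), via triggers or application code.
    $pending = $primaryDb->query(
        'SELECT id, entity, payload FROM pending_changes ORDER BY id'
    );
    foreach ($pending as $row) {
        try {
            // Replay the change against the secondary system.
            $secondary->apply($row['entity'], json_decode($row['payload'], true));
            $primaryDb->query(
                'DELETE FROM pending_changes WHERE id = ' . (int) $row['id']
            );
        } catch (Exception $e) {
            // Leave the row in place so the next run retries it;
            // stop here to preserve the original write order.
            error_log('Sync failed for change ' . $row['id'] . ': ' . $e->getMessage());
            break;
        }
    }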

Hope this helps.

+2

First, a little terminology: what you are calling three layers are really three modules, not layers. That is, you have three modules inside the persistence layer.

Now, the basic premise of the problem is this: you MUST have three different pieces of storage logic, corresponding to the three different storage backends. That cannot be avoided. So the question becomes how to invoke a write operation on these modules (assuming you do not need to call all three for reads; if you do, that is a separate question anyway).

Of the three options you listed, in my opinion the first is the best, because it is the simplest. The other two still have to call the three modules separately, with the extra work of implementing a container or some other data structure on top. You cannot avoid calling the three modules either way.

If you go with the first option, you will obviously want to program against interfaces, so that the user/client (in this case, the service) sees a single abstraction; see the sketch below.

What I am saying is: 1. There is inherent complexity in your problem that you cannot simplify away. 2. The first option is better because the other two make things more complicated, not simpler.
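
A minimal sketch of that first option, with invented names and a shared MapperInterface, might look like this:

    <?php

    // Hypothetical sketch of option 1: the service receives three
    // mappers, each implementing the same interface.
    class Post extends AbstractService
    {
        protected $_webServiceMapper;
        protected $_mysqlMapper;
        protected $_mssqlMapper;

        public function __construct(
            MapperInterface $webServiceMapper,
            MapperInterface $mysqlMapper,
            MapperInterface $mssqlMapper
        ) {
            $this->_webServiceMapper = $webServiceMapper;
            $this->_mysqlMapper = $mysqlMapper;
            $this->_mssqlMapper = $mssqlMapper;
        }

        public function create(Entity $post)
        {
            $this->_webServiceMapper->create($post);
            $this->_mysqlMapper->create($post);
            return $this->_mssqlMapper->create($post);
        }
    }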

+1

I think option 2 is the best; I would go with that. If you had 10+ mappers, then option 3 would be advisable, moving the creation logic into the container object itself, but since you have a manageable number of mappers, it makes sense to simply inject them and iterate over them. Extending the functionality with another mapper then only adds one line to your dependency injection configuration; see the sketch below.
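
As a sketch (again with illustrative names), the service just loops over whatever mappers the DI container hands it:

    <?php

    // Hypothetical sketch of option 2: $_mappers is a plain array.
    class Post extends AbstractService
    {
        /** @var MapperInterface[] */
        protected $_mappers;

        public function __construct(array $mappers)
        {
            $this->_mappers = $mappers;
        }

        public function create(Entity $post)
        {
            $results = array();
            foreach ($this->_mappers as $mapper) {
                $results[] = $mapper->create($post);
            }
            return $results;
        }
    }

    // Adding a fourth backend is one more entry in the DI config:
    // $service = new Post(array(
    //     $webServiceMapper, $mysqlMapper, $mssqlMapper, // $newMapper,
    // ));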

0
