Replicating the main database to multiple slaves

I have a main database on a cloud server that holds data for multiple schools.

It has a dashboard that displays information about every school, where each school's information and other data can be edited.

Each of these schools also has a local server, deployed on site at the school.

The local server runs a dashboard that can only edit the specific school it is deployed at; its information and other data can be edited there as well.

Now I want to sync the cloud server to the appropriate school's local server whenever something changes, and likewise from the local server back to the cloud.

Note: if you have ever used Evernote, you can edit a note's contents on any device you use, and it still syncs when you are on the Internet or manually click sync.

When the local server has no Internet connection and some data is edited at the school, then once the Internet comes back up, the data on the local and cloud servers should be synchronized.

This is the logic that I am pursuing.


Can someone shed light on where to start? I could not come up with any solution that fits my problem.

I have also thought about using PHP to foreach-loop across the tables and compare data against the current date and time. But I know that would be really bad.

Edit: I removed the links to other SO questions/posts about this.

Applications I have found that behave the way I want are:

  • Evernote
  • Todoist

Servers:

  • Local server: a Windows 10 computer (deployed at each school)
  • Cloud server: probably dedicated hosting, managed with phpMyAdmin

Not to be picky, but please keep in mind that you are talking to a newbie when it comes to handling database replication. I have no experience with this.

+7
php mysql phpmyadmin database-replication
6 answers

When we did this, we would:

  • Make sure that every table we wanted to synchronize had datetime columns for Created, Modified, and Deleted. They would also have a boolean is_deleted column (instead of physically deleting records, we would set it to true and ignore those rows in queries). This means we could query for any records deleted since a certain time and return an array of those deleted IDs.
  • In each database (master and slaves), create a table that stores the last successful synchronization time. On the master this table holds several records, one per school; on a slave it just needs one record: the last time it synchronized with the master. (A minimal schema sketch follows this list.)
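A minimal sketch of those two points, assuming PHP/PDO against MySQL; the table and column names here are made up for illustration:

    <?php
    // Minimal sketch (hypothetical table/column names): give every synced table
    // the datetime columns and is_deleted flag described above, plus a table
    // that stores the last successful sync time.
    $pdo = new PDO('mysql:host=localhost;dbname=db_school1', 'user', 'pass');

    $pdo->exec("
        ALTER TABLE students
            ADD COLUMN created_at  DATETIME NOT NULL,
            ADD COLUMN modified_at DATETIME NULL,
            ADD COLUMN deleted_at  DATETIME NULL,
            ADD COLUMN is_deleted  TINYINT(1) NOT NULL DEFAULT 0
    ");

    // On the master this holds one row per school; on a slave, a single row.
    $pdo->exec("
        CREATE TABLE sync_state (
            school_id      INT PRIMARY KEY,
            last_synced_at DATETIME NOT NULL
        )
    ");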

In your case, every so often each of the slaves would:

  • Call a web service (URL) on the master, say helloMaster. It would pass in the school name (or some unique identifier), the time it last successfully synced with the master, and authentication credentials (for security), and await a response from the master saying whether the master has any updates for that school since the time provided. Really, the point here is just to confirm that the master is available and listening (i.e. the Internet connection is working).

  • The slave would then call another web service, say sendUpdates. It would pass the school name, the last successful sync time (plus the authentication credentials) and three arrays: the records added, updated and deleted since the last sync. The master simply acknowledges receipt. If receipt is acknowledged, the slave moves on to step 3; otherwise it retries step 1 after a pause of some duration. At this point the master has the slave's updates. Note: you need to decide how the master merges records if they conflict with other pending slave updates.

  • The slave would then call another web service, say getUpdates. It would pass the school name, the last successful sync time and the authentication credentials, and the master would return three arrays: the records added, updated and deleted that the slave should apply to its own database.

  • Finally, once the slave has attempted to apply those updates, it notifies the master of success or failure through one more web service, say updateStatus. On success, the master returns a new sync date for the slave to store (it will exactly match the date the master stores in its own table). On failure, the error is logged on the master and we go back to step 1 after a pause. (A rough PHP sketch of this whole cycle follows this list.)
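A rough sketch of that slave-side cycle, assuming PHP with cURL. The endpoint names (helloMaster, sendUpdates, getUpdates, updateStatus) come from the steps above; the URL, payload shapes and credentials are hypothetical:

    <?php
    // Sketch of the slave-side sync cycle described above; adapt the URL,
    // payloads and credentials to your actual API.
    function callMaster(string $endpoint, array $payload): array {
        $ch = curl_init("https://master.example.com/api/$endpoint");
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => json_encode($payload),
            CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_TIMEOUT        => 10,
        ]);
        $response = curl_exec($ch);
        curl_close($ch);
        return $response === false ? [] : (json_decode($response, true) ?? []);
    }

    $auth     = ['school' => 'school1', 'token' => 'secret']; // hypothetical credentials
    $lastSync = '2024-01-01 00:00:00'; // read this from the slave's sync table

    // Step 1: is the master reachable, and does it have updates for us?
    $hello = callMaster('helloMaster', $auth + ['last_sync' => $lastSync]);
    if (!isset($hello['ok'])) {
        exit; // master unreachable (no Internet?) -- try again later
    }

    // Step 2: push local changes made since the last sync.
    $receipt = callMaster('sendUpdates', $auth + [
        'added'   => [], // fill these three arrays from local queries on the
        'updated' => [], // created/modified/deleted columns described above
        'deleted' => [],
    ]);

    if (isset($receipt['ok'])) {
        // Step 3: pull the master's changes and apply them to the local DB.
        $updates = callMaster('getUpdates', $auth + ['last_sync' => $lastSync]);
        $applied = true; // apply $updates['added'|'updated'|'deleted'] here

        // Step 4: report the outcome; on success store the master's sync date.
        $status = callMaster('updateStatus', $auth + ['success' => $applied]);
        if (isset($status['new_sync_date'])) {
            // persist $status['new_sync_date'] as the new last-sync time
        }
    }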

I have glossed over the details of error handling, getting accurate times across all the devices (there may be different time zones), and a few other bits, but that's the gist of it.

I can clarify things further after thinking about it more (or others can edit my post).

Hope this helps at least.

+3

I suggest you go with the trivial solution, which for me would be:

  • Create an SQLite or other database (MySQL, or your choice) on the local server
  • Keep an always-running thread that pings (makes an API call to) your main database every 5 minutes (tune the interval to how much lag you can accept)
  • This thread lets you determine whether you are connected to the Internet or not
  • If connected to the Internet:

    a) Push local changes in a request to the main server. This main server is an application server that applies the changes made on the school's local machines (the ones received via the API call) to the main database, after whatever checks your application requires.

    b) Receive updated changes from the server in the API response. These changes are served after conflict resolution (for example, if the data on the school server was updated earlier than the same data in the main database, you reconcile them according to your requirements).

  • If you are not connected to the Internet, save the changes to the local database and reflect them in the application running at the school; when you reconnect, push those changes to the main server and pull back the applicable changes from it. (A minimal sketch of such a polling script follows this list.)
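A minimal sketch of that always-running check, assuming PHP run as a long-lived CLI script (PHP has no lightweight threads, so a dedicated process or cron job stands in for one); the ping URL and five-minute interval are placeholders:

    <?php
    // Sketch of the polling approach above: every 5 minutes, check whether the
    // main server is reachable; push/pull changes if online, otherwise let
    // local edits keep queuing up. The ping URL is hypothetical.
    function isOnline(): bool {
        $ch = curl_init('https://main-server.example.com/api/ping');
        curl_setopt_array($ch, [CURLOPT_RETURNTRANSFER => true, CURLOPT_TIMEOUT => 5]);
        $ok = curl_exec($ch) !== false;
        curl_close($ch);
        return $ok;
    }

    while (true) {
        if (isOnline()) {
            // a) push changes queued in the local database to the main server
            // b) pull the server's changes, resolving conflicts (e.g. by
            //    comparing modified timestamps) before applying them locally
        }
        // If offline, local edits simply accumulate in the local database.
        sleep(300); // 5 minutes
    }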


This is difficult to do on your own, but if the scale is small I would prefer implementing my own APIs for the applications' databases to connect this way.


The best solution would be to use Google Firebase, a real-time database that updates asynchronously whenever there is a change on any machine, but it may cost you more than you really need. It does give you real-time, Evernote-like editing for your database systems.

+3

This is not a problem that can be solved by database replication.

In general, database replication can work in one of two modes:

  • Master / slave replication, as used by MySQL. In this mode, all writes must be directed to a single "master" server, and all replica databases receive a stream of changes from the master.

    This does not meet your needs, as writes can only be made on the master. (Writing directly to one of the replicas would cause it to fall out of sync with the master.)

  • Quorum-based replication, as used by some newer databases. All database replicas connect to one another. As long as at least half of all replicas are connected (i.e. the cluster has "quorum"), writes can be made to any of the active databases and will be propagated to all the others. A database that was disconnected catches up when it rejoins the quorum.

    This is also unsuitable for your needs, as a disconnected replica cannot accept writes. Worse, losing more than half of the replicas would prevent writes to any of the remaining databases.

You need some kind of data synchronization solution. Any solution will require some logic, which you will have to write, to resolve conflicts. (For example, if a record was changed in the main database while the local school replica was disconnected, and the same record was also changed there, you will need some way to reconcile those differences.) A deliberately simple example of such a policy is sketched below.
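As an illustration only, a "last write wins" policy sketched in PHP; it assumes every record carries a modified_at timestamp, and note that it silently discards the older edit, which real requirements may not tolerate:

    <?php
    // Toy "last write wins" merge for a record edited both on the master and
    // on an offline school replica. One possible policy among many -- it
    // silently discards the older of the two edits.
    function resolve(array $masterRow, array $localRow): array {
        return strtotime($masterRow['modified_at']) >= strtotime($localRow['modified_at'])
            ? $masterRow
            : $localRow;
    }

    $winner = resolve(
        ['id' => 7, 'name' => 'School A',           'modified_at' => '2024-05-01 10:00:00'],
        ['id' => 7, 'name' => 'School A (renamed)', 'modified_at' => '2024-05-02 09:30:00']
    );
    // $winner is the local row, because it was modified later.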

+2

No complicated setup or API needed. MySQL makes it easy to replicate your database, and will ensure correct and timely replication whenever the Internet is available (and fast, too).

There are two modes:

  • Master - Slave: the master takes edits, the slave only reads; in other words, one-way synchronization from master to slave.
  • Master - Master: master1 and master2 both take reads and edits; in other words, two-way synchronization. Both servers push and pull updates.

Assuming your cloud server has a schema for each school, and each schema is accessible with its own username and password, i.e. db_school1, db_school2:

You then have the ability to replicate only the selected database schema from your cloud server down to the on-premises master. In your case, school one's local master would replicate db_school1.

If you want to replicate only specific tables, MySQL also has the replicate-do-table option for that, for example:
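A slave's my.cnf might then contain something like this (the schema and table names are placeholders):

    # replicate only one school's schema, or even a single table within it
    replicate-do-db    = db_school1
    replicate-do-table = db_school1.students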

The actual replication setup is very simple, but it can get quite deep if you have different scenarios.

A few things you will want to take note of: server IDs; different auto-increment offsets on each server to avoid conflicts on new records (e.g. master1 generates odd-numbered IDs, master2 even-numbered ones, so there are no duplicate primary key problems); and alerting/monitoring for servers going down or skipped errors.

I'm not sure whether you are on Linux or Windows; I wrote a simple C# application that checks whether any of the masters has stopped replicating for any reason and sends an email. Monitoring is critical!

Here are some links on master - master replication: https://www.howtoforge.com/mysql_master_master_replication

https://www.digitalocean.com/community/tutorials/how-to-set-up-mysql-master-master-replication

Also worth reading, this table-level replication information: https://dba.stackexchange.com/questions/37015/how-can-i-replicate-some-tables-without-transferring-the-entire-log

Hope this helps.

+1

Edit:

The original version of this answer suggested MongoDB, but with further reading MongoDB is not so reliable over dodgy Internet connections. CouchDB is designed for the offline-first documents you need, though unfortunately it is harder to get going than MongoDB.


Original:

I would suggest not using MySQL but instead deploying a document store designed for replication, such as CouchDB, unless you go for MySQL's commercial clustering services.

As a lover of the power of MySQL, it's hard for me to suggest you use something else, but in this case you really need to.

Here's why.

Problems Using MySQL Replication

While MySQL has good replication (and is most likely what you should use if you are set on synchronizing MySQL databases, as recommended by others), there are some things to keep in mind.

  • "Unique key" collisions will give you a massive headache; the most likely cause is auto-incrementing IDs, which are common in MySQL applications. (Don't use them for synchronization unless there is a clear read+write → read-only relationship, which is not your case.)
  • Primary keys must be generated by each server yet be unique across all servers. One option is combining a server identifier with an identifier unique to that server (Server1_1, Server1_2, Server1_3, etc. will not collide with Server2_1); a trivial sketch follows this list.
  • MySQL only supports one-way synchronization out of the box, unless you look at their clustering solutions ( https://www.mysql.com/products/cluster/ ).
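A trivial sketch of that composite-key idea in PHP (the prefix scheme is just the one suggested above):

    <?php
    // Generate primary keys that are unique across servers by prefixing the
    // server's identifier to a per-server counter, as suggested above.
    function nextKey(string $serverId, int $localCounter): string {
        return $serverId . '_' . $localCounter;
    }

    echo nextKey('Server1', 1); // "Server1_1" -- can never collide with "Server2_1"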

Problems with manual timestamping

Another answer recommends keeping "updated at" timestamps on your records. While I have used this approach, there are some big problems to be careful of.

Problems with manual journaling

Journaling keeps a separate record of what changed and when: "database X, table Y, field Z was updated to A at time B", or "a new record was added to table A with this data [...]". This gives you much better control over what to update (a bare-bones sketch of such a journal table follows the list below).

  • If you look at database synchronization methods, this is actually what happens behind the scenes; in MySQL's case, it keeps a binary update log.
  • You only share the journal, not the original records.
  • When another server receives a journal entry, it has a much fuller picture of what happened before/after, and can replay the updates to ensure you end up with the correct data.
  • Problems arise when the journal and the database fall out of sync (MySQL is an actual pain when this happens!). You need an "update" script ready to roll, outside the journal, that will re-synchronize the database with the master.
  • It's complicated. Which is why...
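For what it's worth, a bare-bones sketch of what such a hand-rolled journal could look like, assuming PHP/PDO and MySQL 5.7+ (for the JSON column); all names are hypothetical:

    <?php
    // Bare-bones journal (change log) table, plus recording one update in it.
    // A peer server replays unseen entries instead of diffing whole tables.
    $pdo = new PDO('mysql:host=localhost;dbname=db_school1', 'user', 'pass');

    $pdo->exec("
        CREATE TABLE IF NOT EXISTS journal (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            table_name VARCHAR(64) NOT NULL,
            row_id     VARCHAR(64) NOT NULL,
            action     ENUM('insert','update','delete') NOT NULL,
            payload    JSON NOT NULL,
            changed_at DATETIME NOT NULL
        )
    ");

    // 'table Y, field Z was updated to A at time B', as a journal entry:
    $stmt = $pdo->prepare(
        "INSERT INTO journal (table_name, row_id, action, payload, changed_at)
         VALUES (?, ?, ?, ?, NOW())"
    );
    $stmt->execute(['students', '42', 'update', json_encode(['name' => 'New Name'])]);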

Solution: use a document store designed for replication, e.g. CouchDB

With all this in mind, why not use a document store that already does all of this for you? CouchDB supports and handles all the journaling and synchronization.

There are others, but I believe you will have fewer headaches and mistakes with it than with the other solutions.

+1

Master - master replication in MySQL can be run without serious disruption when using auto_increment. Here is a link that explains how.

If you have tables without primary keys, I'm not sure what will happen (I always include auto_increment primary keys in my tables).

http://brendanschwartz.com/post/12702901390/mysql-master-master-replication

The auto-increment-increment and auto-increment-offset settings stagger the auto_increment values on each server, as shown in the configuration samples from the article:

    # server 1 (my.cnf)
    server_id = 1
    log_bin = /var/log/mysql/mysql-bin.log
    log_bin_index = /var/log/mysql/mysql-bin.log.index
    relay_log = /var/log/mysql/mysql-relay-bin
    relay_log_index = /var/log/mysql/mysql-relay-bin.index
    expire_logs_days = 10
    max_binlog_size = 100M
    log_slave_updates = 1
    auto-increment-increment = 2
    auto-increment-offset = 1

    # server 2 (my.cnf)
    server_id = 2
    log_bin = /var/log/mysql/mysql-bin.log
    log_bin_index = /var/log/mysql/mysql-bin.log.index
    relay_log = /var/log/mysql/mysql-relay-bin
    relay_log_index = /var/log/mysql/mysql-relay-bin.index
    expire_logs_days = 10
    max_binlog_size = 100M
    log_slave_updates = 1
    auto-increment-increment = 2
    auto-increment-offset = 2
+1
