Read my extended answer for the edited question below.
The most trivial and naive approach would be to create a script that simply runs rsync once for each server you want to synchronize.
This is fine in most cases, but I don't think it is what you are looking for, since you could have figured that out yourself ...
This method also has the following disadvantages:
One server sends all the traffic and there is no cascading, so it is a single point of failure and a bottleneck.
It is quite inefficient. Rsync is a great tool, but building the file list and checking for differences is not very fast when you want to synchronize hundreds of servers.
But what can you do?
Setting up rsync for multiple servers is by far the easiest way, so you should start with that and optimize wherever your problems actually show up.
You can speed it up, for example, by choosing a suitable file system. XFS will probably be something like 50 times faster than Ext3 here.
You can also use unison, which is a more powerful tool and caches the file list.
You can also set up a cascade (server A syncs to server B, which syncs to server C).
You can also set up pulling instead of pushing to your clients. You could have a subdomain that is the entry point to a load balancer, behind which sit one or more servers that the clients pull from, while you only push to those source servers.
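A hypothetical sketch of the pull setup - every name here (sync.example.com, the paths, the schedule) is made up:

```shell
# Client-side crontab entry: each client pulls from a load-balanced
# source subdomain every 5 minutes, instead of the source pushing out.
*/5 * * * * rsync -az --delete deploy@sync.example.com:/var/www/project/ /var/www/project/
```

Adding more pull servers behind the balancer then scales things up without touching the clients.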
The reason I am telling you all this is that there is no perfect way, you have to understand this depending on your needs.
However, I definitely recommend looking into git.
Git is a version control system that is very efficient and effective.
You can create a git repository, push to it from the source, and pull from it on your client machines.
It works very well and efficiently, and it is flexible and scalable, so you can build almost any structure on top of it, including distributed setups, cascades, load balancing, etc.
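As a rough sketch of the idea, with made-up directory names (the local clones stand in for client machines; real clients would clone over ssh or http):

```shell
#!/bin/sh
# Source repository: where your content lives and gets committed.
mkdir -p source && cd source
git init -q .
echo "<h1>hello</h1>" > index.html
git add index.html
git -c user.email=demo@example.com -c user.name=demo commit -q -m "initial content"
cd ..

# Each client clones once; afterwards a cron job running "git pull"
# in the clone is all the synchronization they need.
git clone -q source client1
git clone -q source client2
```

Because git already knows what changed and transfers compressed deltas, a pull is much cheaper than an rsync scan over a big tree.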
I hope I have given you a few pointers in the right direction that you can explore.
Edit:
It looks like you want to synchronize changes on the same server - maybe even on the same hard drive (you didn't say, but it matters a lot for which features you can use).
Well, basically it doesn't matter: insert, overwrite, delete ... Rsync is a great tool for this too, as it transfers changes incrementally - it doesn't just "resume broken transfers".
But I would say that it depends entirely on the content.
If you have many small files - templates, javascript, and so on - rsync can be quite slow. It might even be faster to delete the target folder completely and copy the files over fresh, so that rsync (or any other tool) does not have to check every single file for changes.
You could also just copy everything over with cp -rf so that everything gets overwritten, but then you might be left with old files on the target that were deleted from the source.
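A small local demo of that difference - rsync's --delete option removes files on the target that no longer exist in the source, which plain cp -rf cannot do (directory names are made up):

```shell
#!/bin/sh
# Demo setup: a source tree plus two targets that both contain a stale file.
mkdir -p src dst_cp dst_rsync
echo "new" > src/new.txt
echo "old" > dst_cp/stale.txt
echo "old" > dst_rsync/stale.txt

# Plain overwrite copy: the stale file survives on the target.
cp -rf src/. dst_cp/

# rsync with --delete removes whatever is missing from the source.
rsync -a --delete src/ dst_rsync/
```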
I also know of many setups where such things are done with Subversion, because people feel they have more control or something - I don't know. It is also more flexible.
However, there is one thing you should think about:
There is the concept of shared data.
There are symbolic links and hard links.
You can create them for files and folders (hard links only for files - I don't know why).
If you put a symlink A on target B, the file appears under the symlink's location and name, but the resource behind it is actually somewhere else entirely. Applications MAY treat the two differently, though. For example, Apache must be configured to follow symlinks (otherwise it becomes a security issue).
So if everything sits in one folder, you can simply put a symbolic link with that folder's name pointing to the real folder, and you no longer have to worry about synchronization at all, because both names refer to the same resource.
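A minimal sketch of that trick, with made-up folder names:

```shell
#!/bin/sh
# One real folder holds the data ...
mkdir -p shared
echo "same content" > shared/config.txt

# ... and a symlink makes it appear under another name/location.
ln -s "$(pwd)/shared" site_data

# Writing through either name hits the very same resource,
# so there is nothing left to synchronize.
echo "updated" > site_data/config.txt
```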
However, there are reasons why you would not want to do this:
They look different. - This sounds absurd, but it is actually the most common reason people dislike symlinks. People complain because they "look so weird in their program" or similar.
Symlinks are limited in some ways, but in return they have other huge advantages, like crossing file system boundaries. Almost all of their drawbacks can be handled and worked around in your application. The unfortunate truth is that symbolic links are a fundamental feature of Linux operating systems and file systems, yet their existence is sometimes forgotten when an application is developed. It is like designing a train but forgetting that there are people with long legs, or something like that.
Hard links, on the other hand, really look like files, because they are files.
And every hard link pointing to one file IS that same file.
It sounds strange, but think about it this way:
Every file is some data on the disk. Then there is an inode entry, inside some directory, with some name, pointing to that data.
Hard links are simply additional "listings" of a file.
As a result, they share the same read locks and get modified/deleted/etc. together.
However, this of course only works within a single file system/device, not across mount points.
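To see that both names really are the same file (made-up file names):

```shell
#!/bin/sh
# Create a file and a second hard link to it.
echo "v1" > original.txt
ln original.txt alias.txt

# Both directory entries point to the same inode, so writing
# through one name is immediately visible through the other.
echo "v2" > alias.txt
```

After this, reading original.txt gives "v2", and ls -i shows the identical inode number for both names.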
Links have some great advantages, and they are quite obvious:
You have no duplicated data. That removes the possibility of inconsistency, you do not need to update multiple copies, and you need less disk space.
There is, however, one point of much greater importance:
For example, say you run several websites and all of them use the Zend Framework.
That framework is a huge shitload of files, and with opcode caching it will fill something like 50 megabytes of your RAM or so.
If you share the same Zend library folder between all your sites, you only need that once.
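So a setup like the following - all paths made up - gives every site the same physical library with zero synchronization:

```shell
#!/bin/sh
# One shared copy of the library ...
mkdir -p shared-zend site1 site2
echo "<?php // Zend code" > shared-zend/Loader.php

# ... symlinked into each site's tree.
ln -s "$(pwd)/shared-zend" site1/Zend
ln -s "$(pwd)/shared-zend" site2/Zend
```

Update the shared copy once and every site sees the new version immediately.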