File synchronization across multiple servers

In a farm environment, is there a preferred method for keeping files that are created / updated / deleted on one server in sync with all the other servers in the farm? For example, if a user creates a file on server A and another user requests it on server B, what is the best way to make the file available on both servers at roughly the same time? Does the same answer hold for a large farm (1000+ servers)?

Although my specific question is about Windows servers, platform-agnostic answers are preferred.

2 answers

You can use Distributed File System (DFS), which is built into the Windows Server OS. I used it to achieve a similar goal.

Essentially, you configure DFS to create a root, which is really just a URI. You can create a `\\DOMAIN\SHARE` path that looks like an ordinary share, although it is virtual; DFS uses the DNS domain to present it as a valid location. Under the root you create links, which are simply paths to physical shared folders on any number of servers; these are the equivalent of subdirectories under your root. Finally, each link can have several targets. In your example, that would be a share on each of the machines. DFS then replicates the files in those folders across all the paths specified as targets, using the File Replication Service.
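To make the root/link/target structure concrete, here is a sketch using the DFS Namespaces and DFS Replication PowerShell cmdlets. The server names (`FS01`, `FS02`), domain (`CONTOSO`), share names, and content paths are all placeholders, and this uses DFSR rather than the older File Replication Service; treat it as an outline of the steps, not a drop-in script.

```shell
# Create the namespace root: the virtual \\DOMAIN\SHARE path.
# "\\FS01\FilesRoot" must already exist as a real share on FS01.
New-DfsnRoot -Path "\\CONTOSO\Files" -TargetPath "\\FS01\FilesRoot" -Type DomainV2

# Create a link (folder) under the root, then add a second target on another server.
New-DfsnFolder       -Path "\\CONTOSO\Files\Projects" -TargetPath "\\FS01\Projects"
New-DfsnFolderTarget -Path "\\CONTOSO\Files\Projects" -TargetPath "\\FS02\Projects"

# Set up DFS Replication so the two targets stay in sync.
New-DfsReplicationGroup -GroupName "ProjectsRG"
New-DfsReplicatedFolder -GroupName "ProjectsRG" -FolderName "Projects"
Add-DfsrMember     -GroupName "ProjectsRG" -ComputerName FS01,FS02
Add-DfsrConnection -GroupName "ProjectsRG" `
    -SourceComputerName FS01 -DestinationComputerName FS02
Set-DfsrMembership -GroupName "ProjectsRG" -FolderName "Projects" `
    -ComputerName FS01 -ContentPath "D:\Projects" -PrimaryMember $true
Set-DfsrMembership -GroupName "ProjectsRG" -FolderName "Projects" `
    -ComputerName FS02 -ContentPath "D:\Projects"
```

After this, clients read and write `\\CONTOSO\Files\Projects` and DFS directs them to whichever target is available, while replication keeps the two physical folders converged.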

It works very well for the two servers I have. I don't know how well it scales to 1000 replicated servers; it is an enterprise-level product, but I'm not sure administering that many machines would be viable. Since DFS abstracts the physical location, you probably wouldn't replicate at that scale anyway, but rather use it as a naming service: the path stays constant regardless of which server actually holds the files.

Other caveats: you must have the File Replication Service installed, and I believe you also need a domain environment for this to work properly.


On the Linux side, you might be interested in DRBD.
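DRBD replicates at the block-device level (typically in a primary/secondary pair) rather than at the file level like DFS. A minimal resource definition looks roughly like the following sketch; the resource name, hostnames, devices, and addresses are placeholders you would adapt to your own setup.

```
# /etc/drbd.d/r0.res -- hypothetical two-node DRBD resource
resource r0 {
    device    /dev/drbd0;    # replicated block device exposed to the OS
    disk      /dev/sdb1;     # backing partition on each node
    meta-disk internal;

    on alpha {
        address 10.0.0.1:7789;
    }
    on bravo {
        address 10.0.0.2:7789;
    }
}
```

You then put a filesystem on `/dev/drbd0` and mount it on the primary node; writes are mirrored to the peer over the network.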

