In-graph replication vs. between-graph replication

I am trying to understand the difference between in-graph replication and between-graph replication as described in the distributed TensorFlow documentation, especially how data is synchronized across multiple devices.

I understand that with in-graph replication, each worker does not keep its own local copy of the model, which apparently is the main difference from between-graph replication.

Therefore, with in-graph replication, each input tensor of every operation is a connection to wherever the variable is stored (possibly on another machine acting as the parameter server). With between-graph replication, parameters are pulled in batches to keep all replicas synchronized.
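To make my mental model concrete, here is a toy sketch of the data flow I have in mind (the `ParameterServer` class and its `pull`/`push` methods are invented for illustration, not real TensorFlow API): every read of a variable crosses over to the PS at execution time.

```python
# Hypothetical illustration (not real TensorFlow API): a toy parameter
# server that a worker pulls variables from on every read.
class ParameterServer:
    def __init__(self, params):
        self.params = dict(params)  # variable name -> value

    def pull(self, name):
        # Each read crosses the "network" to the PS.
        return self.params[name]

    def push(self, name, delta):
        # Each update is sent back to the PS.
        self.params[name] += delta


ps = ParameterServer({"w": 1.0})

# In-graph style: one driver builds the whole computation; every op that
# reads "w" fetches it from the PS at execution time.
def worker_step(ps, grad, lr=0.1):
    w = ps.pull("w")          # remote read per op
    ps.push("w", -lr * grad)  # remote write of the update
    return w

w_before = worker_step(ps, grad=2.0)
print(w_before)       # 1.0 (value read before the update)
print(ps.pull("w"))   # ~0.8 after the update
```

This is only meant to illustrate the question: if this picture is right, the number of PS round-trips grows with the number of variable-reading operations.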

Is this interpretation correct?
Does this mean that with in-graph replication, data is fetched from the PS for every operation?
Is this pull synchronous or asynchronous?
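To clarify the last question, here is a toy sketch (invented names, not TensorFlow API) of what I mean by synchronous vs. asynchronous: whether worker updates are applied independently as they arrive, or accumulated and applied as one combined step.

```python
# Toy sketch contrasting asynchronous and synchronous parameter updates.
# The VersionedPS class is invented for illustration; in TensorFlow 1.x,
# synchronous training is typically done with tf.train.SyncReplicasOptimizer,
# while plain per-worker updates are asynchronous.
class VersionedPS:
    def __init__(self, value):
        self.value = value
        self.version = 0

    def apply(self, delta):
        self.value += delta
        self.version += 1


# Asynchronous: each worker applies its update as soon as it is ready,
# so workers may observe different parameter versions between steps.
ps_async = VersionedPS(0.0)
for worker_delta in (1.0, 2.0, 3.0):
    ps_async.apply(worker_delta)
print(ps_async.value, ps_async.version)  # 6.0 3  (three independent updates)

# Synchronous: updates are accumulated and applied as one averaged step,
# so every worker next reads the same version.
ps_sync = VersionedPS(0.0)
deltas = (1.0, 2.0, 3.0)
ps_sync.apply(sum(deltas) / len(deltas))
print(ps_sync.value, ps_sync.version)    # 2.0 1
```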
