When using rsync in high-latency, high-bandwidth environments, your data transfer speed will be much lower [1] than your available bandwidth. In the example above, the expected transfer rate is 56.25 KiB/s, less than 10% of the available bandwidth.
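As a rough sanity check on that claim: a single TCP stream can move at most one window of data per round trip. A minimal sketch with assumed, hypothetical numbers (a 64 KiB window and a 500 ms round trip, not the actual figures from this example):

```shell
#!/bin/sh
# Hypothetical numbers for illustration; not measurements from this article.
WINDOW_BYTES=65536   # assume a 64 KiB TCP window
RTT_MS=500           # assume a 500 ms round-trip time
# One TCP stream moves at most one window per round trip:
CEILING=$(( WINDOW_BYTES * 1000 / RTT_MS ))      # bytes per second
echo "single stream:      $(( CEILING / 1024 )) KiB/s"
echo "8 parallel streams: $(( CEILING * 8 / 1024 )) KiB/s"
```

Opening N connections multiplies the amount of unacknowledged data that can be in flight at once, which is exactly what running N processes in parallel exploits.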
One solution is to run N rsync processes in parallel:
#!/bin/bash
# Archive and compress the files, and record a checksum for later verification
tar -cvzf x.tar ${list_of_files}
md5sum x.tar > x.tar.md5sum
# Split the archive into N chunks named x.tar.1 .. x.tar.N (assumes N <= 9)
split --numeric-suffixes=1 -a 1 -n ${N} x.tar x.tar.
# Transfer the chunks in parallel, one rsync process per chunk
for ((i=1;i<=N;i++)); do rsync -avzh x.tar.${i} ${destination} & done
# Fail if any of the background transfers failed
fail=0
for pid in $(jobs -p); do wait ${pid} || fail=1; done
[ ${fail} -eq 0 ] && echo "success" || { echo "fail"; exit 1; }
# Reassemble the chunks on the destination, then verify the checksum
scp x.tar.md5sum ${destination}
ssh ${destination_machine} "cd ${path} && cat x.tar.[1-9] > x.tar && md5sum -c x.tar.md5sum && echo 'PASS (files verified with md5sum)' || { echo 'FAIL (file verification failed md5sum)'; exit 1; }"
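To see the split/reassemble/verify round trip work end to end, here is a minimal local dry run; a scratch directory stands in for the remote side and plain `cp` stands in for the parallel rsync transfers (all file names here are illustrative; GNU coreutils assumed for `split -n` and `md5sum`):

```shell
#!/bin/sh
# Local dry run of the idea above: split, "transfer", reassemble, verify.
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf 'some payload data' > payload
tar -czf x.tar payload
md5sum x.tar > x.tar.md5sum
split --numeric-suffixes=1 -a 1 -n 3 x.tar x.tar.   # x.tar.1 x.tar.2 x.tar.3
mkdir dest
cp x.tar.[1-9] x.tar.md5sum dest/    # stand-in for the N parallel rsyncs
( cd dest && cat x.tar.[1-9] > x.tar && md5sum -c x.tar.md5sum )
```

The final line prints `x.tar: OK` when the reassembled archive matches the checksum taken before splitting.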
[1] Why is your transfer rate slow in this example?
In a word: the bandwidth-delay product (okay, three words).
rsync transfers everything over a single TCP connection (whether it runs over ssh or talks to an rsync daemon, both ride on TCP). TCP allows at most one window of unacknowledged data in flight: the sender pushes out a window's worth of bytes, then stalls until an ACK comes back. On a high-latency path, most of the time is spent waiting for ACKs, so one connection's throughput is capped at roughly window size / RTT, no matter how much bandwidth the link has.
That cap is where the 56.25 KiB/s figure comes from: less than 10% of the available bandwidth. Running several rsync processes in parallel helps because each connection gets its own window in flight.
Note 1:
There are transfer tools that avoid TCP entirely; a Google search turns up, for example, uftp, which uses UDP instead of TCP.
Note 2:
Since rsync runs over TCP, another option is to enlarge the TCP window so that a single connection can span the bandwidth-delay product.
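On Linux, the window a TCP connection can grow to is bounded by the kernel's socket-buffer limits, so enlarging the TCP window comes down to raising those limits. A sketch of inspecting them; the raised values below are illustrative assumptions, not recommendations:

```shell
# Inspect current TCP buffer limits (min / default / max, in bytes):
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# Illustrative: raise the maximums to 16 MiB so the window can cover a
# larger bandwidth-delay product (requires root, Linux only):
# sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
# sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'
```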