I've read this popular question here on the site:

How to copy a large number of files quickly between two servers

Most answers suggest methods that utilize only a single network connection for the file transfer.

Now, suppose "fairness" to other users does not matter (e.g. I'm copying out of work hours).

How can we leverage multiple connections to improve the transfer speed?

Please suggest concrete solutions.

My personal experience seems to indicate that single-connection file transfers often do not utilize the full bandwidth supposedly available on a network, especially a LAN. But maybe this is just an urban legend?

Note: My particular interest is copying multiple files between Linux systems, most of which are quite large, in case that has a bearing on your answer.

1 Answer


Yes. Multi-stream tools can improve throughput when a single TCP flow underutilizes bandwidth. Options:

  • bbcp: bbcp -s 8 /file user@host:/dest/
  • rclone: rclone copy /src remote:/dst --transfers 8
  • rsync: run several instances in parallel over subdirectories or files, e.g. with GNU parallel (see the sketch after this list).
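For the rsync case, a minimal sketch assuming GNU parallel is installed; /src and user@host:/dest/ are placeholder paths:

    # One rsync process per top-level entry in /src, up to 8 at a time.
    find /src -mindepth 1 -maxdepth 1 -print0 |
      parallel -0 -j8 rsync -a {} user@host:/dest/

This parallelizes across files and directories rather than within a single transfer, so it helps most when the source tree contains several large entries.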

To copy a single large file, split it into chunks, copy the chunks in parallel, and reassemble on the destination (see the sketch below).
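A minimal sketch of the split-and-reassemble approach, assuming GNU coreutils split; bigfile, user@host, and /dest/ are placeholder names:

    # Split into 8 numbered chunks, push each over its own scp
    # connection, then reassemble and verify on the destination.
    split -n 8 -d bigfile bigfile.part.
    for p in bigfile.part.*; do
      scp "$p" user@host:/dest/ &
    done
    wait
    # On the destination:
    #   cat bigfile.part.* > bigfile
    #   sha256sum bigfile   # compare against the source's checksum

Each backgrounded scp is a separate TCP connection, which is what buys the extra throughput.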

Also check TCP tuning (buffer/window sizes), and confirm that disk I/O is not the bottleneck.
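Rough checks and example-only tuning values (the host, path, and buffer sizes below are placeholders, not recommendations):

    iperf3 -c host -P 8              # raw network throughput, 8 parallel streams
    dd if=/path/to/file of=/dev/null bs=1M status=progress   # source disk read speed
    # Larger TCP buffers can help on high-latency links:
    sudo sysctl -w net.core.rmem_max=67108864
    sudo sysctl -w net.core.wmem_max=67108864
    sudo sysctl -w net.ipv4.tcp_rmem='4096 87380 67108864'
    sudo sysctl -w net.ipv4.tcp_wmem='4096 65536 67108864'

If iperf3 with multiple streams saturates the link while a single stream does not, multi-connection copying will help; if the dd read speed is the limit, it will not.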
