The bigger problem is that there is no standard data format or standard tool for moving data bigger than 5 GB. (This includes aws s3, which is not an industry standard.)
Whoever builds the industry standard will get decision-making power over your specific issue.
In my bioinformatics work I often stream files between Linux hosts and Amazon S3. This could look like:
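Something along these lines, where the host, file, and bucket names are stand-ins rather than the real ones:

    # scp writes the remote file to stdout; "-" tells the AWS CLI to read
    # the upload from stdin, so nothing has to touch the local disk
    $ scp otherhost:big.file /dev/stdout | aws s3 cp - s3://bucket/big.file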
This recently stopped working after upgrading:
I think I figured out why this is happening:
New versions of scp use the SFTP protocol instead of the SCP protocol. [1] SFTP may not download sequentially, which matters here because the destination is a pipe, and a pipe can only accept writes in order.
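A quick way to see whether a particular machine has crossed that threshold is to check its OpenSSH version:

    # OpenSSH 8.8 and later (8.7 in Red Hat/Fedora builds) default scp to SFTP [1]
    $ ssh -V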
With scp I can give the -O flag, which uses the original SCP protocol:
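Applied to the same sketch as above (same placeholder names):

    # -O selects the original SCP protocol, which downloads sequentially,
    # so writing to a pipe works again
    $ scp -O otherhost:big.file /dev/stdout | aws s3 cp - s3://bucket/big.file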
This does work, but it doesn't seem ideal: servers will probably drop support for the SCP protocol at some point. I've filed a bug with OpenSSH.
[1] "
man scp
" gives me: "Since OpenSSH 8.8 (8.7 in Red Hat/Fedora builds), scp has used the SFTP protocol for transfers by default."Comment via: facebook, mastodon