You might not even like rsync. Yeah, it’s old. Yeah, it’s slow. But if you’re working with Linux, you’re going to need to know it.

In this video I walk through my favorite everyday flags for rsync.

Support the channel:
https://patreon.com/VeronicaExplains
https://ko-fi.com/VeronicaExplains
https://thestopbits.bandcamp.com/

Here’s a companion blog post, where I cover a bit more detail: https://vkc.sh/everyday-rsync

Also, @BreadOnPenguins made an awesome rsync video and you should check it out: https://www.youtube.com/watch?v=eifQI5uD6VQ

Lastly, I left out all of the SSH setup stuff because I made a video about that, and the blog post goes into a smidge more detail. If you want to see a video covering the basics of using SSH, I made one a few years ago and it’s still pretty good: https://www.youtube.com/watch?v=3FKsdbjzBcc

Chapters:
1:18 Invoking rsync
4:05 The --delete flag for rsync
5:30 Compression flag: -z
6:02 Using tmux and rsync together
6:30 but Veronica… why not use (insert shiny object here)
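
If you just want the gist, a typical “everyday” invocation along the lines of what’s covered in the video looks something like this (the host and paths are placeholders, swap in your own):

  # -a keeps permissions, times, and symlinks; -v is verbose;
  # -z compresses in transit; --delete mirrors deletions to the destination
  rsync -avz --delete ~/Documents/ user@nas:/backups/Documents/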

  • ryper@lemmy.ca

    I was planning to use rsync to ship several TB of stuff from my old NAS to my new one soon. Since we’re already talking about rsync, I guess I may as well ask if this is the right way to go?

    • SayCyberOnceMore@feddit.uk

      It depends

      rsync is fine, but to clarify a little further…

      If you think you’ll stop the transfer and want it to resume (and some data might have changed), then yep, rsync is best.
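
      Something like this will pick up where it left off if you stop it partway (paths here are made up):

        # -a: archive mode; -P: keep partial files and show progress
        rsync -aP /mnt/oldnas/data/ user@newnas:/mnt/data/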

      But, if you’re just doing a one-off bulk transfer in a single run, then you could use other tools like xcopy / scp, or, if you’ve mounted the remote NAS at a local mount point, just plain old cp.
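
      For a one-shot copy, either of these does the job (again, invented paths):

        # straight copy over the network, no resume support
        scp -r /mnt/oldnas/data user@newnas:/mnt/
        # or, if the new NAS is mounted locally (NFS/SMB), plain old cp
        cp -a /mnt/oldnas/data /mnt/newnas/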

      The reason is that rsync has to work out what’s at the other end for each file, so it’s doing some back-and-forth communication each time, which, as someone else pointed out, can load the CPU and reduce throughput.

      (From memory, I think Raspberry Pis don’t handle large transfers over scp well… I seem to recall a buffer gets saturated and the throughput drops off after a minute or so.)

      Also, on a local network, there’s probably no point in using encryption or compression options, especially for photos / videos / music… you’re just loading the CPU again for it to work out that it can’t compress any further.
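
      On a trusted local network I’d run something like this (host and paths invented): no -z, plus -W to skip the delta algorithm, since on a fast link sending files whole is usually cheaper than checksumming them:

        # -W/--whole-file: send files whole instead of computing deltas
        rsync -avW /mnt/oldnas/data/ user@newnas:/mnt/data/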

    • Suburbanl3g3nd@lemmings.world

      I couldn’t tell you if it’s the right way, but I used it on my RPi 4 to sync 4 TB of stuff from my Plex drive to a backup, and set up a script to have it check/mirror daily. The first copy took a day and a half, and now it syncs in minutes tops when there’s new data.
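
      The daily job is basically just a cron entry; mine is roughly this (paths made up from memory):

        # every night at 3am; --delete keeps the backup a true mirror
        0 3 * * * rsync -a --delete /mnt/plex/ /mnt/backup/plex/ >> /var/log/plex-sync.log 2>&1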

    • GreenKnight23@lemmy.world

      yes, it’s the right way to go.

      rsync over ssh is the best, and works as long as rsync is installed on both systems.
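
      rsync uses ssh automatically when you give it a host:path target; -e is only needed if you want extra ssh options (port and paths here are made up):

        # ssh is the default transport for host:path style destinations
        rsync -av -e "ssh -p 2222" /data/ user@newnas:/backups/data/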

      • qjkxbmwvz@startrek.website

        On low end CPUs you can max out the CPU before maxing out network—if you want to get fancy, you can use rsync over an unencrypted remote shell like rsh, but I would only do this if the computers were directly connected to each other by one Ethernet cable.