• addie@feddit.uk · 12 days ago

    Well, yeah. The real advantage is having only a single file to transfer, which makes e.g. SFTP a lot less annoying at the command line.
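    A minimal sketch of that workflow in Python (the source directory, remote host, and username are hypothetical; it uses the stdlib tarfile module plus the third-party paramiko library for the SFTP step):

    ```python
    import tarfile
    import paramiko  # third-party SSH/SFTP library

    # Pack the whole directory into a single archive so there is one thing to move.
    with tarfile.open("backup.tar", "w") as tar:
        tar.add("/srv/mydata", arcname="mydata")  # hypothetical source directory

    # One put() instead of pushing the tree over SFTP file by file.
    ssh = paramiko.SSHClient()
    ssh.load_system_host_keys()
    ssh.connect("remote.example", username="backup")  # hypothetical host and user
    sftp = ssh.open_sftp()
    sftp.put("backup.tar", "/backups/backup.tar")
    sftp.close()
    ssh.close()
    ```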

    Lossless compression works by storing redundant information more efficiently. If you’ve got 50 GB in a directory, it’s going to be mostly pictures and videos, because that would be an incredible amount of text or source code. Those are already stored with lossy compression, so there’s just not much more you can squeeze out.
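    A rough illustration of that in Python (the exact sizes printed will vary, but the shape of the result won't):

    ```python
    import os
    import zlib

    # Highly redundant data (repeated log-style lines) compresses dramatically.
    text = b"2024-01-01 12:00:00 INFO request served in 12ms\n" * 100_000
    print(len(text), "->", len(zlib.compress(text)))

    # Already-compressed data has little redundancy left; random bytes stand in
    # for JPEG/H.264 payloads here, and they barely shrink (they can even grow).
    dense = os.urandom(5_000_000)
    print(len(dense), "->", len(zlib.compress(dense)))
    ```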

    I suppose you might have 50 GB of logs, especially if you run a log server for your network? But most modern logging systems store logs in a binary format, since it's quicker to search and manipulate and doesn't use up such a crazy amount of disk space.

    • webhead@sh.itjust.works · 12 days ago
      I actually just switched a backup script of mine from producing a tar.gz to a plain tar file. It's a little bigger, but overall the process is so much faster that I don't care about the small extra bit of compression (100 GB vs. 120 GB transferred over a 1 Gbit connection). The entire reason I do this is, like you said, that transferring files over the Internet is a billion times faster as one file, BUT you don't need the gzip step just for that.
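      For what it's worth, in a Python backup script the whole difference can come down to the archive mode (the directory name here is hypothetical): plain "w" skips the DEFLATE pass entirely, which is usually what maxes out a CPU core long before a 1 Gbit link is saturated.

      ```python
      import tarfile

      # Plain tar: headers plus file contents, no compression work at all.
      with tarfile.open("backup.tar", "w") as tar:
          tar.add("data/")  # hypothetical directory to back up

      # Gzipped tar: smaller output, but every byte goes through DEFLATE first,
      # which is often the bottleneck rather than the network.
      with tarfile.open("backup.tar.gz", "w:gz") as tar:
          tar.add("data/")
      ```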

      • Björn@swg-empire.de · 10 days ago
        You probably know of this already, but you might consider using Borg Backup instead. It only sends the changed bits, which is even faster.

        • webhead@sh.itjust.works · 10 days ago

          I’ll have to take a look. I was trying to keep it simple but I’m not opposed to giving that tool a shot. Thanks. :)