Something I haven’t seen posted here yet, but worth saying over and over again.

Murphy’s law says that anything that can go wrong will go wrong… but with the 3-2-1 strategy in place (at least 3 copies of your data, on 2 different types of media, 1 of them offsite), your data survives anyway.

  • Wingy · 1 year ago

    What’s the best way to make an offsite backup of 42 TB at this point with 20 Mbps of bandwidth? It would take over six months to upload even while maxing out my connection.
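
    (Rough math, ignoring protocol overhead: 42 TB is about 336,000,000 megabits, and 336,000,000 Mb ÷ 20 Mbps ≈ 16,800,000 seconds ≈ 194 days, so a bit over six months at a sustained full rate.)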

    Maybe I could sneakernet an initial backup then incrementally replicate?

    • npastaSyn@kbin.social (OP) · 1 year ago

      Outside my depth, but I’ll give it a stab. Identify what data is important (is the full 42 TB needed?). Can the data be split into easier-to-handle chunks?

      If it can, then I’d personally do an initial sneakernet run to get the first set of data over, then mirror the differences on a regular basis, something like the sketch below.
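
      Roughly what I mean, with made-up paths and hostnames: seed a portable drive locally, drive it offsite, then only ship changes over the wire.

        # one-time seed onto a portable drive that then travels offsite
        rsync -aH --info=progress2 /tank/data/ /mnt/portable/data/

        # afterwards, mirror each chunk's changes on its own schedule
        for chunk in photos documents media; do
            rsync -aH --delete "/tank/$chunk/" "offsite:/backup/$chunk/"
        done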

      • Wingy · 1 year ago

        Only ~400 GB of it is absolutely critical at this point, but losing my 8.1 TB of screen recordings would hurt. The screen recordings folder on its own grows by an average of 19 GB per day. I prefer not to split it into smaller chunks because that increases the maintenance workload of monitoring and keeping the backups working. My current backup strategy uses zfs send to copy incremental snapshots from the main host to the backup host, so maybe I could just get a secondary backup host and put it offsite?
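
        At 20 Mbps, 19 GB is only a couple of hours of transfer, so regular incrementals should keep up once the offsite pool is seeded. Rough sketch of what that could look like, with made-up pool and host names:

          # seed the offsite pool while its disks are still on the LAN (or sneakernet the whole box)
          zfs snapshot -r tank/recordings@seed
          zfs send -R tank/recordings@seed | zfs receive -F offsite/recordings

          # once the box is offsite, send only the changes since the snapshot the remote already has
          zfs snapshot tank/recordings@latest
          zfs send -i @seed tank/recordings@latest | ssh offsite-host zfs receive -F offsite/recordings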