• 0 Posts
  • 27 Comments
Joined 1 year ago
Cake day: June 8th, 2023






  • If the host you’re connecting to is already in your known_hosts, a malicious network can’t do anything but break the connection. If it tries to MITM the SSH connection, you’ll get the alert that someone could be “doing something nasty”.

    Information leakage: Anything between you and the SSH server will be able to see that you’re connecting to an SSH server and how much data you transfer, but not what the data actually is.
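
    A minimal Python sketch of that host-key check, assuming the paramiko library and a made-up hostname and username (none of which the comment mentions): with RejectPolicy, a key that is missing from known_hosts or that doesn’t match the stored one aborts the connection instead of silently trusting a possible MITM.

        import paramiko

        client = paramiko.SSHClient()
        # Trust only hosts whose keys are already in ~/.ssh/known_hosts.
        client.load_system_host_keys()
        # Refuse unknown or changed host keys; this is the programmatic version
        # of the "someone could be doing something nasty" warning.
        client.set_missing_host_key_policy(paramiko.RejectPolicy())

        try:
            client.connect("example.com", username="user")  # hypothetical host/user
        except paramiko.SSHException as exc:
            print(f"Refusing to connect, host key missing or changed: {exc}")
        finally:
            client.close()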




  • I tend to make full images of the disk with dd, then mount them with kpartx and mount. This results in terabytes of disk images, but I use ZFS compression and have plenty of space. It preserves everything about every file, since it saves the filesystem itself, including any deleted files that could still be recovered with photorec etc.
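
    A rough sketch of that imaging workflow, wrapped in Python’s subprocess purely for illustration; the device, image path, and mount point below are placeholders, and every step needs root:

        import subprocess

        DEVICE = "/dev/sdX"                # placeholder: disk to image
        IMAGE = "/tank/images/disk.img"    # placeholder: destination on the ZFS pool
        MOUNTPOINT = "/mnt/image"          # placeholder: where to mount a partition

        # Raw image of the whole disk, continuing past read errors so deleted
        # but still-recoverable data is preserved too.
        subprocess.run(
            ["dd", f"if={DEVICE}", f"of={IMAGE}", "bs=4M",
             "conv=sync,noerror", "status=progress"],
            check=True,
        )

        # Map the partitions inside the image to /dev/mapper/loopNpM devices.
        subprocess.run(["kpartx", "-av", IMAGE], check=True)

        # Mount the first partition read-only so the image stays untouched.
        subprocess.run(
            ["mount", "-o", "ro", "/dev/mapper/loop0p1", MOUNTPOINT],
            check=True,
        )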







  • Wingy to datahoarder · 3-2-1 Backup Rule · 1 point · 1 year ago

    Only ~400 GB of it is absolutely critical at this point, but losing my 8.1 TB of screen recordings would hurt. The screen recordings folder on its own grows on average 19 GB per day. I prefer not to split it into smaller chunks because that increases maintenance workload in monitoring and keeping the backups working. My current backup strategy uses zfs send to copy incremental snapshots from the main host to the backup host, so maybe I could just get a secondary backup host and put it offsite?
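
    For reference, the incremental zfs send/receive pattern described above looks roughly like the sketch below; the pool, dataset, snapshot names, and backup host are all invented, and pointing the receive end at an offsite machine would be the only change:

        import subprocess

        DATASET = "tank/recordings"            # placeholder dataset
        PREV_SNAP = f"{DATASET}@backup-prev"   # snapshot the backup host already has
        NEW_SNAP = f"{DATASET}@backup-new"     # snapshot taken for this run
        BACKUP_HOST = "backup.local"           # placeholder backup (or offsite) host

        # Take the new snapshot.
        subprocess.run(["zfs", "snapshot", NEW_SNAP], check=True)

        # Send only the delta between the two snapshots, piping it over ssh
        # into zfs receive on the backup host.
        send = subprocess.Popen(
            ["zfs", "send", "-i", PREV_SNAP, NEW_SNAP],
            stdout=subprocess.PIPE,
        )
        subprocess.run(
            ["ssh", BACKUP_HOST, "zfs", "receive", "-F", "backup/recordings"],
            stdin=send.stdout,
            check=True,
        )
        send.wait()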


  • Wingy to datahoarder · 3-2-1 Backup Rule · 3 points · edited · 1 year ago

    What’s the best way to make an offsite backup for 42 TB at this point with 20 Mbps of bandwidth? It would take over 6 months to upload while maxing out my connection.
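
    Back-of-the-envelope check of that estimate, using only the numbers in the comment:

        # 42 TB pushed over a fully saturated 20 Mbit/s uplink.
        data_bits = 42e12 * 8        # 42 TB (decimal) in bits
        rate_bps = 20e6              # 20 Mbps

        seconds = data_bits / rate_bps
        days = seconds / 86_400
        print(f"{days:.0f} days, about {days / 30:.1f} months")  # ~194 days, ~6.5 months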

    Maybe I could sneakernet an initial backup then incrementally replicate?