I want to build a proper server with room for 40+ HDDs to move my media server to and have RAID 1. I know a lot about PCs and software, but when it comes to server hardware I have no clue what I’m doing. How would I go about building a server that has access to 40+ RAID 1’d HDDs?

  • AlternateRoute@lemmy.ca

40 drives? Why? That is a huge amount of power. What is your space target?

RAID 1? With 40 drives? That would be absolutely stupid; you want RAID 6 or 10, or some other N+2 disk redundancy, so you don’t waste 50% of your space the way RAID 1 does.

    Have you considered how much power such a large setup will need?
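
The 50% point is easy to see with a little arithmetic. A rough sketch (the drive counts and sizes are illustrative, and real usable space also loses a few percent to filesystem overhead):

```python
def usable_tb(n_drives, drive_tb, level):
    """Rough usable capacity of a single array, ignoring filesystem overhead."""
    if level == "raid1":   # every drive mirrored: half the raw space survives
        return n_drives * drive_tb / 2
    if level == "raid10":  # striped mirrors: also half the raw space
        return n_drives * drive_tb / 2
    if level == "raid6":   # N+2: two drives' worth of parity, the rest is data
        return (n_drives - 2) * drive_tb
    raise ValueError(level)

for level in ("raid1", "raid10", "raid6"):
    print(level, usable_tb(20, 18, level), "TB usable from 20 x 18TB drives")
```

Note that RAID 10 also gives up half the raw space; it buys rebuild speed and performance, not capacity. The capacity argument really favors RAID 6 / N+2 parity: 324TB usable versus 180TB mirrored from the same 20 drives.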

    • GasMaskedLunatic@lemmy.dbzer0.comOP

I’ll have to watch a video on it later; I assumed having a 1:1 backup was the most efficient backup method possible without compression. I don’t plan on utilizing every drive at once, and I don’t plan on having more than 20 to start with, but it won’t be much more than I already have, so I should be okay to start. I just want to make sure there’s room for expansion in the future. I don’t need all 40 immediately. My UPS will tell me how much power I’m drawing, right?

      • carzian

You need to research RAID 1, 6, and 10, and ZFS first. Make an informed decision and go from there. You’re basing the number of drives off of (uninformed) assumptions, and that’s going to drive all of your decisions the wrong way. Start with figuring out your target storage amount and how many drive failures you can tolerate.

          • carzian

That’s definitely something to be aware of, but the vdev expansion feature was merged and will probably be released this year.

Additionally, it looks like the author’s main gripe is that the current way to expand is to add more vdevs. If you plan this out ahead of time, then adding more vdevs incrementally isn’t an issue; you just need to buy enough drives for a vdev at a time. In homelab use this might be an issue, but if OP is planning a 40-drive setup, then needing to buy drives in groups of 2-3 instead of individually shouldn’t be a huge deal.

            • chiisana@lemmy.chiisana.net

I think the biggest issue home users will run into (until the finally-merged PR gets released later this year) is that as they acquire more drives, compared to a traditional RAID array that they could expand, they’re going to see a larger and larger proportion of their drives used for parity. Once vdev expansion is possible, the system will be a lot more approachable for home users who don’t acquire all their drives up front.

Having said that, this is probably a lot less of a concern for someone intending to set up 40 drives in RAID 1, as they’re already prepared to use half of them for redundancy…
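
The parity-proportion point can be made concrete. A sketch, assuming 6-wide RAID-Z2 vdevs as the incremental purchase unit (my example widths, not anyone's recommendation):

```python
def parity_fraction(vdevs, width, parity_per_vdev):
    """Fraction of all drives spent on parity in a pool of identical vdevs."""
    return (vdevs * parity_per_vdev) / (vdevs * width)

# Growing by whole 6-wide RAID-Z2 vdevs: a third of every purchase is parity
print(parity_fraction(3, 6, 2))   # 0.333...
# One 18-wide RAID-Z2 vdev built from the same 18 drives: 2 parity drives total
print(parity_fraction(1, 18, 2))  # 0.111...
```

Expanding a single wide vdev (once the feature ships) keeps parity overhead flat; growing by whole vdevs pays the parity tax on every increment.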

  • deegeese@sopuli.xyz

    40-drive RAID is moving out of homelab territory and pretty deep into enterprise storage systems.

Do you already have these 40 drives or are you speccing out a new NAS from scratch?

    If it’s from scratch I’d first see if I could get it down to 20 or 24 larger drives to allow the whole thing to fit in a single 4U rackmount case.

    Bigger than that and you’re probably stuck with proprietary NAS hardware to link together multiple racks.

    • GasMaskedLunatic@lemmy.dbzer0.comOP

      With 140TB+ of existing data, I would need 16 18TB HDDs to have RAID 1, and I also need the ability to expand. Really, I just need to have all the data accessible over the network so I can manage it from my main PC and stream it via a Plex/Jellyfin server. Maybe 4 smaller DIY NAS systems accessed by a separate system? I would really prefer no proprietary software if I can avoid it, and enterprise is out of my price range after I’ll be spending $3,000+ on HDDs.
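
The mirror math above checks out, for what it’s worth:

```python
import math

# 140 TB of data on 18 TB drives, with every drive mirrored (RAID 1)
data_tb, drive_tb = 140, 18
data_drives = math.ceil(data_tb / drive_tb)   # 8 drives to hold the data
total_drives = data_drives * 2                # 16 once each one is mirrored
print(data_drives, "data drives,", total_drives, "total")
```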

      • GasMaskedLunatic@lemmy.dbzer0.comOP

        The data is currently stored on external drives, but once I’ve got the new setup with RAID I’ll erase the drives and sell to friends, or use in other projects like an emulation station. 15+ wall plugs is excessive.

  • gm0n3y@lemm.ee

With that many drives, why RAID 1? You could do a few RAID-Z2 or Z3 vdevs in TrueNAS.
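
For comparison, here is the trade-off the RAID-Z levels make, sketched for a hypothetical 10-wide vdev of 18 TB drives (my example numbers):

```python
# Failure tolerance vs. usable space per RAID-Z vdev
# (hypothetical 10-wide vdev of 18 TB drives; ignores filesystem overhead)
width, drive_tb = 10, 18
usable = {}
for name, parity in (("raidz1", 1), ("raidz2", 2), ("raidz3", 3)):
    usable[name] = (width - parity) * drive_tb
    print(f"{name}: survives {parity} drive failure(s), ~{usable[name]} TB usable")
```

Even RAID-Z3, which survives any three simultaneous drive failures, keeps 70% of the raw space; RAID 1 keeps 50% and survives only one failure per mirror pair.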

    • GasMaskedLunatic@lemmy.dbzer0.comOP

      Doesn’t that keep more than two copies? RAID 1 is expensive enough as is. I just need to be able to pop in a replacement drive if one fails so I don’t lose data.

      • redditron_2000_4@lemmy.world

The chance of a second drive failing before a rebuild finishes is non-zero. RAID 1 is less protection than RAID 50 or RAID 60, and depending on how many disks you have, you will save space and get better performance.

        But you need a good controller.
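
Back-of-the-envelope, the rebuild risk isn’t negligible at this scale. A sketch, with a made-up 3% annual failure rate and a 48-hour rebuild window across a 40-drive pool (both numbers are my assumptions):

```python
# Chance that at least one of the 39 surviving drives fails while one
# drive rebuilds (assumed: 3% annual failure rate, 48-hour rebuild)
afr = 0.03
rebuild_days = 2
p_one = 1 - (1 - afr) ** (rebuild_days / 365)   # one drive, during the window
p_any = 1 - (1 - p_one) ** 39                   # any of the other 39 drives
print(f"{p_any:.2%}")
```

Well under 1% per rebuild, but it compounds over every failure you ever handle, and it climbs sharply if rebuilds take longer or the drives are aging together.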

        • GasMaskedLunatic@lemmy.dbzer0.comOP

I’ll have to watch a video later, I guess. I know it’s possible to lose a drive while the array is rebuilding, but it’s improbable, and since I’m not handling irreplaceable data, just my personal TV/movie archive for now, I’m okay taking that risk. But if another RAID level can save space, I’ll have to consider it.

          • computergeek125@lemmy.world

            It is far less improbable than you think, especially if all of your drives have similar age/wear - as would be the case if you bought all 40 around the same time.

  • computergeek125@lemmy.world

Others have already mentioned various tech to help you out with this: Ceph, ZFS, RAID 50/60, RAID is not a backup, etc.

40 drives is a massive amount. On my system, I have ~58TB (before filesystem overhead) comprising a 48TB NAS (5x12TB @ RAID-5), 42TB of USB backup disks for said NAS (RAID is not a backup), a 3-node vSAN array with 12TB (3x500GB cache, 6x2TB capacity) of all-flash storage at RF2 (so ~6TB usable, since each VM is in an independent RAID-1), and a standalone host with ~4TB @ RAID-5 (16 disks spread across 2 RAID-5 arrays; I don’t have the numbers on hand).

That’s 5+9+16=30 drives, and the whole rack draws 950W including the switches, which IIRC account for ~250-300W (I need to upgrade those to non-PoE versions to save on some juice). Each server on its own draws 112-185W, as measured at iDRAC. It used to draw 1100W until I upgraded some of my older servers to newer ones with better power efficiency as my own build-out design principle.

While you can just throw 40-60 drives in a 4U chassis (both Dell and 45drives/Storinator offer this as a DAS or server), that thing will be GIGA heavy fully loaded. Make sure you have enough power (my rack has a dedicated circuit) and that you place the rack on a stable floor surface capable of withstanding hundreds of pounds on four wheels (I think I estimated my rack to be in the 300-500 lb class).

    You mentioned wanting to watch videos for knowledge - if you want anywhere to start, I’d like you to start by watching the series Linus Tech Tips did on their Petabyte Project’s many iterations as a case study for understanding what can go wrong when you have that many drives. Then look into the tech you can use to not make the same mistakes Linus did. Many very very good options are discussed in the other comments here, and I’ve already rambled on far too long.

    Other than that, I wish you the best of luck on your NAS journey, friend. Running this stuff can be fun and challenging, but in the end you have a sweet system if you pull it off. :) There’s just a few hurdles to cross since at the 140TB size, you’re basically in enterprise storage land with all the fun problems that come with scale up/out. May your storage be plentiful and your drives stay spinning.

  • Sims

40+ disks?? I crammed a mini server with 7 old HDDs totaling 2TB and felt I was king of self-hosted! Now I feel small and peasant-like, damn you ;)

I am a bit curious what the average number of disks/total size is for other data hoard… self-hosters in this sub?

    • GasMaskedLunatic@lemmy.dbzer0.comOP

      I only have 140TB atm, but I plan to expand. I want to backup all my Blu-rays, which will be at least 10 more HDDs. 2TBs used to be a lot. Lol

      • some_guy@lemmy.sdf.org

        I have 300 blu-rays ripped full quality and that only takes 8.28TB. My entire media collection is just over 12TB. I have a friend who has a massively larger library and even he gets by with a NAS. My NAS only has ten disks and I have 84TB of storage. You’re over-estimating your needs.
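
For reference, those numbers work out to a fairly typical rip size:

```python
# Average size implied by 300 discs occupying 8.28 TB
avg_gb = 8.28 * 1000 / 300
print(f"{avg_gb:.1f} GB per disc")  # 27.6 GB
```

Multiplying a per-disc average like this by your own disc count is a quicker way to size a build than assuming a drive per shelf of Blu-rays.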

      • Sims

        Ha, wild server :-) Yeah, I suddenly felt really old too ;-) Well I still have TWO whole free usb ports (2.0) on my tiny server, so I’m not upgrading yet !! 🙃

  • Krill@feddit.uk

TrueNAS Scale, LSI HBA cards, server motherboards with plenty of PCIe lanes (i.e. AMD EPYC), large-capacity HDDs; just make sure they are CMR and not SMR drives. And RAID-Z2 is your friend. You will not need 40 drives; frankly, you will fit 10,000 films, mostly 4K, and 1,000 complete TV series on a single 12-drive-wide RAID-Z2 vdev using 20TB drives. That’s enough to last 50 years in terms of viewing time.
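
That capacity claim roughly checks out, if you assume very rough average sizes of ~15 GB per film and ~50 GB per complete series (my assumptions, not the commenter’s):

```python
# 12-wide RAID-Z2 of 20 TB drives: 2 drives of parity, 10 of data
usable_tb = (12 - 2) * 20          # 200 TB usable, before overhead
films_tb = 10_000 * 15 / 1000      # 10,000 films at ~15 GB each
series_tb = 1_000 * 50 / 1000      # 1,000 series at ~50 GB each
print(usable_tb, "TB usable vs", films_tb + series_tb, "TB needed")
```

Full-quality 4K remuxes run several times larger than 15 GB, so treat this as an order-of-magnitude check, not a guarantee.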

  • toikpi@feddit.uk

My guesstimate is you have around 1,400 4K rips. Do you need all of them?

You probably should look at RAID 6 with a cold spare (i.e. a drive sitting on a shelf alongside the server).

    ZFS allows you to create spare disks. ZFS spare disks are hot spares which are swapped in for faulty disks and swapped out when you replace the faulty disk.

    I suggest that you calculate the cost to build this server, you should allow for NAS specific drives rather than the cheapest desktop drives.

You will need PCIe HBA/SATA cards to connect your drives.

    I suggest that you look at the NAS builds on PC Part Picker.

    Have a look at these pages

https://www.wundertech.net/diy-nas-build-guide/
https://nascompares.com/guide/build-your-own-nas-in-2024-should-you-bother/
https://www.storagereview.com/review/how-to-build-a-diy-nas-with-truenas-core

Finally, check how much power the server will draw and how much heat it will produce. A server with that many drives will be loud.

  • e0qdk@reddthat.com

    If you really want a setup with that many disks, you might look into Ceph. It’s intended for handling stupidly huge amounts of data spread across multiple servers with self-healing and other nice features. (As the name suggests it’s a bit of a tentacle monster though.) One of my colleagues set up a deployment at work. It took a while for him to figure out how to get it running well but it’s been pretty useful.