I can’t say for sure, but there is a good chance I might have a problem.

The main picture attached to this post is a pair of dual-bifurcation cards, each with a pair of Samsung PM963 1T enterprise NVMes.

They are going into my r730XD, which… is getting pretty full. This will fill up the last empty PCIe slots.

But, knock on wood, my r730XD supports bifurcation! Lots of bifurcation.

As a result, it now has more HDDs and NVMes than I can count.

What’s the problem, you ask? Well, that is just one of the many servers I have lying around here, all completely filled with NVMe and SATA SSDs…

Figured I would share. Seeing a bunch of SSDs is always a pretty sight.

And, as of two hours ago, my particular Lemmy instance was migrated to these new NVMes completely transparently too.

  • krolden · 1 year ago

    Any reason you went with a striped mirror instead of raidz5/6?

    • HTTP_404_NotFound@lemmyonline.comOP · 1 year ago

      The two ZFS pools only have 4 devices each. One pool is spinning rust; the other is all NVMe.

      I don’t use RAID5/Z1 for large disks and instead go for RAID6/Z2. With only 4 disks, Z2 and striped mirrors both have 50% overhead, but striped mirrors have the advantage of being much faster: double the IOPS and quicker rebuilds. For these particular pools, performance was more important than overall disk space.
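      The 4-disk trade-off above can be sketched with standard back-of-the-envelope ZFS figures (these are rough rules of thumb in single-disk units, not benchmarks, and the function names and disk sizes are illustrative):

      ```python
      # Rough layout comparison for a 4-disk pool: 2x mirror vdevs vs one raidz2 vdev.
      # All numbers are in "single-disk units"; real performance varies by workload.

      def striped_mirror(disks, size_tb):
          vdevs = disks // 2                # each mirror vdev holds 2 disks
          usable = vdevs * size_tb          # one disk of usable capacity per mirror
          write_iops = vdevs                # ~1 disk's worth of write IOPS per vdev
          read_iops = disks                 # reads can be served by either mirror side
          return usable, read_iops, write_iops

      def raidz2(disks, size_tb):
          usable = (disks - 2) * size_tb    # two disks' worth of space go to parity
          # a raidz vdev does small random I/O at roughly single-disk speed
          return usable, 1, 1

      print(striped_mirror(4, 1))           # -> (2, 4, 2)
      print(raidz2(4, 1))                   # -> (2, 1, 1)
      ```

      Both layouts end up with 2T usable out of 4T (the 50% overhead), but the striped mirror gets roughly double the write IOPS and even more on reads, which is the reason given above for choosing it.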

      However, before all of these disks were moved from TrueNAS to Unraid, there was an 8×8T Z2 pool, which worked exceptionally well.