We often talk about idle power, but as in research, the real question is: power per what?

Not very scientific, but here are some ground rules for this poll. Please comment with your specific setup or any other considerations I didn't think of. Why not power per number of services? I don't think that's a good comparison: one service could be much bigger than another, and often it doesn't make a difference. I just started my Kasm LXC and, at idle, there was no change in power draw, since I have so much idle CPU headroom and it uses the host kernel.

  1. Count raw online terabytes. ZFS and RAID reduce usable storage, but the setups are too numerous to compensate for that. Cold and hot spares don't count.
  2. Do include any storage running on a NAS or other devices, just as you would PCIe disk controllers or system fans. The main point is that it's available to read from and write to, over your network or locally.
  3. Do include any system hardware that's powered at idle. In my case I have two dGPUs doing different things, the smaller of which is always active because of Frigate.
  4. Do include your minimum idle services, whether they run on a rack server, a tower, or a Synology container, for example. Things like Portainer, AdGuard, Pi-hole, Proxmox Backup Server, Immich. Don't include, if you can spare the downtime, any VMs you interact with directly (in my case a Windows 11 gaming VM). Since we aren't going for perfect accuracy, don't worry about whether your services are busy or idle (like a queued yt-dlp download), but all the background stuff counts.

OK, that's it; looking forward to the poll results and discussion! It'd make the poll too complicated, but it would be interesting to see how professional server racks compare to small modular labs in economy-of-scale terms.

Oh, an older post sort of touches on this, but not really. And the German lowest-idle spreadsheet doesn't really capture raw TBs, and it has a different objective from real-world use, IMO.

https://www.reddit.com/r/homelab/s/VOjLBMudhB


  • TheFeshy@alien.top · 11 months ago

    I'm technically just over 5 W per usable TB at present, so that's what I checked. Raw is 3.3 W/TB, but most of it is 5/8 usable-to-raw.
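    The two figures are consistent; a quick sketch of the conversion (numbers taken from this comment, the 5/8 ratio being the stated usable-to-raw fraction):

```python
# Check the W/TB figures quoted above (values from the comment).
watts_per_raw_tb = 3.3        # measured: watts per raw TB online
usable_to_raw = 5 / 8         # most capacity is 5/8 usable-to-raw

# Usable W/TB = raw W/TB divided by the usable fraction.
watts_per_usable_tb = watts_per_raw_tb / usable_to_raw
print(f"{watts_per_usable_tb:.2f} W per usable TB")  # -> 5.28
```

    which matches "just over 5 W/TB usable."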

    Setup is a small home Ceph cluster on older hardware: 4x Supermicro FatTwin nodes, each with a single Xeon E5-2650 v4 (though they have a second, empty socket) and room for up to 8 drives; these draw about 50 W each, empty. Plus 1x Supermicro 3U with 2x Xeon E5-2678 v3, which draws about 150 W empty but holds 16 disks.
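    From those per-chassis figures, the idle floor of the cluster before any disks or networking is straightforward (a sketch using only the numbers above; actual draw with disks and the switch will be higher):

```python
# Sum the empty-chassis idle draws quoted above.
fattwin_nodes = 4
fattwin_idle_w = 50   # watts per FatTwin node, empty
big_3u_idle_w = 150   # watts for the 16-bay 3U, empty

chassis_idle_w = fattwin_nodes * fattwin_idle_w + big_3u_idle_w
print(chassis_idle_w)  # -> 350 (watts, before disks and the 10G switch)
```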

    The biggest thing driving down the power per TB is that the machines are only half full of disks: I've got 4 disks per node in the FatTwins, and the 3U is about half full. This is because Ceph lets me add disks as I go; every time storage gets over 85% utilized, I pick up whatever disk is cheapest, usually second-hand enterprise disks (since a failure and replacement doesn't cost me data, and usually not much time).

    Fully loaded, the power per TB would easily be twice as good. Newer hardware could also see some big gains, but the 3U is a machine I got almost 5 years ago, and the 4 nodes and the case they go in were such a deal that it's still saving me money over other, less power-hungry options I could have tried.

    Wattage also includes the 10G network switch that Ceph really prefers.

    • DarkKnyt@alien.top (OP) · 11 months ago

      10G copper? I hear that draws a lot of power.

      I'm in the same boat regarding older hardware. It wasn't free, but it cost about the same as 2 to 4 mini PCs and has a lot of room for additions. I don't think I'll ever get a rack, but full towers with room for expansion will probably be my approach for years to come.