So I've never really paid attention to the power I consume running various servers over the years, but now that I've cleaned up and consolidated, I'm trying to gauge my power draw compared to others.

I run a Proxmox host with 13 HDDs, 6 NVMe drives, and 2 U.2 NVMe drives, plus a Quadro P2200, an RTX A2000, an RTX 4070, an EPYC CPU, an HBA for the HDDs, and a 4x4 NVMe carrier card.

A Synology 2422 with 4 SSDs and 2 HDDs

A Synology expansion unit with 8 HDDs

I pull about 500 watts at the wall for all of this, and I think that's at the lower end since I wasn't using the GPUs. That includes a couple of switches as well. It's very quiet and runs very cool.
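
For a sense of scale, here's a rough sketch of what that draw adds up to over a year, assuming the 500 W is steady around the clock (the electricity rate below is just an example figure, not my actual tariff):

```python
# Rough annual energy and cost for a constant draw (rate is an example assumption)
draw_watts = 500
hours_per_year = 24 * 365.25                # ≈ 8766 hours
kwh_per_year = draw_watts * hours_per_year / 1000
rate_per_kwh = 0.30                         # assumed example rate in $/kWh
print(f"{kwh_per_year:.0f} kWh/year ≈ ${kwh_per_year * rate_per_kwh:.0f}/year at ${rate_per_kwh:.2f}/kWh")
```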

What do other people consume?

  • Oscarcharliezulu@alien.topB

    All these comments are making me think about how I'd create a minimum-power homelab. I was looking at 3-year-old servers, but now I'm thinking of just building a powerful system that draws very little power at idle; when it's in use I'm less worried, as it's more about getting the job done.

  • ripnetuk@alien.topB

    Have dropped from 500W (2x R710) to 50-60W (5600X, 32GB, 2 NVMe drives, 3 SATA SSDs, Corsair Platinum PSU, Gigabyte mobo).

    Plus, in the lab I have an ONT and a small network switch (replacing a managed one saved 20W or so), and a work laptop, which brings the at-the-wall consumption of the entire lab to around 80-90W.

    I'd be interested to see how folks with Athlon processors are getting so much lower power usage than me.

  • mthode@alien.topB

    Looks like a steady 330W: a single storage host with 7 spinning disks and 4 SSDs, 4 RPi 4s with SSDs running k3s, and the network stack (EdgeRouter 8-XG, two 8-port PoE switches, and a 24-port ES-24).

    Changes I should make: reduce drives / upgrade the storage host in a couple of years, and switch to a single, larger PoE switch (2.5G, 24-48 ports), again in a couple of years.

  • SirLagz@alien.topB

    Around 350W at the moment, but I'm in the middle of a data migration.

    i7-8700 whitebox VM host

    2x HP N54L with 4x 8TB SAS drives; one of them also has 2x 500GB SSDs and 2x 500GB HDDs

    TP-Link 24-port switch

    a couple of UPSes

    Huawei LTE router

    probably some other stuff that I’ve forgotten

    Will likely be adding a Dell Optiplex mini PC soon

  • anothercorgi@alien.topB

    About 2.9e-7 gigawatts.

    That covers the PVR (1 HDD), server (4 HDDs), and all those wall warts, standalone clocks, switches, CPE, battery chargers I left plugged in, TV and monitor standby power, …

  • EpicEpyc@alien.topB

    ~550W: Nexus 9K (48x 10G + 6x 40G), 3x Dell R630 with 2x 10-core E5-2640 v4, 384GB RAM, 1x 960GB NVMe SSD, and 5x 1.92TB SATA SSDs.

    Though it may change soon… not for the better

  • TheSoCalledExpert@alien.topB

    I draw about 150 watts at idle.

    1x PVE server (Ryzen 5, 32GB RAM, 2x SSD, 8x HDD)

    1x HP T620+ firewall

    1x rpi2 backup pihole

    1x switch

    1x UniFi AP

    1x spectrum modem

  • TheIlluminate1992@alien.topB

    Running network equipment including 4 PoE cameras, a UniFi UDM Pro, a 48-port PoE switch, fans, and 2 APs.

    On the server side, I run a Dell R730xd with 2x M1200s in standby since I don't have disks for them yet, and I pull about 300W on average.

  • Firestarter321@alien.topB

    850 watts is my normal server rack load; however, with cameras and other switches I'm at 1100 watts 24/7 currently.

    Add another 600 watts if I turn everything on in the server rack.

  • audioeptesicus@alien.topB

    4100 VA or about 2650 W…

    Not including my office setup; that's just what's in the rack: an MX7000 chassis with 7x MX740c blades, redundant 40G core switches, a Fibre Channel SAN, two 48-bay NASes with 10TB drives, and 240V power with a 5000W UPS.

    Not including the AC for the garage that the rack is in.

    And no, I am not a masochist.

    • JonohG47@alien.topB

      How on Zod’s green earth were you able to get your power factor to be that awful?!
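
      (For context on the dig: power factor is just real power divided by apparent power, so the figures quoted above work out roughly like this, assuming both readings come from the same meter/UPS:)

      ```python
      # Power factor from the figures quoted above (assumes both come from the same meter/UPS)
      apparent_power_va = 4100
      real_power_w = 2650
      power_factor = real_power_w / apparent_power_va
      print(f"PF ≈ {power_factor:.2f}")   # ≈ 0.65, well below the 0.9+ typical of modern active-PFC PSUs
      ```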

      • pseydtonne@alien.topB

        Follow-up question: how is your hearing? An actual blade setup would be loud as bombs inside a house.

        • VaguelyInterdasting@alien.topB

          Actually, the MX7000 is not terrible on noise comparatively. Not silent, obviously, but no worse than a typical 1U server.

          Now, having that many compute modules may make that thing loud…

          • audioeptesicus@alien.topB

            Yep, it's not so bad. I typically only have 4 or so blades powered on at a time. The MX9116N IOMs I have, though, require more cooling; had I gone for the lesser ones, it'd probably be a little quieter.

  • PermanentLiminality@alien.topB

    A Dell T20, 2x Wyse 5070, an OptiPlex 3000 thin client, and an HP 600 G3 that total about 85 watts, plus a couple of gigabit switches for about ten watts.

    I'm trying to keep it under a hundred watts, but I go well over when the T20 and/or the HP are under heavy load. Luckily none of my workloads use that much CPU, so it stays under a hundred watts.

    I have crazy expensive California power, so with A/C each watt costs about $4 a year.
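
    That $4-per-watt-per-year figure checks out as a back-of-the-envelope sketch, assuming an effective rate of roughly $0.45/kWh once the A/C overhead is folded in (the exact rate here is an assumption):

    ```python
    # Back-of-the-envelope: annual cost of one watt of continuous draw
    hours_per_year = 24 * 365.25        # ≈ 8766 hours
    effective_rate = 0.45               # assumed effective $/kWh, including A/C overhead
    cost_per_watt_year = (1 / 1000) * hours_per_year * effective_rate
    print(f"≈ ${cost_per_watt_year:.2f} per watt-year")   # ≈ $3.94, close to the quoted $4
    ```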

  • Pepparkakan@alien.topB

    274W currently.

    But I have an Intel Arc A770 and 2 extra Samsung 980 Pro 2TB NVMe disks in an ASUS Hyper M.2 waiting to be installed when I get the time. I will be decommissioning a server when I do that though, so we’ll see what the running costs end up being. Probably slightly higher overall.

  • Timi7007@alien.topB

    Network stack: UDM-Pro, USW-Agg, USW-16-PoE, Raspberry Pi for DNS, VPN & monitoring, U-LTE-Pro, USW-Flex, G4-Bullet, 2x UAP-AC-Mesh, 2x UAP-AC-LR, 1x UAP-FlexHD sitting on a 500 VA UPS pulling ~100W.

    The home server (24/7) is a Ryzen 3700X system with 13 HDDs, usually pulling around 150W on its 1000VA UPS.

    Power is quite expensive here in Germany, but the cost of small solar setups is dropping, so I might set up a little PV installation to offset costs. That would probably allow me to run more servers again ^^