• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: October 27th, 2023

  • It really depends. For like a desktop, I’d avoid it unless it was really cheap, as it basically nullifies the value of all non-standard parts, and I’d include things like the CPU if the motherboard is non-standard. So the value basically comes down to the drives and such.

    For a server though, non-standard is the norm, and here vendors even do stuff like vendor locking instead, which IMO is a way bigger issue, especially since nobody actually tells you beforehand whether a part is locked; you only find out by testing.


  • You’re using experimental drivers and force unmounting… and you actually have the gall to pin the blame for the resulting errors on NTFS? Just no.

    NTFS does have many issues, which is why MS is developing ReFS to replace it. But stability and corruption aren’t among those issues. NTFS is extremely solid in that regard thanks to its journaling.

    The NTFS drivers in Linux, however, are very buggy and generally considered experimental, and you should not write to NTFS drives holding any data you care about, as doing so could easily destroy everything on them.

    If you need a common writable data area, use exFAT, not NTFS.
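    As a rough sketch of what that looks like in practice (the device names and mount points here are made up; adjust for your own disks before running anything):

```shell
# Mount an existing NTFS partition read-only, so the Linux driver
# cannot write to (and potentially corrupt) the filesystem.
# /dev/sdb1 and /mnt/windows are hypothetical.
sudo mount -t ntfs3 -o ro /dev/sdb1 /mnt/windows

# For a shared writable data partition, format it as exFAT instead
# (this erases the partition; /dev/sdc1 is hypothetical):
sudo mkfs.exfat /dev/sdc1
sudo mount /dev/sdc1 /mnt/shared
```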


  • I have 7 dual-CPU servers, so I might be a bit biased in this regard. But “worthwhile” is entirely subjective, and “robust” is a weird word choice since there are multiple conflicting interpretations of it.

    For worthwhile… well, as I said, it’s subjective, but cost efficiency is very rarely the driving factor for homelabs.

    For robust, do you mean robust in the sense of more powerful? Then of course a dual-socket server will be more robust, but then you’re back to worthwhile. If you mean robust in terms of stability, then absolutely not. Multi-socket servers are less stable than single-socket ones. Not unstable by any stretch, but not AS stable.

    Every additional component adds complexity and, most importantly, additional points of possible failure, and the system can’t survive if one CPU dies, so system stability drops the more CPU sockets you have. That’s why dual and quad socket are so popular even though 8-socket and larger systems exist and are denser, which matters in datacenters. Beyond quad socket you start running into real stability issues, so it’s usually better to sacrifice some density and go with more servers instead, and blade centers are usually not THAT much lower density anyway.
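    The stability argument above is just series reliability: every socket has to work for the box to stay up, so per-socket availability multiplies. A quick sketch (the 99.9% per-socket figure is a made-up assumption for illustration, not a measured number):

```python
def system_availability(per_socket: float, sockets: int) -> float:
    """Availability of a series system: any dead CPU takes the whole
    box down, so the individual availabilities multiply."""
    return per_socket ** sockets

# Hypothetical 99.9% per-socket availability over some window:
for n in (1, 2, 4, 8):
    print(f"{n} socket(s): {system_availability(0.999, n):.6f}")
```

    Each added socket shaves a little off the whole system’s uptime, which is the sense in which more sockets means less stable.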



  • If stability is what you’re after (both in terms of versioning and in the sense of as few unscheduled reboots as possible), then neither is a good option. Both update quite often, follow an “introduce the feature now, worry about stability later” approach, and end up having to constantly patch a bunch of stuff.

    If you’re comfortable with a CLI, then I’d recommend VyOS on the stable branch. It’s had 3 service patches since 1.3.0 released in 2021, the last being in June; before that, you have to go back to September last year. Ofc, the downside is that you’ll miss out on a lot of features. Like I don’t think stable has WireGuard support yet, and I’m not certain it will be ready by the time 1.4 goes stable either (it’s currently in 1.4 rolling). You could implement some of it yourself because it’s built on Debian, but anything you add like that is tied to your current image, so if you upgrade you have to do it again; I don’t recommend it.

    Point is, if you need features, don’t, but if it’s maximum stability you’re after, I can highly recommend at least having a look. Though I always recommend a proper router over any router OS on amd64: you’ll get more out of it, cheaper, with less power consumption and lower latency.
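    To give a feel for the VyOS CLI, here’s a minimal sketch of a basic WAN/LAN setup with source NAT. The interface names and addresses are invented examples, not a recommended config:

```shell
configure
# WAN interface (example documentation address):
set interfaces ethernet eth0 address '203.0.113.2/24'
# LAN interface:
set interfaces ethernet eth1 address '192.168.1.1/24'
# Masquerade LAN traffic out the WAN side:
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 source address '192.168.1.0/24'
set nat source rule 100 translation address 'masquerade'
commit
save
```

    Everything is declarative `set` statements, which is a big part of why the stable branch can stay so quiet between releases.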


  • As in average? 1491W 30-day average according to the power meter. Fully loading everything is around 5kW iirc, though that doesn’t really happen. The highest in the last 30 days was a 3774W peak, and I think that’s from when I accidentally shut down the UPS, so everything was booting at the same time afterwards. I don’t think I ever go over 3kW in normal circumstances.

    That’s 5 storage servers, 2 of which are Storinators and 3 Supermicros, plus two compute nodes, both ProLiant DL380s: a G10 and a G11 that I just bought last week. Plus ofc some network gear, which isn’t anything too fancy: just two routers, and while they do support PoE, I don’t use it, so they’re not really high power or anything.




  • XCP is a lot stricter, for good or bad. For Proxmox it’s fine to have networks on each host that look roughly the same; for XCP, the networks have to be 1:1 identical to be in the same pool. This makes Proxmox much easier to deal with administratively, but for XCP the upside is that there’s no guesswork involved, which is much more stable. Not that Proxmox is unstable, but it’s not like it never crashes. I have yet to see XCP crash even once over the several years I’ve messed with it.