A trend I have been seeing in the server space is a move away from socketed components, especially processors (in the consumer space, that ship sailed long ago). Many of the newer ARM solutions have soldered processors, Intel also makes lower-powered Xeon boards with soldered processors, and there are more and more integrated server modules and “microservers” built from mostly integrated components. The marketing material for these focuses on the fact that you can choose pre-built, optimized modules for a cluster to suit your application’s needs instead of spending time choosing components. There are also proprietary and/or platform-specific peripherals like network cards, storage controllers, AI accelerators, and even GPUs with Nvidia’s proprietary form factor for their Tesla cards.

What do you think of this move, especially at the extreme end with fully integrated microserver clusters where you can only replace or upgrade entire nodes? Wouldn’t this result in a lot more e-waste, with more working components being discarded, while also being more expensive? I can’t help but think it would be beneficial for data center customers if components were individually upgradable. Is there a benefit I’m not seeing, or is it just planned obsolescence rearing its ugly head?