• @AgreeableLandscapeOP
    1 year ago

    I guess the only devil’s advocate point there is that to achieve higher memory bandwidth, you do actually need the RAM as close to the processor as possible. As much as I don’t like Apple, that’s why their M1 and M2 processors have higher real-world memory speeds than Intel’s: they moved the RAM literally onto the CPU package itself. You also have things like HBM, where the RAM is a bare silicon die sitting right next to the processor die, with a silicon interposer underneath connecting the two that can sustain much higher signalling rates than any circuit board.

    But to say that this justifies never being able to upgrade the RAM is still a stupid argument. There are ways to have your cake and eat it too. Dell recently came up with its CAMM modules, whose physical design places the RAM chips much closer to the processor than SODIMM slots do, and there are theoretically even faster module concepts, like a socketed RAM module on the bottom of the board directly opposite the processor. Or you could have a compute module where the processor and RAM are one piece, but you can swap the CPU-and-RAM combo while keeping the rest of the device. Or, how about this: a compute module or CPU package with onboard high-speed memory or HBM, plus lower-speed slots that allow for expansion. Modern memory controllers are definitely smart enough to map memory so that the data that is accessed most often, or needs the highest bandwidth, lives in the faster tier. Individual compression sockets for modern RAM chips, like we have for the CPU (either on the chip package itself or extremely close to it, as you suggest), would be harder to pull off due to the much tighter tolerances of modern memory, but definitely not impossible.
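    To make the tiered-memory idea concrete, here is a minimal, purely illustrative sketch (the `TieredMemory` class and its policy are my own invention, not any real controller's algorithm) of the kind of placement policy a memory controller or OS could use: count accesses per page, then keep the hottest pages in the small fast tier (e.g. on-package HBM) and everything else in the slower expandable tier.

    ```python
    from collections import Counter

    class TieredMemory:
        """Hypothetical sketch of frequency-based tiered memory placement."""

        def __init__(self, fast_capacity):
            self.fast_capacity = fast_capacity  # pages the fast tier can hold
            self.hits = Counter()               # per-page access counts
            self.fast_tier = set()              # pages currently in fast memory

        def access(self, page):
            self.hits[page] += 1

        def rebalance(self):
            # Promote the most-accessed pages to the fast tier; everything
            # else stays in (or is demoted to) the slow tier. Real hardware
            # would do this incrementally, but the policy idea is the same.
            hottest = [p for p, _ in self.hits.most_common(self.fast_capacity)]
            self.fast_tier = set(hottest)

        def tier_of(self, page):
            return "fast" if page in self.fast_tier else "slow"

    mem = TieredMemory(fast_capacity=2)
    for page in [1, 1, 1, 2, 2, 3]:   # page 3 is accessed least
        mem.access(page)
    mem.rebalance()
    print(mem.tier_of(1), mem.tier_of(3))  # fast slow
    ```

    The point is only that the mapping decision is a simple policy layered on top of the hardware, so mixing fast onboard memory with slower socketed memory is entirely workable.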