I feel like I’ve been hearing about AMD’s “next” CPU having dozens of cores on a bunch of chiplets for the last few generations, and then the main gaming consumer parts end up with 6 or 8 or something.
The 7950X has 16 cores. I think what the article is suggesting is that the very top of the line in the next gen could potentially double that, up to 32. I would imagine that if that happened, the more midline parts would still be in the 12-16 core range. I guess we’ll see when they come out.
Yeah, here’s hoping. I’m skipping the 7000 series parts and sticking with my 5800X3D. I really want a higher-core-count part that still has all the single-CCD X3D advantages, since I game and do CPU-heavy work on the same rig.
Same here. 5800x3d is great, and I’d rather not buy a new motherboard and things just yet.
Yes, no desire for all the things that will have to come with this upgrade. I want a huge boost, so sitting out this first wave.
The x950 parts have had 16 cores going back to the Threadripper 1950X.
Most games can’t take advantage of more than a couple of cores anyway, and high-core-count CPUs often sacrifice a little clock speed.
The optimal gaming CPU is something like 4-8 cores with a high clock speed. The 32+ core machines are for compute-heavy tasks like CAD or running simulations, and sometimes compiling.
I thought they were already up there on Threadrippers, or am I misunderstanding and that’s either not counting as a CPU or not a single die?
Threadripper 7000 went up to 64 cores with 8 dies (excluding the IO die), so 8 cores per die.
I’d kill for a single-CCD 16-core X3D part. The 7950X3D is tempting with its 3D V-Cache CCD and its high-clock-speed CCD, but since not every game/program knows how to use it properly, you end up with hit-or-miss performance.
Honestly with the 7950x3D being so powerful, you rarely notice it if a game isn’t fully utilizing it. I have one and I’m very pleased with it!
My biggest concern from what I’ve seen is that the workaround AMD uses to steer programs onto one set of cores versus the other wasn’t exactly great last I looked, and it can cause issues when a game tries to move off one CCD onto the other. That said, I haven’t looked into this since the CPU first came out, so hopefully things are better now.
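For context on what that workaround is fighting: by default the OS scheduler is free to migrate threads between CCDs, and crossing the cache boundary is what tends to cause stutter. A blunt manual fix on Linux is to pin the game to one CCD yourself. A minimal sketch, assuming a 7950X3D-style layout where the V-Cache CCD is CCD0 (physical cores 0-7 with SMT siblings 16-23; the actual numbering, and which CCD carries the extra cache, vary by system, so these CPU ids are assumptions, not a lookup):

```python
import os

def ccd0_cpus(cores_per_ccd=8, smt=True):
    """Logical CPU ids assumed to belong to CCD0.

    With SMT on, Linux typically numbers the second hardware thread of
    core N as N + (total physical cores), hence the 16-23 range here.
    """
    cpus = set(range(cores_per_ccd))
    if smt:
        cpus |= set(range(2 * cores_per_ccd, 3 * cores_per_ccd))
    return cpus

def pin_to_ccd0(pid=0):
    """Restrict a process (0 = current process) to CCD0's CPUs.

    Intersecting with the currently allowed set avoids errors on
    machines with fewer cores than the assumed layout.
    """
    target = ccd0_cpus() & os.sched_getaffinity(pid)
    if target:
        os.sched_setaffinity(pid, target)
    return target
```

Calling `pin_to_ccd0()` with the game’s PID keeps it on the cache CCD for its lifetime; `taskset -c 0-7,16-23 <game>` does the same thing from the shell.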
How observant are you to micro stutters in a game? That was the biggest reason I got the 5800X3D in the first place, but now that I have a better GPU I can tell that thing struggles. And from what I remember, most of the issues you’d have moving from CCD to CCD were micro stutters rather than normal frame rate dips or just lower average frame rates.
I only really notice stutters in heavily modded Minecraft, where it’s clearly linked to the garbage collector. In more demanding games I don’t notice any stuttering really. Or at least, none that I can’t easily link to something triggering in the game that is likely causing it.
Sure, perhaps I have slightly lower average FPS compared to a 7800x3D, but I also use this PC for productivity reasons so there the extra oompf really does help. Still, 97% of a framerate that’s already way higher than what my 144Hz monitors support is still well above what my monitors support. I don’t think the performance difference is really noticeable, other than in certain benchmarks or if you try really hard to see them.
It’s considerably faster than a 5800x3D though.
I’m also wondering why there is even a difference in FPS between higher-class CPUs - shouldn’t the GPU be the bottleneck, especially at 4K high settings?
1% and 0.1% lows will almost always be CPU bound as the game loads more in - well, assuming VRAM isn’t limiting you. Games are pretty CPU intensive these days since the PS5 and Xbox no longer have potato CPUs. At 120+ fps I regularly see >50% CPU usage in most games, and that’s with nothing running in the background. In the real world you have a ton of background tasks, YouTube videos, Discord, etc. eating your CPU.
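For anyone wondering where per-core usage figures like that come from on Linux: tools sample the jiffy counters in /proc/stat twice and report the fraction of elapsed jiffies that weren’t idle. A minimal sketch of that calculation (the field order is the standard user/nice/system/idle/iowait/irq/softirq/steal layout; the sample counters below are made up for illustration):

```python
def busy_percent(prev, curr):
    """CPU busy % between two /proc/stat snapshots for one core.

    prev/curr are sequences of jiffy counters in the order:
    user nice system idle iowait irq softirq steal
    """
    prev_idle = prev[3] + prev[4]   # idle + iowait count as idle time
    curr_idle = curr[3] + curr[4]
    dt = sum(curr[:8]) - sum(prev[:8])   # total elapsed jiffies
    if dt == 0:
        return 0.0
    return 100.0 * (dt - (curr_idle - prev_idle)) / dt

# Made-up snapshots: 400 of the 1000 elapsed jiffies were idle/iowait.
prev = (1000, 0, 500, 3000, 100, 0, 50, 0)
curr = (1400, 0, 700, 3350, 150, 0, 50, 0)
print(busy_percent(prev, curr))  # -> 60.0
```

Real monitors just do this per `cpuN` line in /proc/stat on an interval, which is why short spikes between samples can hide in the average.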
Also the 4090 is an absolute beast. My 5800X3D absolutely holds my 4090 back pretty often honestly.
Doesn’t c stand for e-cores? Packing up to 32 e-cores must be easier than with normal cores.
Also, I kinda wish they went the other direction a little: cut core counts and put more cache at all levels on some cores instead, for better single-thread performance - a “very big” core, so to say. Intel’s cache sizes have been larger than AMD’s since Alder Lake, and they’ve stayed competitive despite their process node disadvantage.
Not quite an e-core but the goal is the same: Make more efficient use of the available die space by packing in more, slower cores.
The difference is that Intel’s e-cores achieve this by having a different architecture and supporting fewer features than their p-cores. E-cores, for example, do not support hyper-threading. An e-core is about 1/4 the size of a p-core.
AMD’s Zen 4c cores support the same features and have the same IPC as full Zen 4 cores but operate at a lower clock speed. This reduces the thermal output of the core, allowing the circuitry to be packed in much more densely.
Undoubtedly Intel’s e-cores take advantage of this effect as well, and they are in fact quite a bit smaller than Zen 4c: a Zen 4c core is about 1/2 the size of a full Zen 4 core. The advantage of AMD’s approach is that having the cores be identical simplifies the software side of things.
Is there really a need for them?
The c variants of Zen are for cloud and are more compact variants of the full Zen 5 cores; cloud providers generally want as many cores in as compact a format as possible.
We might see Zen 5c show up in SoCs (like the chip in a hypothetical Steam Deck 2) as well, because they want those chips to be as small as possible so they can price the devices competitively. I don’t think we will see those go up to 32 cores, however, as there is indeed no need for that many cores on consumer chips.
Cool. When’s the ARM chip coming out?
Arm is dead. The future is RISC-V
It should. An open technology standard should gain traction over closed proprietary ones.
The far future, maybe
ARM isn’t present either.
Aren’t the new Apple chips ARM? If they are, then ARM is absolutely in the present, and proven viable for consumers by Apple.
It proves Apple is viable for consumers not ARM.
The Windows and Linux x86 translation layers for ARM are severely lacking compared to macOS’s Rosetta.
ARM is further along in development than RISC-V, but neither is near x86 for desktop compatibility yet.
If apple is viable for consumers, and apple uses ARM, then ARM is viable for consumers.
Windows and Linux being unfortunately behind is not an argument against ARM being viable; it shows it’s not ready. However, Apple was in the same situation before they moved to ARM, so theoretically Microsoft could attempt a similar investment and push towards ARM. Apple’s control over both hardware and software certainly helped them, and it went well for them.
That said, maybe it’s a disagreement on terminology. When I say that ARM is viable, I mean that it’s ready for creating hardware and software that does what people need it to do. Apple clearly succeeded; now it’s a question of if/when manufacturers start making open hardware and software compatibility catches up… Or if maybe another option will succeed instead.
RISC-V is also viable for creating hardware and software that does what people need.
The software just doesn’t exist yet.
I wonder if/how that will affect temps?
Did you really copyright your comment 😭
No, I licensed my content with a limited license that does not allow for commercial usage.
Probably negatively, but also likely not enough to matter. CPUs these days run pretty cool.
We’re a long way from the days of an idle Pentium 4 at 75C.
We’re in the days of Intel’s top chips degrading themselves in a matter of weeks because their thermals are simply unmanageable, at stock settings, under anything less than a beefy 360mm AIO or custom loop cooling.
CPUs these days run pretty cool.
Thought AMD CPUs ran at around 90C?
My Ryzen 3900X idles at around 50C, although that’s a few generations ago now
My 3900X idles at 35 and hits 65 when it’s 100% all cores. With a decent cooler modern AMD runs pretty chill
There seems to be a big difference between older CPUs and the newer ones, where the newer ones are running a lot hotter now under load.
I personally use a 5800X and it gets to 90c often.