I look forward to watching a Gamers Nexus review of this. I hope it’s as good as they say. 😀
Lead us to salvation tech jesus!
And he is, one review at a time.
Finally, now I can afford the 5800x3D.
I’m an antifan of Apple but the M4 Max is supposed to be faster than any x86 desktop CPU, and use a lot less power. That’s per geekbench 6. I’d be interested in seeing other measurements.
Geekbench is about as useful a metric as an umbrella on a fish. Also, the M4 Max will not consume less energy than the competition. That is a misconception arising from the lower SKUs in mobile devices. The laws of physics apply to everyone: at the same reticle size, the energy consumption in nT workloads is equivalent. The great advantage of Apple is that they are usually a node ahead, and eschewing legacy compatibility saves space, and thus energy, in the design, which can be leveraged to reduce power consumption at idle or in 1T. Case in point: Intel’s latest mobile CPUs.
Exactly, the apple chips excel at low power tasks and will consume basically nothing doing them. It’s also good for small bursty tasks, but for long lived intensive tasks it behaves basically the same as an equivalent x86 chip. People don’t seem to know that these chips can easily consume 80-90W of power when going full tilt.
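The “small bursty tasks” point can be sketched with a quick race-to-idle calculation. The numbers below are made up purely for illustration, not measured figures: a chip that finishes a burst quickly and drops to a very low idle can use less total energy over a window than a slower chip with higher idle draw.

```python
def window_energy(active_w, active_s, idle_w, window_s):
    """Total joules over a fixed window: one burst at active_w, then idle."""
    return active_w * active_s + idle_w * (window_s - active_s)

# Hypothetical chips over a 10-second window containing one short burst:
fast_low_idle = window_energy(active_w=20, active_s=0.5, idle_w=0.5, window_s=10)
slow_high_idle = window_energy(active_w=12, active_s=1.0, idle_w=2.0, window_s=10)

print(fast_low_idle)   # 14.75 J
print(slow_high_idle)  # 30.0 J
```

For sustained all-core loads there is no idle phase to amortize, which is why the advantage shrinks there.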
The new Intel Arrow Lake is supposed to max out at 150W, but it doesn’t. And that’s still almost 40% better than previous gen Intel!
So hovering around 80-90W max is pretty modest by today’s standards. That’s impressive, or should I say scary? 150W is a lot of heat to dissipate… I hope those aren’t laptop chips…
The 14900k is an absolute oven
No, but the M4 Max is claimed to be as fast, and Intel improved their chip, so it’s down from the previous gen’s 250W! And the M4 Max is faster.
Oh of course, the apple chips are faster, and this is likely a combination of more efficiency thanks to the newer process node and apple being able to optimize the chips and power draw much better because they make everything. Apple can also afford to use larger chips because they make a profit on the entire computer, not just the processor itself.
We’re condemned to suffer uninformed masses on this. Zen 5 mobile is on N4P at 143 transistors/µm², the M4 Max is on N3E at 213 transistors/µm². That’s a gigantic advantage in power savings and logic per mm² of die. Granted, I don’t think the chiplet design will ever reach ARM levels of power gating, but that’s a price I’m willing to pay to keep legacy compatibility and expandable RAM and storage. That IO die will always be problematic unless they integrate it in the SOC, but I’d prefer if they don’t. (Integration also has power saving advantages, just look at Intel’s latest mobile foray)
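Quick arithmetic on the two density figures quoted above (taking them at face value as transistors per µm² on each node):

```python
# Quoted densities: Zen 5 mobile on N4P vs M4 Max on N3E (transistors per µm²)
zen5_n4p = 143
m4_n3e = 213

advantage = m4_n3e / zen5_n4p
print(f"{advantage:.2f}x")  # ~1.49x the logic per mm² of die
```

So roughly half again as much logic per mm² before any design decisions enter the picture.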
Not to mention, Apple is able to afford the larger die size per chip since they do vertical integration and don’t have to worry about the cost of each chip in the way that Intel and AMD have to when they sell to device manufacturers.
The laws of physics apply to everyone
That is obviously true, but it’s a ridiculous argument; there are plenty of examples of systems performing better and using less power than the competition.
For years Intel chips used twice the power for similar performance compared to AMD Ryzen, and in the Bulldozer days it was the same except the other way around. Arm has designed chips for efficiency for a decade before the first smartphones came out, and they’ve kept their eye on the ball the entire time since.
It’s no wonder Arm is way more energy efficient than x86, and Apple made by far the best Arm CPU when M1 arrived.

The great advantage of Apple is that they are usually a node ahead
Yes, that is an advantage, but so it is for the new Intel Arrow Lake compared to current Ryzen, yet Arrow Lake uses more power for similar performance, despite Arrow Lake being designed for efficiency.
It’s notable that Intel was unable to match Arm on power efficiency for an entire decade, even when Intel had the better production node. So it’s not just a matter of physics, it is also very much a matter of design. And Intel has never been able to match Arm on that. Arm still has the superior design for energy efficiency over X86, and AMD has the superior design over Intel.
Intel has had a node disadvantage regarding Zen since the 8700K… From then on the entire point is moot.
From then on the entire point is moot.
No it’s not, because the point is that design matters. When Ryzen came out originally, it was far more energy efficient than the Intel Skylake. And Intel had the node advantage.
https://www.techpowerup.com/review/intel-core-i7-8700k/16.html
https://www.techpowerup.com/cpu-specs/core-i7-6700k.c1825
Ryzen was not more efficient than Skylake. In fact, the 1500X was actually consuming more energy in nT workloads than Skylake while performing worse, which is consistent with what I wrote. What Ryzen was REALLY efficient at was being almost as fast as Skylake for a fraction of the price.
https://www.notebookcheck.net/Apple-M3-Max-16-Core-Processor-Benchmarks-and-Specs.781712.0.html
Will you look at that: in nT workloads the M3 Max is actually less efficient than competitors like the Ryzen 7k HS. The first N3 products had less than ideal yields, so Apple went with a less dense node, thus losing the tech advantage for one generation. That can be seen in their laughable nT performance/watt. Design does matter, however, and in 1T workloads Apple’s very wide design excels by performing very well while consuming less energy, which is what I’ve been saying since this thread started.
Power consumption is not efficiency, PPW is.
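The PPW point in a nutshell, with hypothetical scores and package power (not real benchmark data): the chip that draws more watts can still be the less efficient one, so raw consumption alone tells you nothing.

```python
def ppw(score, watts):
    """Performance per watt: benchmark score divided by package power."""
    return score / watts

# Made-up numbers purely for illustration:
chip_a = ppw(score=1800, watts=90)    # 20.0 points/W
chip_b = ppw(score=2400, watts=150)   # 16.0 points/W

# chip_b is faster in absolute terms, but chip_a is the more efficient design
print(chip_a > chip_b)  # True
```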
Tell me you didn’t open the links without telling me you didn’t open the links. Have a nice day friend.
Not to mention ARM chips, which by and large were/are more efficient on the same node than x86 because of their design: ARM chip designers have been doing that efficiency thing since forever, owing to the mobile platform, while desktop designers only got into the game quite late. There’s also some wibbles like ARM instruction decoding being inherently simpler, but big picture that’s negligible.
Intel just really, really has a talent for not seeing the writing on the wall while AMD made a habit out of it out of sheer necessity to even survive. Bulldozer nearly killed them (and the idea itself wasn’t even bad, it just didn’t work out) while Intel is tanking hit after hit after hit.
I’d consider educating yourself more on this topic.
You made me chuckle.
Thank you for that.
Ah the reddit nostalgia ❤
what game can’t be run by a 5800X3D? if anything I feel like graphics cards are the biggest bottleneck right now
Simulators and games with mods can push the cpu. But yeah. Mostly gpu limited.
The gpu has been the gaming bottleneck for decades.
Yup. I have no trouble running modern games on my Ryzen 5600, which doesn’t even have the massive cache of the 3D chips. I’m not spending >$1k on a GPU, so my CPU is likely more than sufficient for quite a while.
Almost any Paradox game, except for maybe Victoria 3.
1900s to end date would like a word with you.
Escape from Tarkov. If you want 120+ fps on streets you pretty much need a 7800x3d.
Dragons Dogma 2 is notoriously CPU hungry.
5800X3D is my CPU for the next 3-5 years probs. Maybe even longer, it’s so damn good.
While the 9000 series looks decent, I honestly think Intel has a really interesting platform to build off of with the Core Ultra chips. It feels like Intel course-correcting from poor decisions made for the 13th and 14th gen chips. Wendell from Level1Techs made a really good video about the good things Intel put into the chips while also highlighting some of the bad things, things like a built-in NPU and how they’re going to use it to pull in profiles for applications and games with ML, or the fact that performance variance occurs between chipset makers more often with the Core Ultra. It’s basically a step forward in tech but a step backward in price/performance.
Work at a tech store; the technicians that build the PCs for customers recently tried building with the new Core Ultra 7 265K. Two processors were dead or unstable right out of the box. Tried with known good RAM, two different CPUs on two different motherboards. It seems that Intel hasn’t really fixed their stability issues, which should be their first concern.
Well I didn’t say they were perfect.
As long as they stop building the RAM in and losing $16,000,000,000 in a fiscal year.
You guys are actually buying these processors? I’m still running a 4770 and a 1060.
I’m on a 4770k and GTX 980 as well but I’m really feeling the pain because all the newer games I want to play are CPU bottlenecked.
Helldivers runs like shit lol
I think I might be the only person who bought a 9950X on launch and was actually very happy with it. Not only does it perform excellently, but unlike its predecessor, I can actually use it with air cooling; it’s a very efficient and powerful CPU.
Is 20% faster than intel a step up, generation on generation?
It’ll be a step up from the 7800x3d, but how much is a question. The 9000 series in general has been a disappointment in terms of the gains that were expected, but it does show some kind of gain. There’s reason to think those issues are fixable. Linux performance does show a decent uplift, for one, which has not been the case with Intel’s Arrow Lake chips.
the main benefit of the performance increase from Zen 4 to Zen 5 is that reordering the cache and chip layers allowed them to clock the cores higher; one of the biggest bottlenecks for older X3D designs was clocks, since the stacked chip internally insulated a lot of the heat, so their clocks were stepped back from their non-X3D counterparts.
the 9800X3D base and turbo clocks are a generous step up from previous gen, and likely the biggest contributing factor to the performance increase when reviews drop.
Now that is a big boost