- cross-posted to:
- hackernews@lemmy.smeargle.fans
- hackernews@derp.foo
“…an entirely new way to generate light. It’s a remarkably complex process that involves hitting molten tin droplets in midflight with a powerful CO2 laser. The laser vaporizes the tin into a plasma, emitting a spectrum of photonic energy. From this spectrum, the EUV optics harvest the required 13.5-nm wavelength and direct it through a series of mirrors before it is reflected off a patterned mask to project that pattern onto the wafer.”
That is insane.
I’m curious how many droplets it takes to do a single mask. That doesn’t sound like it would be eat-away-the-surroundings bright.
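For a sense of how energetic that light is, the per-photon energy at 13.5 nm follows from E = hc/λ. A quick sketch using standard constants; only the wavelength comes from the article:

```python
# Back-of-envelope: energy of a single 13.5 nm EUV photon.
h = 6.626e-34         # Planck constant, J*s
c = 2.998e8           # speed of light, m/s
wavelength = 13.5e-9  # EUV wavelength from the quote, m

energy_joules = h * c / wavelength
energy_ev = energy_joules / 1.602e-19  # convert J -> eV

print(f"{energy_ev:.1f} eV per photon")  # roughly 92 eV
```

That's far into the ionizing range (visible light is ~2-3 eV), which is part of why everything in the optical path has to be reflective rather than refractive.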
My favorite part about the microchip production line is that it all depends on one company (ASML) in the Netherlands and their R&D. They make double-digit quantities of EUV machines and that's it: they dictate the entirety of "easy" technological speed advances in computing.
And then they ship to a micropseudonation being threatened by the most powerful Eastern country just thousands of kilometers away. That’s where the chips are actually produced.
And this entire process is predicated on quantum physicists banging together light waves that literally turn chip design into a probabilistically modeled engineering problem.
What fun!
Shoutouts to Asianometry for having the best videos on all sorts of the chip design process. He covers a ton of other stuff but his interests just about align with mine so I’m a huge fan.
Also, the optics ASML needs to manufacture their machines are made only by the Swiss company Zeiss.
Precision manufacturing is not ubiquitous. China made its first independently manufactured ballpoint pen tip in 2017.
Zeiss is German, and they produce substantially more than just the optics: https://en.m.wikipedia.org/wiki/Carl_Zeiss_SMT
China is already working on the software side of chip design, and they have domestic capability to produce 14 nm chips (SMIC), though that relies on ASML machines. (The US cannot take those away, only stop them from buying more.) I'm sure they are considering building their own fabrication machines, and I fully believe China is capable of domesticating the whole fabrication process given some time. China was able to design and build its own domestic computer in the Cold War, a few years after the US, while taking less total time than the US did. China is the world's leading productive superpower, and, what I think is most important, they have a culture of teamwork rather than competition to drive innovation.
You can't keep on track what is long over. At most you can return to Moore scaling for a short while, but that doesn't tell you exactly how.
You have to interpret "on track" as "some growth" for this to make sense. At the end of the day finer features help, but the technology of integrated semiconductor chips itself is reaching its final form.
Fascinating. I'm truly excited to see how much more energy-efficient these chips will be. I was blown away by the leap forward in battery life the M1 was capable of at launch. If we can bring those efficiency gains to data centres, we can start to crunch numbers on serious problems like climate change.
M1 gets most of its performance-per-watt efficiency by running much farther down the voltage curve than Intel or AMD usually tune their silicon for, and by having a really wide core design to take advantage of the extra instruction-level parallelism that can be extracted from the ARM instruction set relative to x86. It's a great design, but the relatively minor gains from M1 to M2 suggest there isn't much more optimization available in the architecture. Meanwhile, the x86 manufacturers have been able to close a big chunk of the gap in their subsequent products by increasing their own IPC with things like extra cache and better branch prediction, while also ramping down power targets to put their competing thin-and-light laptop parts in better parts of the power curve, where they aren't hitting diminishing performance returns.
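The voltage-curve point can be sketched with the classic dynamic-power relation, P ≈ C·V²·f: switching power scales with the square of voltage but only linearly with clock speed. The operating points below are made-up round numbers for illustration, not actual Apple or Intel figures:

```python
# Dynamic switching power scales as P ~ C * V^2 * f.
def dynamic_power(c_eff, voltage, freq_hz):
    """Effective switched capacitance * V^2 * f, in watts."""
    return c_eff * voltage**2 * freq_hz

C_EFF = 1e-9  # arbitrary effective capacitance, farads (illustrative)

# "Desktop" tuning: high voltage to sustain high clocks.
p_hi = dynamic_power(C_EFF, 1.2, 5.0e9)
# "Efficiency" tuning: lower on the voltage curve, modestly lower clock.
p_lo = dynamic_power(C_EFF, 0.8, 3.5e9)

perf_per_watt_hi = 5.0e9 / p_hi
perf_per_watt_lo = 3.5e9 / p_lo

print(f"power ratio: {p_hi / p_lo:.2f}x")                           # ~3.2x
print(f"perf/W ratio: {perf_per_watt_lo / perf_per_watt_hi:.2f}x")  # ~2.25x
```

Giving up 30% of the clock buys back more than half the power, which is the whole game for thin-and-light laptop parts.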
The really dismal truth of the matter is that semiconductor fabrication is reaching a point of maturity in its development, and there aren't any more huge gains to be made in transistor density in silicon. ASML is pouring Herculean effort into reducing feature sizes at a much lower rate than in years past, and each step forward increases cost and complexity by eyewatering amounts. We're reaching the physical limits of silicon now, and if there's going to be another big, sustained leap forward in performance, efficiency, or density, it's probably going to have to come in the form of a new semiconductor material with more advantageous quantum behavior.
Is there anything looking even remotely promising to replace silicon? Manufacturing base aside, what's the most likely candidate so far?
Manufacturing is actually the name of the game with chip design. Even if a quantum computing design becomes feasible, the exotic nature of its construction will turn any discovery into an engineering nightmare.
As for the type of technology, here’s what a competitor looking for the first blue LED said about the Nobel Prize winners: “It’s like I say to people: they had been working on the steam engine for 100 years, but they never could make one that really worked, until James Watt showed up. It’s the guy who makes it really work who deserves the Nobel Prize. They certainly deserve it.”
Not really: you have to keep in mind the amount of expertise and resources that already went into silicon, as well as the geopolitics and sheer availability of silicon. The closest currently available competitor is probably gallium arsenide. That has a couple of disadvantages compared to silicon:
- It’s more expensive (both due to economies of scale and the fact that silicon is just much more abundant in general)
- GaAs crystals are less stable, leading to smaller boules.
- GaAs is a worse thermal conductor
- GaAs has no native “oxide” (compare to SiO₂) which can be directly used as an insulator
- GaAs hole mobility is worse (roughly 500 cm²/V·s for Si vs 400 for GaAs), which means P-channel FETs are naturally slower in GaAs, which makes CMOS structures impractical
- GaAs is a compound rather than a pure element, which means you get into trouble controlling the mix of the two elements
You usually see GaAs combined with germanium substrates for solar panels, but rarely independently of that (GaAs is simply bad for logic circuits).
In short: It’s not really useful for logic gates.
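To put rough numbers on the mobility point (textbook-order room-temperature values in cm²/V·s, not measured data): GaAs electrons are very fast, but CMOS needs both polarities, and the slow p-channel side sets the sizing.

```python
# Rough room-temperature carrier mobilities, cm^2/(V*s) (textbook-order values).
mobility = {
    "Si":   {"electron": 1400, "hole": 500},
    "GaAs": {"electron": 8500, "hole": 400},
}

# In complementary logic, the pFET must be made wider than the nFET to match
# drive strength; the required width ratio scales roughly as mu_n / mu_p.
for material, mu in mobility.items():
    ratio = mu["electron"] / mu["hole"]
    print(f"{material}: pFET needs ~{ratio:.0f}x the nFET width")
# Si comes out around 3x, GaAs around 21x -- hence no practical GaAs CMOS.
```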
Germanium itself is another potential candidate, especially since it can be alloyed with silicon which makes it interesting from an integration point-of-view.
SiGe is very interesting from a logic POV considering its high forward and low reverse gain, which suits low-current, high-frequency applications. You also naturally get heterojunctions, which allow you to tune the band gap (on the other hand you get the same problem as in GaAs: it's not a pure element, so you need to control the composition to tune the band gap).
One problem specifically for MOSFETs is the fact that you don't get stable silicon-germanium oxides, which means you can't use the established silicon-on-insulator techniques.
Cost is also a limiting factor: before even starting to grow crystals you have the pure material cost, which is roughly $10/kg for silicon and $800/kg for germanium.
That's why, despite the fact that the early semiconductors all relied on germanium, germanium-based systems never really became practical: it's harder to do mass production, and even if you can start mass production it will be very expensive (that's why, if you do see germanium-based tech, it's usually in low-production runs for high-cost specialised components). There's some research going on in commercialising these techniques, but that's still years away.
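A quick back-of-envelope on what those $/kg figures mean per wafer, assuming a standard 300 mm diameter, 775 µm thick wafer and handbook densities (raw material only, ignoring crystal growth, kerf loss, and yield):

```python
import math

# Raw material cost per wafer from bulk $/kg prices and material density.
def wafer_material_cost(density_kg_m3, price_per_kg,
                        diameter_m=0.3, thickness_m=775e-6):
    volume = math.pi * (diameter_m / 2) ** 2 * thickness_m  # cylinder, m^3
    return volume * density_kg_m3 * price_per_kg

cost_si = wafer_material_cost(2329, 10)   # silicon:   ~2329 kg/m^3, $10/kg
cost_ge = wafer_material_cost(5323, 800)  # germanium: ~5323 kg/m^3, $800/kg

print(f"Si wafer material: ${cost_si:.2f}")  # well under $2
print(f"Ge wafer material: ${cost_ge:.0f}")  # a couple hundred dollars
print(f"ratio: {cost_ge / cost_si:.0f}x")
```

Germanium's higher density makes the gap even worse than the raw 80x price difference suggests: the material bill per wafer comes out nearly 200x higher.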
Easier question: What behavior exactly would allow for better ICs? The story you read in popsci is about quantum behavior showing up at feature scale, which seems like it should be only somewhat affected by material choice.