- cross-posted to:
- technology@lemmygrad.ml
- technology@hexbear.net
As usual, to understand what this means we need to see:
- performance benchmark (A100 level? H100? B100? GB200 setups?)
- energy consumption (A100-level performance at lower wattage than an H100, or the other way around?)
- networking scalability (how many cards can be interconnected for distributed compute? Are there NVLink equivalents?)
- software stack (e.g. can it run CUDA, and if not, what alternatives can be used?)
- yield (how many dies are usable, i.e. is it commercially viable or still R&D?)
- price (which, subsidies aside, would largely follow from yield)
- volume (how many cards can actually be bought, also dependent on yield)
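On the software-stack point, one rough way to see which vendor stack a given machine exposes is to check for the vendor management CLIs on the PATH. This is only a minimal sketch: it assumes NVIDIA's `nvidia-smi` and AMD's `rocm-smi` as the common cases, and a hypothetical Chinese card would ship its own tooling under some other name.

```python
import shutil

def detect_gpu_stacks():
    """Return which vendor GPU CLIs are on PATH (a rough proxy for the stack)."""
    tools = {
        "CUDA (NVIDIA)": "nvidia-smi",
        "ROCm (AMD)": "rocm-smi",
    }
    # shutil.which returns the resolved path, or None if the tool is absent
    return {stack: path
            for stack, tool in tools.items()
            if (path := shutil.which(tool)) is not None}

print(detect_gpu_stacks())
```

On a machine with no GPU tooling installed this simply prints an empty dict, which is the point: without a CUDA-compatible layer or a documented alternative, none of the other benchmarks matter much in practice.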
It will still be interesting to read the announcements, as per usual, and especially to see who will actually manufacture the chips at scale (SMIC? TSMC?).
Am I missing something here? Did Nvidia leave a void?
Oh yes, companies are falling over themselves trying to gobble up any AI chip in existence.
The US put export restrictions on Nvidia's sales to China, so it can only sell older chips there.