• 351 Posts
  • 603 Comments
Joined 4 years ago
Cake day: June 28th, 2020

  • A free-running cellular automaton (CA) approach in hardware would work, but each cell would be a much souped-up SRAM cell, and the interactions would all be local and 2D. Considering Cerebras packs about 40 GB of SRAM onto a 300 mm wafer (WSI) and is roughly at the cooling limit, I’m afraid you do not have 5 orders of magnitude of headroom. Perhaps reversible spintronics can help with the power draw, but you still have to flatten a higher-dimensional network, i.e. not just local interactions, into a 2D array.
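    To make the "all interactions local and 2D" constraint concrete, here is a minimal software sketch (purely illustrative, not the hardware design being discussed): a free-running 2D cellular automaton where every cell updates only from its 3x3 neighborhood, using Conway's Game of Life rules on a wrap-around grid.

    ```python
    def step(grid):
        """One synchronous update of a toroidal 2D grid of 0/1 cells.

        Every cell reads only its 8 immediate neighbors -- the same
        locality constraint a 2D hardware CA would impose.
        """
        h, w = len(grid), len(grid[0])
        new = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                # Count the 8 local neighbors (edges wrap around).
                n = sum(grid[(y + dy) % h][(x + dx) % w]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
                # Conway's rules: birth on 3 neighbors, survival on 2 or 3.
                new[y][x] = 1 if n == 3 or (grid[y][x] == 1 and n == 2) else 0
        return new

    # A "blinker": three live cells in a row oscillate with period 2.
    grid = [[0] * 5 for _ in range(5)]
    for x in (1, 2, 3):
        grid[2][x] = 1
    ```

    A non-local (higher-dimensional) network has edges that cannot be expressed this way, which is why it must be "splatted" into the 2D array with long wires or routing, costing area and power.
    
    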





  • They write

    “Of course, AMD is trying to get into the AI training and inferencing game itself with the Instinct MI300 chip. And that, perhaps, is the main if modest cause for hope. If AMD can gain some traction in that huge market, it will not only be making lots of money, it will be in a position to do a similar thing to Nvidia and push some of that technology across into its gaming GPUs.”

    which strikes me as incorrect. AMD’s Instinct (MI) line is already pretty widespread in HPC. With margins lower in the consumer market, it makes sense for AMD to keep focusing on HPC.