Been studying RISC-V for… I think a year now. Bought the booklet outlining the ISA’s modules, and have been working down from there.

I have seen various startups and actual products, as well as a bunch of simulators, but I haven’t really seen any projects trying to design a RISC-V CPU from the ground up.

Are there any groups doing this? I don’t think I’m at a point where I could meaningfully contribute; I’m mostly interested in educating myself.

    • fubarx · 8 months ago

      For the longest time, if you needed a CPU, your choices were basically IBM, Intel, DEC, or Motorola (ignoring small embedded systems). Then academic papers came out on the value of a ‘reduced’ instruction set. That led to Sun SPARC, IBM POWER, MIPS, PowerPC, and a few other processors, most of which eventually disappeared.

      A company called ARM came along and offered not just an alternate reduced instruction set, but also licensable core designs implementing it. This way, you could cobble together a chip with exactly the features you needed (memory, storage, networking, GPU, etc.). Having that baseline sped up development a lot, but you had to license the designs and pay ARM royalties.

      That hummed along quietly until Apple and NVidia decided to create their own ARM-based chips. All of a sudden, ARM became known as a beefy, power-efficient option for phones, desktops, laptops, and servers.

      In 2010, academics at UC Berkeley dusted off the old RISC ideas and came up with RISC-V. Companies sprang up following the same model as ARM: modules you could cobble together into a custom processor. Except the ISA itself is open and royalty-free, so there’s no ARM-style license fee to pay.
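
      Since you mentioned working through the ISA booklet: the “modular” idea is concrete right down at the encoding level, and instruction decode is usually the first thing you build when designing a core from the ground up. Here’s a minimal sketch in Python of pulling apart an RV32I R-type instruction into its fixed fields (the helper name and dict keys are my own; the bit layout is from the base spec):

      ```python
      def decode_rtype(insn: int) -> dict:
          """Split a 32-bit RISC-V R-type instruction into its fields.

          R-type layout (RV32I): funct7 | rs2 | rs1 | funct3 | rd | opcode
          """
          return {
              "opcode": insn & 0x7F,          # bits 6:0
              "rd":     (insn >> 7)  & 0x1F,  # bits 11:7
              "funct3": (insn >> 12) & 0x07,  # bits 14:12
              "rs1":    (insn >> 15) & 0x1F,  # bits 19:15
              "rs2":    (insn >> 20) & 0x1F,  # bits 24:20
              "funct7": (insn >> 25) & 0x7F,  # bits 31:25
          }

      # add x3, x1, x2 encodes to 0x002081B3
      # (OP opcode 0x33, funct3 = 0 and funct7 = 0 select ADD)
      fields = decode_rtype(0x002081B3)
      ```

      Every base-ISA implementation decodes these same formats; the extensions (M, A, F, C, …) add opcodes on top without disturbing them, which is what makes the cobble-together model work.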

      AI/ML processors are now the new thing. The big race is between Intel, ARM-based processors, and RISC-V backers to see who can come up with integrated, power-efficient AI processing features and roll them out to customers quickly. That world is divided between beefy processors used for training in data centers and small, efficient ones used for running inference at the edge (i.e., phones, cars, gateways, etc.).

      We are now in the early stages of this period.