• simple@lemm.ee · 9 months ago

    Claims don’t mean anything if there aren’t any benchmarks to back them up.

    • Avid Amoeba@lemmy.ca · 9 months ago

      It’s called Efficient Computer. That increases the veracity of the efficiency claims by at least three thousand.

    • auth · 9 months ago

      So you think they made that claim without doing any testing?

  • mindlight@lemm.ee · 9 months ago

    So Intel, Apple, AMD, Nvidia, and every other company that develops ARM-based processors have all just missed this technology?

    We’re talking about trillions of dollars in R&D investments alone, and this technology just flew under the radar?

    If it sounds too good to be true, it is probably too good to be true.

    • Faceman🇦🇺@discuss.tchncs.de · 9 months ago

      Usually it means “yes, this works in theory, but only for very specific operations at limited scales that aren’t all that important, so it’s not worth pursuing seriously.”

    • caseyweederman@lemmy.ca · 9 months ago

      I mean, big companies tend to “innovate” by buying market-disrupting startups and squashing the life out of them so they don’t have to compete.

    • Nomecks@lemmy.ca · 9 months ago

      It probably runs a completely custom instruction set, which makes it incompatible with current software ecosystems. Established manufacturers design their chips to work with popular instruction sets.

  • Faceman🇦🇺@discuss.tchncs.de · 9 months ago

    I mean, we know the absolute limits of computational efficiency thanks to the Landauer limit and the Margolus–Levitin theorem, and from those we know that we are so far from the limits that the gap is practically unfathomable.
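    As a rough back-of-envelope sketch of that gap (the room temperature and the ~20 pJ per operation CPU figure below are illustrative assumptions on my part, not numbers from the article):

        # Landauer limit vs. a ballpark modern-CPU energy per operation
        import math

        k_B = 1.380649e-23                 # Boltzmann constant, J/K
        T = 300.0                          # assumed room temperature, K
        landauer = k_B * T * math.log(2)   # minimum energy to erase one bit, ~2.9e-21 J

        cpu_per_op = 20e-12                # assumed ~20 pJ per operation, a rough figure

        print(f"Landauer limit: {landauer:.1e} J/bit")
        print(f"CPU energy/op:  {cpu_per_op:.1e} J")
        print(f"Gap: roughly {cpu_per_op / landauer:.0e}x")  # ~7e9, about ten orders of magnitude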

    If they can show evidence that they can perform useful calculations 100x more efficiently than whatever they chose to compare against (definitely a cherry-picked comparison), then I’ll give them my attention. Others have made similar claims in the past that turned out to hold only for extremely specific algorithms, such as quantum calculations that are of course slower and less efficient on any traditional computer.

    • ☆ Yσɠƚԋσʂ ☆ (OP) · 9 months ago

      I’d like to see these chips benchmarked in the wild as well before getting too excited, but the claims aren’t that implausible. Incidentally, this approach is why M series chips are so much faster than x86 ones. Apple uses a SoC architecture, which eliminates the need for the bus, and they process independent instructions in parallel on multiple cores. And they’re just building that on the existing ARM architecture. So it’s not implausible that a chip and a compiler designed for this sort of parallelism from the ground up could see a huge performance boost.
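      To illustrate the general idea (a toy dependence-based scheduler of my own, not Efficient Computer’s actual compiler or anything Apple ships): a compiler can group instructions into steps where everything whose inputs are ready issues in parallel:

          # Toy scheduler: batch instructions whose inputs are ready into parallel steps.
          # Instructions are (dest, op, sources); names and ops are made up.
          # Assumes the program has no dependency cycles.
          program = [
              ("a", "load", []),
              ("b", "load", []),
              ("c", "add", ["a", "b"]),   # needs a and b
              ("d", "mul", ["a", "a"]),   # needs only a
              ("e", "add", ["c", "d"]),   # needs c and d
          ]

          ready, steps, remaining = set(), [], list(program)
          while remaining:
              step = [ins for ins in remaining if all(s in ready for s in ins[2])]
              steps.append([ins[0] for ins in step])
              ready |= {ins[0] for ins in step}
              remaining = [ins for ins in remaining if ins not in step]

          print(steps)  # [['a', 'b'], ['c', 'd'], ['e']] -- two pairs can run in parallel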

      • krolden · 9 months ago

        That’s not why Apple silicon is faster. Every modern mobile device uses a SoC these days.

          • krolden · 9 months ago

            Sorry, I thought you meant it’s more efficient just because it’s a SoC.

  • firefly@neon.nightbulb.net · 9 months ago

    They’ve been promising quantum computers for three decades with zilch results. I’ve lost count of how many startups and even major market players have claimed to have working quantum computers, which of course to this day are all just smoke and mirrors.

    They’ve been promising artificial intelligence for three decades with zilch results. Then they redefined what AI means to get venture capital pointing the money hose at it. Now people think a glorified autocomplete and grammar engine is ‘artificial intelligence.’

    I’ll believe it when I see it.

    • SuiXi3D@fedia.io · 9 months ago

      Efficient, not fast. Just means it’ll sip power as opposed to guzzling it.

    • ☆ Yσɠƚԋσʂ ☆ (OP) · 9 months ago

      The article says that this architecture uses significantly less power, which would mean producing less heat as well, since essentially all the power a chip draws is ultimately dissipated as heat.