• kibiz0r@midwest.social
      7 hours ago

      We have no pathway to AGI yet. The “sparks of AGI” hype about LLMs is like trying to get to the Moon by building a bigger ladder.

      Far better chance that someone in the Pentagon gets overconfident in the capabilities of unintelligent ML, hooks a glorified chatbot into NORAD, and triggers another Minuteman missile crisis — one that goes the wrong way this time because the launch order looks too confident to be a false positive.

      • JackGreenEarth@lemm.ee
        7 hours ago

        I never said I thought we would get to ASI through LLMs. But we still have a good chance of getting there soon.

    • Rhaedas@fedia.io
      11 hours ago

      My opinion is that the uncertainty comes down to whether AGI itself is possible. If it happens, it will not only lead to ASI (maybe even quickly), but that ASI will be misaligned no matter how prepared we are. Humans aren’t very aligned among themselves; how can we expect a totally alien intelligence to be?

      And btw, we are not prepared at all. AI safety is an inconvenience for AI companies, when it hasn’t been completely shelved in favor of chasing profit.