AI researchers say they’ve found ‘virtually unlimited’ ways to bypass Bard and ChatGPT’s safety rules::The researchers found they could use jailbreaks they’d developed for open-source systems to target mainstream and closed AI systems.

  • R00bot@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    14
    arrow-down
    1
    ·
    1 year ago

    No, it means the AI is unable to actually think. It can’t recognise when it’s saying things it shouldn’t, because it can’t reason like we can. The AI developers have to put a bunch of gaurd rails on it to hopefully catch people breaking the system but they’ll never catch them all with such a manual system.

    • froh42@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      1 year ago

      I’m still not convinced we really are fundamentally different from such engines - still more complex maybe, so we’re harboring consciousness or an illusion of it - but in the end not so much different.

      Specifically the creativity discussion strikes me as mad, as I think also human creativity is just the reproduction of things our minds have taken in before, processed by the neuronal meat grinder.

      • Ryantific_theory@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        We aren’t, we just have a massively complex biological computing network that has a number of dedicated processing nodes refined by evolution to create a “smart” system. Part of why it’s so hard to make true AI is because the way brains process data is far messier than how computers function, and while we can simulate simple brains (nematodes and the like), it’s incredibly inefficient compared to how neurons actually handle processing.

        Essentially, we’re at the cave painting stage of creating intelligence, where you can kinda see what’s going on but they really aren’t that close to reality. To hit the point where an AI is self-aware is going to be 1) an ethical disaster, and 2) either an advancement in neuromorphic chips (adapting neural architecture to computer architecture) or abstracting neural computation via machine learning (ChatGPT - not actually copying how our minds work, but creating something that appears to function like our minds).

        There’s a whole lot of myths tied up around human consciousness, but ultimately every thought in our heads is the process of tens of billions of cells all doing their job. That said, I’m hoping AI is based off of human neural architecture, which produces sociopaths and monsters sure, but machine learning creating something that appears to think like a human but actually operates on arcane and eldritch logic before presenting a flawless replica of human thought unsettles me.