• Handles@leminal.space · 3 months ago

    According to that research mentioned in the article, the answer is yes. The big caveats are

    • that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.
    • you need a level of “AI” that isn’t going to start hallucinating and instead reinforce the subjects’ conspiracy beliefs. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.
    • Butterbee (She/Her)@beehaw.org · 3 months ago

      It’s not even fundamentally possible with the current LLMs. It’s like saying “Yes, it’s totally possible to do that! We just need to invent something that can do that first!”

      • Handles@leminal.space · 3 months ago

        I think we agree on the limited capability of (what is currently passed off as) “artificial intelligence”, yes.

    • CanadaPlus@lemmy.sdf.org · 3 months ago (edited)

      that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.

      You overestimate how hard it is to get a conspiracy theorist to click on something. I don’t know, it seems promising to me. I worry more that it could be used to sell things more nefarious than “climate change is real”.

      you need a level of “AI” that isn’t going to start hallucinating and instead reinforce the subjects’ conspiracy beliefs. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.

      They used a purpose-finetuned GPT-4 model for this study, and it didn’t go off script in that way once. I bet you could make it do so if you really tried, but if you’re doing adversarial prompting then you’re not the target for this thing anyway.