• kbal@fedia.io
    3 months ago

    I find myself suspecting that chatbots getting really good at talking people into believing whatever their operators want people to believe is going to start a lot more conspiracy theories than it ends.

      • IninewCrow@lemmy.ca
        3 months ago

People only want to believe what they want because they hate being wrong … and they never believe, or want to believe, that their side, their group, their community, whatever it is, can ever be wrong.

        I’m not immune to it myself and I constantly have to remind myself that I can easily fall into that same mentality.

        Most of us are never taught to be self-critical or to properly question the world or the people around us.

    • kbal@fedia.io
      3 months ago

      … I hope so anyway, because the obvious alternative of the chatbots remaining under the control of an elite few while everyone falls into the habit of believing whatever they say seems substantially worse.

      I guess the optimistic view would be to hope that a crowd of very persuasive bots participating in all kinds of media, presenting opinions that are just as misguided as the average human but much more charismatic and convincing, will all argue for different conflicting things leading to a golden age full of people who’ve learned that it’s necessary to think critically about whatever they see on the screen.

      • CanadaPlus@lemmy.sdf.org
        3 months ago

The interaction between society and technology continues to be borderline impossible to predict. I hope factually less true beliefs are still harder to defend, at least.

  • Handles@leminal.space
    3 months ago

    According to that research mentioned in the article, the answer is yes. The big caveats are

    • that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.
    • you need a level of “AI” that isn’t going to start hallucinating and instead reinforce the subjects’ conspiracy beliefs. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.
    • Butterbee (She/Her)@beehaw.org
      3 months ago

      It’s not even fundamentally possible with the current LLMs. It’s like saying “Yes, it’s totally possible to do that! We just need to invent something that can do that first!”

      • Handles@leminal.space
        3 months ago

        I think we agree on the limited capability of (what is currently passed off as) “artificial intelligence”, yes.

    • CanadaPlus@lemmy.sdf.org
      3 months ago

      that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.

      You overestimate how hard it is to get a conspiracy theorist to click on something. I don’t know, it seems promising to me. I more worry that it can be used to sell things more nefarious than “climate change is real”.

      you need a level of “AI” that isn’t going to start hallucinating and instead reinforce the subjects’ conspiracy beliefs. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.

      They used a purpose-finetuned GPT-4 model for this study, and it didn’t go off script in that way once. I bet you could make it if you really tried, but if you’re doing adversarial prompting then you’re not the target for this thing anyway.

  • Kwakigra@beehaw.org
    3 months ago

    I have two main thoughts on this:

    1. LLMs are not at this time reliable sources of factual information. The user may be getting something that was skimmed from factual information, but the output can often be incorrect since the machine can’t “understand” the information it’s outputting.

    2. This could potentially be an excellent way to do real research for people who were not provided research skills by their education. Conspiracy theorists often start off as curious but undisciplined before they fall into the identity aspects of the theories. If a machine using human-like language is able to report factual information quickly, reliably, and without judgement to those who wouldn’t be able to find that info on their own, this could actually be a very useful tool.