From the article:

This chatbot experiment reveals that, contrary to popular belief, many conspiracy thinkers aren’t ‘too far gone’ to reconsider their convictions and change their minds.

  • JaggedRobotPubes@lemmy.world · 1 day ago

    At first glance the major takeaway here might be that AI can do gish-gallop but with the truth instead of lies.

    And it doesn’t get exhausted with somebody’s bad faith bullshit.

  • LucidBoi@lemmy.dbzer0.com · 2 days ago

    Another way of looking at it: “AI successfully used to manipulate people’s opinions on certain topics.” If it can persuade them to stop believing conspiracy theories, AI can also be used to make people believe conspiracy theories.

    • davidgro@lemmy.world · 2 days ago

      Anything can be used to make people believe them. That’s not new or a challenge.

      I’m genuinely surprised that removing such beliefs is feasible at all though.

      • SpaceNoodle@lemmy.world · 2 days ago

        If they’re gullible enough to be suckered into it, they can similarly be suckered out of it - but clearly the effect would not be permanent.

        • Zexks@lemmy.world · 2 days ago

          That doesn’t square with the “if you didn’t reason your way into a belief, you can’t reason your way out of it” line. Considering religious fervor, I’m more inclined to believe that line than yours.

          • Azzu@lemm.ee · 1 day ago

            No one said the AI used “reason” to talk people out of a conspiracy theory. In fact, I’d assume that’s incredibly unlikely, since AI in general doesn’t reason.

  • some_guy@lemmy.sdf.org · 2 days ago

    The researchers think a deep understanding of a given theory is vital to tackling errant beliefs. “Canned” debunking attempts, they argue, are too broad to address “the specific evidence accepted by the believer,” which means they often fail. Because large language models like GPT-4 Turbo can quickly reference web-based material related to a particular belief or piece of “evidence,” they mimic an expert in that specific belief; in short, they become a more effective conversation partner and debunker than can be found at your Thanksgiving dinner table or heated Discord chat with friends.

    This is great news. The emotional labor it takes to talk these people down is mentally and psychologically draining. Offloading it to software is a genuinely valuable use of the technology; a rough sketch of what that could look like is below.
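    For anyone curious, here’s a minimal sketch of what “offloading it to software” might look like: a simple chat loop that asks for the believer’s specific evidence and rebuts exactly that. It only assumes the standard OpenAI chat completions API and the publicly available gpt-4-turbo model; the prompt wording is made up here, not taken from the study.

    ```python
    # Hypothetical sketch of a tailored-debunking chat loop.
    # Assumptions: the standard OpenAI Python SDK (openai >= 1.0) and the
    # gpt-4-turbo model; the prompt text is illustrative, not the study's.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "The user believes a conspiracy theory. Ask what specific evidence "
        "convinced them, then address that exact evidence with verifiable "
        "facts and cite sources. Stay polite and never become dismissive."
    )

    def debunk_chat() -> None:
        """Run a terminal conversation until the user types 'quit'."""
        messages = [{"role": "system", "content": SYSTEM_PROMPT}]
        while True:
            user_text = input("you> ")
            if user_text.strip().lower() == "quit":
                break
            messages.append({"role": "user", "content": user_text})
            reply = client.chat.completions.create(
                model="gpt-4-turbo",
                messages=messages,  # full history so rebuttals stay on topic
            )
            answer = reply.choices[0].message.content
            messages.append({"role": "assistant", "content": answer})
            print("bot>", answer)

    if __name__ == "__main__":
        debunk_chat()
    ```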

  • kn0wmad1c@programming.dev · 2 days ago

    More like LLMs are just another type of propaganda. The only thing that can effectively retool conspiracy thinkers is a better education with a focus on developing critical thinking skills.

  • Sanctus@lemmy.world · 2 days ago

    All of this can be mitigated much more by ensuring each citizen has a decent education by modern standards. Turns out most of our problems can be fixed by helping each other.

  • Asafum@feddit.nl · 2 days ago

    “Great! Billy doesn’t believe 9/11 was an inside job, but now the AI made him believe Bush was actually president in 1942 and that Obama was never president.”

    In all seriousness, I think an “unbiased” AI might be one of the few ways to reach people about this stuff, because any Joe Schmoe who tries to confront a conspiracy is just dismissed as “believing what they want you to believe!”

    • livestreamedcollapse · 2 days ago

      Given the inherent biases present in any LLM’s training data, the hallucination issue you’ve brought up, and the cost of running an LLM at scale being prohibitive to anyone besides private–state partnerships, do you think this approach will allay conspiracists’ valid concerns about the centralization of information access, a la the decline in quality of Google search results over the past decade and a half?

      • Asafum@feddit.nl · 2 days ago

        I think those people might not be reached, but I was once a “conspiracy nut,” had a circle of friends who were as well, and I know that for a lot of those kinds of people YouTube is the majority of the “research” they do. For them, I think this could work as long as it isn’t hallucinating and can point to proper sources.