The logical end of ‘the solution to bad speech is better speech’ has arrived: state-sponsored social media propaganda bots versus AI-driven bots arguing back

  • ATQ@lemm.ee
    1 year ago

    Shit. I could have told them to just block lemmygrad for like $100 😂🤣😂

  • 👁️👄👁️@lemm.ee
    1 year ago

    Just a reminder: LLMs are not designed to provide truth, but rather natural-sounding word generation.
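    A minimal toy sketch of that point (every count below is invented for illustration, not taken from any real model): a language model picks the statistically likely next word, and factual truth never enters the objective.

```python
# Hypothetical bigram counts standing in for a trained language model.
# The model's only signal is "which word tends to follow", not "which claim is true".
bigram_counts = {
    "the moon is": {"made": 5, "bright": 3, "far": 2},
    "made": {"of": 10},
    "of": {"cheese": 4, "rock": 6},
}

def next_word(context, counts):
    """Return the most likely continuation for a context; no fact-checking involved."""
    options = counts[context]
    return max(options, key=options.get)

# The choice is driven purely by the higher count, whatever the facts are.
print(next_word("the moon is", bigram_counts))
print(next_word("of", bigram_counts))
```

    If the invented counts had favored "cheese", the model would say "cheese" with the same confidence, which is the whole point of the reminder above.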

    • tehmics@lemmy.world
      1 year ago

      We can certainly argue over what they’re designed to do, and I definitely agree that’s the goal of them. The reality though is that on some level it is impossible to separate assertions from the words that describe them. Language itself is designed to communicate ideas, you can’t really create language without also communicating ideas, otherwise every sentence from an LLM would just look like

      “Has Anyone Really Been Far Even as Decided to Use Even Go Want to do Look More Like”

      They will readily cite information that was fed to them. Sometimes it is on point, sometimes not. That becomes a bit of an ethical discussion: is it okay for them to paraphrase information they were fed without citing the source?

      In a perfect world we should be able to expand a whole learning tree to trace back how the model pieced together each word and point of data it is citing, kind of like an advanced Wikipedia article. Then you could take the typical synopsis that the model provides and dig into it to judge for yourself if it’s accurate or not. From a research standpoint I view info you collect from a language model as a step down from a secondary source and we should be able to easily see how it gets to that info.
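      The "learning tree" idea above can be sketched as a retrieval layer that keeps a source label attached to every snippet a model draws on, so a reader can expand the citation trail. The corpus, sources, and keyword-match rule here are all invented for illustration; real systems use far more sophisticated retrieval.

```python
# Invented mini-corpus: each snippet carries its provenance.
corpus = [
    {"text": "CounterCloud generated articles with AI.", "source": "wired.com"},
    {"text": "The project cost roughly $400 to run.", "source": "wired.com"},
    {"text": "Bots amplify content on social media.", "source": "example-textbook"},
]

def retrieve_with_citations(query, corpus):
    """Naive keyword retrieval that keeps provenance attached to each hit."""
    words = query.lower().split()
    hits = [doc for doc in corpus if any(w in doc["text"].lower() for w in words)]
    return [(doc["text"], doc["source"]) for doc in hits]

# Every answer fragment comes back paired with where it came from,
# letting the reader "dig into it to judge for themselves".
for text, source in retrieve_with_citations("articles AI", corpus):
    print(f"{text}  [{source}]")
```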

      • turmacar@lemmy.world
        1 year ago

        LLMs are at least a quaternary(?) source. They’re scraping secondary/tertiary sources. As such they’re little better than asking passersby on the street. You might get a general idea of what the zeitgeist is, but how true any particular statement actually is will vary wildly.

        Math itself is designed to describe relationships between things. That doesn’t mean you can’t mock up a ‘reasonable seeming’ equation that is absolute nonsense after further examination, but that a layman will take as ‘true enough’.

        LLMs don’t cite things. They provide an approximation of what a human might write. They don’t know what they’re writing or how it relates to the ‘real world’ any more than any other centerpiece of a Chinese Room.

  • The Snark Urge@lemmy.world
    1 year ago

    After WWII in Germany, the cool young people knew you couldn’t trust anyone over 30.

    Nowadays, cool people need to understand that you can’t trust anything bland and sanitized-sounding on the internet. For the rest of our lives, your personhood is on trial with everything you say.

    It could tear society apart before we even know it’s happening.

    • etuomaala@sopuli.xyz
      1 year ago

      This was why I was so furious about Elon Musk’s blue checkmark debacle. He had a chance to prove that a gigantic part of the internet was a) human and b) non-duplicate. I was really shocked by how badly an apparently smart person fucked it up. Not so smart, it turns out.

    • Aa!@lemmy.world
      1 year ago

      Nowadays, cool people need to understand that you can’t trust anything bland and sanitized-sounding on the internet.

      This is bad news for my communication style.

  • zephyreks@lemmy.ca
    1 year ago

    Ah yes, American truths like “Iraq has WMDs and that’s why invading them is the fair and just thing to do,” “abortion is bad for human rights,” “the US isn’t collecting all of your internet traffic because that would be a violation of privacy,” and “this CIA-funded coup of a democratically-elected government will definitely help spread democracy around the world.”

    This researcher has built a pro-America AI disinformation machine for $400. I expect that, like most American media, it will start citing “independent think tanks” like Atlantic Council (which, coincidentally, is staffed mostly by ex-US intelligence and receives funding from US intelligence agencies) and use reports gathered by “independent sources” such as the US 4th PsyOps Airborne (which, per their recent recruiting videos, admits to orchestrating large-scale protests including Euromaidan, Tiananmen Square, and others).

    • mea_rah@lemmy.world
      1 year ago

      Have you seen any tweet this bot generated that would contain misinformation? Because I haven’t.

      What is the context for the Iraq WMDs? I haven’t seen it anywhere in the article.

      • zephyreks@lemmy.ca
        1 year ago

        Is anyone arguing that, at the time of the Iraq War, it wasn’t considered a “truth” in America that Iraq was developing WMDs and that anything to the contrary was considered disinformation?

        • mea_rah@lemmy.world
          1 year ago

          So is the bot not pointing out obvious lies with links to factual data, or what is your point? Can you link me to an example of the bot using shaky arguments?

          And the WMD claims stood on shaky legs from the very beginning; many countries, like Germany, opposed the use of force in Iraq. Perhaps we’d have benefited from a bot correcting false narratives in real time, had this technology been available at the time.

          • zephyreks@lemmy.ca
            1 year ago

            The bot doesn’t know what’s “real” or not though - it’s a large language model, not a model of the real world. All it knows is what it’s been told in its training data.

    • mob@lemmy.world
      1 year ago

      That’s way worse than I imagined. Like, $400 seems like too much money spent.

  • AutoTL;DR@lemmings.world (bot)
    1 year ago

    This is the best summary I could come up with:


    Russian criticism of the US is far from unusual, but CounterCloud’s material pushing back was: The tweets, the articles, and even the journalists and news sites were crafted entirely by artificial intelligence algorithms, according to the person behind the project, who goes by the name Nea Paw and says it is designed to highlight the danger of mass-produced AI disinformation.

    Mitigations are possible, such as educating users to be watchful for manipulative AI-generated content, making generative AI systems try to block misuse, or equipping browsers with AI-detection tools.

    In recent years, disinformation researchers have warned that AI language models could be used to craft highly personalized propaganda campaigns, and to power social media accounts that interact with users in sophisticated ways.

    Renee DiResta, technical research manager for the Stanford Internet Observatory, which tracks information campaigns, says the articles and journalist profiles generated as part of the CounterCloud project are fairly convincing.

    “In addition to government actors, social media management agencies and mercenaries who offer influence operations services will no doubt pick up these tools and incorporate them into their workflows,” DiResta says.

    The CEO of OpenAI, Sam Altman, said in a Tweet last month that he is concerned that his company’s artificial intelligence could be used to create tailored, automated disinformation on a massive scale.


    The original article contains 806 words, the summary contains 215 words. Saved 73%. I’m a bot and I’m open source!
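    The summary mentions "equipping browsers with AI-detection tools" as a mitigation. Real detectors estimate how statistically predictable a text is; the function below is a deliberately naive stand-in that just measures vocabulary entropy, purely to illustrate the shape of such a tool, not a working detector.

```python
import math
from collections import Counter

def vocab_entropy(text):
    """Shannon entropy (bits) of the word distribution in a text.

    Very repetitive text scores low; varied text scores higher. An actual
    AI-text detector would use model-based perplexity, not this toy metric.
    """
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(vocab_entropy("the quick brown fox jumps over the lazy dog"))
print(vocab_entropy("word word word"))  # fully repetitive text
```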

  • JohnDClay@sh.itjust.works
    1 year ago

    So is it against Russian disinformation, or does it make anti-Russia disinformation? I’d hope the former; it’s easy enough to refute Russia with correct information.

  • xkforce@lemmy.world
    1 year ago

    OpenAI is so concerned that AI will do x and y bad thing, but still pours all these resources into developing it further.

    • Spzi@lemm.ee
      1 year ago

      There are other endeavors where a great deal of the effort is put into making it safe. Space travel for example.

      I wish that was the case for AI development. AI safety is a notoriously underfunded, understaffed and still overall neglected field.

    • Touching_Grass@lemmy.world
      1 year ago

      OpenAI isn’t responsible for what Russians do with it any more than any company is for how users use their product.

      • xkforce@lemmy.world
        1 year ago

        If someone knows that what they’re about to create is going to do harm like this, they shoulder some of the responsibility for those consequences. They don’t just get to wash their hands of it as if they had no idea.

        • Touching_Grass@lemmy.world
          1 year ago

          Why not? The people who are to blame are the people committing the act.

          The thing itself has no ethical or moral impact until it’s used by a person. I think it feels good to blame an inventor, but that’s scapegoating the real culprits. The only way I see your argument making sense is if they intended their tools to be used for unethical reasons.

          • xkforce@lemmy.world
            1 year ago

            Because people should consider the pros and cons of what they work on, not just pretend that none of the responsibility for those cons is theirs. AI is one of the things that could wipe out humanity. Not in the Terminator sense, but through unparalleled disruption of the economy and by facilitating a wedge between people through the production of propaganda like none we’ve ever seen, i.e. deepfakes, personally tailored propaganda, etc.

            • Touching_Grass@lemmy.world
              1 year ago

              to wipe out humanity

              Does it? Doesn’t that threat exist even without AI? In its current state it’s a glorified chatbot. Get rid of it, and we still have every think tank filled with quants, statisticians, social scientists, and marketing teams pushing all that propaganda. It’s not AI doing it. It’s humans.

              But AI does have potential to also develop new medicines. New materials. It has potential for a lot more good.

              It also has a lot of potential to give people some powerful pocket access to some basic services they normally wouldn’t have. Imagine an AI trained to help people sort out their finances. Act like an r/askdocs. Help with questions about new hobbies.

              So where you see panic, other people see hope. And it isn’t the inventors job to tell you or others how to use something.

              If we destroy ourselves with every bit of advancement then we deserve it. It would be an inevitability.

      • etuomaala@sopuli.xyz
        1 year ago

        In that case, would you object to the posting of detailed schematics on the internet for the creation of nuclear weapons?

    • Leate_Wonceslace@lemmy.dbzer0.com
      1 year ago

      The incentives to continue development are far too great; if one firm abandons the project, that just means that AI will be developed by a less ethical firm. This is why arguing that AI is bad in-and-of-itself is a moderately effective way to reduce the ethics of the average AI developer.

  • Wahots@pawb.social
    1 year ago

    The Federal Election Commission has said it may limit the use of deepfakes in political ads.

    Any use of deepfakes should serve as immediate disqualification/termination for any political candidate, and any donations immediately reversed.

    • Maggoty@lemmy.world
      1 year ago

      That’s too easy. Run a Deepfake for the other guy. Instead dissolve whatever organization ran it. And if that’s someone’s campaign then so be it.

  • chaircat@lemdro.id
    1 year ago

    Honestly, if you look at it in a vacuum, this looks pretty similar to what the other side is doing.

    It’s a bot that draws from its own side’s narratives and pushes that line.

    Take away Russia from the picture and think about how often our media pushes a spin on other subjects that isn’t exactly the truth.

    Doesn’t look so much like “social media propaganda bots versus AI-driven bots arguing back” as much as propaganda bots on both sides spewing whatever their masters want us to see.

    • Cleverdawny@lemm.ee
      1 year ago

      You can’t take away Russia from the picture, because the fact that the bots are arguing against misinformation while using the truth is salient.

      • chaircat@lemdro.id
        1 year ago

        Great, now take the same freedom-fighter bots and tell them to argue IP policy on social media. We can hear all about the right-minded ways to think about intellectual property, and how all the comments around here are misinformation.

        It’s like people lose their minds when you throw an enemy into the sentence. I don’t think these people crafting propaganda bots are heroes, even if they are on “my” team. Go down this road, and you can throw away forums like Lemmy, it’ll just be bots arguing with bots.

        • FuglyDuck@lemmy.world
          1 year ago

          Not to mention, it’s very probable they’re not on the side of truth, but rather more propaganda.

        • Cleverdawny@lemm.ee
          1 year ago

          Please quote me as to where I called bot programmers heroes

          For the record, I don’t particularly like bots of any kind. That being said, troll farms are obviously malicious and negative as well, and far more pernicious than bots designed to make counter-arguments to those troll farms. Context matters, and if social media orgs, including Lemmy, can’t find a way to combat troll operations, then I don’t see the further harm in someone putting out the truth to combat vicious propagandists out to apologize for fascists.

          • etuomaala@sopuli.xyz
            1 year ago

            Hm. Yeah, banning bots would be better, but it would be more expensive than fighting troll farms with AI. That is valid. Still, I consider AI counter arguments a temporary solution.

            • Cleverdawny@lemm.ee
              1 year ago

              The best solution is organized and effective administrative enforcement, but neither Reddit, Twitter, nor Facebook is interested in doing that, and Lemmy is incapable of doing it even if it wanted to.

                • Cleverdawny@lemm.ee
                  1 year ago

                  Yes, that’s my point. There’s no capability within Lemmy to effectively screen out bad actors. It’s all dependent on volunteer admins, and when you’re trying to play whack a mole with malicious instances and people bouncing their accounts around between legitimate instances, it becomes basically impossible.

                  Not saying that the fediverse is a bad idea. I like it. But this is a key potential downside, and if lemmy and other fediverse clients become popular enough, we will see widespread botting, and it will be an issue.