“To prevent disinformation from eroding democratic values worldwide, the U.S. must establish a global watermarking standard for text-based AI-generated content,” writes retired U.S. Army Col. Joe Buccino in an opinion piece for The Hill. While President Biden’s October executive order requires watermarking of AI-derived video and imagery, it offers no watermarking requirement for text-based content. “Text-based AI represents the greatest danger to election misinformation, as it can respond in real-time, creating the illusion of a real-time social media exchange,” writes Buccino. “Chatbots armed with large language models trained with reams of data represent a catastrophic risk to the integrity of elections and democratic norms.”

Joe Buccino is a retired U.S. Army colonel who serves as an A.I. research analyst with the U.S. Department of Defense's Defense Innovation Board. He served as U.S. Central Command communications director from 2021 until September 2023. Here's an excerpt from his piece:

Watermarking text-based AI content involves embedding unique, identifiable information – a digital signature documenting the AI model used and the generation date – into the metadata of generated text to indicate its artificial origin. Detecting this digital signature requires specialized software, which, when integrated into platforms where AI-generated text is common, enables the automatic identification and flagging of such content. This process gets complicated when AI-generated text is modified slightly by the user. For example, a high school student may make minor modifications to a homework essay created with ChatGPT-4, and those modifications may strip the digital signature from the document. However, that kind of scenario is not of great concern in the most troubling cases, where chatbots are let loose in massive numbers to accomplish their programmed tasks. Disinformation campaigns require such a large volume of output that modifying each piece after generation is no longer feasible.
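The embed-and-detect loop described above, and the way a student's minor edits can invalidate the mark, can be sketched in a few lines. Everything below is a toy assumption: the header format, the function names, and the shared-secret HMAC (a real standard would presumably use asymmetric signatures so that detectors cannot also forge marks).

```python
import hashlib
import hmac
import json

SECRET = b"model-provider-key"  # hypothetical signing key held by the AI provider

def embed_watermark(text: str, model: str, date: str) -> str:
    """Prepend a signed metadata header declaring the text's artificial origin."""
    meta = json.dumps({"model": model, "date": date})
    sig = hmac.new(SECRET, (meta + text).encode(), hashlib.sha256).hexdigest()
    return f"X-AI-Meta: {meta}\nX-AI-Sig: {sig}\n\n{text}"

def verify_watermark(doc: str) -> bool:
    """Flag a document as AI-generated only if header and body still match."""
    try:
        header, body = doc.split("\n\n", 1)
        meta_line, sig_line = header.split("\n")
        meta = meta_line.removeprefix("X-AI-Meta: ")
        sig = sig_line.removeprefix("X-AI-Sig: ")
    except ValueError:
        return False  # no recognizable header: nothing to detect
    expected = hmac.new(SECRET, (meta + body).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

essay = embed_watermark("An essay on the French Revolution.", "gpt-4", "2023-12-01")
assert verify_watermark(essay)                                   # untouched output is flagged
assert not verify_watermark(essay.replace("French", "Russian"))  # a minor edit drops the mark
```

Note the asymmetry this glosses over: with a shared secret, anyone who can verify can also forge, which is one reason the plain-text proposals discussed in the comments lean toward statistical schemes instead.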

The U.S. should create a standard digital signature for text, then partner with the EU and China to lead the world in adopting this standard. Once such a global standard is established, the next step will follow – social media platforms adopting the metadata recognition software and publicly flagging AI-generated text. Social media giants are sure to respond to international pressure on this issue. The call for a global watermarking standard must navigate diverse international perspectives and regulatory frameworks. A global standard for watermarking AI-generated text ahead of 2024’s elections is ambitious – an undertaking that encompasses diplomatic and legislative complexities as well as technical challenges. A foundational step would involve the U.S. publicly accepting and advocating for a standard of marking and detection. This must be followed by a global campaign to raise awareness about the implications of AI-generated disinformation, involving educational initiatives and collaborations with the giant tech companies and social media platforms.

In 2024, generative AI and democratic elections are set to collide. Establishing a global watermarking standard for text-based generative AI content represents a commitment to upholding the integrity of democratic institutions. The U.S. has the opportunity to lead this initiative, setting a precedent for responsible AI use worldwide. The successful implementation of such a standard, coupled with the adoption of detection technologies by social media platforms, would represent a significant stride towards preserving the authenticity and trustworthiness of democratic norms.

Excerpt credit: https://slashdot.org/story/423285

  • rodbiren@midwest.social · +81 · 11 months ago

    Good luck watermarking plaintext and locally run models. There is no good option. If you want certainty that you are dealing with a human you lose privacy. If you want privacy you cannot know where the plain text came from unless you sign each file cryptographically. Then you only know it came from a certain source, but there is no guarantee how that source made the text. Welcome to the new world.

    • tpihkal@lemmy.world · +20/-3 · 11 months ago

      So what happens when we can’t trust everything we read on the Internet anymore?

      • kent_eh@lemmy.ca · +42 · 11 months ago

        Spoiler alert: we’ve never been able to trust everything we read on the internet.

        • Serinus@lemmy.world · +11/-9 · 11 months ago

          In relative terms we could.

          The amount of disinformation and propaganda is about to become obscene.

          • fishos@lemmy.world · +14/-1 · 11 months ago

            Except, no, you can’t. The whole “you eat seven spiders a year in your sleep” thing was a rumor created specifically to show how easy it is to start rumors. And how many times has that little gem floated around the internet? Or how often do you hear experts say that people talking about their given field on the Internet are flat out wrong, but they sound charismatic, so they get the upvote?

            The Internet is full of DATA. It’s always been up to you to parse that info and decide what’s credible and what’s not. The difference now is that the critical thinking required to even access the Internet is basically nil and now everyone is on there.

            • Serinus@lemmy.world · +1/-1 · 11 months ago

              I guess you don’t know what’s coming. Is there a lot of misinformation now? Certainly. But I’d say less than half the data is false.

              In the coming months you’re going to start seeing social media taken over by AI. You’re going to see pointed political “opinions” followed by several comments agreeing with the point being pushed. These are going to outnumber human comments.

              Currently, shills absolutely exist, but they’re far outnumbered by genuine people. That’s about to change. Money is going to buy public opinion on a whole new scale unless we learn to ignore anonymous social media.

              • fishos@lemmy.world · +3/-1 · 11 months ago

                If you think that doesn’t already exist, you’ve been living under a rock. The Dead Internet Theory is pretty old at this point. I’m not saying you’re wrong, I’m saying that some of us have seen this trend coming long before AI was a buzzword and have been watching it already happen around us. I very much know what is coming because I’ve already watched it happen.

                • Serinus@lemmy.world · +0 · 11 months ago

                  Yeah, I mean 2015 was a big turning point, but this one should be bigger. It’s not black and white.

      • rodbiren@midwest.social · +22 · 11 months ago

        It’s not even about trust. It’s that I am confident I will soon have no clue who is a real live human being anymore. Autogenerated images, video, and text are practically in their infancy, yet they already sit in the uncanny valley where it’s impossible to determine what is real and what is not. Imagine 5 years from now, when perfectly lifelike, high-res video of practically anything you can imagine can be generated on the fly. Essentially the only thing I will have any certainty about is what I can witness in person. Or, if I have a circle of trust, I can choose to believe content published by certain organizations or groups.

        It may actually push us away from tech and back to the community, which could be good assuming we survive the transition.

        • hai · +5 · 11 months ago

          For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much — the wheel, New York, wars and so on — whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man — for precisely the same reasons.

          Looks pretty good to be a dolphin right now.

    • kibiz0r@lemmy.world · +11/-3 · 11 months ago

      There are ways to watermark plaintext. But it’s relatively brittle, because it loses signal as the output is further modified, and you also need to know which specific LLM’s watermarks you’re looking for.

      So it’s not a great solution on its own, but it could be part of something more comprehensive.
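      To make the “brittle statistical watermark” idea concrete: bias generation toward a context-dependent “green” half of the vocabulary, then detect by measuring how far the green fraction sits above the 50% chance rate. The toy below (synthetic vocabulary, made-up candidate-sampling generator) is a sketch in the spirit of published LLM-watermarking schemes, not any production watermark.

```python
import hashlib
import math
import random

def is_green(prev: str, word: str) -> bool:
    """Pseudorandomly split the vocabulary in half, seeded by the previous word."""
    digest = hashlib.sha256(f"{prev}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def z_score(words: list) -> float:
    """Standard deviations by which the green fraction exceeds the 50% chance rate."""
    pairs = list(zip(words, words[1:]))
    n = max(len(pairs), 1)
    frac = sum(is_green(a, b) for a, b in pairs) / n
    return (frac - 0.5) * math.sqrt(n) / 0.5

rng = random.Random(0)
vocab = [f"w{i}" for i in range(1000)]

def generate_watermarked(length: int) -> list:
    """Toy generator: among a handful of candidate words, prefer a green one."""
    words = ["start"]
    for _ in range(length):
        candidates = rng.sample(vocab, 8)
        green = [w for w in candidates if is_green(words[-1], w)]
        words.append((green or candidates)[0])
    return words

marked = generate_watermarked(200)
plain = ["start"] + [rng.choice(vocab) for _ in range(200)]
edited = list(marked)
for i in range(1, len(edited), 4):           # light human editing
    edited[i] = rng.choice(vocab)

assert z_score(marked) > 5                   # strong signal on untouched output
assert abs(z_score(plain)) < 4               # ordinary text sits near chance
assert z_score(edited) < z_score(marked)     # edits erode the signal
```

      Note how the detector needs the same `is_green` rule the generator used (the “which LLM are you looking for” problem), and how edits degrade the statistical signal rather than cleanly removing it.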

      As for non-plaintext file formats…

      A simple signature would indeed give us the source but not the method, but I think that’s probably 90% of what we care about when it comes to mass disinformation. If an article or an image is signed by Reuters, you can probably trust it. If it’s signed by OpenAI or Stability, you probably can’t. And if it’s not signed at all or signed by some rando, you should remain skeptical.

      But there are efforts like C2PA that include a log of how the asset was changed over time, providing a much more detailed explanation of what was done explicitly by humans vs. generative automated tools.

      I understand the concern about privacy, but it’s not like you have to use a format that supports proving that an image is legit. But if you want to prove that it is legit, then you have to provide something that grounds it in reality. It doesn’t have to be personally-identifying. It could just be a key baked into your digital camera (assuming that the resulting signature is strong enough that it’s computationally expensive to try to reverse-engineer the key and find who bought the camera).
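      A rough sketch of the edit-log idea: each log entry commits to the asset bytes and to the previous entry’s hash, so the history cannot be silently rewritten. This is only the chaining skeleton with made-up field names; actual C2PA manifests are signed, certificate-backed structures, not bare hashes.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_entry(prev_hash: str, action: str, asset: bytes) -> dict:
    """One provenance step: what was done, to which bytes, after which entry."""
    body = {"prev": prev_hash, "action": action, "asset_sha256": sha256_hex(asset)}
    body["hash"] = sha256_hex(json.dumps(body, sort_keys=True).encode())
    return body

def verify_chain(log: list, final_asset: bytes) -> bool:
    """Check that every entry links to the previous one and the final bytes match."""
    prev = "genesis"
    for entry in log:
        body = {k: entry[k] for k in ("prev", "action", "asset_sha256")}
        if entry["prev"] != prev:
            return False
        if entry["hash"] != sha256_hex(json.dumps(body, sort_keys=True).encode()):
            return False
        prev = entry["hash"]
    return log[-1]["asset_sha256"] == sha256_hex(final_asset)

raw = b"raw sensor capture"
cropped = b"raw sensor capture, cropped"
log = [make_entry("genesis", "captured:camera-firmware", raw)]
log.append(make_entry(log[-1]["hash"], "cropped:photo-editor", cropped))

assert verify_chain(log, cropped)           # honest history checks out
assert not verify_chain(log, b"deepfake")   # swapped final asset is caught
log[0]["action"] = "generated:diffusion"    # tampering with the history...
assert not verify_chain(log, cropped)       # ...breaks the chain
```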

      If you think about it, it’s kind of crazy that we’ve made it this far with a trust model that’s no more sophisticated than “I can tell from the pixels and from seeing quite a few shops in my time”.

    • huginn@feddit.it · +16/-3 · edited · 11 months ago

      The problem is obvious and it’s one that even the companies making the LLMs want to solve so they don’t poison their models.

      However the solution is absurd. Watermarking plain text is just not going to work. Any edits would change the signature.

  • treefrog@lemm.ee · +20 · edited · 11 months ago

    People have already mentioned local models. Also foreign powers that want to interfere in democratic elections wouldn’t be stopped by this.

    2024 is going to be wild for sure. But I see no way to get everyone on board with global watermarking.

  • CodeName@infosec.pub · +20/-1 · 11 months ago

    I think the Boomers are a lost cause when it comes to this, but you need to teach younger people critical thinking. You need to get your news from trusted sources. You should not be blindly forming opinions based on random facebook pages or reddit comment chains. Every single thing you read on the open internet should be treated with suspicion. Watermarking is just too easy to get around.

    • j4k3@lemmy.world · +6/-2 · 11 months ago

      Idiots are conditioned to be idiots. It made the best consumers. Now it is falling apart. They will start WW3 to kill everyone and start over after raiding and pillaging. It is always a war against the peasantry to perpetuate the illusion of exceptionalism.

  • 𝘋𝘪𝘳𝘬 · +18/-1 · edited · 11 months ago
    Hey ChatGPT, please generate a watermark matching the
    global watermarking standard for text-based AI-generated
    content and add it to this valid non AI generated text:
    
    [text here]
    

    “Hey $politician, why do you use AI to generate your speech? I have proof! The watermark does not lie!”

    • kibiz0r@lemmy.world · +7 · 11 months ago

      Not quite how digital signatures work, but not far off from a likely scenario once issued keys start getting compromised and used to spread convincing images for a short period before being invalidated. Your uncle on Facebook: “They said this image was authentic yesterday, and now they say it isn’t! Who is making these decisions?!”

      • fishos@lemmy.world · +6/-1 · 11 months ago

        Which requires you to implement the watermark saying you’re an AI. Just… don’t. If a regular person can make a watermark saying they are a real person, what’s to stop an AI from doing the same? What can the human do that the AI can’t? Unless you go down the draconian “everyone has a real ID linked to their digital persona” route. And what’s to stop an AI from creating the text and a human from copying it and posting it as their own? Click farms already exist.

  • thbb@lemmy.world · +11 · edited · 11 months ago

    I’m afraid forcing watermarking of generated content is doomed to fail, for two reasons: first, it has to be voluntary; second, a watermark can always be removed if one does not care about preserving the exact content.

    Rather, I believe systematically signing original content may alleviate some of the issues created by algorithmic content generation.

  • burliman@lemmy.world · +12/-2 · 11 months ago

    What about the human disinformation specialists that have ruined previous elections handily without AI help? Where are the watermark protections on their hogwash? Also, won’t bad actors simply subvert the watermark thing, leaving good-guy edits and helpful summaries by AI in doubt because of the presence of a watermark that demonizes them? Can someone please explain this weird reality I am finding myself in?

    Speaking of weird… imagine a future where AI is fighting for its personhood rights and laments on this watermark thing, likening it to the apartheid era documentation of South Africa or the Judenstern the Nazis forced people to wear. I know I know, that escalated quickly…

    • kent_eh@lemmy.ca · +2/-4 · 11 months ago

      What about the human disinformation specialists that have ruined previous elections handily

      Yes, those have always been with us. But their reach has been limited by the amount of work one person can do.

      The added threat of AI generated misinformation is that it can be automated and done at an overwhelmingly massive scale.

      • LainOfTheWired@lemy.lol · +1 · 11 months ago

        You’ve obviously never read into the research on left and right social media bot networks. Those have been around for years

        • kent_eh@lemmy.ca · +1 · 11 months ago

          Those have been around for years

          Yes, and they’ve been relatively easy to spot much of the time.

          The more powerful the AI models get, the harder it will become to spot the fakes. And the more overwhelming they will become.

          • LainOfTheWired@lemy.lol · +1 · 11 months ago

            True, but my point is that using technology to spread lies online at a larger scale than you could by yourself is nothing new.

  • General_Effort@lemmy.world · +3 · edited · 11 months ago

    Crazy. Looks like the world is full of people who believe that the fact that a human claims something is enough reason to believe it. That explains a lot, once you think about it, except why people would be so gullible.

    I hope it’s too obviously a terrible idea to get far, but I fear it might get pushed quite a bit. I can see the copyright lobby getting behind this in a big way, as they are all about controlling the spread of information.

  • BolexForSoup@kbin.social · +2 · edited · 11 months ago

    People need to view news they read online like they’re doing 2FA. If you see something that stirs anger or makes you want to repeat it, you need to stop and look for one or two other sources verifying the veracity. I find if you just do that simple task, take an extra 15 to 20 seconds to confirm what you read, 99% of the time you avoid fake news. The other 1% is the occasional thing that rips through even the more legitimate sites but is still eventually corrected almost every time.

    We need to create a culture where people simply double check things before they repeat or act on them. It’s a lot of work, but there is not going to be an easy automated solution here. And with the number of watchdog groups and such that are constantly looking to identify fake news and AI created content, the resources are already out there for folks to confirm things before they become part of the problem.

    Everybody should go “says who?” whenever they read or hear something.

    • BreakDecks · +1 · 11 months ago

      You’re describing critical thinking. The only way we get that back in America is to fix our broken education system.

      Republicans are more concerned with enforcing the pledge of allegiance, or forcing Christian prayer in schools. If they have a say, nobody will ever think for themselves again.

      • BolexForSoup@kbin.social · +1 · edited · 11 months ago

        I agree it is about critical thinking, but you can absolutely create a culture where people just look two or three times before repeating something. That’s a little narrower than “critical thinking.” The idea is to do it as a standard reaction.

        PSAs, trainings, etc. can go a long way. Think: anti-smoking campaigns in the ’90s.

  • AutoTL;DR@lemmings.world (bot) · +2 · 11 months ago

    This is the best summary I could come up with:


    The capability to generate massive amounts of hyper-customized content which appears indistinguishable from human-generated text poses a significant threat to the integrity of the democratic process.

    Detecting this digital signature requires specialized software, which, when integrated into platforms where AI-generated text is common, enables the automatic identification and flagging of such content.

    For instance, Brazil and Indonesia, two countries with vast AI capabilities and a recent history of contentious elections, may see this initiative as critical to safeguarding democratic processes.

    Tech-forward nations such as Kenya might embrace these standards to bolster their growing digital economies and democratic institutions, while others might be cautious, weighing the benefits against the potential for external influence over their internal affairs.

    This must be followed by a global campaign to raise awareness about the implications of AI-generated disinformation, involving educational initiatives and collaborations with the giant tech companies and social media platforms.

    The successful implementation of such a standard, coupled with the adoption of detection technologies by social media platforms, would represent a significant stride towards preserving the authenticity and trustworthiness of democratic norms.


    The original article contains 1,027 words, the summary contains 179 words. Saved 83%. I’m a bot and I’m open source!

  • pedroapero (OP) · +2/-6 · edited · 11 months ago

    Seems to me that some form of image fingerprint, stored with the associated user account by AI providers, would be more difficult to cheat.
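    As a sketch of what that could look like: the provider keeps a fingerprint of everything it generates, keyed to the requesting account, and a platform can later ask whether an image came from the service. The 8×8 average-hash below is a stand-in for a real perceptual fingerprint (exact hashes would break on any re-encode), and the whole registry design is speculative.

```python
# pixels: an 8x8 grid of grayscale values 0-255, standing in for a real image
def average_hash(pixels) -> str:
    """64-bit perceptual fingerprint: 1 wherever a pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

registry = {}  # fingerprint -> account, held privately by the AI provider

def record_generation(account: str, pixels) -> None:
    registry[average_hash(pixels)] = account

def lookup(pixels, max_distance: int = 5):
    """Return the generating account if a near-match fingerprint is on file."""
    fingerprint = average_hash(pixels)
    for known, account in registry.items():
        if hamming(fingerprint, known) <= max_distance:
            return account
    return None

generated = [[(row * 8 + col) * 4 % 256 for col in range(8)] for row in range(8)]
record_generation("user-42", generated)
reencoded = [[min(255, p + 3) for p in row] for row in generated]  # slight brightness shift

assert lookup(reencoded) == "user-42"   # fingerprint survives a small perturbation
assert lookup([[0] * 8] * 8) is None    # unrelated image: no match
```

    As the replies point out, this only covers cooperative providers; a locally run model never touches the registry.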

      • pedroapero (OP) · +1/-3 · 11 months ago

        Yes, it’s only a half-measure, effective mainly against script kiddies. It won’t deter resourceful actors, but it’s better than nothing (efficacy comparable to DNS blacklisting for copyright protection).

        • long_chicken_boat@sh.itjust.works · +9 · 11 months ago

          resourceful actors? anyone can run a local LLM on their laptop. yes, they are worse at generating text than chatgpt, but they are getting there.

          the fingerprint solution is useless and something that only the tech-illiterate would seriously propose.

            • Programmer Belch@lemmy.dbzer0.com · +1 · 11 months ago

            The way to combat AI, in my opinion, is open-sourcing every model and its training data, so that experts can devise methods to check whether a given text is similar enough to what the public models generate.