Misinformation in the 2024 election will be rampant due to accessible AI tools, says Eric Schmidt. Social media’s failure to protect against false AI-generated content and the reduction of trust and safety groups are concerns. Schmidt suggests marking content and holding users accountable for law violations.

  • fiasco@possumpat.io · 1 year ago

    The only thing deep learning has done is make forgery more accessible. But Stalin was airbrushing unpersons out of photos sixty years ago, so in principle this is nothing new.

    When it comes to politics, there’s already enough money floating around that you don’t need deep learning to clog the internet with shit. So personally I’m not expecting anything different.

    • DoucheAsaurus@kbin.social · 1 year ago

      I’m expecting the exact same thing as every year, shitty candidates spewing lies and fake promises at each other until the voters get to choose between the two options that corporate America has decided we can vote on.

        • DoucheAsaurus@kbin.social · 1 year ago

          That is absolutely not what I’m saying. The whole thing is a sham because of corporate interests, lobbyists, super PACs, the electoral college, super delegates, corporate media bias, etc. What I’m saying is that democracy was stolen from us.

          • mrnotoriousman@kbin.social · 1 year ago

            Well, can’t disagree that it’s creeping damn close to corporatocracy here. Sucks having no actual left-wing representation outside of, like, a handful of congresspeople.

    • hglman · 1 year ago

      The volume of humanesque text that can be produced by AI is orders of magnitude greater. It will be different this time, and it will be really annoying.

      • fiasco@possumpat.io · 1 year ago

        I understand that, but the amount of money that gets fed into political campaigns already generates staggering amounts of spurious text. It’s hard to remember what happened the day before yesterday, but “fake news” originally meant sites that were set up to vaguely look like news sites, all for the purpose of pushing one or two entirely made-up propaganda pieces. Yes, deep learning can partly automate this, but automation isn’t necessary in this case.

        There comes a point of diminishing returns with spurious text, and I feel like we’re already past that point.