Microsoft, Google, Amazon and tech peers sign pact to combat election-related misinformation

A group of 20 leading tech companies on Friday announced a joint commitment to combat AI misinformation heading into the 2024 elections.

  • echo64@lemmy.world · 9 months ago

    This is just to try to avoid governments adding regulation around this. They want to say "look, the industry is self-policing, we don't need regulation," whilst they do nothing.

    • lemmy_user_838586 · 9 months ago

      Very true, and it's also kinda too late (at least in the USA). We have an election coming up in 9 months, and misinformation around it has been raging since the last election. Kinda too late to pull all those Facebook memes and fake articles and go "just kidding! That was all lies!" Everyone who's been spoon-fed that crap has already made up their mind on who they're voting for; you're not gonna change it last minute, the damage has already been done.

    • holycrap@lemm.ee · 9 months ago

      Even if they weren’t, Facebook and Xitter are noticeably missing.

      • Thorny_Insight@lemm.ee · 9 months ago

        Facebook and Xitter are noticeably missing.

        In the headline, yeah. Spend 15 seconds reading the article and you'll find them mentioned as well.

  • RedFox@infosec.pub · 9 months ago

    There are so many liars everywhere, how do you even determine misinformation anymore?

    How do you fact-check things and hide them if they're BS?

    • linearchaos@lemmy.world · 9 months ago

      The concept is: an end user reports misinformation, and fact checkers at the company check the reported information against a database. If it's listed as false in the database, it gets squelched and the AI gets a little tuning to make sure it stays squelched. If it's in the database and it's true, the user is informed that it's not false information. If it's not in the database, that's when it's dicey. Does the team of people moderating the posts make the call, or does it go to another team to be classified? At what point do you block it? If one detail is wrong? If two details are wrong? If half the post is wrong? Do you squelch "mostly true"? Or do we just get disclaimers everywhere for 6 months?
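
      A rough sketch of that report-then-check flow, in Python. The claim database, function names, and "squelch"/"tune" hooks here are purely illustrative assumptions, not anything the companies have actually described:

```python
# Hypothetical sketch of the "report -> check -> squelch/escalate" flow
# described above. The database contents, thresholds, and helpers are made up.

from enum import Enum


class Verdict(Enum):
    FALSE = "false"        # claim is in the database and marked false
    TRUE = "true"          # claim is in the database and marked true
    UNKNOWN = "unknown"    # claim is not in the database yet


# Toy stand-in for a company's fact-check database.
FACT_DB = {
    "the moon landing was faked": Verdict.FALSE,
    "water boils at 100 c at sea level": Verdict.TRUE,
}


def squelch(claim: str) -> None:
    print(f"squelching posts matching: {claim!r}")


def tune_model(claim: str) -> None:
    print(f"adding {claim!r} to the classifier's training queue")


def escalate_to_moderators(claim: str) -> str:
    print(f"queued for human review: {claim!r}")
    return "pending review"


def handle_report(claim: str) -> str:
    """Process a single user report of suspected misinformation."""
    verdict = FACT_DB.get(claim.lower().strip(), Verdict.UNKNOWN)

    if verdict is Verdict.FALSE:
        squelch(claim)        # hide the post
        tune_model(claim)     # nudge the AI so it keeps getting caught
        return "removed"
    if verdict is Verdict.TRUE:
        return "kept: reported content matches a verified claim"
    # Not in the database: the dicey case, hand it to human moderators.
    return escalate_to_moderators(claim)


if __name__ == "__main__":
    print(handle_report("The moon landing was faked"))
    print(handle_report("Water boils at 100 C at sea level"))
    print(handle_report("Candidate X said Y yesterday"))
```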

      • RedFox@infosec.pub · 9 months ago

        I’m mostly puzzled by how this would be carried out when the vast majority of information seems to be discretionary, interpreted, perceived, or just opinion. Like the statement I just made ;)

        Facts either are or aren’t.

        Misinformation is far more challenging because it's usually derived from an event that was a fact, but the interpretation, analysis, significance, etc. are based on the person's bias.

    • bigkahuna1986 · 9 months ago

      I think he actually uses his lizard tongue to kind of jump around.