If AI and deepfakes can take a video or audio clip of a person and then convincingly reproduce that person, what does this mean for trials?

It used to be that audio or video recordings carried strong evidentiary weight, often more than witness testimony, but soon enough near-perfect forgeries could enter the courtroom, just as they already circulate on social media (where you’re not sworn to tell the truth, though the consequences are real).

I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony.

How will we defend ourselves while still using real video or audio as proof? Or are we just doomed?

  • andrew_bidlaw@sh.itjust.works
    28 days ago

    It’s an interesting experiment, but why would we trust everything that Leica supposedly verified? It’s the same shit with digital signatures and blockchain stuff. We are at the gates of a world where trust is zero by default and we intentionally outsource verification only to third parties we trust, because the penalties for mistakes grow every day.

    • BrianTheeBiscuiteer@lemmy.world
      28 days ago

      I don’t think we should inherently. I’ve thought about the idea of digitally signed photos, and it seems sound unless someone is quite clever with electronics. I’m guessing there’s some embedded key on the camera that is hard, but maybe not impossible, to access. If people can hack Teslas for “full autopilot” or run Doom on an ATM, I’m not confident this kind of encryption will never be cracked. However, I would hope an expert witness would also examine the camera that supposedly took the picture, and I’d like to think it’s nearly impossible to extract the key without a third party detecting the intrusion.

      • andrew_bidlaw@sh.itjust.works
        28 days ago

        Today we have EXIF data, and it’s better to wipe it for privacy reasons, because otherwise every picture you take carries a lot of your data: geolocation, camera model, exposure settings, etc. That’s the angle they have yet to tackle, because most of these verification features also leave us vulnerable.
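        Stripping EXIF is easy to script. Here’s a minimal stdlib-only sketch (the JPEG segment layout is the standard one; the sample bytes in the demo are made up) that walks a JPEG’s marker segments and drops the APP1/EXIF ones:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Return the JPEG with any APP1/EXIF segments removed."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:                 # Start of Scan: copy the rest as-is
            out += jpeg[i:]
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        # Keep every segment except APP1 blocks carrying EXIF metadata
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + length
    return bytes(out)

# Made-up sample: SOI + APP1/EXIF + APP0/JFIF + SOS + image data
exif = b"\xff\xe1" + (12).to_bytes(2, "big") + b"Exif\x00\x00\x01\x02\x03\x04"
jfif = b"\xff\xe0" + (7).to_bytes(2, "big") + b"JFIF\x00"
sample = b"\xff\xd8" + exif + jfif + b"\xff\xda\x00\x02ZZ"

cleaned = strip_exif(sample)
assert b"Exif" not in cleaned and b"JFIF" in cleaned
```

        Real tools like exiftool also scrub other metadata segments (XMP, IPTC); this only shows the basic idea.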

      • Passerby6497@lemmy.world
        28 days ago

        They make Hardware Security Modules (HSMs) that are very difficult to crack, to the point of being effectively unbreakable at our current technology level. With a strong HSM, a high-bit per-device certificate signed by the company’s private key gives you authenticity and validation until the root key or the HSM is broken, which is probably good enough for today while we figure out something better, IMO.
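        To make that chain concrete, here’s a toy Python sketch. A real camera would use asymmetric signatures inside the HSM; keyed hashes (HMAC) stand in here purely to show the shape of the vendor → device → photo trust chain, and all key values are made up:

```python
import hashlib
import hmac

# Toy stand-ins: a real camera keeps an asymmetric private key inside an
# HSM; HMAC over shared secrets is used here only to show the chain shape.
ROOT_KEY = b"vendor-root-secret"       # held in the vendor's HSM
DEVICE_KEY = b"camera-1234-secret"     # burned into this camera's secure element

# At manufacture time the vendor "signs" the device key: the per-device cert.
device_cert = hmac.new(ROOT_KEY, DEVICE_KEY, hashlib.sha256).digest()

def sign_photo(photo: bytes) -> bytes:
    """The camera signs the photo's hash with its device key."""
    digest = hashlib.sha256(photo).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify_photo(photo: bytes, sig: bytes, cert: bytes) -> bool:
    """Check both links: vendor -> device key, then device key -> photo."""
    expected_cert = hmac.new(ROOT_KEY, DEVICE_KEY, hashlib.sha256).digest()
    if not hmac.compare_digest(cert, expected_cert):
        return False  # device cert was not issued by the vendor
    digest = hashlib.sha256(photo).digest()
    expected_sig = hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected_sig)

photo = b"raw sensor bytes"
sig = sign_photo(photo)
assert verify_photo(photo, sig, device_cert)             # untouched: passes
assert not verify_photo(photo + b"!", sig, device_cert)  # any edit: fails
assert not verify_photo(photo, sig, b"forged-cert")      # bad cert: fails
```

        Breaking this means either extracting DEVICE_KEY from the camera or compromising the vendor’s root key, which is the HSM’s whole job to prevent.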

    • LesserAbe@lemmy.world
      27 days ago

      Well as I said, I think there’s a collection of things we already use for judging what’s true, this would just be one more tool.

      A cryptographic signature (in the original sense, not just the Bitcoin sense) means that only someone who possesses a certain digital key can sign something. In the case of a digitally signed photo, it asserts “hey, I, the key holder, am signing this file.” And if the file is edited, the signature won’t match the tampered version.
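      A five-line sketch of that property (keyed BLAKE2 from Python’s stdlib standing in for a real private-key signature; the key and file bytes are made up):

```python
import hashlib

SIGNING_KEY = b"only-the-key-holder-knows-this"  # stand-in for a private key

def sign(data: bytes) -> str:
    # Keyed BLAKE2: only someone holding the key can produce this value.
    return hashlib.blake2b(data, key=SIGNING_KEY).hexdigest()

original = b"photo bytes as they left the camera"
signature = sign(original)

assert sign(original) == signature                 # untouched file matches
tampered = original.replace(b"camera", b"editor")
assert sign(tampered) != signature                 # one edit and it won't match
```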

      Is it possible someone could hack and steal such a key? Yes. We see this with certificates for websites, where some bad actor is able to impersonate a trusted website. (And of course when NFT holders get their apes stolen)

      But if something like that happened, it’s cause for investigation, and it leaves a trail which authorities could look into. Not perfect, but right now there’s not even a starting point for asking “did this image come from somewhere real?”