A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

  • fuzzzerd@programming.dev · 3 months ago

    This is what I was wondering about as I read the article. At what point does the post-processing on the device become too much?

      • fuzzzerd@programming.dev · 3 months ago

        What would you classify Google’s or Apple’s portrait mode as? It’s definitely doing something. We can probably agree that, at this point, it’s still a reasonably faithful enhancement of what was really there, while a Snapchat filter that turns you into a dog is obviously too much. The question is where on that spectrum the AI or algorithm becomes too much.

        • Natanael@slrpnk.net · 3 months ago

          It varies; there are definitely generative pieces involved, but they try not to make it blatant.

          If we’re talking about evidence in court, then practically speaking it matters more whether the photographer themselves can testify to how accurate they think the photo is and how well it corresponds to what they saw. Any significantly AI-edited photo effectively becomes only as strong as a diary entry written by a person at the scene: it backs up their testimony to a degree by checking the witness’s consistency over time, rather than being trusted directly. The photo can lie just as much as the diary entry can, so it’s a test of credibility instead.

          If you use a face swap, then those photos are likely nearly unusable. Editing for color and contrast, etc., is still usable. Upscaling depends entirely on what the testimony is about. Identifying a person who is just a pixelated blob? Nope, won’t do. The same goes for verifying what a scene looked like, such as identifying heavily pixelated objects: not OK. But upscaling an already clear photo that you just wanted larger, where the photographer can attest to who the subject is? Still usable.
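
          As a rough illustration of why the pixelated-blob case fails (a toy numpy sketch of my own, not anything a real upscaler actually runs): once detail has been averaged away into large blocks, many different originals collapse to the exact same blob, so any upscaler can only invent plausible detail, never recover the real thing.

          ```python
          # Toy illustration: average-pooling (pixelation) destroys information.
          # Two deliberately different 32x32 patches are built to share the exact
          # same 4x4 pixelated version, so no upscaler could tell which was real.
          import numpy as np

          rng = np.random.default_rng(0)

          def pixelate(patch, factor=8):
              """Average-pool a square grayscale patch in factor x factor blocks."""
              h, w = patch.shape
              return patch.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

          base = rng.integers(0, 256, size=(32, 32)).astype(float)

          # Add per-block zero-mean noise: changes the fine detail, not the block averages.
          noise = rng.normal(0, 40, size=(32, 32))
          noise -= noise.reshape(4, 8, 4, 8).mean(axis=(1, 3)).repeat(8, axis=0).repeat(8, axis=1)
          other = base + noise

          print(np.allclose(pixelate(base), pixelate(other)))  # True: identical pixelated blobs
          print(np.abs(base - other).max())                    # large: clearly different originals
          ```

          That missing information is exactly what an AI upscaler papers over with generated detail, which is the part no witness can attest to.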