As deepfakes become more common, people will grow aware of them, but most won't properly grasp how deepfake detection actually works. Many will likely resort to assuming that any video with odd movement is a deepfake.

At the same time, people don't have a good understanding of neurodiversity; many still hold misconceptions about mental disorders. Given the premise above, people could (accidentally) treat neurodivergent people as if they were deepfakes. To my knowledge, there are also no known training sets that include neurodivergent people.

These misconceptions could become a problem when a deepfaked person has recovered from, or has never publicly mentioned, a disorder.

Examples:
  • A popular actor has recently developed schizophrenia, but the public does not know about it. Someone makes a deepfake of the actor, and the public laughs at it. The next day, the actor gives a recorded interview but acts strangely (due to the schizophrenia). With the increased awareness of deepfakes, some people accuse the video of being a deepfake. This leads to outrage until the recording team confirms that the video was genuine.
  • An autistic wife is deepfaked by her husband (with her consent), but the fake happens to coincide with her real mannerisms. Viewers assume she really said or did X (where X is some statement or action) because the deepfake behaves similarly enough to the real person.
immoral_hedge@lemmygrad.ml · 2 years ago

Interesting take on the issues with deepfakes. But IMO it won't be a problem for a long time. It is easier to build an AI that looks for the tiny 'patterns' deepfakes always have than it is to come up with new algorithms that make better deepfakes. This is already an ongoing war: AI vs. AI.
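A minimal sketch of the kind of artifact-spotting classifier this comment describes, assuming PyTorch. The architecture, the 128x128 input size, and the random tensor standing in for a face crop are illustrative assumptions, not a real detector:

```python
import torch
import torch.nn as nn

# Toy CNN that scores a face crop as real vs. fake. Layer sizes and the
# 128x128 RGB input are assumptions for illustration, not a tuned design.
class FakeSpotter(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global average pool
        )
        self.classifier = nn.Linear(64, 1)         # logit: >0 leans "fake"

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = FakeSpotter()
frame = torch.randn(1, 3, 128, 128)  # stand-in for a preprocessed face crop
print("P(fake) =", torch.sigmoid(model(frame)).item())
```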

    immoral_hedge@lemmygrad.ml · 2 years ago

    Deepfakes so far have always had issues with random warping, eye priority, merging the background with the fake, 'power' in facial movements, and uniform movements.

    Using the same algorithms that make deepfakes, you can 'train' an AI to spot these. And yes, of course generation can be improved, but so can detection.
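
A rough sketch of the "same algorithms" idea, assuming PyTorch: fakes produced by a generator become the positive training examples for the detector, in the same way a GAN discriminator is trained. The generator here is a trivial placeholder and the random tensors stand in for real face crops; both are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Placeholder generator: in practice this would be the actual face-swap /
# deepfake model whose output artifacts the detector should learn to spot.
generator = nn.Sequential(nn.Linear(64, 3 * 128 * 128), nn.Tanh())

# Tiny detector (same role as the classifier sketched earlier in the thread).
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),                          # logit: >0 leans "fake"
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):                        # toy loop; real training runs far longer
    real = torch.randn(8, 3, 128, 128)         # stand-in for genuine face crops
    noise = torch.randn(8, 64)
    fake = generator(noise).detach().view(8, 3, 128, 128)  # freshly generated fakes

    frames = torch.cat([real, fake])
    labels = torch.cat([torch.zeros(8, 1), torch.ones(8, 1)])  # 0 = real, 1 = fake

    optimizer.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    optimizer.step()
```

In a real pipeline both sides keep improving against each other, which is the "AI vs. AI" arms race the parent comment describes.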