Doesn’t it follow that AI-generated CSAM can only be produced if the AI was trained on CSAM?
This article even explicitly says as much.
My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… being sued for possession of CSAM? By that logic, it would have to be in their training datasets.
A GPT can produce things it’s never seen. It can generate a galaxy made out of dog food; that doesn’t mean it was trained on pictures of galaxies made out of dog food. It only needs to have learned the concepts separately and combine them.