OpenAI’s ChatGPT and Sam Altman are in massive trouble. OpenAI is getting sued in the US for illegally using content from the internet to train its LLMs, or large language models.
So anyone who creates something remotely similar to something online is plagiarizing, got it.
Folks, that’s how we all do things - we read stuff, we observe conversations, we look at art, we listen to music, and what we create is a synthesis of our experiences.
Yes, it is possible for AI to plagiarize, but that needs to be evaluated on a case by case basis, just as it is for humans.
The lawsuit isn’t about plagiarism; it’s about using content without obtaining permission.
And exactly which AI is republishing content unmodified?
We are creating content based on this article, but no one is accusing us of stealing content. AIs creating original content based on their “experience” is only plagiarism (or copyright violation) if it isn’t substantially original.
Is it stealing to learn how to draw by referencing other artists online? That’s how these training algorithms work.
I agree that we need to keep this technology from widening the wealth gap, but these lawsuits seem to fundamentally misunderstand how training an AI model works.
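Roughly, “training” means nudging numeric weights to shrink prediction error, and only the weights are kept afterward. Here’s a toy sketch of that mechanism (a hypothetical linear model, nothing like a real image or text model in scale, but the same idea of learning a pattern rather than storing copies):

```python
# Toy gradient-descent "training": the model keeps only two numbers
# (w, b), not the examples it saw.
examples = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # learn y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    for x, y in examples:
        error = (w * x + b) - y
        w -= lr * error * x   # nudge the weights against the error...
        b -= lr * error       # ...the example itself is then discarded

print(w, b)  # ~2.0, ~1.0 -- the pattern, not the data
```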
An algorithm scraping the web for data doesn’t play by the same rules as humans, mate.
AI is not human. It doesn’t learn like a human. It mathematically uses what it’s seen before to statistically find what comes next.
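A toy illustration of “statistically find what comes next” (a hypothetical bigram counter; real LLMs use neural networks, but the principle is the same next-token prediction):

```python
from collections import Counter, defaultdict

# Count which word follows which in the training text, then "generate"
# by repeatedly emitting the statistically most likely next word.
training_text = "the cat sat on the mat the cat ran".split()

next_word = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word[current][following] += 1

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        options = next_word.get(words[-1])
        if not options:
            break
        # pick the most frequent continuation seen in training
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the cat"
```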
AI isn’t learning, it’s just regurgitating the content it was fed in different ways
But is the output original? That’s the real question here. If humans are allowed to learn from information publicly available, why can’t AI?
No, it isn’t original. The output of AI is just reorganized content that it has already seen.
AI doesn’t learn, it doesn’t create derivative works. It’s nothing more than reshuffling what it’s already seen, to the point that it will frequently use phrases pulled directly from training data.
You are saying that it isn’t original content because AI can’t be original. I’m saying if the content isn’t distinguishable from original content, and can’t be directly traced to the source, in what way is it not original?
Because it’s still not creating anything. AI can’t create, it just reorganizes.
I think you hear a lot of college students say the same thing about their original work.
What I need to see is output from an AI and the original content side by side, so I can say “yeah, the AI ripped this off”. If you can’t do that, then the AI is effectively emulating human learning.
No it isn’t
AI is math. That’s it. This over-humanization is scary; it’s crazy that people can’t see the difference. It does not learn like a human.
https://www.vice.com/en/article/m7gznn/ai-spits-out-exact-copies-of-training-images-real-people-logos-researchers-find
https://techcrunch.com/2022/12/13/image-generating-ai-can-copy-and-paste-from-training-data-raising-ip-concerns/amp/
https://www.technologyreview.com/2023/02/03/1067786/ai-models-spit-out-photos-of-real-people-and-copyrighted-images/amp/
It’s a well-established problem. Tech companies have explicitly told employees not to use these services on company hardware or servers. The data is not abstracted away from the user, and these models have been shown to output data that was fed into them.
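Incidentally, the “side by side” test asked for earlier is roughly what those researchers automate: check whether long word sequences from the output appear verbatim in the training data. A minimal sketch, with made-up example strings (real audits run this against full corpora):

```python
def ngrams(text, n=8):
    """All n-word windows in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(output, source, n=8):
    """N-grams of the model output that appear word-for-word in the source."""
    return ngrams(output, n) & ngrams(source, n)

# Hypothetical texts for illustration only.
source = "the quick brown fox jumps over the lazy dog near the river bank"
output = "witnesses say the quick brown fox jumps over the lazy dog daily"

print(verbatim_overlap(output, source))
# two shared 8-grams -> evidence of word-for-word copying, not paraphrase
```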