In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be altered to allow for generative AI systems to scrape the internet.
Personally I’d rather stop posting creative endeavours entirely than simply let them be stolen and regurgitated by every single company that’s built a thing on the internet.
I just take comfort in the fact that my art will never be good enough for a generative AI to steal.
If it’s on any major platform, these companies will probably still use it. If they were allowed to scrape the whole internet, I doubt they’d have any human looking over the art being used.
It’ll just be thrown in with everything else similar to how I always seem to find paper towels in the dryer after doing laundry.
Then I take comfort in the fact it might serve to sabotage whatever it generates.
“Bad” art is still useful in training these models because it can be illustrative of what not to do. When prompting image generators it’s common to include “negative prompts” along with your regular one, telling the AI what sorts of things it should avoid putting in the output image. If I stuck “by Roundcat” into the negative prompt, the model would try to steer away from anything resembling your work.
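For anyone curious how a negative prompt actually steers the output: in most diffusion image generators it works through classifier-free guidance, where the model's prediction is pushed toward the positive prompt and away from the negative one. Here's a toy sketch of that formula with made-up numbers standing in for the model's real predictions (the function name and values are illustrative, not any library's actual API):

```python
def guided_prediction(pos_pred, neg_pred, scale=7.5):
    """Classifier-free guidance: start from the negative-prompt prediction
    and step away from it, toward the positive-prompt prediction.
        guided = neg + scale * (pos - neg)
    """
    return [n + scale * (p - n) for p, n in zip(pos_pred, neg_pred)]

# Toy stand-ins for per-feature model predictions:
pos = [0.2, 0.8, 0.5]  # conditioned on the positive prompt
neg = [0.2, 0.1, 0.9]  # conditioned on the negative prompt ("by Roundcat")

# Features the negative prompt predicts strongly (the last value)
# get pushed down hard; features both prompts agree on are untouched.
print(guided_prediction(pos, neg))
```

So the negative prompt isn't a filter applied afterwards; it's baked into every denoising step, which is why it can suppress a style rather than just blocking specific images.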
Voluntary obscurity is always an option, I suppose.
I think the topic is more complex than that.
Otherwise you could say you’d rather stop posting creative endeavours entirely than let them be stolen and regurgitated by every single artist who uses the internet for references and inspiration.
The argument isn’t only “but companies do so for profit”, because many artists do the same: maybe they’re designers, illustrators or others, and your work will give them ideas for their commissions.
It’s true that restricting what AI can train on inhibits societal progress, but it’s consistent with current copyright laws because an AI is not a human and can’t be treated as anything more than an algorithm. What we’re learning here is that AI is bringing to light a problem in intellectual property law that’s plagued us for a long time: intellectual property being overly protected is harmful to society as a whole. I wouldn’t be opposed to AI training on data on the internet if people got the same treatment: let people reuse each other’s melodies, don’t protect likenesses so strictly, and for the love of humanity, no more pharmaceutical patents!
I think this is an interesting example, because it’s already like this. Songs that sample other songs are released all the time, and it’s all perfectly legal. Only making a copy is illegal. No one can sue you if you create a character that resembles Mickey Mouse, but you can’t use Mickey Mouse himself.
And pharmaceutical patents serve the same purpose: they encourage companies to publicly release papers, data and synthesis methods so that other people can learn and research can move faster.
And the whole point of this is exactly regulating AI like people: no one will come after you because you’ve read something and now have an opinion about it, and nobody will get angry because you saw an Instagram post and now have some ideas for your art.
Of course the distinction between likeness and copy isn’t that well defined, but that’s part of the whole debate.
Look at this.
Having heard both songs, they really aren’t all that similar.
Pharmaceutical patents are insanely harmful to the average consumer, at least in the US. Only the rich and powerful or those willing to go deeply into debt are able to benefit from all of that extra research.
That’s more of a US problem than it is a pharmaceutical patents problem.
Only they are able to benefit from that research at first. Which is how it’s always been: new things are rare and expensive at first and become cheaper and more common over time.
But that rarity is entirely manufactured.
It’s just a single example; there are endless songs which are samples of samples of samples… Once in a while YouTube Content ID will have problems, as it’s not perfect. That doesn’t mean the system is fundamentally flawed. It’s like saying every car on the planet is cursed because you once got a flat tyre.
Keep in mind that the alternative to patents is not a “free for all” approach, it’s industrial secrecy, as research is still very expensive for entities to carry out.
That aside: no, extra research benefits everyone in society, as new cures for diseases are discovered faster and medicine evolves organically. Patents were the compromise to ensure companies could monetize their research while sharing their knowledge. Are there other possible equilibria? Sure, but we still have to remember we live in the real world; you can’t have your cake and eat it too.
I wasn’t aware that it was just YT’s system that had messed up and not the legal system. Crazy that one company has that much power.
If industrial secrecy is a problem, make that illegal too. We have a right to know where our products come from, anyway. If pharmaceuticals need to benefit in order to do research, instead of patents outlawing reproduction of their products entirely, just make other companies give the original researcher a cut of the profit while the patent lasts. A 10-20% royalty should be more than enough to incentivise research while still preventing price-fixing and monopolies.
Oh, the legal system is very much messed up; YouTube tried to put a bandage on it. You have to consider that usually you would need a fully personalized legal contract for each piece of copyrighted material you use. Content ID tries to automate the process, but it’s not perfect.
Which is what happens with patents today. The company holding the patent rarely also physically produces the drug; they usually have “manufacturing agreements”, especially in geographically distant markets, where they let a second company make the drug and sell it in exchange for a percentage of the label price.
That’s also what happened with vaccines and many other medications, it’s like the standard procedure lol
And of course, the same principle must apply to the resulting AI models themselves.
We need to actively start sabotaging the data sources these LLMs are based on. Make AI worthless.
Your comment right here provides useful training data for LLMs that might use Fediverse data as part of their training set. How would you propose “sabotaging” it?