Inbreeding
What are you doing step-AI?
Are you serious? Right in front of my local SLM?
So now LLM makers actually have to sanitize their datasets? The horror…
I don’t think that’s tractable.
Oh no, it’s very difficult, especially on the scale of LLMs.
That said, the rest of us (those of us with any amount of respect for ourselves, our craft, and our fellow humans) have been sourcing our data carefully since well before NNs, e.g. by asking the relevant authority for it (say, asking the post office for images of handwritten addresses).
Is this slow and cumbersome? Oh yes. But it delays the need for over-restrictive laws, just like with RC craft before drones. And by extension, it lets those who couldn’t source the material they needed through conventional means, or those small new startups with no idea what they were doing, skirt the gray area and still get a small and hopefully usable dataset.
And now, someone had the grand idea to not only scour and scavenge the whole internet with wild abandon, but also to boast about it. So now everyone gets punished.
Lastly: don’t get me wrong, laws are good (duh), but less restrictive or incomplete laws can be fine as long as everyone respects each other. I’m excited to see what the future brings in this regard, but I hate the idea that the ones who brought this change about will likely be the only ones to walk free.
That first L stands for Large. Sanitizing something of this size is not hard, it’s functionally impossible.
You don’t have to sanitize the weights, you have to sanitize the data you use to get the weights. Two very different things, and while I agree that sanitizing an LLM after training is close to impossible, sanitizing the data you give it is much, much easier.
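Concretely, something in this spirit works on the data side (a toy sketch of my own, not anyone’s real pipeline; the date cutoff and the detector heuristic are hypothetical placeholders for what production systems do with provenance metadata and trained classifiers):

```python
# Toy data-side sanitization: filter the raw corpus *before* training,
# rather than trying to edit the weights afterwards.
from datetime import date

AI_ERA = date(2022, 11, 30)  # ChatGPT's public release; a common cutoff choice


def looks_ai_generated(text: str) -> bool:
    """Hypothetical stand-in for a real detector or provenance check."""
    telltales = ("as an ai language model", "regenerate response")
    return any(t in text.lower() for t in telltales)


def sanitize(corpus):
    """Yield docs (dicts with 'text' and 'crawled') that pass all filters."""
    seen = set()
    for doc in corpus:
        if doc["crawled"] >= AI_ERA:          # drop post-AI-era crawls
            continue
        if looks_ai_generated(doc["text"]):   # drop obvious model output
            continue
        fingerprint = hash(doc["text"])
        if fingerprint in seen:               # drop exact duplicates
            continue
        seen.add(fingerprint)
        yield doc
```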
Imo this is not a bad thing.
All the big LLM players are staunchly against regulation; this is one of the outcomes of that. So, by all means, please continue building an ouroboros of nonsense. It’ll only make the regulations that eventually get applied to ML stricter and more incisive.
They call this scenario the Habsburg Singularity
How many times is this same article going to be written? Model collapse from synthetic data is not a concern at any scale when human data is in the mix. We have entire series of models now trained on mostly synthetic data: https://huggingface.co/docs/transformers/main/model_doc/phi3. When training on entirely unassisted outputs, error accumulates with each generation, but that isn’t a concern in any real scenario.
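The “error accumulates” point is easy to see in a toy simulation (my sketch with made-up parameters, not the article’s or phi-3’s setup): fit a Gaussian to the previous generation’s samples, sample from the fit, repeat. Pure self-training compounds estimation noise; keeping a fixed share of the original “human” data in each batch anchors the fit.

```python
# Toy illustration of generational drift under self-training.
import numpy as np

rng = np.random.default_rng(0)
human = rng.normal(0.0, 1.0, 100_000)  # stand-in for real human data


def final_variance(human_share: float, n_gens: int = 200, n: int = 100) -> float:
    mu, sigma = 0.0, 1.0
    for _ in range(n_gens):
        k = int(human_share * n)
        batch = np.concatenate([
            rng.choice(human, k),          # anchored (human) part
            rng.normal(mu, sigma, n - k),  # synthetic part from last fit
        ])
        mu, sigma = batch.mean(), batch.std()  # refit on the mix
    return sigma ** 2


print(f"pure synthetic: variance {final_variance(0.0):.3f}")  # typically drifts toward 0
print(f"20% human mix:  variance {final_variance(0.2):.3f}")  # stays near 1
```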
As the number of articles about this exact subject increases, so does the likelihood of AI only being able to write about this very subject.
Hahahahaha
AI doing the job of poisoning itself
Good. Let the monster eat itself.
This reminds me of the low-background steel problem: https://en.m.wikipedia.org/wiki/Low-background_steel
idk how to get a link to other communities but (Lemmy) r/wikipedia would like this
You link to communities like this: !wikipedia@lemmy.world
!tf2shitposterclub@lemmy.world
oo it worked! ty!
Interesting read, thanks for sharing.
AI centipede. Fucking fantastic.
The best analogy I can think of:
Imagine you speak English, and you’re dropped off in the middle of the Siberian forest. No internet, old days. Nobody around you knows English. Nobody you can talk to knows English. English, for all intents and purposes, only exists in your head.
How long do you think you could still speak English correctly? 10 years? 20 years? 30 years? Will your children be able to speak English? What about your grandchildren? At what point will your island of English diverge sufficiently from your home English that they’re unintelligible to each other?
I think this is the training problem in a nutshell.
So kinda like the human centipede, but with LLMs? The LLMillipede? The AI Centipede? The Enshittipede?
Except it just goes in a circle.
))<>((
All according to keikaku.
[TL note: keikaku means plan]
No don’t listen to them!
Keikaku means cake! (Muffin to be precise, because we got the muffin button!)
It’s the AI analogue of confirmation bias.
Looks like I need some glasses
I always thought this is why the Facebooks and Googles of the world have been hoovering up the data now.