I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?
They were pretty cool when they first blew up. Getting them to generate semi-useful information wasn’t hard, and they would usually avoid answering or defer on anything that required hard facts.
They’ve legitimately gotten worse over time. As user volume has gone up, it has pushed providers toward faster, shallower model responses, and further training on Internet content increasingly means the models are training on their own output, which degrades them until they gradually begin to break. They’ve also been pushed harder than they were meant to be, to show “improvement” to investors demanding more accurate, human-like factual responses.
At this point it’s a race to the bottom on a poorly understood technology. Every money-sucking corporation latched onto LLMs like a piglet finding a teat, thinking they would be the golden goose that finally eliminates those stupid whiny expensive workers who keep asking for annoying, unprofitable things like “paid time off” and “healthcare.” In reality they’ve been sold a bill of goods by Sam Altman and the rest of the tech bros currently raking in a few extra hundred billion dollars.
Now it’s degrading even faster as AI scrapes from AI in a technological circle jerk.
Yes, that’s what they said. I’m starting to think you came here with a particular agenda to push, and I don’t think that’s very polite.
I found a non-paywalled article where scientists from Oxford University state that feeding AI synthetic data from other AI models could lead to a collapse.
https://www.zdnet.com/article/beware-ai-model-collapse-how-training-on-synthetic-data-pollutes-the-next-generation/
Ooh an article, thank you
She might be full of crap. I don’t know. You would probably understand it better than I do.
I find that a lot of discourse around AI is… “off”. Sensationalized, or simplified, or emotionally charged, or illogical, or simply based on a misunderstanding of how it actually works. I wish I had a rule of thumb to give you about what you can and can’t trust, but honestly I don’t have a good one; the best thing you can do is learn about how the technology actually works, and what it can and can’t do.
For a while Google said they would revolutionize search with artificial intelligence. That hasn’t been my experience. Someone here mentioned working on the creative side instead. And that seems to be working out better for me.
Yeah, it’s much better at “creative” tasks (generation) than it is at providing accurate data. In general it will always be better at tasks that are “fuzzy”, meaning tasks that don’t have a strict scale of success/failure but are up to interpretation. It will also be better at tasks where the overall output matters more than the precise details. Generating images, text, etc. is a good fit.
The person who said AI is neither artificial nor intelligent was Kate Crawford. Every source I try to find is paywalled.
Look it up. Also, they were pushing AI for web searches and I have not had good luck with that. However, I created a document with it yesterday and it came out really good. Someone said to try the creative side and so far, so good.
I know what model collapse is; it’s a fairly well-documented problem that we’re starting to run into. You’re not wrong, it’s just that the person you replied to was agreeing with you about this.
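If you want to see the basic flavor of it, here’s a toy sketch I threw together (my own illustration, not anything from the article, and nothing like real LLM training): fit a simple Gaussian “model” to some data, then repeatedly refit each new generation only on samples drawn from the previous generation instead of from the real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: a wide spread of values that only the first model gets to see.
real_data = rng.normal(loc=0.0, scale=10.0, size=10_000)
mu, sigma = real_data.mean(), real_data.std()
print(f"gen  0: mean={mu:+6.2f}  std={sigma:6.2f}")

# Each new "model" is fit only to a small batch sampled from the previous
# model, never from the real data again.
for gen in range(1, 51):
    synthetic = rng.normal(loc=mu, scale=sigma, size=20)
    mu, sigma = synthetic.mean(), synthetic.std()
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mean={mu:+6.2f}  std={sigma:6.2f}")

# The spread tends to shrink generation after generation: the tails of the
# original distribution (the rare, unusual examples) get lost first. That
# loss of diversity is the toy version of "model collapse."
```

As I understand it, the actual research does this with real language models rather than a toy Gaussian, but the qualitative result they report is the same: each generation trained on the previous one’s output loses more of the tails of the original data.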
Nice! I’m glad you were able to find something useful to use it for.