Like if I type “I have two appl…” for example, often it will suggest “apple” singular instead of plural. Just a small example, but it is really bad at predicting which variant of a word should come after the previous one
Succinct
AI is a vast field. LLMs and neural networks are a small part of it.
LLMs are very expensive to run and a lot more complex than the Markov chains often used for predictive text.
Predictive text just chooses a likely word based on what’s typed. This may be as simple as looking for words that start with what you’ve typed.
LLMs vectorise words and understand the complex relationships between those vectors using many data points. So they would spot the word “two” and realise that a plural should follow.
Predictive text can also vectorise words, but the vectors are much, much simpler.
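To make the difference concrete, here’s a minimal sketch of the Markov-chain-plus-prefix idea (in Python, with a made-up toy corpus, not anyone’s actual keyboard code):

```python
from collections import Counter, defaultdict

# Toy corpus, invented just to show the mechanics.
corpus = "i have two apples . i have two dogs . i have one apple .".split()

# Bigram Markov model: count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word, typed_prefix=""):
    """Most frequent follower of prev_word that also matches the typed prefix."""
    candidates = bigrams.get(prev_word, Counter())
    matches = {w: c for w, c in candidates.items() if w.startswith(typed_prefix)}
    return max(matches, key=matches.get) if matches else None

print(suggest("two", "appl"))  # 'apples' -- but only because this toy corpus says so
print(suggest("one", "appl"))  # 'apple'
```

This thing only ever looks one word back, which is why it happily suggests the singular after “two” if that’s what the counts say. An LLM maps every word to a high-dimensional vector and pushes it through many layers, so it can connect “two” to a plural several words later.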
Now guess how it feels to type German with predictive text. Most of our words can have more than a dozen different endings depending on tense and how the word is used. And that’s not taking into account that we use compound words, which word prediction pretty much cannot predict and often doesn’t even know. So spell check will mark a valid compound word as misspelled, because it doesn’t understand the concept of compound words and doesn’t know that specific word combination.
To show what I mean, the term “Danube steamboat captain’s hat” becomes “DonauDampfSchiffKapitänsMütze” (I added capital letters which shouldn’t be there to show where the next word in the compound begins).
While this is an extreme example, it’s pretty common for compound words to consist of 4-5 words.
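Just to illustrate why the dictionary approach breaks here, a toy sketch in Python (the five-word dictionary is made up, and this is not how any real keyboard works): a naive spell check rejects the compound outright, while a splitter that segments it into known words would accept it.

```python
# Made-up five-entry dictionary, purely for illustration.
dictionary = {"donau", "dampf", "schiff", "kapitäns", "mütze"}

def naive_check(word):
    # Classic spell check: the whole word must appear in the dictionary.
    return word.lower() in dictionary

def compound_check(word):
    # Accept the word if it can be split into a sequence of known words.
    w = word.lower()
    if not w:
        return True
    return any(w[:i] in dictionary and compound_check(w[i:])
               for i in range(1, len(w) + 1))

word = "Donaudampfschiffkapitänsmütze"
print(naive_check(word))     # False -- flagged as a typo
print(compound_check(word))  # True  -- recognised as a valid compound
```

Real German is messier still (linking letters like that extra “s”, for example), but the basic problem stands: no dictionary can list every valid combination.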
And for some reason, some forms seem to be missing completely from my Android default keyboard. “untersuchst”, like a bunch of second-person forms of slightly unusual verbs, just doesn’t exist.
Yeah, noticed that too. This is really annoying.
My favourite: ‘geröntgt’, the past participle of ‘röntgen’, to x-ray someone. I’ve never heard it pronounced correctly by a native speaker.
Dutch has the same issue with compound words. Autocorrect will often put a space in there, which is grammatically incorrect (and ugly). I feel like it’s at a point now where the incorrect space usage has become mainstream and might change the language rules. Oh well.
LLMs are orders of magnitude more sophisticated and expensive to run. But don’t worry, I’m sure that in the not-so-distant future we’ll see smaller LLMs running on-device as autocorrect.
It would have to be pretty specific and small to work on a phone and I think a side effect would be everyone’s conversations start to sound a lot more homogeneous.
You’re not wrong. Google just announced Gemini Nano, which will run directly on the Pixel 8. Of course, it’s the first of its kind and will probably be slow, and it’s not used for autocorrect yet. But give it a year or two and it will probably be more common.
Even five years ago, Google had a keyboard that skimmed your emails and texts to build a bank of words you use, to supplement its dictionary for autocorrect. Like if you are a chemist and send an email that includes the word “tetrahydrofuran” every couple of months, it would be nice for your phone keyboard to just have it in the dictionary.
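Roughly the idea, as a sketch (this is just my guess at the mechanism, not Google’s actual code; the base dictionary and threshold here are made up):

```python
import re
from collections import Counter

# Stand-in base dictionary; a real keyboard ships with tens of thousands of words.
base_dictionary = {"we", "ran", "the", "reaction", "in", "again", "batch", "arrived"}

def build_personal_dictionary(texts, min_count=2):
    """Collect words the user types repeatedly that the base dictionary lacks."""
    counts = Counter(
        word
        for text in texts
        for word in re.findall(r"[a-zäöüß]+", text.lower())
    )
    return {w for w, c in counts.items() if c >= min_count and w not in base_dictionary}

emails = [
    "We ran the reaction in tetrahydrofuran again.",
    "The tetrahydrofuran batch arrived.",
]
print(build_personal_dictionary(emails))  # {'tetrahydrofuran'}
```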
SwiftKey does that if you give it access to your emails.
Can we have Scottish ones that know what a bawbag is, and when to put an “e” on the end of “shit”?
Thanks!
Think of it from the LLM’s perspective - in the general pool you have common English, you have less common variations such as this, and then you have whatever the heck people like Kid Rock are doing…
Bawitdaba, da bang, da dang diggy diggy
Diggy, said the boogie, said up jump the boogie
LLMs like ChatGPT take a wild amount of resources to run.
If you want something as smart as GPT-3 and you want it to run at typing speeds, you’ll need a gaming PC running it.
People only recently managed to run GPT-3-strength models at all on ordinary laptop hardware (slowly).
There is currently no way to run something GPT-4-strength on ordinary consumer hardware (I’m just guessing, but I think it takes a few hundred GB of VRAM to run).
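Back-of-envelope, just to put rough numbers on that guess (parameter counts for GPT-4 aren’t public, so the 1-trillion figure is purely an assumption; this counts model weights only):

```python
# Weights only: ignores activations, KV cache and framework overhead.
def weight_memory_gb(params_billions, bytes_per_param):
    return params_billions * bytes_per_param  # billions of params * bytes each = GB

print(weight_memory_gb(175, 2))    # GPT-3-sized model at fp16      -> 350 GB
print(weight_memory_gb(175, 0.5))  # same model, 4-bit quantised    -> 87.5 GB
print(weight_memory_gb(1000, 2))   # hypothetical 1T-param model    -> 2000 GB
```

Quantisation is how people squeeze the smaller models onto laptops, but even then you’re nowhere near what a phone keyboard can afford to spend on autocorrect.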
Because they’re using different tech. That’s like asking why phone calls sound bad compared to VoIP calls. They’re just using different tech.
Lawnmowers can’t keep up with Ferraris either, despite both being vehicles.
edit for wording
You’re in the No Stupid Questions community. Think about rule 7 in particular.
Ah, thank you. I’ll edit.
@Mr_Blott@lemmy.world Autocorrect doesn’t use LLMs, at least not generally yet. I suspect it’s some kind of Markov model, maybe?
What the duck are you talking about?
You’re comparing apples to oranges.
If humans are just brains why are we smarter than dogs who also have brains?
It’s “no stupid questions”, so “no cunty answers” thanks
Well fuck you too pal, I thought it was a good analogy.
It wasn’t
It was, but the wrong community