Rocks are also a good base for soup. Who hasn’t heard of rock soup growing up?
Where I’m from we traditionally use stones for soup. Rocks are reserved for making candies.
Soup from a stone? Fancy that!
Never.
But I’ve definitely heard of stone soup, and if you bring over some vegetables and some stock, I’ll make it for you with my stone.
The moral of the stone soup story is that greedy people can and should be tricked into sharing. Everything old is new again.
Greedy people should be put in the pot.
We call it stone soup where I grew up, but the concept is the same. I remember being sent out to find a good stone for the soup.
Rock Stew is famous on Roshar. Just don’t be Sadeas
That soup rocks
No, that’s bread soup.
This and glue sauce are so worrisome. Sure, most people probably know better than to actually do that, but what about the ones who don’t know? How many know how bad it is to mix bleach and ammonia? How long until Google AI is poisoned enough to recommend that for a tough stain?
Yes, the issue is not the glaring errors we catch and laugh about; it’s the ones that fly under the radar. The consequences could be dramatic.
Bleach and ammonia is a meme, and they’re pulling from Reddit for answers, so I expect not long at all.
We call that “pulling a Peggy Hill”
Dang it Peggy! You taught the whole town how to make mustard gas!
Hmm, I feel like the people dumb enough to believe that have significant overlap with people who wouldn’t trust Google / “Big Tech” in the first place
It only takes one to kill an entire building full of people.
It’s like if 4chan and Quora had a baby.
The AI is going to play World of Warcraft for the next few years while it comes of age.
Just imagine how many not so obvious, or nuanced ‘facts’ are being misrepresented. Right there, under billions of searches.
There will be ‘fixes’ for this, but it’s never been easier to shape ‘the truth’ and public opinion.
It’s worse. So much worse. Now ChatGPT will have a human voice with simulated emotions that sounds eminently trustworthy and legitimately intelligent. The rest will follow quickly.
People will be far more convinced of lies being told by something that sounds like a human being sincere. People will also start believing it really is alive.
Inb4 summaries and opinion pieces start including phrases like “think of the children”, “may lead to dire consequences” and “should concern everybody”.
“A human being sincere” is a nice little garden-path sentence :)
My point is that people will trust something that sounds like it is being said sincerely by a living person more than they will regular text results a lot of the time, because the “living person” sounds like they have emotions, which makes them sound like a member of our species, which makes them sound more trustworthy.
There’s a reason why predators sometimes disguise themselves, or part of themselves, as their prey. The anglerfish wouldn’t be as successful without that little light telling nearby fish “mate with me.”
I didn’t make any comment about what you’re saying, I saw your point and had nothing to add.
A garden path sentence is one where you read it wrong the first time around and have to backtrack to understand it, for example: the old man the boat.
“a human being” is normally a noun, but then it turns out “being” is actually a verb.
I work in AI.
We’ve known this about LLMs for many years. One of the reasons they weren’t widely used was hallucinations, where they can be coerced into saying something confidently incorrect. OpenAI created a great set of tools that showed true utility for LLMs, and people were able to largely accept that even if it’s wrong, it’s good for basic tasks like writing a doc outline or filling in boilerplate in scripts.
Sadly, grifters have decided that LLMs are the future, and they’ve put them into applications where they have no more benefit than other, compositional models. While they’re great at orchestration, they’re just not suited to search, answering broad questions with limited knowledge, or voice-based search - all areas they’ll be launched in. This doesn’t even scratch the surface of an LLM being used for critical subjects that require knowledge of health or the law, because those companies that decided that AI will build software for them, or run HR departments, are going to be totally fucked when a big mistake happens.
It’s an arms race that no one wants, and one that arguably hasn’t created anything worthwhile yet, outside of a wildly expensive tool that will save you some time. What’s even sadder is that I bet you could go to any of these big tech companies and ask ICs if this is a good use of their time and they’ll say no. Tens of thousands of jobs were lost, and many worthwhile projects were scrapped so some billionaire cunts could enter an AI pissing contest.
Ah I see the Goron version of Google is coming along nicely.
Sponsored: TRY MARBLED ROCK ROAST
Not even once.
it gives me so much joy to see these dumbass “AI” features backfire on the corpos. did you guys know that nutritionists recommend drinking at least one teaspoon of liquid chlorine per day? source: i am an expert. i own CNN, Reuters, The Guardian and JSTOR. i have a phd in human hydration and my thesis was about how olympic athletes actually performed 6% better on average when they supplemented their meals with a spoonful of liquid chlorine.
Cool. Do you have any supplements to sell?
Mfw
Wow, thanks Google!
Everybody must get stoned
Looking forward to one of my stupid comments coming up as an answer for a real query on google.
Please tell me this is fake. I need to hear these words.
Apparently, most of those floating around are fakes.
So, good luck telling them apart from the ones that aren’t. And good luck deciding whether the next answer you get from Google about something you don’t already know should be taken seriously or posted here to increase the non-fake ratio.
I already don’t trust any Google result that has “AI” anywhere near it, and barely even the rest anymore.
- Use DDG
- If you must use google, add “before:2023”
Looks like it got scraped from an Onion article.
Oh good. So it’s going to eat The Onion on a regular basis and then tell it to other people who will fall for it. Google created your uncle on Facebook.
Nope - ai summaries are baked into Google search results now
Yes, but was this specific one a real result, or was it a five-second fake because haha meme?
I’m already pretty skeptical of “AI” in the LLM-algorithm-hell sense, but that doesn’t mean people don’t make things up for shits and giggles.
I dunno on the is it actually real front. But it wouldn’t surprise me if it was.