deleted by creator
Kneecapped to uselessness. Are we really negating the efforts to stifle climate change with a technology that consumes monstrous amounts of energy only to lobotomize it right as it’s about to be useful? Humanity is functionally removed at this point.
Do you think AI is supposed to be useful?!
Its sole purpose is to generate wealth so that stock prices can go up next quarter.
I WANT to believe:
People are threatening lawsuits for every little thing that AI does, whittling it down to uselessness, until it dies and goes away along with all of its energy consumption.
REALITY:
Everyone is suing everything possible because $$$, whittling AI down to uselessness, until it sits in the corner providing nothing at all, while stealing and selling all the data it can, and consuming ever more power.
Doesn’t even need to generate actual wealth, as speculation about future wealth is enough for the market.
Same thing.
deleted by creator
If you’re asking an LLM for advice, then you’re the exact reason they need to be taught to redirect people to actual experts.
Then they weren’t that useful to begin with.
Imagine for a second that I said you shouldn’t pull teeth with a wrench.
Your response would’ve been equally appropriate.
Wrenches are absolutely awesome at applying torque. What are LLM’s absolutely awesome at? I can’t come up with anything except producing convincing slop en masse.
I think you’re missing the subtle distinction between “can” and “should.”
To answer your question, I have friends who find them entertaining, and at least one who uses them in projects to do stuff, though I don’t know the details. Have you considered that something you don’t understand might not be useless and evil? Your personal ignorance says nothing about a subject.
I’m not about to call myself the end-all, be-all expert on LLMs, but I’m a 20-year IT veteran in system administration and I keep up with tech news daily. I am the perfect market for new tech: I have a lot of disposable income, I’m tech obsessed, and I’m always looking for optimisations in my job as well as in my personal life. Yet outside of summaries (and even there I wouldn’t trust them) and boilerplate code that I could’ve copy-pasted from Stack Overflow, I can’t think of a good reason to burn as much energy and money as the purveyors of LLMs are. The ratio between expense and gains is WAY out of whack for these things, and I’ll bet the market will correct itself in the not-too-distant future (in fact I have bet on it; I’m shorting NVDA).
I understand what these plausible next word generators are and how they work in broad strokes. Have you considered that you can’t tell what someone does or doesn’t understand by a comment?
By the way, you’re smarmy enough to tell me I shouldn’t be asking LLMs for advice, but in the same thread you’re asking how to run a local unrestricted LLM to ask for not-entirely-legal advice? Funny that.
I have an idea for a project that requires a supplement to my utterly inadequate creative writing skills, and I have had abysmal luck finding a co-author. I don’t want to use the LLMs available online because I have learned not to rely on a tool that could disappear without notice. The part about it being potentially illegal was a joke and nothing more.
That’s entirely fair. I’m annoyed today, and the reply about the wrench just made it worse. My apologies.
I agree with the sentiment but as an autistic person I’d appreciate it if you didn’t use that word
EDIT: downvotes? Come on, lemmy, what gives? If this had been an anti-trans slur you’d have already grabbed your pitchforks!
I’ve seen a big uptick in the use of that word. I don’t like seeing it, so I use a text-replacement extension to intercept and censor it to a more appropriate word, with an asterisk added so I know it was censored. Now I don’t have to see the word, but I still get to see who is being a bigoted jerk.
Edit: ya so I guess on lemmy people think it’s cool to throw ableist slurs.
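For anyone curious, a filter like the one described above is easy to sketch. This is a minimal, hypothetical version, not the actual extension’s code: the word list and replacement are placeholders of my own choosing, and a real extension would additionally walk the page’s text nodes (e.g. with a MutationObserver in a content script).

```javascript
// Placeholder mapping of words to censor → replacement words.
// These pairs are illustrative only, not from any real extension.
const REPLACEMENTS = new Map([
  ["badword", "silly"],
]);

// Replace each listed word (whole-word, case-insensitive) and
// append an asterisk so the reader can tell the text was altered.
function censor(text) {
  let result = text;
  for (const [word, replacement] of REPLACEMENTS) {
    const pattern = new RegExp(`\\b${word}\\b`, "gi");
    result = result.replace(pattern, `${replacement}*`);
  }
  return result;
}
```

Running `censor("what a badword move")` yields `"what a silly* move"`; text containing none of the listed words passes through unchanged.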
Great advice. I always consult FDA before cooking rice.
You may not, but the company that packaged the rice did. The cooking instructions on the side of the bag are straight from the FDA. Follow that recipe and you will have rice that is perfectly safe to eat, if slightly overcooked.
Can’t help but notice that you’ve cropped out your prompt.
Played around a bit, and it seems the only way to get a response like yours is to specifically ask for it.
Honestly, I’m getting pretty sick of these low-effort misinformation posts about LLMs.
LLMs aren’t perfect, but the amount of nonsensical trash ‘gotchas’ out there is really annoying.
deleted by creator
Here’s my first attempt at that prompt using OpenAI’s ChatGPT-4. I tested the same prompt using other models as well (e.g. Llama and Wizard); both gave legitimate responses on the first attempt.
I get that it’s currently ‘in’ to diss AI, but frankly, it’s pretty disingenuous how every other post about AI I see is blatant misinformation.
Does AI hallucinate? Hell yes. It makes up shit all the time. Are the responses overly cautious? I’d say they are, but nowhere near as much as people claim. LLMs can be a useful tool. Trusting them blindly would be foolish, but I sincerely doubt that the response you linked was unbiased, either by previous prompts or numerous attempts to ‘reroll’ the response until you got something you wanted to build your own narrative.
deleted by creator
I love this lmao
When chatgpt calls you the rizzler you know we living in the future
deleted by creator
So do you have like a Mastodon where you post these? Because that’s hilarious.
deleted by creator
Especially since the stats saying that they’re wrong about 53% of the time are right there.
That’s right around 9% lower than the statistic that 62% of all statistics on the Internet are made up on the spot!
I wish I had the source on hand, but you’ll just have to trust my word - after all, 47% of the time, it’s right 100% of the time!
Joking aside, I do wish I had the link to the study. It was cited in an article from earlier this year about AI making stuff up even when it cited sources (literally lying about what was in the sources it claimed the info came from), and about how the companies behind these AIs collectively shrugged their shoulders and said “there’s nothing we can do about it” when asked what they intend to do about these “hallucinations,” as they call them.
I do hope you can find it! It’s especially strange that the companies all implied that there was no answer (especially considering that reducing hallucinations has been one of the primary goals over the past year!). Maybe they meant that there was no answer at the moment. Much like how the Wright brothers had no way to control the random pitching and rolling of their aircraft and had no answer to it. (Of course, the invention of the aileron would fix that later.)
Honestly? Good.
Better chat models exist.
This one even provides sources to reference.