‘Overhyped’ generative AI will get a ‘cold shower’ in 2024, analysts predict::Analyst firm CCS Insight predicts generative AI will get a “cold shower” in 2024 as concerns over growing costs replace the “hype” surrounding the technology.
Honest title: lazy analyst pretends to be smart by recycling an overused Gartner graph
an overhyped thing won’t be as hyped in the near future?
who would’ve thunk
We’re getting customers who want to use LLMs to query databases and such. I fully expect that to work well 95% of the time, but not always, while looking like it always works correctly. And you can tell customers a hundred times that it’s not 100% reliable; they’ll forget.
So, at some point, that LLM will randomly run a complete nonsense query, returning data that’s so wildly wrong that the customers notice. And precisely that is the moment when they’ll realize: holy crap, this thing isn’t always reliable?! It’s been giving us inaccurate information 5% of the time?! Why did no one inform us?!?!?!
And then we’ll tell them that we did inform them and no, it cannot be fixed. Then the project will get cancelled and everyone lived happily ever after.
Or something. Can’t wait to see it.
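A minimal sketch of the kind of guardrail this scenario calls for: reject anything that isn’t a single read-only SELECT before it ever runs. This is a hypothetical illustration, not any particular product’s implementation (the SQL string would come from the LLM; sqlite3 here is just a stand-in database):

```python
import sqlite3

def is_safe_select(sql: str) -> bool:
    """Very conservative check: single statement, read-only SELECT."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # multiple statements smuggled into one string
        return False
    if not stripped.lower().startswith("select"):
        return False
    lowered = stripped.lower()
    forbidden = ("insert", "update", "delete", "drop", "attach", "pragma")
    return not any(word in lowered for word in forbidden)

def run_llm_query(conn: sqlite3.Connection, sql: str):
    # Even a "safe" SELECT can still be semantically nonsense, which is
    # exactly the 5% failure mode described above. This gate only stops
    # the destructive cases; it cannot make the answers correct.
    if not is_safe_select(sql):
        raise ValueError(f"refusing to run LLM-generated SQL: {sql!r}")
    return conn.execute(sql).fetchall()
```

The substring check is deliberately over-strict (it would also reject a column named “dropoff”); when gating machine-generated queries, false rejections are the cheaper error.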
Would you trust an intern fresh out of college to do it? That’s been my metric for relying on LLMs.
Yup, this is the way to think about LLMs: infinite eager interns, willing to try anything and never trusting themselves to say “I don’t know”.
It might actually help the intern if they use it.
I’ve been speculating for a while that people raving about these things are just bad at their jobs; I’ve never been able to get anything useful out of an LLM.
If your job involves diagnosing a wide array of different problems that change day to day, it’s extremely useful. If you do the same thing over and over again, it may not be.
You’re right, but it’s worse than that. I’ve been in the game for decades. One bum formula and the whole platform loses credibility. There isn’t a customer on the planet who’ll see it as just 5%.
Seeing people say they’re saving lots of time with LLMs makes me wonder how much menial busywork other people do relative to myself. I find so few things in my day where using these tools wouldn’t just make me a babysitter for a dumb machine.
It’s great for programming and writing formal messages. I never know where to get started on messages so I give the AI a summary of what I’m trying to say. That gives me a very wordy base to edit to my liking.
It’s great for writing LaTeX.
latexify sum i=0 to n ( x_i dot (nabla f(x)) x e_r) = 0
\[ \sum_{i=0}^{n} \left( x_i \cdot (\nabla f(x)) \times e_r \right) = 0 \]
Also great at positioning images and fixing weird layout issues.
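For what it’s worth, the positioning fixes it suggests tend to be the standard ones. A typical sketch (the image name is just a placeholder; assumes the graphicx and float packages are available):

```latex
% In the preamble:
\usepackage{graphicx}
\usepackage{float}

% In the body: [H] from the float package pins the figure exactly
% where it appears in the source, instead of letting LaTeX float
% it to the top of a page.
\begin{figure}[H]
  \centering
  \includegraphics[width=0.8\textwidth]{example-image} % placeholder name
  \caption{A figure pinned in place.}
\end{figure}
```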
You don’t need an LLM to convert pseudocode to LaTeX. LLMs surely help with programming (in my experience), but I feel like your example isn’t really doing them justice :p
Yeah… as a Product Manager, dealing with a lot of text based tasks, I really expected to find it more useful than I actually have. I’ve not really been able to use it for writing documentation and sending emails, because it matters to me what is in those and I have something I want to say in them.
The only way I could really consider offloading these tasks to AI is if I just stopped caring what went in them.
Depends on what you do. I personally use LLMs to write preliminary code and do cheap worldbuilding for D&D. Saves me a ton of time. My brother uses it at a medium-sized business to write performance evaluations… and it’s actually funny to see how his queries are set up. It’s basically the employee’s name, job title, and three descriptors. He can do in 20 minutes what used to take him all day.
Well that just sounds kind of bad… I hadn’t even considered generating a performance review for my direct report. It’s part of my job to give them meaningful feedback and help them improve, not just tick a box.
Regardless of what anyone says, I think this is actually a pretty good use case of the technology. The specific verbiage of a review isn’t necessarily important, and ideas can still be communicated clearly if tools are used appropriately.
If you ask a tool like ChatGPT to write “A performance review for a construction worker named Bob, who could improve on his finer carpentry work and who is delightful to be around because of his enthusiasm for building. Make it one page.”, the output can still be meaningful and communicate relevant ideas.
I’m just going to take a page from William Edwards Deming here, and state that an employee is largely unable to change the system that they work in, and as such individual performance reviews have limited value. Even if an employee could change the system that they work in, this should be interpreted as the organization having a singular point of failure.
If all the manager is going to input into the process is, at best, some bullet points then they should just stop pretending and send their employee the bullet points. Having some automatically generated toss around it makes the process even more ridiculous than it can already easily be.
If my manager gave me my performance review and it was some meaningless auto-praise/commentary, structured around the actual keywords they wanted to express to me, then I would think less of them. I would no longer value the input of my manager or their interest in my development.
There’s nothing wrong with being concise, and my upcoming review for my report will be clear and concise without generated fluff around it. I’m not asking them to change the system; I’m asking them to either maintain or change their own behavior, depending on the feedback I’m giving. It’s for them.
That’s kinda why I bring up Deming and his views of the entire purpose of a quality management system. “they should just stop pretending and send their employee the bullet points.” I couldn’t agree more. My bro is sending out the bullet points, but AI is formatting it, so it is acceptable to his boss.
In an ideal world, there’d be someone who actually examined the business operation to determine what the benefits of doing individual performance reviews are. Instead, things at his work are done a certain way simply because that’s the way they’ve always been done… and thus, that’s what he’s doing.
“I’m not asking them to change the system…” That’s not really what I meant; I apologize if I phrased that weirdly. If you’re evaluating a person, then they’re already probably not too far toward either extreme. If they were the worst employee ever, you would let them go. If they were the best employee ever, your company would be dependent on them and would suffer if they voluntarily decided to leave. Your ideal employee would, therefore, be somewhere within the norm and would need to conform to your system. An individual review exists simply to enforce this conformity, and the reality of the situation is that most employees’ true output is directed more by the operational efficiency of the business than by an individual’s own actions. If an employee is already conforming, then the review is effectively useless.
Anyways, I’m kinda droning on, but I think the horses have already left the barn with AI. I think the next logical step for many businesses is to really evaluate what they do and why they do it at an administrative level… and this is a good thing!
If you’re using AI to generate performance reviews you’re shortchanging your reports and your company. That guy’s brother sounds like a shitty boss.
What your brother is doing is a pretty good example of why this stuff needs to be regulated better. People’s performance evaluations are not the kind of thing that these tools are equipped to do properly.
The manager could feed the AI metrics so it can pad them out into prose. Some people really like to read a whole page of text about how they’re doing.
I use them to make worksheets for middle school students and to quickly write lesson plans from ideas.
I use AI all the time in my work. With one of my tools I can type in a script and have a fully-acted, fully-voiced virtual instructor added to the training we create. Saves us massively in both time and money and increases engagement.
This is how AI will truly sweep through the market. Small improvements, incrementally developed upon, just like every other technology. White collar workers will be impacted first, with blue collar workers second, as the technology continues to develop.
My friend is an AI researcher as part of his overarching role as an analyst for a massive insurance company, and they’re developing their own internal LLM. The things AI can do will be absolutely market-shattering over time.
Anyone suggesting AI is just a fad/blip is about as naive as someone saying that about the internet in 1994, in my view.
2024 headline: “Analyst replaced by generative AI”
In the meantime, I’m using ChatGPT at work every day now, and I’m able to work much faster because of it.
To me it’s a next-generation search engine. For tech queries it’s correct a lot of the time.
Pretty much every tech question I ask it, it just refers me to “your IT administrator”, which isn’t helpful.
Unfortunately that hasn’t been my experience, but I’m only using it to find answers for things a couple of DDG queries won’t solve, because traditional search engines are so much faster.
Yeah I think it depends so much on context. For my tech queries it’s usually spot on.
Once it stops giving me non-existent PowerShell commands, I’ll give it another go; for now it has wasted enough of my time.
I’m finding it useful for detecting / correcting really simple mistakes, syntax errors and stuff like that.
But I’m finding it mostly useless for anything more complicated.
I’m an analyst too, and I analyse that you’re gonna go fuck yourself.
Preacher: Can you read, my son?
Bubbles: Well that depends; can you go fuck yourself?