It is like compression… but backwards ^^
And lossy!
now that you mention it…
the real question is how long before we just have automated agents sending corporate emails to one another without any human in the loop 🤣
It's 100% already happening.
Minus 10 years.
Most corporate communication is unnecessarily fluffy to begin with, because it makes it look like more work was done. Most of the time I don't even understand why I'm explaining something, and it feels like the only requirement is to have words on a page.
Sometimes the only requirement IS to have words on a page. Think about a disaster recovery plan, for example. Now, you probably don't want an LLM to write your disaster recovery plan, but it's a perfect example of something where the main value is that you wrote it down, and now you can be certified as having one.
I just asked GPT to create a disaster recovery plan for a ransomware attack, and actually the information it gave wasn't wrong or bad. But it's also very generic, and it will rarely, if ever, correctly tell you the specifics of your applications or where to click.
there’s a whole book on the subject of bullshit jobs incidentally https://en.wikipedia.org/wiki/Bullshit_Jobs
beware! soon, it will be able to turn that long email into a meeting!
And another GPT will participate in it for me. Good.
“Didja hear, Jeff had a heart attack.”
“Wait… Jeff was a real person this entire time?”
Something is wrong, why do AIs get to spend all their time writing and painting while we have to go to work every day?
This is a legitimate use case for LLMs, though.
Not everyone can communicate clearly. Not everyone can summarize well. So the panel on the right is great for the people on the other end, who must read your poorly-communicated thoughts.
At the same time, some things must look like you put careful thought and time into your words. Hence, the panel on the left.
And if people on both sides are using the tool to do this, who’s really hurt by that?
Yes, but there's a real risk here that the expansion adds false details or the summary gets something wrong, especially the summary.
It's not about formality. It's about introducing errors. The less strict the communication, the more likely such errors creep in.
The AI arms race has begun!
Isn’t this kinda thing happening already in the recruitment industry?
pretty sure stuff like resume screening is done using machine learning nowadays