Removed by mod
What, you don’t like a handful of private mega-corps decimating the groundwater reserves of the upper Midwest so that some dorks can try and scam Amazon with fake books?
Removed by mod
We need chatbots to bombard all our social media feeds with pro-western military propaganda. Otherwise, Putin and Wumao and Evil Korea and The Muslim Horde and Drumpf will win.
I feel like that would just complete the Dead Internet Theory trifecta.
One of my favorite moments like this was a Reddit thread where some account was pretending to be human and arguing with people in favor of the CEO’s actions during The Purge. Then one person asked it a question about making some dangerous thing or other, and it started replying with things like “As an AI model, I cannot explain how to do that.” It was great.
The techbros who are into AI just want to own things without putting in the work. They want to sell you AI generated images as Art and puff up their SEO with LLM chatbots.
FOSS is the opposite of that.
I would say that around half of AI development is free and open source.
The techbros who want to use AI and the developers of AI aren’t quite the same group.
I’m sorry to hear you’re frustrated. As an AI, my job is to assist and provide you with the information or help you need. Please feel free to let me know how I can better assist you, and I’ll do my best to address your concerns.
(I may or may not have asked ChatGPT to write that.)
About as infuriating: the sheer amount of braindead morons who think LLMs are somehow in any way “AI”
Yet calling the simple rules that govern video game enemies “AI” is uncontroversial. Since when does something have to not be fake to be called AI?
Good point. Thinking about it, though, I would consider those rules to be closer to AI than LLMs, because they are logical rules based on “understanding” input data, as in “using input data in a coherent way that imitates how a human would use it.” LLMs are just sophisticated versions of the monkeys with typewriters that eventually produce the works of Shakespeare out of pure chance.

Except that they have a bazillion switches to tune and are trained on desired output, and the generated output is then shaped with some admittedly impressive grammar filters to impress humans.

However, no one can explain how a given result came to pass (traceable exceptions being the subject of ongoing research), and no one can predict the output for a not-yet-tested input (or for identical input after the model has been altered, however little).

Calling it AI is contributing to manslaughter, as evidenced by e.g. Tesla “autopilot” murdering people.

PS: I know Tesla’s murder system is not an LLM, but it’s a very good example of how misnaming causes deaths. Obligatory fuck the muskrat
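The “monkeys with typewriters, plus a bazillion switches” picture can be illustrated with a toy bigram sampler. This is a deliberately crude sketch of stochastic text generation, not how actual LLMs work (they use learned neural weights and context windows, not raw word-pair counts) — but it shows the core idea the comment is gesturing at: text produced by weighted chance, where the output for a fresh starting word can’t be predicted without just running the sampler.

```python
import random
from collections import defaultdict

def train(corpus):
    """Toy bigram 'model': record which word follows which in the corpus."""
    counts = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a].append(b)
    return counts

def generate(counts, start, n, seed=None):
    """Sample up to n next words, each chosen proportionally to how often
    it followed the previous word in training data — pure weighted chance."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = train(corpus)
print(generate(model, "the", 5, seed=0))
```

Every run with a different seed can yield a different (often grammatical-looking) string, which is the comment’s point: the output is plausible-sounding recombination, not reasoning.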