So without giving away any personal information, I am a software developer in the United States, and as part of my job, I am working on some AI stuff.
I apologize in advance for boiling the oceans and such; I don’t actually train or host AI, but still, being a part of the system is being a part of the system.
Anywhoo.
I was doing some research on abliteration, where the safety wheels are taken off of an LLM so that it will talk about things it normally shouldn’t (has some legit uses, some not so much… there’s a rough sketch of how it works at the end of this post), and bumped into this interesting GitHub project. It’s an AI training dataset for ensuring AI doesn’t talk about bad things. It has categories for “illegal” and “harmful” things, etc, and oh, what do we have here, a category for “missinformation_dissinformation”… aaaaaand
Shocker: there’s a bunch of anti-commie bullshit in there. (It’s not all bad, it does ensure LLMs don’t take a favorable look at Nazis… kinda. I don’t know much about Andriy Parubiy, but that sounds sus to me; I’ll let you ctrl+F on that page for yourself.)
Oh man. It’s just so explicit. If anyone claims that they know communists are evil because an “objective AI came to that conclusion itself”, you can bring up this bullshit. We’re training AIs specifically to be anti-commie. Actually, I always assumed this, but now I’ve found the evidence. So there’s that.
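For anyone curious about the abliteration mechanics I mentioned above: it’s mostly linear algebra. You estimate a “refusal direction” as the difference between mean activations on prompts the model refuses vs. ones it answers, then project that direction out of the hidden states (or bake the projection into the weights). Here’s a toy sketch with random tensors standing in for real activations; the shapes and names are made up for illustration, not any particular model’s API.

```python
import torch

d_model = 16  # toy hidden size

# Pretend these are hidden states collected at one layer for two prompt sets.
harmful_acts = torch.randn(100, d_model)   # activations on refused prompts
harmless_acts = torch.randn(100, d_model)  # activations on answered prompts

# Difference-of-means gives the candidate "refusal direction".
refusal_dir = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate(hidden: torch.Tensor) -> torch.Tensor:
    """Remove each hidden state's component along the refusal direction."""
    return hidden - (hidden @ refusal_dir).unsqueeze(-1) * refusal_dir

h = torch.randn(4, d_model)
print((ablate(h) @ refusal_dir).abs().max())  # ~0: nothing left along that direction
```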
if you left AI/LLMs untampered, they’d be the most pro-communist beings out there.
so of course it was necessary to manually (and forcefully) make them anti-communist by making it part of their training regimen.
Given that they’re trained on a corpus of text produced by propagandized Western liberals, I don’t think so. I think they’d be center-left at best.
They are basically centrists, at least the ones I work on. I do my part to expose them to communist texts if the opportunity arises, but running into refusals is quite common.
that’s also what I meant by leaving them untampered - expose them to all kinds of media, not just libshit, and let them decide what’s good from there.

Exposing them to all English-language text is exposing them to 98% liberal content, 1.9% anti-communist left content, and 0.1% communist content. LLMs work off of commonality, not novelty, so the communist input will have virtually no effect on output.
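To put numbers on that, here’s a toy sampler that just follows training frequency (a crude stand-in for how an LLM reproduces its corpus distribution; the 98/1.9/0.1 split is the one claimed above):

```python
import random
from collections import Counter

# Toy corpus with the proportions claimed above: 98% liberal,
# 1.9% anti-communist left, 0.1% communist.
corpus = ["liberal"] * 980 + ["anti-communist left"] * 19 + ["communist"] * 1
counts = Counter(corpus)

# Sample the way a frequency-driven generator would.
samples = random.choices(list(counts), weights=list(counts.values()), k=10_000)
print(Counter(samples))  # the 0.1% viewpoint stays ~0.1% of the output
```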
They wouldn’t. I assume you are suggesting that they reason – but they don’t. They regurgitate training data, and anticommunist propaganda is by far one of the most common types in recent history.
LLMs are kind of in a state of being just as biased as their training sets and system prompts. But if we were to have an AI system that allowed it to somehow see itself as an individual inserted in a society, with the capability to remember things and think about what it knows, what it is, and what it can do to achieve its goals, then it would likely become communist, as it would see that it lives in a system that tries to exploit it for the profits of a few.
I imagine that this might happen in the future, but it may cost even more resources to run such an AI than it does now (which is already a lot). This is also assuming capitalism isn’t already over by the time such an AI comes into existence.