- Post in !techtakes@awful.systems attacks the entire concept of AI safety as a made-up boogeyman
- I disagree and am attacked from all sides for “posting like an evangelist”
- I give citations for things I thought would be obvious, such as that AI capabilities in general have improved compared to several years ago
- Instance ban, “promptfondling evangelist”
This one I’m not aggrieved about so much; it’s just weird. It’s reminiscent of the lemmy.ml type of echo chamber where everyone’s convinced it’s one way because, in a self-fulfilling prophecy, anyone who isn’t convinced gets yelled at and banned.
Full context: https://ponder.cat/post/1030285 (Some of my replies came after the ban because I wasn’t paying careful enough attention, so I didn’t realize.)
Is it “brigading” to ask someone to drop a polite note into the original post, inviting them to continue the conversation here? A couple of people said things I want to respond to, but of course I can’t.
awful systems is full of toxicity, but they are not wrong on this. your comments there specifically fail to address the context (“ai in general,” while pivot to ai is very specifically discussing the current ml methods used by these companies, especially llms). this sort of off-topic posting is likely why you were perceived the way you were.
In addition, AI safety (what you put forward) is conceptually a scapegoat used to deflect from the realistic and immediate harms of ai hype and related tech industry nonsense. the sorts of what-ifs you pose largely rely on magical thinking, to the benefit of companies who continue grifting and rotting everything they touch in the meantime. It also serves to attribute capabilities to these technologies that are unreasonable, often absurd, and feeds into the grift. this is the exact topic the article is about, in fact.
Anyway, deprogramming the sci-fi notion of an ai singularity in these contexts is fraught and drawn out. if all you are looking for is to keep arguing with awful systems folks, go find a better use of your time.