- cross-posted to:
- technology@lemmy.world
- aistuff@lemdro.id
I really wonder when we're going to see terrorist organizations recruiting with AI.
Before I was suspended on Reddit, I had several unsolicited encounters with chatbots that were apparently being tested there. Most were pretty obvious and couldn't carry on a conversation beyond a few exchanges. Still, the techniques were concerning: they all seemed to target users who were known influencers, sometimes in weird, echo-chambery ways. I believe it was early testing of personalized psychological-warfare bots meant to shape the infospace.
Fast forward to today, and military AI chatbots are likely using these same techniques en masse to build terrorist cells, directed at the most vulnerable among us: addicts, the mentally disabled, and so on. When China allegedly hacked Anthem, it was one of the first attacks on the U.S. aimed at collecting health and mental-health data on people in order to build target lists.
It's a scaled-up version of what Trump, Bannon, and Nix did with Cambridge Analytica when they cultivated an incel army on Reddit during the 2016 U.S. presidential campaign.