ylai to Not The Onion@lemmy.world · English · 11 months ago
AI chatbots tend to choose violence and nuclear strikes in wargames (www.newscientist.com)
cross-posted to: becomeme@sh.itjust.works, technology@lemmygrad.ml, futurology@futurology.today, technology@lemmy.world, artificial_intel
Plopp@lemmy.world · 11 months ago
Here’s a wild thought. Maybe that’s why the chat bot (I assume LLM) does it too, because it’s been trained on us! 🤯
Malfeasant@lemmy.world · 11 months ago
I learned it from watching you!