misk@sopuli.xyz to Technology@lemmy.world · English · 2 months ago
Apple study exposes deep cracks in LLMs’ “reasoning” capabilities (arstechnica.com)
Cross-posted to: apple_enthusiast@lemmy.world, arstechnica_index@rss.ponder.cat
thanks_shakey_snake@lemmy.ca · English · 2 months ago
People working with these technologies have known this for quite a while. It’s nice of Apple’s researchers to formalize it, but nobody is really surprised — least of all the companies funnelling traincars of money into the LLM furnace.
zbyte64@awful.systems · English · 2 months ago
If they know about this, then they aren’t thinking about the security implications.
thanks_shakey_snake@lemmy.ca · English · 2 months ago
Security implications?