misk@sopuli.xyz to Technology@lemmy.world · English · 1 month ago
Apple study exposes deep cracks in LLMs’ “reasoning” capabilities (arstechnica.com)
73 comments · cross-posted to: apple_enthusiast@lemmy.world, arstechnica_index@rss.ponder.cat
thanks_shakey_snake@lemmy.ca · 1 month ago
People working with these technologies have known this for quite a while. It’s nice of Apple’s researchers to formalize it, but nobody is really surprised, least of all the companies funneling traincars of money into the LLM furnace.
zbyte64@awful.systems · 1 month ago
If they know about this, then they aren’t thinking of the security implications.
thanks_shakey_snake@lemmy.ca · 1 month ago
Security implications?