David Gerard@awful.systems to TechTakes@awful.systems, English · 23 days ago

**LLMs can’t reason — they just crib reasoning-like steps from their training data** (pivot-to-ai.com) · 98 comments
ebu@awful.systems · 23 days ago

> because it encodes semantics

if it really did so, performance wouldn’t swing up or down when you change syntactic or symbolic elements of problems. the only information encoded is language-statistical.
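concretely, that’s the shape of the perturbation tests these results come from: hold the arithmetic fixed, vary only the surface elements, and watch accuracy move. a minimal sketch in Python, where `ask_model` is a hypothetical stand-in for whatever model call you’re testing (not a real API):

```python
import random

# minimal sketch of a syntactic-perturbation test: vary only the surface
# elements (names, numbers) of a word problem while the underlying
# arithmetic stays the same. ask_model() is a hypothetical stand-in for
# an actual LLM call, not a real library function.

TEMPLATE = ("{name} picks {a} apples on Monday and {b} more on Tuesday. "
            "How many apples does {name} have in total?")

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Build one surface variant plus its (unchanged) correct answer."""
    name = rng.choice(["Sophie", "Ravi", "Mena", "Kofi"])
    a, b = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, a=a, b=b), a + b

def ask_model(prompt: str) -> int:
    raise NotImplementedError("plug your model call in here")

def accuracy_over_variants(n_trials: int = 100, seed: int = 0) -> float:
    """Fraction of surface variants the model answers correctly."""
    rng = random.Random(seed)
    correct = sum(
        ask_model(prompt) == answer
        for prompt, answer in (make_variant(rng) for _ in range(n_trials))
    )
    return correct / n_trials
```

if the model really encoded the semantics, accuracy would be flat across variants generated this way, since nothing about the actual problem changes.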