David Gerard@awful.systems (M) to TechTakes@awful.systems · English · 1 month ago
LLMs can't reason — they just crib reasoning-like steps from their training data (pivot-to-ai.com)
ebu@awful.systems · 1 month ago

> because it encodes semantics

if it really did so, performance wouldn't swing up or down when you change syntactic or symbolic elements of problems. the only information encoded is language-statistical
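The perturbation test ebu alludes to can be sketched in a few lines: generate variants of one word problem that differ only in surface details (names, quantities) while the underlying reasoning stays identical, then compare a model's accuracy across them. This is a minimal illustrative setup, not the benchmark from the linked article; the template and `make_variants` helper are hypothetical.

```python
import itertools

# Hypothetical template: the reasoning (one addition) is fixed,
# only syntactic/symbolic surface elements vary.
TEMPLATE = ("{name} picks {a} apples in the morning and {b} more "
            "in the afternoon. How many apples does {name} have?")

def make_variants(names, quantities):
    """Build (prompt, correct_answer) pairs that differ only in
    names and numbers; the reasoning step is identical in all of them."""
    variants = []
    for name, (a, b) in itertools.product(names, quantities):
        variants.append((TEMPLATE.format(name=name, a=a, b=b), a + b))
    return variants

variants = make_variants(["Sam", "Priya"], [(3, 4), (12, 5)])
# A system that actually encoded the semantics should score the same
# on every variant; large accuracy swings across them point to
# pattern-matching on surface form instead.
```

Scoring a model on each prompt and checking whether accuracy is flat across variants is the whole test; the four variants above all reduce to the same one-step addition.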