You might’ve noticed that ChatGPT — and AI in general — isn’t good at math. There’s a reason, and it has to do with how modern AI is built.

Basically, they’re autocorrect on steroids. Which some of us have been saying for, like, ages.
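The "autocorrect on steroids" framing can be made concrete with a toy sketch. Real models are transformers trained over subword tokens, not word-bigram counters, but a bigram model shows the same core idea: predict a plausible next word given what came before. The corpus and function names here are illustrative, not from any real system.

```python
import random
from collections import defaultdict

# Toy next-word predictor: counts which word follows which in a tiny
# corpus, then samples a likely continuation -- the "autocorrect" idea.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Map each word to the list of words observed immediately after it.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def predict_next(word):
    # Sample proportionally to how often each word followed `word`.
    return random.choice(following[word])

print(predict_next("the"))  # one of: cat, mat, fish
```

Note that nothing in this model "knows" arithmetic; it only reproduces statistical patterns from its training text, which is the intuition behind why pure next-token prediction struggles with math.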

  • baldingpudenda@lemmy.world · 9 hours ago

    Why would someone working on a linguistics degree be expected to do high-level math? It’s not their specialty.

    • Tony Bark@pawb.social (OP) · 9 hours ago

      To be fair, even someone with a linguistics degree knows basic math. GPT can’t even get that right. That’s the biggest problem (and red flag).

      • Zexks@lemmy.world · 2 hours ago

        It can absolutely do basic math. Give us a ‘basic’ math question that it can’t solve.

    • snooggums@lemmy.world · 7 hours ago

      Programming languages are structured and have rigid syntax, which suits an LLM well, so spitting out working code for simple things is like producing a sentence that’s structured the way a normal person would phrase it.

      Even if the code runs, it might not do what you’re actually trying to do, or it might work but be inefficient.
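The "runs but is inefficient" failure mode described above can be illustrated with a classic example. Both functions below are hypothetical stand-ins, correct and runnable, yet one is exponentially slower: exactly the kind of difference a generated snippet can hide when it merely "works".

```python
# Two functionally identical Fibonacci implementations: both produce
# correct results, but their costs differ wildly.

def fib_naive(n):
    # Recomputes the same subproblems over and over: O(2^n) calls.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_iterative(n):
    # Single pass with two running values: O(n) additions.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_naive(20) == fib_iterative(20) == 6765
```

Both pass the same tests on small inputs, which is why "it runs" is a weak standard for judging generated code; the naive version becomes unusable long before the iterative one does.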