• Mikina@programming.dev · 7 months ago · +41/-3

    Duh, it’s an ML algorithm that requires an enormous amount of feedback. It can’t get smarter than humans, because past that point there’s no one, and no data, that can tell whether what it’s spewing is really clever or just nonsense.
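
    A toy sketch of that ceiling, purely hypothetical code rather than any real training pipeline (the names `human_judge` and `pick_best` are made up for illustration): if the only reward signal is human judgment, the selection loop can only favor answers the raters are able to recognize as good.

    ```python
    # Toy sketch: feedback-driven selection rewards only what the human
    # rater can actually verify.
    def human_judge(answer: str) -> float:
        """Stand-in for a human labeler: scores only what a person can check."""
        return 1.0 if "sounds right to a human" in answer else 0.0

    def pick_best(candidates: list[str]) -> str:
        # The training signal is human judgment, so the winner is whatever the
        # raters score highest: by construction, never smarter than the raters.
        return max(candidates, key=human_judge)

    print(pick_best([
        "sounds right to a human",
        "a genuinely novel claim no rater can verify",
    ]))  # prints the humanly-verifiable answer
    ```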

    I hate what happened to the common perception of “AI”. The whole amazing field of machine learning has been reduced to overhyped chatbots, with so many misconceptions repeated even by experts who should know better.

    • doodledup@lemmy.world · 7 months ago · +9/-4

      It can get smarter than every individual human, because individuals are always less smart than a large collective, and LLMs train on the collective data of the internet.

      • commandar@lemmy.world · 7 months ago · +10

        “Smarter” is the wrong way to look at it. LLMs don’t reason. They have limited ability to contextualize. They have no long-term memory (in the sense of forming conclusions based on prior events); see the toy sketch below.

        They potentially have access to more data than any individual human and are able to respond to requests for that data quicker.

        Which is a long way of saying that they can arguably be more knowledgeable about random topics, but that’s a separate measure from “smart,” which encompasses much, much more.
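
        A minimal sketch of the “no long-term memory” point, with a hypothetical `generate` function standing in for the model rather than any real API: each call sees only the text passed in, so any apparent memory is just earlier turns re-sent in the prompt, and whatever falls outside the context window is simply gone.

        ```python
        # Toy sketch: the model call is stateless; "memory" is just re-sent
        # history, truncated to a context window.
        MAX_CONTEXT_CHARS = 16  # tiny window so the truncation is visible

        def generate(prompt: str) -> str:
            """Stand-in for a model call: it only 'knows' what is in the prompt."""
            return f"(answer based only on: {prompt!r})"

        history: list[str] = []

        def chat(user_msg: str) -> str:
            history.append(user_msg)
            # Nothing persists inside the model between calls; we re-send the
            # history and silently drop whatever no longer fits the window.
            window = " ".join(history)[-MAX_CONTEXT_CHARS:]
            return generate(window)

        print(chat("My name is Ada."))
        print(chat("What is my name?"))  # the name has already fallen out of the window
        ```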

      • RupeThereItIs@lemmy.world · 7 months ago · +3/-2

        Except it’s dragged down by the average and sub-average humans whose data it’s trained on.

        So it’s maybe smarter than the average, MAYBE.