• intrepid@lemmy.ca · 7 months ago

    Doesn’t mean it won’t hallucinate. Or whatever you call it when an AI makes up crap.

    • mwguy@infosec.pub · 7 months ago

      LLMs hallucinate all the time; the hallucination is the feature. Depending on how you design the neural network, you can get an AI that doesn’t hallucinate. LLMs have to do it, because they’re mimicking human speech patterns and predicting one of many possible responses.

      A model that tries to predict people’s locations likely wouldn’t work like that.
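
      A minimal sketch of the distinction being drawn here, using made-up toy numbers rather than any real model: an LLM-style decoder samples one of many plausible next tokens, so producing something that isn’t true is built into the mechanism, whereas a model estimating a location can just return a single deterministic point estimate.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # "LLM-style" decoding: sample one of many plausible next tokens.
      # Real models compute this distribution from billions of parameters,
      # but the sampling step itself works like this.
      vocab = ["Paris", "London", "Berlin", "the Moon"]
      probs = np.array([0.55, 0.25, 0.15, 0.05])

      for _ in range(3):
          token = rng.choice(vocab, p=probs)
          print("LLM-style sample:", token)  # may occasionally emit "the Moon"

      # Regression-style prediction: one deterministic estimate, no sampling.
      # A location model can output its best guess (e.g. lat/lon), optionally
      # with an uncertainty, instead of drawing from a distribution of answers.
      def predict_location(features: np.ndarray) -> np.ndarray:
          weights = np.array([[0.1, 0.2], [0.3, -0.1]])  # toy fitted weights
          return features @ weights                       # point estimate

      print("Location estimate:", predict_location(np.array([1.0, 2.0])))
      ```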