… and neither does the author (or so I believe - I made them both up).

On the other hand, AI is definitely good at creative writing.

  • A_A@lemmy.world · 13 hours ago

    You can trigger hallucinations in today’s versions of LLMs with these kinds of questions. Same with a knife: you can hurt yourself by misusing it … and in fact you have to be knowledgeable and careful with both.

    • can@sh.itjust.works · 11 hours ago

      Maybe ChatGPT should find a way to physically harm users when it hallucinates? Maybe then they’d learn.

      • A_A@lemmy.world · 10 hours ago

        AI-hallucinated books describing which mushrooms you can pick in the forest have been published, and some people have died because of this.
        We have to be careful when using AI!

    • wizardbeard@lemmy.dbzer0.com · 12 hours ago

      The knife doesn’t insist it won’t hurt you, and you can’t get cut holding the handle. AI, by contrast, insists it is correct, and you can get false information while using it as intended.

      • sus@programming.dev · 10 hours ago

        can’t wait for gun companies to start advertising their guns as “intelligent” and “highly safe”