ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • Elderos@lemmings.world · 1 year ago

    I think it is still pretty decent at “coding”, but it used to be that I could throw almost anything at it, about virtually any topic, and most of the time there were no hallucinations if the prompt was good. It has always been prone to being influenced by biased prompts, so you had to work around that.

    Lately I feel like I really need to draw answers out of it. You have to insist to get past the cookie-cutter responses and all the newly added “I am not a specialist, therefore I refuse to answer”. Historical questions are often refused too if deemed offensive. But really, it’s been unable to give me all sorts of general facts like it used to, lying about its 2021 knowledge cutoff even after being corrected and apologizing. Some answers are purely 100% hallucinations. It’s never been perfect, but it used to be better than googling; not anymore, imo. It is still probably decent for non-historical questions, like coding.

    • SolNine · 1 year ago

      My experience mirrors this, though I don’t use it for much beyond a specific use case… I’m wondering why they neutered it so much?