ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • iforgotmyinstance@lemmy.world · 1 year ago

    I know university professors who are struggling with this concept. They are so convinced that using an LLM is plagiarism.

    It can lead to plagiarism if you use it poorly, which is why you control the information you feed it, then proofread and edit the output.

    • zeppo@lemmy.world · 1 year ago

      Another related confusion in academia recently is the ‘AI detector’. These tools can easily be defeated with minor rewrites, assuming they were even accurate in the first place. My favorite misconception comes from a story of a professor who told students, “I asked ChatGPT if it wrote this, and it said yes,” which is just not how it works.