ChatGPT generates cancer treatment plans that are full of errors

Study finds that ChatGPT provided false information when asked to design cancer treatment plans. Researchers at Brigham and Women’s Hospital found that treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • PeleSpirit@lemmy.world · ↑4 ↓13 · 1 year ago

    Because if it’s able to crawl all of the science pubs, then it would be able to try different combos until something works. Isn’t that how it could be, or is being, used to test stuff?

    • Ranessin@feddit.de · ↑15 · 1 year ago

      It doesn’t check the stuff it generates for anything beyond grammatical and orthographic errors. It isn’t intelligent, and it has no knowledge beyond how to create text. The text looks useful, but the model doesn’t know what it contains the way something intelligent would.
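
      Roughly what “only creates text” means, as a toy sketch (the probability table below is made up for illustration; a real LLM is the same loop at enormous scale):

      ```python
      import random

      # Toy stand-in for a language model: it maps the current word to
      # plausible next words. Nothing in this loop checks whether the
      # output is true; it only ever asks "what word usually comes next?"
      NEXT_WORD = {
          "cisplatin": [("treats", 0.6), ("cures", 0.4)],
          "treats":    [("testicular", 0.5), ("bladder", 0.5)],
          "cures":     [("cancer", 1.0)],  # fluent, but medically wrong
      }

      def generate(word, steps=3):
          out = [word]
          for _ in range(steps):
              choices = NEXT_WORD.get(out[-1])
              if not choices:
                  break
              words, probs = zip(*choices)
              out.append(random.choices(words, weights=probs)[0])
          return " ".join(out)

      print(generate("cisplatin"))  # may print "cisplatin cures cancer": no fact-check anywhere
      ```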

      • PeleSpirit@lemmy.world · ↑1 ↓6 · 1 year ago

        It seems like it could check for that, though, which is what ChatGPT doesn’t do but we all assumed it would. I’m sure there are AI programs that could, and do, check possibilities against only information we know to be true.
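
        The kind of check being described here is possible in principle: compare each generated claim against a curated store of vetted facts and reject anything unsupported. A minimal sketch (the fact store and claim format are invented for illustration):

        ```python
        # Hypothetical store of claims that have been vetted by humans.
        VERIFIED_FACTS = {
            ("cisplatin", "treats", "testicular cancer"),
            ("tamoxifen", "treats", "breast cancer"),
        }

        def is_supported(claim):
            """Accept a (subject, relation, object) claim only if it was vetted."""
            return claim in VERIFIED_FACTS

        print(is_supported(("cisplatin", "treats", "testicular cancer")))  # True
        print(is_supported(("cisplatin", "cures", "all cancer")))          # False: reject
        ```

        The hard part in practice is reliably mapping free-form generated text onto structured claims like these, which is its own open problem.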

    • stephen01king@lemmy.zip · ↑5 · 1 year ago

      If you want an AI that can create cancer treatments, you need to train it on creating cancer treatments, not just use one that is trained on general knowledge. Even if you train it on science publications, all it can reliably do is mimic a science journal, since it has not been trained to parse the knowledge in the journals themselves.
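
      To make that concrete: the training objective for these models only rewards predicting the next token, so even a model trained purely on oncology journals learns journal style, not vetted treatment logic. A minimal sketch of that objective (the model and data below are placeholders, not a real setup):

      ```python
      import torch
      import torch.nn as nn

      vocab_size, dim = 1000, 64
      # Tiny placeholder model: embed a token, predict the next one.
      model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
      loss_fn = nn.CrossEntropyLoss()
      opt = torch.optim.Adam(model.parameters())

      def train_step(tokens):
          """One step of next-token prediction: the only thing being optimized."""
          inputs, targets = tokens[:-1], tokens[1:]
          logits = model(inputs)
          loss = loss_fn(logits, targets)  # penalizes implausible text, not false text
          opt.zero_grad()
          loss.backward()
          opt.step()
          return loss.item()

      journal_tokens = torch.randint(0, vocab_size, (32,))  # stand-in for tokenized journal text
      print(train_step(journal_tokens))
      ```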

      • amki@feddit.de · ↑3 · edited 1 year ago

        Which is exactly the problem people think has been solved, but it isn’t anywhere near being solved. It cannot comprehend semantics; the meaning of things is completely beyond it, and beyond all other current AIs.

        Unfortunately, saying “I made a thing that creates vaguely human-looking speech with little content” isn’t astonishing to most people. So they go looking for something useful this “breakthrough” machine must be able to do, don’t find anything, and that leads to articles like this one.

      • PeleSpirit@lemmy.world · ↑1 · 1 year ago

        Right, but can’t they tell it to also try thousands and thousands of combos that humans could never get through? I think ChatGPT is both super amazing and as stupid as a rock at the same time. I thought the vaccine research used an AI to do that. I’m obviously clueless; I’m seriously asking.
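
        The “try thousands of combos” part is a real technique, but it’s a search problem driven by a separate predictive model, not a chatbot. A rough sketch of the idea (the drug names and scoring function are placeholders; a real pipeline would use a model trained on assay data and validate every hit in the lab):

        ```python
        from itertools import combinations

        drugs = ["drugA", "drugB", "drugC", "drugD"]  # hypothetical candidates

        def predicted_efficacy(combo):
            """Placeholder for a trained predictive model's score."""
            return sum(len(d) for d in combo) % 7 / 7  # dummy number for illustration

        # Exhaustively score every 2- and 3-drug combination and rank them.
        candidates = [c for r in (2, 3) for c in combinations(drugs, r)]
        ranked = sorted(candidates, key=predicted_efficacy, reverse=True)
        print(ranked[:3])  # top hits go to lab validation; they are never trusted blindly
        ```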