ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate | It was bad at recognizing relationships and needs selective training, researchers say.

  • kromem@lemmy.world
    11 months ago

    This is a fucking terrible study.

    They compare their results to a prior general diagnostic evaluation of GPT-4, which scored better, and attribute the gap to the pediatric focus.

    While largely glossing over the fact that they are using GPT-3.5 instead.

    GPT-3.5 sucks at any critical reasoning task. This is a pretty worthless study: it uses neither the SotA model nor best practices in prompting, so it doesn’t reflect what a production-grade deployment of an LLM for pediatric diagnostics would actually look like.
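    For illustration only, here’s a minimal sketch of what “best practices in prompting” could mean here: a stronger model, a role-setting system prompt, and an instruction to reason step by step before committing to a diagnosis. Everything in it (the model choice, the prompt wording, the case text) is a hypothetical example, not anything from the study; the message structure follows the OpenAI chat format.

    ```python
    # Hypothetical sketch, not the study's setup: assemble a chat-completion
    # request for a pediatric case vignette using common prompting practices.

    def build_diagnostic_prompt(case_report: str) -> dict:
        """Build a chat request that asks for stepwise diagnostic reasoning."""
        system = (
            "You are a board-certified pediatrician. Read the case, list the "
            "key findings, reason step by step through a differential "
            "diagnosis, then state the single most likely diagnosis."
        )
        return {
            "model": "gpt-4",      # a SotA model, vs. the study's GPT-3.5
            "temperature": 0,      # deterministic output for evaluation
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": case_report},
            ],
        }

    # Invented example vignette, purely for illustration:
    request = build_diagnostic_prompt(
        "A 6-year-old presents with five days of fever and a rash."
    )
    ```

    The point isn’t this exact prompt; it’s that an evaluation claiming to measure deployed-LLM diagnostic accuracy should at least control for model choice and prompt design.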

    And we really need to stop just spamming upvotes for stuff with little actual worth just because it’s a negative headline about AI and that’s all the jazz these days.