• howrar@lemmy.ca · 59 points · 5 months ago

    We have models that are specifically made to be good at these kinds of tasks. Why would you choose the ones that aren’t and then make generalizing claims about how AI sucks in this domain?

    • spaduf@slrpnk.net · +16/-1 · 5 months ago

      Yeah, this is probably just straight-up misinformation. By no means is a diagnosis going to be made by a generalist multimodal LLM. Diagnosis is literally a binary classification (although that is an oversimplification), and in medical CV you optimize on that metric directly.
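For what it's worth, the "binary classification" being described can be sketched in a few lines — here as a from-scratch logistic regression over made-up one-dimensional "scan features" (everything below is synthetic illustration, not a real medical pipeline):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=200):
    """Fit a binary classifier with plain per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Synthetic stand-in for image features: "abnormal" scans score higher.
random.seed(0)
X = [[random.gauss(1.0, 0.5)] for _ in range(50)] + \
    [[random.gauss(-1.0, 0.5)] for _ in range(50)]
y = [1] * 50 + [0] * 50
w, b = train_logistic(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
```

The point is only that a dedicated classifier is optimized end-to-end on exactly the yes/no question being asked, which is not what a chat-style model does.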

      • snooggums@midwest.social · +8/-11 · 5 months ago

        They did not use a LLM.

        In a recent experiment, they set out to determine how reliable LMMs are in medical diagnosis — asking both general and more specific diagnostic questions — as well as whether models were even being evaluated correctly for medical purposes.

        Curating a new dataset and asking state-of-the-art models questions about X-rays, MRIs and CT scans of human abdomens, brain, spine and chests, they discovered “alarming” drops in performance.

        • Thorry84@feddit.nl · +21/-2 · 5 months ago

          You’ve quoted them stating they used LLMs while claiming they did not use a LLM? What am I missing here?

          • everett · +9/-3 · 5 months ago

            What am I missing here?

            “L” “M” “M”

              • blindsight@beehaw.org · 6 points · 5 months ago

                Correct.

                large language models (LLM) vs. large multi-modal models (LMM)

                Regardless, they both use an LLM as the main driver. Multi-modal just means the LLM is interfaced with generative and/or predictive AIs for other types of content, like images, sound and video.

                This is using a generalist tool for a specialized job. I’d expect the limit for LMMs is telling you if your picture is a heart or a kidney… Maybe. With low accuracy. Diagnosing? lol, hell no.

        • Starbuck@lemmy.world · +11/-1 · 5 months ago

          models including GPT-4V and Gemini Pro

          What a joke, a few generic LLMs making a judgement call about all AI models.

        • can@sh.itjust.works · 2 points · 5 months ago

          They used one to create the dataset for their experiments:

          In their experiments, they introduced a new dataset, Probing Evaluation for Medical Diagnosis (ProbMed), for which they curated 6,303 images from two widely-used biomedical datasets. These featured X-ray, MRI and CT scans of multiple organs and areas including the abdomen, brain, chest and spine.

          GPT-4 was then used to pull out metadata about existing abnormalities, the names of those conditions and their corresponding locations. This resulted in 57,132 question-answer pairs covering areas such as organ identification, abnormalities, clinical findings and reasoning around position.
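As a rough illustration of that pipeline step, metadata about a scan can be turned into question-answer pairs procedurally; the field names and question templates below are invented for the sketch, not the ones ProbMed actually uses:

```python
def make_qa_pairs(record):
    """Turn one scan's metadata into procedural QA pairs."""
    pairs = [
        (f"What organ is shown in this {record['modality']}?", record["organ"]),
        ("Are there any abnormalities in this image?",
         "yes" if record["abnormalities"] else "no"),
    ]
    # One positional question per annotated abnormality.
    for abn in record["abnormalities"]:
        pairs.append((f"Where is the {abn['condition']} located?",
                      abn["location"]))
    return pairs

# Hypothetical metadata record for a single scan.
example = {
    "modality": "chest X-ray",
    "organ": "lungs",
    "abnormalities": [{"condition": "nodule", "location": "left upper lobe"}],
}
qa = make_qa_pairs(example)
```

Generating pairs this way is how 6,303 images can fan out into 57,132 questions: each image yields several questions about identity, abnormality and location.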

          • snooggums@midwest.social · +1/-1 · 5 months ago

            The seven models tested included GPT-4V, Gemini Pro and the open-source, 7B parameter versions of LLaVAv1, LLaVA-v1.6, MiniGPT-v2, as well as specialized models LLaVA-Med and CheXagent. These were chosen because their computational costs, efficiencies and inference speeds make them practical in medical settings, researchers explain.

            It seems like this is a case of “they just aren’t using AI right, if they used it right it works” when it sure looks like they are using the models intended for these specific medical tasks.

            • spaduf@slrpnk.net · +4/-1 · 5 months ago

              Those are not the sort of models anybody in the field would use (medical CV with deep-learning-based analysis is a vibrant field with many breakthroughs in recent years). These are the sort of models tech bros are trying to sell to the public as general AI. There is a world of difference.

    • NocturnalEngineer@lemmy.world · 4 points · 5 months ago

      Not defending this article, but companies & big tech are generalizing the crap out of AI right now, and forcing it into everything.

      They could have (and definitely should’ve) promoted the strengths and weaknesses of their models, specifically regarding what they can and can’t do. But they don’t. They get more money when their shareholders and customers think it’s the next best thing for everything.

  • ResoluteCatnap · +40/-1 · 5 months ago

    As others have said, you don’t need (and shouldn’t use) an LLM for a classification task like this. There are machine learning models that can handle this and identify underlying patterns that humans cannot easily detect. And yes, they can achieve accuracy and precision scores much higher than 50%.

    What an incredibly stupid article.
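To make the 50% point concrete: accuracy and precision are simple counts over predictions, and a coin-flip baseline hovers near 0.5 accuracy, which is the bar any useful classifier has to clear. A minimal sketch with made-up labels:

```python
def accuracy_precision(y_true, y_pred):
    """Accuracy = correct / total; precision = TP / (TP + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, precision

# Hypothetical ground truth vs. model output for eight scans.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
acc, prec = accuracy_precision(y_true, y_pred)
# acc = 6/8 = 0.75, prec = 2/3
```

In medical use you also care about recall (missed cases), which is why these models are evaluated on far more than raw accuracy.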

    • Umbrias@beehaw.org · 11 points · 5 months ago

      Correct, you shouldn’t use an LLM for this task.

      Which is literally the point of the paper, because various techbros have been trying to claim that they are good at these tasks.

  • Thorry84@feddit.nl · +38/-3 · 5 months ago

    This is pretty dumb; machine learning algorithms (fuck off with calling it AI) are especially good at seeing signs of disease in data such as X-ray, CT and MRI scans. It’s the one place they really help save time and prevent mistakes. And even if it’s just to flag shit for a second opinion by a doctor and not to replace the doctor, that’s still super useful. Pattern recognition is hard, and these kinds of algorithms are very good at it if provided the right source data to work from.

    If only the media and big corps would stop claiming LLMs are general AI, then maybe people would stop using them for stuff they’re clearly not good at and not meant for.
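The "flag for a second opinion" workflow described here is essentially thresholding a model's score; a toy sketch with hypothetical scan names, scores and threshold (none of these values come from a real system):

```python
def triage(scans, model_score, flag_threshold=0.3):
    """Flag any scan whose model score exceeds the threshold for human review.

    A deliberately low threshold trades false positives for fewer missed
    cases: the model never diagnoses, it only prioritizes the doctor's queue.
    """
    return [scan for scan in scans if model_score(scan) >= flag_threshold]

# Hypothetical stand-in for a trained imaging model's probability output.
scores = {"scan_a": 0.05, "scan_b": 0.42, "scan_c": 0.91}
flagged = triage(scores, lambda s: scores[s])
```

The design choice worth noting is that the threshold, not the model, encodes the clinical policy: how many false alarms you accept per missed finding.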

    • jsomae · +17/-3 · 5 months ago

      This isn’t dumb. This is a very good study as it is helping to remind people that these fancy new tools aren’t good at everything. The media reporting on this is doing a service.

      Edit: my bad making two responses

      • spaduf@slrpnk.net · +7/-2 · 5 months ago

        By casting doubt on a related but fundamentally different bit of medical tech? Yeah that’s what we need: more folks questioning medicine based on pop science understandings of the technology.

        • Umbrias@beehaw.org · +1/-4 · 5 months ago

          A study debunking the use of LLMs in medicine has almost no impact on general machine learning applications in medicine. This is textbook concern trolling.

    • jsomae · 7 points · 5 months ago

      Can’t stop people calling it AI. People have called video game bots AI since the 90s, even in industry. Any algorithm is a form of artificial intelligence, really. LLMs and machine vision are multipurpose, though I agree that general-purpose is still a stretch.

      • 0ops@lemm.ee · 4 points · 5 months ago

        Seriously, the field of artificial intelligence has been around since the beginning of computer science, since Alan Turing founded it after coming up with the modern computer. Frankly, if you ask me, anyone complaining about LLMs being referred to as AI has been watching too many movies. AI != Human-but-metal, and it never has been. Going by the Wikipedia article, to be considered AI, a machine just has to perceive its environment and learn, degree notwithstanding.

        Of course this definition is pretty vague, so in practice AI tends to refer to the cutting edge of flexible computer algorithms. Many now-mundane algorithms much simpler than today’s LLMs (like A* and genetic algorithms) were once considered AI for their flexible logic. At some point the Internet decided that it doesn’t count unless it’s literally Jarvis, but that’s a very stingy definition of a very broad field.
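A* is a good example of how mundane "classic AI" looks today — a compact sketch of the full algorithm on a small grid:

```python
import heapq

def a_star(grid, start, goal):
    """Classic A* on a 4-connected grid; 0 = free cell, 1 = wall.
    Returns the number of steps on a shortest path, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start)]        # (f = g + h, g, position)
    best = {start: 0}
    while frontier:
        _, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                if g + 1 < best.get((r, c), float("inf")):
                    best[(r, c)] = g + 1
                    heapq.heappush(frontier, (g + 1 + h((r, c)), g + 1, (r, c)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
steps = a_star(grid, (0, 0), (2, 0))  # must route around the wall: 6 steps
```

Heuristic search like this was a headline AI result in its day; now it's a standard library exercise.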

      • xep@fedia.io · 4 points · 5 months ago

        Why wouldn’t agents in video games be AI, though? Things like pathfinding, search, and behaviour trees are commonly used for agents in games, and in computer science these are widely considered artificial intelligence techniques. It’s unlikely that you would find a CS textbook calling the Fast Fourier Transform AI, though, or things like Bresenham’s line-drawing algorithm.
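The contrast is easy to see in code: Bresenham's line algorithm is pure deterministic integer arithmetic, with no search, heuristics or learning involved — which is exactly why nobody shelves it under AI:

```python
def bresenham(x0, y0, x1, y1):
    """Bresenham's line: rasterize the segment (x0,y0)-(x1,y1)
    using only integer additions and comparisons."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    points = []
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:       # step in x
            err += dy
            x0 += sx
        if e2 <= dx:       # step in y
            err += dx
            y0 += sy
    return points

line = bresenham(0, 0, 4, 2)
# line == [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```

Every input maps to exactly one output by fixed arithmetic; an A* agent, by contrast, explores alternatives and ranks them, which is the "intelligence" in the textbook sense.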

        • jsomae · 2 points · 5 months ago

          Absolutely. I wouldn’t call Bresenham AI. In some contexts, like games, I might call A* search AI. But to someone from the Victorian era who paid people to compute Taylor series by hand, something basic and flexible like a microprocessor that can run Bresenham or FFT, etc., might have been seen as artificial intelligence: using a machine to solve a problem that normally requires human brainpower.
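That hand computation is trivial to mechanize today — e.g. approximating sin(x) from its Taylor series, exactly the kind of sum human "computers" were once paid to evaluate term by term:

```python
import math

def taylor_sin(x, terms=10):
    """Approximate sin(x) with its Taylor series around 0:
    sin(x) = x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

approx = taylor_sin(math.pi / 6)  # sin(30 degrees) = 0.5
```

Ten terms already agree with math.sin to well beyond float display precision for small arguments.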

  • Match!!@pawb.social · 21 points · 5 months ago

    Coincidentally, I trained a CNN to tell dogs from cats and it does a godawful job diagnosing cancer

  • Pennomi@lemmy.world · 10 points · 5 months ago

    LLMs are notorious yes-men. Why would you ever use that for diagnosis? Just use bespoke classifiers like we have for years.

    • Lojcs@lemm.ee · 3 points · 5 months ago

      Because some researcher wanted to document what would happen and a journalist thought writing about that would get many clicks