• JackGreenEarth@lemm.ee

    It’s only a problem if you expect them to do formal reasoning. They are fancy word predictors, useful when the output doesn’t need to be factually accurate. If you use them for things they’re not designed for, you’ll get bad results, but that’s your fault for using them incorrectly, not the LLM’s. You don’t use a screwdriver to bang in a nail and then say the screwdriver ‘has a HUGE problem’ when it does a bad job.

    • Hazzard@lemm.ee

      I think it is a problem. Maybe not for people like us, who understand the concept and its limitations, but “formal reasoning” is exactly how this technology is being pitched to the masses. “Take a picture of your homework and OpenAI will solve it”, “have it reply to your emails”, “have it write code for you”. All reasoning-heavy tasks.

      On top of that, Google and Bing have it answering user questions directly, it’s commonly pitched as a “tutor” or an “assistant”, the OpenAI API is being shoved everywhere under the sun for all kinds of tasks, and nobody is attempting to clarify its weaknesses in their marketing.

      As it becomes more and more common, more and more users who don’t understand that it’s fundamentally incapable of reliably doing these things will crop up.

    • geekwithsoul@lemm.ee

      The problem is that laymen expect it to do reasoning, so the sales and marketing team says it can do reasoning, and then the CEO drinks the Kool-Aid and restructures the company because he believes it can do reasoning.

    • ☆ Yσɠƚԋσʂ ☆OP

      Right, I find LLMs are fundamentally no different from Markov chains. That doesn’t mean they’re not useful; they’re a tool that’s good for certain use cases. Unfortunately, we’re in a hype phase right now where people are trying to apply them to a lot of cases they’re terrible at, and where better tools already exist to boot.

      • vrighter@discuss.tchncs.de

        They aren’t. The only difference is that the state-transition table is so unimaginably gargantuan that we can only generate an approximation of a tiny slice of it, instead of it literally being a table.
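
        To make the comparison concrete, here’s a minimal sketch (my own illustration, not from the thread or any particular library): a literal Markov chain stores an explicit next-token table, while an LLM replaces that table with a learned function over the context, since the full table over long contexts would be astronomically large.

        ```python
        from collections import defaultdict, Counter
        import random

        # A literal Markov chain: an explicit state-transition table mapping
        # the current token to a count-based distribution over next tokens.
        def build_bigram_table(tokens):
            table = defaultdict(Counter)
            for cur, nxt in zip(tokens, tokens[1:]):
                table[cur][nxt] += 1
            return table

        def sample_next(table, current):
            counts = table[current]
            choices, weights = zip(*counts.items())
            return random.choices(choices, weights=weights)[0]

        # An LLM plays the same role -- a next-token distribution given the
        # context -- but the "table" is approximated by a parametric function
        # (a neural net) instead of being stored explicitly:
        #
        #     next_token_dist = neural_net(context_tokens, params)  # ~ table[context]

        corpus = "the cat sat on the mat and the cat ran".split()
        table = build_bigram_table(corpus)
        print(sample_next(table, "the"))  # e.g. "cat" or "mat"
        ```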

  • slacktoid

    This knife is a bad hammer… I wonder why?

  • Letstakealook@lemm.ee

    And yet people will continue to argue that LLMs are demonstrating understanding and problem solving. This shit is just ELIZA on steroids. I’m not saying it didn’t require skill or knowledge to create, but it is in no way close to what it is being billed as.

      • m_f@midwest.social

        Gary Marcus should be disregarded because he’s emotionally invested in The Bitter Lesson being wrong. He really wants LLMs to not be as good as they already are. He’ll find some interesting research about “here’s a limitation that we found” and turn that into “LLMS BTFO IT’S SO OVER”.

        The research is interesting for helping improve LLMs, but that’s the extent of it. I would not be worried about the limitations the paper found for a number of reasons:

        • There doesn’t seem to be any reason to believe that there’s a ceiling on scaling up
        • LLMs’ reasoning abilities improve with scale (notice that for the kiwi example they included answers from o1-mini and llama3-8B, which are much smaller models with much more limited capabilities; GPT-4o got the problem correct when I tested it, without any special prompting techniques or anything)
        • Techniques such as RAG and Chain of Thought help immensely on many problems
        • Basic prompting techniques help, like “Make sure you evaluate the question to ignore extraneous information, and make sure it’s not a trick question”
        • LLMs are smart enough to use tools. They can go “Hey, this looks like a math problem, I’ll use a calculator”, just like a human would (a rough sketch of that routing pattern follows this list)
        • There’s a lot of research happening very quickly here. For example, LLMs improve at math when you use a different tokenization method, because it changes how the model “sees” the problem
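
        As referenced in the tool-use point above, here’s a minimal, vendor-neutral sketch of the “delegate the arithmetic to a tool” pattern. The `fake_llm_decide` router and `calculator` helper are hypothetical stand-ins for illustration only; a real system would ask the model itself which tool to call (e.g. via a function-calling API).

        ```python
        import ast
        import operator

        # Hypothetical stand-in: in a real system the model decides which tool
        # to call; here a trivial heuristic plays that role.
        def fake_llm_decide(question: str) -> str:
            return "calculator" if any(ch.isdigit() for ch in question) else "answer_directly"

        def calculator(expression: str) -> float:
            """Safely evaluate a basic arithmetic expression instead of
            trusting the model to do the arithmetic itself."""
            ops = {ast.Add: operator.add, ast.Sub: operator.sub,
                   ast.Mult: operator.mul, ast.Div: operator.truediv}
            def ev(node):
                if isinstance(node, ast.BinOp):
                    return ops[type(node.op)](ev(node.left), ev(node.right))
                if isinstance(node, ast.Constant):
                    return node.value
                raise ValueError("unsupported expression")
            return ev(ast.parse(expression, mode="eval").body)

        question = "What is 44 - 5 * 6?"
        if fake_llm_decide(question) == "calculator":
            print(calculator("44 - 5 * 6"))  # 14 -- delegated, not "predicted"
        ```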

        Until we hit a wall and really can’t find a way around it for several years, this sort of research falls into the “huh, interesting” territory for anybody that isn’t a researcher.

        • ☆ Yσɠƚԋσʂ ☆OP

          Actually we do know that there are diminishing returns from scaling already. Furthermore, I would argue that there are inherent limits in simply using correlations in text as the basis for the model. Human reasoning isn’t primarily based on language, we create an internal model of the world that acts as a shared context. The language is rooted in that model and that’s what allows us to communicate effectively and understand the actual meaning behind words. Skipping that step leads to the problems we’re seeing with LLMs.

          That said, I agree they are a tool, and they obviously have uses. I just think that they’re going to be a part of a bigger tool set going forward. Right now there’s an incredible amount of hype associated with LLMs. Once the hype settles we’ll know what use cases are most appropriate for them.

          • m_f@midwest.social

            The whole “it’s just autocomplete” is just a comforting mantra. A sufficiently advanced autocomplete is indistinguishable from intelligence. LLMs provably have a world model, just like humans do. They build that model by experiencing the universe via the medium of human-generated text, which is much more limited than human sensory input, but has allowed for some very surprising behavior already.

            We’re not seeing diminishing returns yet, and in fact we’re going to see some interesting stuff happen as we start hooking up sensors and cameras as direct input, instead of these models building their world model indirectly through text alone. Let’s see what happens in 5 years or so before saying that there are any diminishing returns.

            • ☆ Yσɠƚԋσʂ ☆OP

              I’m saying that the medium of text is not a good way to create a world model, and the problems LLMs have stem directly from people trying to do that. Just because autocomplete produces results that look fancy doesn’t make it actually meaningful. These things are great for scenarios where you just want to produce something aesthetically pleasing like an image or generate some text. However, this quickly falls apart when it comes to problems where there is a specific correct answer.

              Furthermore, there is plenty of progress being made with DNNs and CNNs using embodiment, which looks to be far more promising than LLMs at actually producing machines that can interact with the world meaningfully. This idea that GPT is some holy grail of AI seems rather misguided to me. It’s a useful tool, but there are plenty of other approaches being explored, and it’s most likely that future systems will use a combination of these techniques.