• TropicalDingdong@lemmy.world · 5 months ago

    You are operating at science-denialism levels of ignorance in this conversation, or perhaps you don’t actually understand the underlying philosophy of scientific inquiry well enough to see why LLMs were basically able to break the back of both UG and innate acquisition.

    You seem like the kind of person who cheerleads “in the spirit of science” but doesn’t actually engage with it as a philosophical enterprise. You didn’t seem to notice the key point that @Not_mikey@slrpnk.net made, which is that unlike UG, LLMs are actually testable. That’s the whole thing right there, and if you don’t get the difference, that’s fine, but it speaks to your level of understanding of how one actually goes about conducting scientific inquiry.

    And if you want to talk about incurious:

    You might as well say that the longer you stare at a printed painting, the harder it is to deny that printers make art. LLM’s do not “understand” their outputs or their inputs. If we feed them nonsense, they output nonsense. There’s no underlying semantics whatsoever. LLM’s are a mathematical model.

    Specifically, “There’s no underlying semantics whatsoever” is the key linchpin: underlying semantics is what UG demands, and what LLMs demonstrate is not strictly necessary. It’s exactly why Chomsky’s house of cards crumbles against the counterfactual to UG and innate acquisition that LLMs offer. I had a chance to ask him this question directly about six months before that op-ed was published, and his response was about as incurious about why LLMs, and big complex networks in general, are able to learn as what you’ve offered here: basically the same regurgitation of UG and innate acquisition that he gives in the op-ed. And the key point is that yes, LLMs are just a big bucket of linear algebra, but they represent an actually testable instrument for learning how a language might be learned. That was the most striking part of Chomsky’s response, and I found it particularly galling.

    And it is interesting that yes, if you feed transformers (I’m going to start using the right term here) unstructured garbage, you get unstructured garbage out. However, if there is something there to learn, they seem to be at least somewhat effective at finding it. And that occurs in non-language systems as well, including image transformers, transformers used to predict series data like temperature or stock prices, even DNA and RNA sequences. We will probably soon have transformers capable of translating animal vocalizations like whale and dolphin songs. If you have structured series data, transformers seem effective at learning its patterns and generating coherent responses.
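    To make that concrete: the core operation all of those transformers share, whether the tokens are words, temperatures, or nucleotides, is self-attention over a sequence. Here is a minimal numpy sketch (names and shapes are illustrative, not from any particular library) of a single attention head applied to a toy numeric series:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def self_attention(x, w_q, w_k, w_v):
        """Single-head self-attention: x is (seq_len, d_model)."""
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        # Pairwise similarity between every pair of positions in the series.
        scores = q @ k.T / np.sqrt(k.shape[-1])
        # Row-wise softmax: each position distributes attention over the sequence.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v, weights

    d_model = 8
    series = rng.normal(size=(16, d_model))   # 16 "time steps" of toy data
    w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))

    out, weights = self_attention(series, w_q, w_k, w_v)
    print(out.shape)                           # (16, 8)
    print(np.allclose(weights.sum(axis=-1), 1.0))  # each attention row sums to 1
    ```

    Nothing in that computation knows or cares that the input is language; that domain-agnosticism is the point.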

    Here’s the thing. Chomsky’s UG represented a monolith in the world of language, language acquisition, and learning, and frankly it was an actual barrier to progress in the entire domain, because we now have a counterfactual where learning occurs and neither UG nor innate acquisition is necessary or at all relevant. It’s about as complete a collapse of those ideas as we’ll get, because in at least one case of language acquisition they are completely irrelevant.

    And honestly, if you can’t handle criticism of ideas in the sciences, you don’t belong in the domain. Breaking other people’s ideas is fundamental to the process, and it’s problematic when people assume you need some alternative in place before you can break someone else’s work.

  • yeahiknow3@lemmings.world · edited · 5 months ago

      the key linchpin that UG demands and that LLMs demonstrate is not strictly necessary

      You know what, I’m going to be patient. Let’s syllogize your argument so everyone can get on the same page, shall we?

      1. LLMs have various properties.
      2. ???
      3. Therefore, the UG hypothesis is wrong.

      This argument is not valid, because it’s missing at least one premise. Once you come up with a valid argument, we can debate its premises. Until then, I can’t actually respond, because you haven’t said anything substantive.

      The mainstream opinion in linguistics is that LLMs are mostly irrelevant. If you believe otherwise, for instance that LLMs can offer insight into some abstract UG hypothesis about developmental neurobiology, explain why, and maybe publish your theory for peer review.

    • TropicalDingdong@lemmy.world · edited · 5 months ago

        You don’t need to project a false argument onto what I was saying.

        Chomsky’s basic arguments:

        1: UG requires understanding the semantic roles of words and phrases to map syntactic structures onto semantic structures.

        2: UG posits that certain principles of grammar are universal, and that syntactic and semantic representation is required because meaning changes with structure. The result is semantic universals: basic meanings that appear across all languages.

        3: Semantic bootstrapping is then invoked to explain how children use their understanding of semantic categories to learn the syntactic structures of language.

        LLMs torpedo all of this as unnecessary to language acquisition, because they offer at least one example where none of the above needs to be invoked. LLMs have no innate understanding of language; it’s just pattern recognition and association. In UG, semantics is intrinsically linked to syntactic structure; in an LLM, semantics is learned indirectly through exposure rather than through an innate framework. LLMs show that UG and all of its complexity are totally unnecessary in at least one case of demonstrated language acquisition. That’s huge. It’s beyond huge. It gives us a testable, falsifiable path forward that UG didn’t.

        The mainstream opinion in linguistics is that LLMs are mostly irrelevant.

        Largely because of Chomsky. To invoke Planck’s principle: science advances one funeral at a time. Linguistics will finally be able to evolve past the rut it’s been in, and we now have real technical tools to do testable, reproducible, quantitative analysis at scale. We’re going to see more change in what we understand about language over the next five years than we’ve learned in the previous fifty. Prior to now we didn’t have anything other than baby humans with which to study the properties of language acquisition. Language acquisition in humans is now a subset of the domain, because we can actually talk about and study language acquisition outside the context of humans. In a few more years, linguistics won’t look at all like it did four years ago. If departments don’t adapt to this new paradigm, they’ll become like all those now-laughable geography departments that didn’t adapt to the satellite revolution of the 1970s: funny little backwaters of outdated modes of thinking the world has passed by. LLMs for the study of language acquisition are like the invention of the microscope, and Chomsky completely missed the boat because it wasn’t his boat.

      • yeahiknow3@lemmings.world · edited · 5 months ago

          Your conclusion (which I assume is implied, since you didn’t bother to write it anywhere) might be something like,

          • Mathematical models built on enormous data sets do a good job of simulating human conversations (LLMs pass the Turing test)… THEREFORE, homo sapiens lack an innate capacity for language (i.e., the UG Hypothesis is fundamentally mistaken).

          My issue is that I just don’t see how to draw this conclusion from your premises. If you were to reformulate your premises into a valid argument structure, we could discuss them and find some common ground.

        • TropicalDingdong@lemmy.world · 5 months ago

            You haven’t demonstrated that you have any real comprehension of the domain, or that you bring anything interesting enough to this conversation to warrant furtherance.

          • yeahiknow3@lemmings.world · edited · 5 months ago

              Harsh words for someone who can’t even state a valid argument. I mean, do you expect me to guess how your conclusion follows from your unrelated premises?

              1. Roses are red.
              2. Violets are blue.
              3. An LLM passed the Turing test.
              4. Therefore, humans lack an innate language capacity.
            • TropicalDingdong@lemmy.world · 5 months ago

                I’ve been both cogent and clear as to what my points are, and you’ve made none. You are a joke if you think yourself an intellectual.

              • yeahiknow3@lemmings.world · edited · 5 months ago

                  You cogently failed to produce a valid argument. I can’t even engage with your claims because they are unrelated to your conclusion.

                • TropicalDingdong@lemmy.world · 5 months ago

                    The saddest thing about your responses, in spite of their multiple edits, is that you think you are actually serious in whatever it is you think you are doing.

                    It’s disappointing because you can’t actually do this thing you wish you were capable of. You can only imitate it, and in doing so you mock both yourself and the thing you appear to revere so much.

                    You could just actually engage with the points being made, but I think we both know you aren’t capable. So you resort to self-fellatio. And it’s sad, because it’s not just you but an entire generation of pseudo-intellectuals who almost know how to have a complex discussion on difficult topics. But when your favorite comic-book hero gets called out for pushing an unfalsifiable theory that basically held the field captive for fifty years, you get all tied up in knots. It’s because you aren’t actually engaging with the material intellectually, but emotionally.