• jsomae · 5 months ago

    Transformers are not built with our knowledge of language. That’s a gross approximation – it would honestly be more accurate to say they’re modelled after the human brain than built on our understanding of language. A big problem is that the connection between AI and language is poorly understood – we can’t even interpret what the individual word2vec axes represent.
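
    (As a rough illustration of that last point – a minimal sketch using gensim’s Word2Vec on a made-up toy corpus with arbitrary parameters: comparing whole vectors is meaningful, but no single axis has an agreed interpretation.)

    ```python
    # Minimal sketch (toy corpus and parameters are assumptions, not from the thread):
    # word2vec vectors support similarity queries, yet individual axes stay opaque.
    from gensim.models import Word2Vec

    corpus = [
        ["the", "cat", "sat", "on", "the", "mat"],
        ["the", "dog", "sat", "on", "the", "rug"],
        ["a", "king", "rules", "a", "kingdom"],
        ["a", "queen", "rules", "a", "kingdom"],
    ]

    model = Word2Vec(corpus, vector_size=16, window=2, min_count=1, epochs=200, seed=1)

    # Comparing whole learned vectors is meaningful...
    print(model.wv.similarity("king", "queen"))

    # ...but any single axis is just an opaque coordinate with no obvious meaning.
    print(model.wv["king"][:4])  # four raw numbers -- what does axis 0 "mean"?
    ```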

    • chayleaf · 5 months ago

      I’m not talking about how humans perceive or learn languages; I’m talking about language structure. Perhaps it’s wrong to call it “how languages work.”

      • jsomae · 5 months ago

        That’s what I meant, yes. They’re not built on the basis of any field of linguistics.

        • chayleaf · 5 months ago (edited)

          Different neural network types excel at different tasks. Image recognition was cracked long before LLMs appeared – not only because of limited processing power, but also because the earlier architectures simply didn’t work well with language. New architectures don’t appear out of thin air; they’re created with a rough idea of what the network needs in order to do a certain task (e.g. NLP) better. Even tokenization isn’t blind codepoint splitting – it’s based on an analysis of languages (see the sketch below). But yes, natural languages aren’t “parsed” for neural networks, and they don’t even have a formal grammar.
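
          (To make the tokenization point concrete, here’s a from-scratch sketch of BPE-style merge learning on a tiny made-up corpus – the corpus and merge count are illustrative assumptions. The subword vocabulary falls out of corpus statistics rather than blind codepoint splitting.)

          ```python
          # Minimal sketch of BPE-style tokenization learning (toy corpus, assumed
          # merge count): the subword vocabulary comes from corpus statistics,
          # not from blindly cutting text into codepoints.
          from collections import Counter

          corpus = ["lower", "lowest", "newer", "newest", "wider", "widest"]

          # Start from individual characters plus an end-of-word marker.
          words = {tuple(w) + ("</w>",): 1 for w in corpus}

          def most_frequent_pair(words):
              pairs = Counter()
              for symbols, freq in words.items():
                  for a, b in zip(symbols, symbols[1:]):
                      pairs[(a, b)] += freq
              return pairs.most_common(1)[0][0] if pairs else None

          def merge_pair(words, pair):
              merged = {}
              for symbols, freq in words.items():
                  out, i = [], 0
                  while i < len(symbols):
                      if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                          out.append(symbols[i] + symbols[i + 1])
                          i += 2
                      else:
                          out.append(symbols[i])
                          i += 1
                  merged[tuple(out)] = freq
              return merged

          merges = []
          for _ in range(8):  # number of merges is an arbitrary illustrative choice
              pair = most_frequent_pair(words)
              if pair is None:
                  break
              merges.append(pair)
              words = merge_pair(words, pair)

          print(merges)       # frequent pairs like ('e', 'r') or ('e', 's') get merged first
          print(list(words))  # words end up split into statistically common subword pieces
          ```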