• lily33@lemmy.world · edited 1 year ago

    It’s not its biological origins that make the brain hard to understand, but its complexity. For example, we understand how the heart works pretty well.

    While LLMs are nowhere near as complex as a brain, they’re complex enough to be extremely difficult to understand.

    But then there comes the question: if they’re so difficult to understand, how did people make them in the first place?

    The way they did it actually bears some similarities to evolution. They created an “empty” model - a large neural network that wasn’t doing anything useful or meaningful. But it depended on billions of parameters, and if you tweak a parameter, its behavior changes slightly.
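    To make the "tweak a parameter, behavior changes slightly" idea concrete, here's a toy sketch. This is not an actual LLM, just a single weighted sum standing in for a network; real models have billions of parameters, but the principle is the same.

```python
import random

# A toy "empty" model: one layer of randomly initialized parameters.
# It doesn't do anything useful yet -- it's just random weights.
random.seed(0)
params = [random.uniform(-1, 1) for _ in range(4)]

def model(x, params):
    # Weighted sum of the inputs -- a stand-in for a full neural network.
    return sum(w * xi for w, xi in zip(params, x))

x = [1.0, 2.0, 3.0, 4.0]
before = model(x, params)
params[0] += 0.01          # tweak one parameter slightly
after = model(x, params)
# The output shifts slightly (by 0.01 * x[0] here), and nothing about
# the parameter itself tells you what that shift "means".
```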

    Then they expended an enormous amount of computing power tweaking parameters, each tweak slightly improving the model’s ability to model language. While doing this, they didn’t know what each number meant. They didn’t know how or why each tweak improved the model, just that it did.
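    The evolution analogy can be sketched in a few lines: try a random nudge to a parameter, keep it only if the prediction error goes down. The data here (learning y = 2x with one parameter) is a made-up toy, not anything an LLM actually trains on.

```python
import random

random.seed(1)

# Toy task: fit y = 2*x with a single parameter w.
data = [(x, 2.0 * x) for x in range(1, 6)]

def loss(w):
    # Total squared prediction error over the data.
    return sum((w * x - y) ** 2 for x, y in data)

# Evolution-style search: random tweak, keep it only if it helps.
# No understanding of *why* a tweak helps is needed at any point.
w = 0.0
for _ in range(2000):
    candidate = w + random.uniform(-0.05, 0.05)
    if loss(candidate) < loss(w):
        w = candidate
# w ends up close to 2.0, found purely by blind trial and error.
```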

    Unlike evolution, though, the tweaks aren’t random. An algorithm called back-propagation tells you how to tweak the neural network so that it predicts some known data slightly better. But unfortunately it doesn’t tell you anything about why the tweak is good, or what each parameter change means. That’s why we don’t understand how LLMs work.
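    On the same toy task, here's what the non-random version looks like: calculus (the core idea behind back-propagation, shown here as plain gradient descent on one parameter) gives the exact direction and size of each tweak, yet still says nothing about what the resulting parameter value "means".

```python
# Same toy task: fit y = 2*x with a single parameter w.
data = [(x, 2.0 * x) for x in range(1, 6)]

# The derivative of the squared error tells us exactly how to nudge w.
w = 0.0
for _ in range(100):
    grad = sum(2 * (w * x - y) * x for x, y in data)  # d(loss)/dw
    w -= 0.001 * grad  # step against the gradient
# Converges to w ~= 2.0 far faster than random tweaking -- but the
# algorithm never "explains" the number it arrives at.
```

    Back-propagation in a real network is this same idea applied through many layers at once, via the chain rule.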

    One final clarification: it’s not a complete black box. We do have some understanding of how LLMs work, mostly at a high level. Kind of like we have some basic understanding of how a brain works. We understand LLMs much better than brains, of course.