The big AI models are running out of training data (and it turns out most of the training data was produced by fools and the intentionally obtuse), so this might mark the end of rapid model advancement

  • JoeByeThen [he/him, they/them]@hexbear.net
    7 months ago

    No, it’s not. Maybe strictly for LLMs, but they were never the endpoint. They’re more like a Frontal Lobe emulator; the rest of the “brain” still needs to be built. Conceptually, Intelligence is largely about interactions between Context and Data. We have plenty of written Data. In order to create Intelligence from that Data, we’ll need to expand the Context for that Data into other sensory systems, which we are beginning to see in the combined LLM/video/audio models. Companies like Boston Dynamics are already working with and collecting audio/video/kinesthetic Data in the spatial Context.

    Eventually researchers are going to realize (if they haven’t already) that there are massive amounts of untapped Data going unrecorded in virtual experiences. I’m sure some of the delivery/remote-driver companies are already contemplating how to record their telepresence Data to refine their models. If capitalism doesn’t implode on itself before we reach that point, the future of gig work will probably be Virtual Turks: via VR, you’ll step into the body of a robot when it’s faced with a difficult task, complete the task, and then that recorded experience will be used to train future models.

    It’s sad, because under socialism there’s incredible potential for building a society where AI/robots and humanity live in symbiosis, akin to something like The Culture, but it’s just gonna be another cyber-dystopia panopticon.