The big AI models are running out of training data (and it turns out most of the training data was produced by fools and the intentionally obtuse), so this might mark the end of rapid model advancement

  • QuillcrestFalconer [he/him]@hexbear.net
    7 months ago

    Eventually researchers are going to realize (if they haven’t already) that there are massive amounts of untapped data going unrecorded in virtual experiences.

    They already have. A lot of robots are already trained in simulated environments, and Nvidia is building frameworks (Isaac Sim, for example) to accelerate this. It’s also how AlphaGo was trained: through self-play. Those reinforcement-learning techniques will probably be extended to LLMs as well.
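    To make the self-play idea concrete, here’s a toy sketch (not AlphaGo’s actual method, which combined deep networks with Monte Carlo tree search): tabular Monte Carlo self-play on the game of Nim. Both players draw moves from the same value table, so every game generates training data for “free,” with no human-produced data involved. All names and hyperparameters here are illustrative choices, not from any real framework.

    ```python
    import random

    random.seed(0)

    PILE = 10            # starting pile size
    ACTIONS = (1, 2, 3)  # stones a player may remove per turn; taking the last stone wins
    EPS = 0.2            # exploration rate
    ALPHA = 0.1          # learning rate
    Q = {}               # shared table: (pile, action) -> value. Sharing it IS the self-play:
                         # both sides play from, and improve, the same policy.

    def legal(pile):
        return [a for a in ACTIONS if a <= pile]

    def pick(pile):
        """Epsilon-greedy move selection from the shared value table."""
        acts = legal(pile)
        if random.random() < EPS:
            return random.choice(acts)
        return max(acts, key=lambda a: Q.get((pile, a), 0.0))

    def self_play_episode():
        """Play one full game against a copy of the same policy, then
        update every visited (state, action) toward the final outcome."""
        pile, player = PILE, 0
        history = []                      # (player, pile, action) for each move
        winner = None
        while pile > 0:
            a = pick(pile)
            history.append((player, pile, a))
            pile -= a
            if pile == 0:
                winner = player           # mover who took the last stone wins
            player = 1 - player
        for p, s, a in history:
            target = 1.0 if p == winner else -1.0
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + ALPHA * (target - old)

    for _ in range(20000):
        self_play_episode()

    def greedy(pile):
        return max(legal(pile), key=lambda a: Q.get((pile, a), 0.0))
    ```

    After training, the greedy policy recovers the known optimal strategy (leave the opponent a multiple of 4): from pile 5 take 1, from pile 6 take 2, from pile 7 take 3. The same loop structure, swapping the table for a neural network and the outcome update for a policy-gradient step, is roughly how self-play scales up.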

    Also, like you said, there’s still a lot of untapped data in audio and video, and that’s starting to be incorporated into the models as well.