- cross-posted to:
- becomeme@sh.itjust.works
The big AI models are running out of training data (and it turns out most of that data was produced by fools and the intentionally obtuse), so this might mark the end of rapid model advancement.
No, it’s not. Maybe strictly for LLMs, but they were never the endpoint. They’re more like a frontal-lobe emulator; the rest of the “brain” still needs to be built.

Conceptually, intelligence is largely about interactions between context and data. We have plenty of written data. To create intelligence from that data, we’ll need to expand its context into other sensory systems, which we’re beginning to see in the combined LLM/video/audio models. Companies like Boston Dynamics are already working with and collecting audio/video/kinesthetic data in a spatial context.

Eventually researchers are going to realize (if they haven’t already) that there are massive amounts of untapped data going unrecorded in virtual experiences. I’m sure some of the delivery/remote-driving companies are already contemplating how to record their telepresence data to refine their models. If capitalism doesn’t implode on itself before we reach that point, the future of gig work will probably be Virtual Turks: via VR, you’ll step into the body of a robot when it’s faced with a difficult task, complete the task, and that recorded experience will be used to train future models.

It’s sad, because under socialism there’s incredible potential for building a society where AI/robots and humanity live in symbiosis, akin to something like The Culture, but instead it’s just gonna be another cyber-dystopia panopticon.
me
They already have. A lot of robots are already being trained in simulated environments, and Nvidia is developing frameworks to help accelerate this. It’s also how systems like AlphaGo were trained, with self-play, and those reinforcement learning algorithms will probably be extended to LLMs.
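To make the self-play idea concrete, here’s a minimal sketch: one agent plays both sides of a game and updates its value estimates from the final outcome, which is the core loop behind AlphaGo-style training. This is just illustrative tabular Q-style learning on tic-tac-toe, not the actual AlphaGo pipeline (which uses deep networks and Monte Carlo tree search); all names here are my own.

```python
# Self-play sketch: an agent plays both X and O in tic-tac-toe and
# nudges its value table toward the terminal result of each game.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

def self_play(episodes=2000, alpha=0.5, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)  # (state, move) -> value from the mover's perspective
    for _ in range(episodes):
        board = [" "] * 9
        player = "X"
        history = []  # (state, move, player) for both sides of the game
        while True:
            state = "".join(board)
            legal = moves(board)
            # epsilon-greedy: mostly pick the best-known move, sometimes explore
            if rng.random() < epsilon:
                move = rng.choice(legal)
            else:
                move = max(legal, key=lambda m: q[(state, m)])
            history.append((state, move, player))
            board[move] = player
            win = winner(board)
            if win or not moves(board):
                # terminal update: +1 for the winner's moves, -1 for the
                # loser's, 0 on a draw, applied to every move played
                for s, m, p in history:
                    r = 0.0 if win is None else (1.0 if p == win else -1.0)
                    q[(s, m)] += alpha * (r - q[(s, m)])
                break
            player = "O" if player == "X" else "X"
    return q

q = self_play()
```

The same shape scales up: replace the table with a neural network and tic-tac-toe with a simulator, and you get the broad outline of how these systems bootstrap without human-labeled data.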
Also, like you said, there’s a lot of still-untapped data in audio/video, and that’s starting to be incorporated into the models.
Yeah, I’m familiar with a bunch of autonomous vehicles/drones being trained in simulated environments, but I’m also thinking stuff like VRChat.
My one quibble: that’s not the future of gig work, it’s the present
It’s been a few years since I’ve used MTurk, but there were very few VR-based jobs when I last used it. Has that changed?
Ah sorry, I was just being a smartass; no idea how much VR is on MTurk now. To be clear, I think you’ve got an accurately bleak view of the future of this stuff.
Ah, no worries. Yeah, pretty grim, and I’ve not even gotten into the horror of what they’re gonna do with our biometric data. lol.