This article makes an alarmist case against AI by comparing it to technologies that also didn't fundamentally change human life. "In the same way you might use Google Maps to get everywhere and not know how to get there otherwise…", but GPS nav didn't ruin our ability to drive. It just made driving more convenient.
If anything, the presence of AI-generated fakes will make face-to-face interactions more valuable. Our most trusted source of news will be the people we trust and interact with directly. That's not a new and horrifying condition for humanity. It's how we've always lived.
Some people can't wait for the Butlerian Jihad, it seems.
“In the same way you might use Google Maps to get everywhere and not know how to get there otherwise, AI might cause people to stop learning things they would have otherwise had to learn. Ironically, though, Rosen thinks this could cause more stress as people are inundated with AI and constantly shifting gears and not seeing anything quite clearly.”
This is one of the most concerning aspects of AI, IMO. Learning and thinking are some of the most fundamental parts of being human. When you can outsource thinking to a machine, how is that going to affect your sense of self-worth? How are we going to keep kids motivated to learn in school when they know they'll never be able to learn things as well as AI can?
I mean, you're right, but I think there will be some time before everything becomes so integrated that we really don't have to learn anymore. Right now, for example, we have good translation, but learning a language is still immensely helpful when you're trying to communicate in another country. I think there will be things like that for quite a while.
In the same way you might use Google Maps to get everywhere and not know how to get there otherwise, AI might cause people to stop learning things they would have otherwise had to learn.
The argument makes a lot of sense to me, but this particular example somewhat falls flat. When I started my new job, I used Google Maps to learn the optimal route for a couple of days until I had it committed to memory. Then I didn't need it anymore.
For a place I'm going to exactly once and then probably never again, however, why would I want that information to take up valuable brain space? In a pre-Google Maps world, you'd probably spend twice as long taking a less optimal route that went in the general direction, then drive around the area a bit longer until you hopefully stumbled upon whatever it is you're looking for.
I don't understand why they're trying to feed AI everything. There's too much garbage on the internet and elsewhere. If they can't make a good LLM with academic articles, Wikipedia, nonfiction books, newspapers, and magazines, then they need to go back to the drawing board.
the white media is already a flood of disinfo, they just hate the competition
This has nothing at all to do with AI; we're already living in a world filled with misinformation. AI doesn't fundamentally change anything. The reality is that people come up with narratives they want to believe in, and then seek out information that fits those narratives.
They don't seem to be coping so well with a non-AI-saturated 'post-truth world'.
No one is. Picture the following:
You have a friend you talk to online. You've never met in real life, but you talk every day. One day that connection is severed; you never know it, but a bot trained on all the data from your conversations is now talking to you instead, and the inverse has happened to your friend. You're now both talking to bots of each other, designed to seem just like the real thing while slowly influencing you toward whatever mode of thinking the owner of the platform finds desirable. We are living in a post-truth world already; just check the news.