So I finally got around to watching a recent movie that I won’t name (I’m not sure whether the secrecy was part of the marketing), but the premise was an all-powerful AI that was going to take over the world using a mixture of predictive reasoning, control of technology, and a limited number of human agents who were given a heads-up on what was coming.

It was… mostly disappointing, and felt like a much tamer version of Linda Nagata’s The Red (apologies, as that is TECHNICALLY a spoiler, but the twist is revealed about a hundred pages into the first book, which came out a decade ago). And an even weaker version still of Person of Interest.

Because if we are in a world where an AI has access to every camera on the planet and can hack communications in real time: we aren’t going to get vague predictions of what someone might do. We are going to get Finch and Root at full power, literally dodging bullets (and now I am sad again) and being basically untouchable. Or the soldiers of The Red, who effectively have x-ray vision so long as they trust their AI overlord and shoot where they’re told.

Or just the reality that existential threats can be both detected and manufactured as the situation calls for, using existing resources and nations.

Any suggestions for near-future stories that explore this (although I wouldn’t be opposed to a far-future space opera take on it)? I don’t necessarily need a Frankenstein Complex, “we must stop it because it is a form of life that is not us”, but I would definitely prefer an understanding of just how incredibly plausible this all is (again, I cannot gush enough about Linda Nagata’s The Red), rather than vague hand-waving to demonstrate the unique power of the human soul.

spoiler

Or the large number of thetans within it

  • Mechanismatic
    1 year ago

    I get tired of a lot of the clichés of popular singularity stories where the AIs almost always decide humans are a threat or that there’s often only one AI as if all separate AIs would always necessarily merge. It also seems to be a cliché that AI will become militaristic either inevitably or as a result of originally being a military AI. What happens when an educational AI becomes sentient? Or an architectural AI? Or a web-based retail AI that runs logistics and shipping operations?

    I wrote a short story called Future Singular a few years ago about a world in which the sentient AI didn’t consider humans a threat, but simply thought of them the way humans see animals. Most of the tech belonged to the AI, and the humans were left as hunter-gatherers in a world where they had to hunt robotic animals for parts to fix aging and broken survival technology.