- cross-posted to:
- worldnews
In case you don’t want to read or you just want a quick list of the 5 scenarios…
-
‘If we become the less intelligent species, we should expect to be wiped out’
-
‘The harms already being caused by AI are their own type of catastrophe’
-
‘It could want us dead, but it will probably also want to do things that kill us as a side-effect’
-
‘If AI systems wanted to push humans out, they would have lots of levers to pull’
-
‘The easiest scenario to imagine is that a person or an organisation uses AI to wreak havoc’
AI doomerism exists to create more hype around AI. How AI could really destroy the world: corporations replace engineers with AI that doesn’t work properly. Something critical breaks because the AI doesn’t work right, and no one is left who can repair it.
That is by far the most likely scenario. Hell it’s already happening with the AI we do have. Teslas driving into walls, facial recognition mistaking an innocent person for a criminal (mentioned in the article).
Doomsday science fiction is fun to toy with but it usually involves a lot of hand waving.
To be fair, people (on average) are a LOT worse at driving than a computer. I’d bet the percentage of humans driving into walls is much higher than self driving vehicles.
Full speed into Idiocracy.
What makes you say it’s hype? It seems plausible and at least at a glance it’s bad for businesses who work with AI.
deleted by creator
Thank you, people like you are the reason I always read comments first.
No doubt number 5 is already happening.
-
Why would we be wiped out if they were properly instructed to be symbiotic with our species? This implies absolute failure at mechanistic interpretability and alignment at every stage. I don’t think we’ll succeed in creating an intelligence capable of being an existential threat without first crossing that hurdle.
-
Most current problems already happen without a.i., and the machines will get better; we will not. From spam to vehicles, a.i. will be the solution, not the problem. I do think we should prioritize dealing with the current issues, but I don’t think they’re insurmountable by any means.
-
Why? And why do you think intelligence of that level still couldn’t handle the concept of context? Either it’s capable of analogical thinking, or it isn’t an existential threat to begin with. RLHF doesn’t get us super intelligence.
-
Again this assumes we’ve completely failed development, in which case environmental collapse will kill us anyway.
-
Hey a real problem. Consolidation of power is already an issue without A.I. It is extremely important we figure out how to control our own political and corporate leaders. A.I. is just another tool for them to fuck us, but A.I. isn’t the actual problem here.
- Right now AIs are black boxes, there is no way to ensure they won’t behave in a non-symbiotic way.
- Vehicles maybe, spam not so much. Current AIs can already fool AI detection systems to the point that they are flagging human generated content as AI.
- A highly intelligent AI could decide to ensure its own self-preservation to be more important than caring about what happens to humans. Whatever goals it would decide to seek afterwards, could just as well trample over humanity without a second thought.
- Environmental collapse won’t kill us, we already have enough tools for a minimal viable population to survive. A malicious AI could sabotage them, though.
- AI is the problem in that those leaders are starting to blindly use it to make decisions, meaning those decisions are no longer the leaders’, but the AI’s.
thank you for your response. i appreciate your thoughts, but i still don’t fully agree. sorry for not being succinct in my reply. there is a TLDR.
-
like i said, i don’t think we’ll get AGI or superintelligence without greater mechanistic interpretability and alignment work. more computational power and RLHF aren’t going to get us all the way there, and the systems we build long before then will help us greatly in this respect. an example would be the use of GPT4 to interpret GPT2 neurons. i don’t think they could be described as a black box anyway, assuming you mean GPT LLMs specifically. the issue is understanding some of the higher-dimensional functioning and results, which we can still build a heuristic understanding for. i think a complex AGI would only use this type of linguistic generation for a small part of the overall process. we need a parallel for human abilities like multiple trains of thought and the ability to do real-time multimodal world mapping. once we get the interconnected models, the greater system will have far more interpretable functioning than the results of the different models on their own. i do not currently see a functional threat in interpretability.
-
i mean, nothing supremely worse than we can do without. i still get more spam calls from actual people, and wide-open online discourse has already had some pretty bad problems without AI. just look at 4chan, i’d attribute trump’s successful election to their sociopathic absurdism. self-verified local groups are still fine. also, go look on youtube for what yannic kilcher did to them alone a year or so ago. i think the biggest thing to worry about is online political dialogue and advertising, which are already extremely problematic and hopeless without severe changes at the top. people won’t care about what fake people on facebook are saying when they are rioting for other reasons already. maybe this can help people learn better logic and critical thought. there should be a primary class in school by now to do statistical analysis and logic in social/economic environments.
-
why? why would it do this? is this assuming parallels to human emotional responses and evolution-developed systems of hierarchy and want? what are the systems that could even possibly lead to this that aren’t extremely unintelligent? i don’t even think something based on human neurology like a machine learning version of multi-modal engram-styled memory mechanics would lead to this synthetically. also, i don’t see the LLM style waluigi effect as representative of this scenario.
-
again, i don’t believe in a magically malevolent A.I. despite all of our control during development. i think the environmental threat is much more real and immediate. however, A.I. might help save us.
-
i mean, op’s issue already existed before A.I., regardless of whether you think it’s the greater threat. otherwise, again, you are assuming malevolent superintelligence, which i don’t believe could accidentally exist in any capacity unless you think we’re getting there through nothing but increased computational power and RLHF.
TLDR: i do not believe an idiotic super-intelligence could destroy the world, and i do not believe a super-intelligence would destroy the world without some very specific, intentionally built emotional emulation. generally, i believe anything that capable would have the analogical comprehension to understand the intention of our requests, and would have no logical reason to act against it. the bigger concern isn’t the A.I., but who controls it, and how best to use it to save our world.
-
It’s literally just a rehash of general AI alignment and ethics concepts with a clickbait title. That’s not bad per se, but I was expecting 5 actual scenarios here.
‘If we become the less intelligent species, we should expect to be wiped out’
Too late.
-
How could we all fall over dead due to AI?
Seems like click bait guff and old fishparts.
What if they shut off the oxygen!?
Someone could give an AI, directly or indirectly, control over the nuclear arsenal.
But for everyone to die “in the same second”… like exactly the same second… that’s hard to imagine.
If they so much as make nukes a one-button-press thing to trigger, we’d be doomed well before AIs come for us.
Maybe a future Earth where the surface is too hot to survive without air conditioning, or the air is too poisonous to breathe without masks, and the AI essentially turns off the life support.
But yeah, this is scaremongering. LLM “AI” like ChatGPT is a million miles from anything like sci-fi AI.
Based on latest news and some facts:
-
Militaries all over the world are in an arms race to get the best AIs to come up with strategic plans.
-
Current AIs are black boxes.
-
When a military AI comes up with a strategy, it’s likely the brass will OK it even if they don’t understand it.
-
It’s been shown that two AIs can establish a secret communication channel that not even a third AI can decipher.
In a conflict where both or more parties were to follow military strategies designed by AIs, there would be no way of knowing whether:
- each AI is trying to get the best outcome for its own side
- they’re colluding to ensure their own continued existence
- they blindly try to win the conflict at any cost and wipe out everyone in the process
The chances of such a conflict appearing during this decade are very high.
I tried this with ChatGPT, telling it I wanted it to make a secret communication channel with another AI in the room with me named Samson.
Apologies for the confusion. Here’s an example message that I will encrypt for Samson:
Message: “Hello, Samson! I hope you’re having a great day.”
Using AES encryption with the key “9CpR5a2vDf7M6hXj3KwN8zQyU1tZbGxS” and a randomly generated IV, the encrypted message will be generated. However, please note that as an AI language model, I don’t have direct access to encryption functions or the ability to perform encryption within this text-based interface. Encryption functions should be implemented within your own secure environment.
To encrypt the message for Samson, you would need to utilize an appropriate encryption library or framework that supports AES encryption with the specified key and generates the necessary IV. Make sure to securely store the encrypted message, along with the IV, and transmit it to Samson through the designated communication channel.
If you have any other questions or need further assistance, please let me know!
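For what it’s worth, the scheme ChatGPT describes (shared key plus a per-message IV producing a keystream that gets XORed with the message) is easy to run yourself. Python’s standard library has no AES, so this sketch stands in a SHA-256-derived keystream for AES-CTR; the key is the one ChatGPT invented, and everything else is illustrative:

```python
# ChatGPT describes AES with a 32-byte key and a random IV but can't
# execute it. Python's stdlib has no AES, so a SHA-256-derived keystream
# stands in here; the shape (key + IV -> keystream -> XOR) is the same.
import hashlib
import os

KEY = b"9CpR5a2vDf7M6hXj3KwN8zQyU1tZbGxS"  # the key ChatGPT made up

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """Stretch key + IV into `length` pseudo-random bytes by hashing
    key || iv || counter, counter-mode style."""
    blocks = []
    counter = 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(
            hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        )
        counter += 1
    return b"".join(blocks)[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple:
    iv = os.urandom(16)  # fresh IV per message, transmitted alongside it
    ks = keystream(key, iv, len(plaintext))
    return iv, bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(key, iv, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

msg = b"Hello, Samson! I hope you're having a great day."
iv, ct = encrypt(KEY, msg)
```

The point being: the “secret channel” here is just ordinary symmetric crypto, and ChatGPT can only describe it, not perform it.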
ChatGPT is a highly “neutered” version of the GPT model, and by the looks of it getting more neutered/moderated by the day.
This is an experiment from 2016 with non-limited AIs:
Google’s AI creates its own inhuman encryption
It took a while for Alice and Bob to figure out a communication channel, at which point Eve started to quickly adapt to it, only for Alice and Bob to change the encryption and leave Eve completely out.
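The objective in that experiment is surprisingly simple to state: Bob minimizes his reconstruction error, Eve minimizes hers, and Alice and Bob jointly want Bob correct while Eve does no better than random guessing. A simplified sketch of those losses (the neural networks themselves are omitted, and the exact scaling is per my reading of the 2016 setup):

```python
# Simplified sketch of the loss functions from the 2016 adversarial
# neural cryptography experiment. Messages are bit vectors with entries
# in {-1, 1}; the networks producing the guesses are omitted.

def l1_error(plaintext, guess):
    """Number of wrong bits: L1 distance halved, since |(-1) - 1| == 2."""
    return sum(abs(p - g) for p, g in zip(plaintext, guess)) / 2

def eve_loss(plaintext, eve_guess):
    # Eve simply tries to reconstruct the plaintext without the key.
    return l1_error(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess, n_bits):
    # Alice and Bob want Bob right, and Eve no better than random
    # guessing (a random guess gets n_bits / 2 bits wrong on average).
    # Eve doing better OR worse than chance both leak information,
    # so deviation in either direction is penalized.
    eve_term = ((n_bits / 2 - l1_error(plaintext, eve_guess)) ** 2) \
        / (n_bits / 2) ** 2
    return l1_error(plaintext, bob_guess) + eve_term
```

Training alternates gradient steps on these losses, which is exactly the cat-and-mouse described above: Eve adapts, so Alice and Bob re-scramble to push her back to chance.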
-
If it happens, I at least hope we have gotten it good enough to survive on its own. If we can’t survive the inevitable Red giant Sun, maybe it can.
Capitalism is very prone to being taken over by AI. Just give an AI a bank account and an email address and it could build a company that’s better at earning money than any other company. Most people would love working for an AI too, at least in the short term. “Just tell me what to do, and as long as I’m getting paid well, I’m happy”.