• 11 Posts
  • 650 Comments
Joined 1 year ago
Cake day: March 2nd, 2024






  • On each of your paragraphs:

    1. I think we completely agree – there can be both a 20% threat of extinction and the threat of climate change at the same time.

    2. No, I don’t agree with this; that’s like saying the threat of the asteroid is used to supplant the threat of climate change. The X-risk threat of AI does not invalidate the other threats of AI, and I disagree with anyone who thinks it does. I have not seen anyone use the X-risk threat of AI to invalidate its other threats, and I implore you not to let such an argument sway you for or against either of those threats, which are both real (!).

    3. I do not blame the gun; I blame the manufacturer. I am calling for more oversight of AI companies and for people who research AI to take this threat more seriously. If an AI apocalypse happens, it will of course be the fault of the idiotic AI development companies that did not take this threat seriously because they were blinded by profits. What did I say that made you think I was blaming the AI itself?











  • LLMs in their current form are dangerous to society, but not existential threats. To see the danger of human extinction, multiply some probabilities together:

    At some point, AIs will likely become super-intelligent, meaning smarter than the entire human race put together. Let’s say there’s a 50% chance of this happening in the next 30 years. Something this smart would indeed be capable of doing something very clever and killing all humans (if it wanted to); there are various theories about how it might go about doing that, and if this is the part that sounds outlandish to you, I can elaborate. Needless to say, if something is extraordinarily smarter than you, it can figure out a way to kill you that you didn’t think of, even if it’s sandboxed in a high-security prison. (Mind you, it will probably be exposed to the general public like ChatGPT; it’s not very likely to be sandboxed, if you ask me.)

    Okay, but surely nobody would build an AI that wants to kill us all, right? This is the “alignment problem” – we currently don’t know how to make sure an AI has the same “goals” its creators want it to have. There’s that meme – an AI tasked with optimizing a silverware production process might end up turning the whole universe, and everyone in it, into spoons. Because almost nobody is actually taking this problem seriously, I think the first superintelligent AI has an 80% chance of being unaligned.

    Would an unaligned AI want to kill us? It might be unaligned but conveniently still value human life. Let’s be generous and say there’s a 50% chance that the unaligned AI works out in our favour. (It’s probably less likely.)

    So that’s a 20% chance in the next 30 years that someone will create an AI which is clever enough to kill us all and (by unhappy accident) wants to kill us all; the arithmetic is spelled out at the end of this comment. You can put your own numbers on each of these steps and get your own probability. If you put very low numbers (like 0.1%) on any of them, ask yourself whether, in 2010, you thought it was likely that AI would be where it is today.

    Edit: yes, I know it sounds absurd and like fantasy, but a lot of real things sound absurd at first. One of the most pervasive arguments against global warming is that it sounds absurd. So if you’re going to disagree with this, please at least have a better reason than “it sounds absurd.”
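
    To make the multiplication explicit, here’s a quick Python sketch of the arithmetic. The three probabilities are just the illustrative guesses from above (nothing measured), and the variable names are my own; swap in whatever estimates you find plausible:

    ```python
    # Back-of-envelope extinction-risk estimate from the comment above.
    # All three probabilities are illustrative guesses, not measurements.
    p_superintelligent_within_30y = 0.5  # chance of superintelligent AI in the next 30 years
    p_unaligned = 0.8                    # chance the first superintelligent AI is unaligned
    p_fatal_given_unaligned = 0.5        # chance an unaligned AI does NOT work out in our favour

    p_extinction = (p_superintelligent_within_30y
                    * p_unaligned
                    * p_fatal_given_unaligned)
    print(f"{p_extinction:.0%}")         # -> 20%
    ```

    Change any one of the factors and the product moves proportionally, which is the whole point of laying it out as a chain of estimates rather than a single number.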





  • I was taught when I was young that you don’t call something that isn’t really genocide “genocide,” or someone who isn’t a Nazi a Nazi, etc., because it dilutes the meaning of the word. I do think that Israel is committing a genocide, and I do think that Musk is a neo-Nazi; I’m not diluting these terms by saying this.