Following the asteroid analogy, I view it like this: if there’s a 20% chance that an asteroid could hit us in 2050, does that supplant the threat of climate change today?
I’m not trying to say that AI systems won’t kill us all, just that they are being used to directly harm entire populations right now, and the appeal to a future danger is being used to minimize that discussion.
Another thing to consider: if an AI system does kill us all, it will still be a human or organization that gave it the ability to do so, whether through training practices or by plugging it into weapons systems. Placing the blame on the AI itself absolves any person or organization of responsibility, which is in line with how AI is used today (i.e. the promise of algorithmic ‘neutrality’). Put another way, do the bombs kill us all in a nuclear armageddon, or do the people who pressed the button? Does the gun kill me, or does the person pulling the trigger?
On each of your paragraphs:
I think we completely agree – there can be both a 20% threat of extinction and also the threat of climate change.
No, I don’t agree with this; that’s like saying the threat of the asteroid is being used to supplant the threat of climate change. The X-risk threat of AI does not invalidate the other threats of AI, and I disagree with anyone who thinks it does. I have not seen anyone use the X-risk threat of AI to invalidate the other threats, and I implore you not to let such an argument sway you for or against either of those threats, which are both real (!).
I do not blame the gun; I blame the manufacturer. I am calling for more oversight of AI companies and for people who research AI to take this threat more seriously. If an AI apocalypse happens, it will of course be the fault of the idiotic AI development companies that did not take this threat seriously because they were blinded by profits. What did I say that made you think I was blaming the AI itself?