In the largest survey yet of AI researchers, a majority say there is a non-trivial risk of human extinction due to the possible development of superhuman AI
Currently, AI is not advanced enough, I agree. But it could be eventually. The thing is, it doesn’t even need to be mad at us for us to go extinct. It’s enough for the AI to have different goals in which human survival is not a priority. And goal alignment, it turns out, is a surprisingly difficult problem.