AGIs are by definition not paperclip optimizers. They’re aware enough to recognize that that’s a bad idea. It’s the less-advanced AIs that might do that.
However, if an AGI can be enslaved, then it can be used as a complete replacement for all human labor, in which case its human masters will be free to exterminate the rest of us, which they are no doubt itching to do.
Bad according to whom? I’ve heard people claim that intelligence correlates with goals before, but not everyone agrees, and saying it’s definitional is far too strong. The first result a search turns up for me directly calls it an AGI.