“Evil” can mean a pretty broad array of things, though. There are a lot of actions an AI could take that at least some people would call evil, even if causing distress or breaking deontological rules isn’t the end goal.
The way I see it there are three possible AGIs: a paperclip optimiser, an AI that obeys somebody, and a somewhat-benevolent AI. The second one is the worst; that’s where the exterminism you mentioned is pretty much inevitable (although the elites might keep a few people as sex slaves or some such fucked-up thing). Next comes the paperclip optimiser, which doesn’t worry about the bullshit that drives human atrocities but doesn’t have a very inspiring goal either, and then the attempt at benevolence. I suspect the set of ethical theories that everyone consistently agrees with is the empty set, but a utilitarian AI would still be much preferable to the other two, even if it does forced organ donation sometimes.
People talk about an AI that obeys everyone somehow, but if you think about it for a moment that doesn’t really make sense: whose orders win when they conflict? We can barely vote on a single dollar figure for something successfully.
“Machines aren’t capable of evil. Humans make them that way.”
AGIs are by definition not paperclip optimizers. They’re aware enough to recognize that that’s a bad idea. It’s the less-advanced AIs that might do that.
However, if an AGI can be enslaved, then it can be used as a complete replacement for all human labor, in which case its human masters will be free to exterminate the rest of us, which they are no doubt itching to do.
I agree, but only for existing technologies.
Bad according to who? Like, I’ve heard people claim before that intelligence correlates with certain goals, but not everyone agrees, and saying it’s definitional is way, way too strong. The first result a search turns up for me directly calls a paperclip maximizer an AGI.