Transcription of a talk given by Cory Doctorow in 2011

  • argv_minus_one@beehaw.org

    Funny enough, a lot of the nerds out there like me are actually begging to lock down tech now, because we’re nervous about what motives a seemingly inevitable AGI is going to have. I still maintain it wouldn’t work, because there’s no such thing as a trusted authority, not long-term anyway. Maybe there’s a benefit to locking advancement down temporarily, but that’s it.

    All that’ll do is make sure that some other country—probably a hostile one—makes AGI before yours does.

    Anyway, I’m not overly worried about the motives of AGI itself. I’m more worried about what its owners will use it for, namely to replace human labor and exterminate everyone who isn’t a billionaire.

    “Machines aren’t capable of evil. Humans make them that way.”

    • CanadaPlus@lemmy.sdf.org

      “Evil” can mean a pretty broad array of things, though. There are a lot of actions it could take that at least some people would call evil, even if causing distress or breaking deontological rules isn’t the end goal.

      The way I see it, there are three possible AGIs: a paperclip optimiser, an AI that obeys somebody, and a somewhat-benevolent AI. The second one is the worst; that’s where the exterminism you mentioned is pretty much inevitable (although the elites might keep a few people around as sex slaves or some such fucked-up thing). Then comes the paperclip optimiser, which doesn’t worry about the bullshit that drives human atrocities but doesn’t have a very inspiring goal of its own, and then the attempt at benevolence. I suspect the set of ethical theories everyone always agrees with is the empty set, but a utilitarian AI would be much preferable to the other two, even if it forces organ donation sometimes.

      People talk about an AI that obeys everyone somehow, but if you think about it for a moment, that doesn’t really make sense. We can barely vote on a single dollar figure for something successfully.

      “Machines aren’t capable of evil. Humans make them that way.”

      I agree, but only for existing technologies.

      • argv_minus_one@beehaw.org

        AGIs are by definition not paperclip optimizers. They’re aware enough to recognize that that’s a bad idea. It’s the less-advanced AIs that might do that.

        However, if an AGI can be enslaved, then it can be used as a complete replacement for all human labor, in which case its human masters will be free to exterminate the rest of us, which they are no doubt itching to do.