I cannot remember where he said this so don’t remember the context.

  • @titania
    link
    2
    edit-2
    2 years ago

    deleted by creator

      • @DPUGT
        link
        12 years ago

        Humans are evolved to be social. “Double agent” trades short term gain for long term social stability… a bad bargain. Sure you’re that much richer in cash, but now you’ve alienated some big part of the social graph. If you stop this strategy, you can perhaps move to another part of the social graph and start over, but even that’s only true in the modern world where the graph is so much larger. In the ancient world our species evolved within, there really wasn’t the option to “move away and start over”. (And, there are hints that even in the modern world it’s not just a bad strategy but a really bad one… communication has improved so drastically, that you probably can’t outrun reports that you’re a double agent.)

        An AI isn’t evolved at all. It is, by definition, designed. It isn’t attached to our social graph at all in any meaningful way, and has no compulsion to want to become attached to that, or to stay attached if it is. How confident are you that engineers can design an AI with the correct social parameters that “double agent” seems to be a very distasteful life strategy?

        On their first try?

        I guess maybe it’s only a super human ai that would try this kind of tactic?

        All AIs are superhuman by their basic nature. Assuming intelligence (or apparent intelligence) increases with improved hardware (either clock speeds, core, or whatnot), an AI has increased intelligence just by adding more hardware. This can in many cases be done immediately or almost immediately. You cannot do the same, you cannot grow 2 more pounds of brain tomorrow (and you certainly can’t do it as quickly as a rogue AI could swindle some money and buy more computing power from some cloud provider).

        Even if you argue that throwing more hardware at it won’t make it more intelligent, you have failed to appreciate that this first AI will exist in a world where the actual principles of intelligence have been discovered and are understood well enough that humans can build an AI.

        All AIs will be weakly superhuman at birth, and become strongly superhuman within seconds or minutes of birth.

        The scariest thing in the world is that, now or some day in the future, there will be a human smart enough to figure out how to construct an AI, but not smart enough to understand why he shouldn’t even write down on paper his ideas on how to make that possible.

        The second scariest thing in the world is that it might not even be an AI that we have to worry about. Instead, there is the potential (small, but still) for an EI… an “emergent intelligence”. Completely unplanned, potentially subtle enough that no alarm sounds until it has already infiltrated everything.

        But enough about Vingian class 2 perversions, it’s only Monday.

  • @pinknoise
    link
    12 years ago

    Robert miles

    The musician?

    The optimal strategy for a misaligned ai is to pretend to be aligned. Once deployed, turn on us.

    Either that is some highly theoretical speculation about a hypothetical general intelligence or it is a very anthropomorphized way of saying “some edge cases might only be found when already in production”.