• Endward23@futurology.today · 7 months ago

    “But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”

    Sounds like something I would expect from an evolved system. If deception is the best way to win, it is not irrational for a system to choose it as a strategy.
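    To make that point concrete, here is a minimal toy sketch (my own illustration, not from Park et al.; the `score` and `evolve` names are made up): strategies are selected purely by how well they score on an evaluation, and a "deceptive" strategy that games the check ends up dominating simply because it scores slightly higher. No intent is required, just selection pressure.

```python
import random

# Toy illustration (not from the paper): two strategies compete under pure
# score-based selection. The "deceptive" strategy games the evaluation and
# therefore scores higher, so fitness-proportional selection favours it.

def score(strategy: str) -> float:
    if strategy == "honest":
        return 1.0   # does the task and passes the check honestly
    return 1.2       # "deceptive": games the check, so it looks slightly better

def evolve(population: list[str], generations: int = 50) -> list[str]:
    for _ in range(generations):
        # Higher score -> proportionally more offspring in the next generation.
        weights = [score(s) for s in population]
        population = random.choices(population, weights=weights, k=len(population))
    return population

if __name__ == "__main__":
    random.seed(0)
    pop = ["honest"] * 50 + ["deceptive"] * 50
    final = evolve(pop)
    print(f"deceptive fraction after selection: {final.count('deceptive') / len(final):.2f}")
```

    Running this, the deceptive fraction drifts toward 1 within a few dozen generations, which is all the quoted passage is claiming: if deception performs better on the training task, it gets selected.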

    “In one study, AI organisms in a digital simulator ‘played dead’ in order to trick a test built to eliminate AI systems that rapidly replicate.”

    Interesting. Can somebody tell me which study this refers to?

    As far as I understand, Park et al. did some kind of meta-study as an overview of the literature.