It seems that when you train an AI on a historical summary of human behavior, it’s going to pick up some human-like traits. I wonder if this means we should be training a “good guy” AI with only ethical, virtuous material?
I mean, it was given a command to do so. It was instructed to survive at all costs, at least that's how it sounds from the article. When I play the Gandalf game, you do things like that too.