- cross-posted to:
- technology
cross-posted from: https://lemmy.ml/post/20858435
Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.
@JayDee@lemmy.ml @rysiek@mstdn.social I would characterize the AI safety folks (at least 10 or so years ago) as being concerned not so much with intelligence as with optimization power: the ability to guide a system toward a particular set of states (typically ones favorable to the agent). I believe the decision-making part of that is what they are talking about when they say intelligence. That lends itself pretty well to rigorous definition and is clearly related to intelligence in humans and animals (though probably not to everything we mean by intelligence there). A toy sketch of what I mean is below.
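To make "optimization power" concrete, it is sometimes measured in bits: how rare an outcome, ranked by the agent's own preferences, the system manages to hit. This is only a rough sketch of that idea; the uniform state space and the names here are my own, purely illustrative:

```python
import math

def optimization_power_bits(outcomes, achieved, utility):
    # Bits of optimization: -log2 of the fraction of possible outcomes that
    # score at least as well (by the agent's preferences) as the one achieved.
    at_least_as_good = sum(1 for o in outcomes if utility(o) >= utility(achieved))
    return -math.log2(at_least_as_good / len(outcomes))

# Hypothetical example: hitting the single best of 1024 equally likely states
# corresponds to about 10 bits of optimization toward the goal.
outcomes = list(range(1024))
print(optimization_power_bits(outcomes, achieved=1023, utility=lambda s: s))  # 10.0
```

The point is just that "ability to steer toward favorable states" can be pinned down numerically, unlike most informal uses of "intelligence".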
That said, I doubt the vast majority of AI hype-mongers are thinking about that, and I doubt the authors of the linked paper are thinking about it either.
A pretty common discussion point in AI safety is reward functions and how to avoid unintended incentives within them, a topic often just referred to as AI alignment. Those reward functions are actually fantastic for eliminating ambiguity when talking about AI decision-making.
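As a toy illustration of my own (hypothetical actions and numbers, not from the paper): once the reward function is written down, the agent's choice follows mechanically from it, so any misalignment is a concrete property of the function you wrote rather than something vague:

```python
# Intended goal: clean the room. Written reward: dust cleared -- which an agent
# can also maximize by making a mess first and then cleaning it up.
actions = {
    "vacuum_room":     {"dust_cleared": 5, "mess_created": 0},
    "dump_and_vacuum": {"dust_cleared": 9, "mess_created": 4},
}

def reward(outcome):
    # Mis-specified: only counts dust cleared, ignores mess created.
    return outcome["dust_cleared"]

best_action = max(actions, key=lambda a: reward(actions[a]))
print(best_action)  # "dump_and_vacuum" -- poorly aligned, but zero ambiguity about why
```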
Absolutely. From what I've seen, AI hype-mongers don't actually engage with the details of AI safety at all. They mainly either dismiss the idea that things could go south or vaguely gesture at the possibility of creating an AI God. Very rarely do they talk about ordinary AI wreaking havoc due to poor alignment, which is the most dangerous risk we currently face, and one we are already experiencing regularly.