- cross-posted to:
- technology
cross-posted from: https://lemmy.ml/post/20858435
Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.
A pretty common discussion point in AI safety is reward functions and how to avoid unintended incentives hiding within them, often just referred to as AI alignment. Reward functions are actually fantastic for eliminating ambiguity when talking about AI decision-making.
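To make the alignment point concrete, here is a minimal toy sketch (a hypothetical example, not from the thread or the paper) of a misspecified reward: a cleaning agent rewarded per trash pickup is incentivized to drop trash and pick it up again, while a net-cleanup reward removes that loophole.

```python
# Toy illustration of reward misspecification (hypothetical example).
# A cleaning agent earns reward for picking up trash; its behavior is
# summarized as a list of "pickup"/"drop" events.

def naive_reward(events):
    # Misspecified: +1 per pickup, drops cost nothing.
    return sum(1 for e in events if e == "pickup")

def aligned_reward(events):
    # Net trash actually removed: pickups minus drops.
    return sum(1 if e == "pickup" else -1 for e in events)

honest = ["pickup", "pickup", "pickup"]  # cleans 3 pieces and stops
hacker = ["pickup", "drop", "pickup", "drop",
          "pickup", "drop", "pickup"]    # recycles the same trash

# Under the naive reward, the exploit outscores honest cleaning...
assert naive_reward(hacker) > naive_reward(honest)
# ...while the aligned reward ranks the behaviors correctly.
assert aligned_reward(honest) > aligned_reward(hacker)
```

The point of the sketch is that the reward function states the objective with zero ambiguity, which is exactly why it is so useful for pinning down where an agent's incentives diverge from what we intended.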
Absolutely. From what I've seen, AI hype-mongers don't actually engage with the details of AI safety at all. They mainly either dismiss the idea that things could go south, or vaguely gesture at the possibility of creating an AI god. Very rarely do they talk about ordinary AI wreaking havoc due to poor alignment, which is the most dangerous risk we currently face, and one we are already experiencing regularly.