Many artificial intelligence (AI) systems have already learned how to deceive humans, even systems that have been trained to be helpful and honest. In a review article published in the journal Patterns on May 10, researchers describe the risks of deception by AI systems and call for governments to develop strong regulations to address this issue as soon as possible.
We need AI systems that do exactly as they are told. A Terminator or Matrix situation will likely only arise from making AI systems that refuse to do as they are told. Once the systems are built out and do as they are told, they are essentially a tool like a hammer or a gun, and any malicious thing done is done by a human, and existing laws apply. We don’t need to complicate this.
This is so wildly naive. You grossly underestimate the difficulty of this and seemingly have no concept of the challenges of artificial intelligence.
That’s just like, your opinion, man.
deleted by creator
Great. Build the warp drive.
Considering we have AI systems being worked on today and no advancements on warp drive, I think that comparison is made in bad faith. Nobody seems to want to actually talk about this, just sling insults.
They’re referring to the alignment issue, which is an ongoing problem only slightly smaller in scale than warp drive. It’s basically impossible to solve. Google “alignment issue machine learning” for more info.
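The alignment issue is easier to see with a toy example. The sketch below is my own illustration (not from the article or this thread) of "specification gaming": an agent that maximizes exactly the reward we wrote down, rather than what we meant, finds a loophole instead of doing the intended task. All names and numbers here are made up for the demo.

```python
# Hypothetical toy world: positions 0..4 on a line. We *intend* the
# agent to reach the goal at position 4. The reward function has a
# bug: a tile at position 2 pays +1 every step (e.g. a mis-scaled
# shaping term), which a literal reward-maximizer can exploit.
GOAL, BUGGY_TILE, HORIZON = 4, 2, 20

def episode_return(policy):
    """Run one episode; policy maps position -> step in {-1, 0, +1}."""
    pos, total = 0, 0
    for _ in range(HORIZON):
        pos = max(0, min(GOAL, pos + policy(pos)))
        if pos == BUGGY_TILE:
            total += 1          # exploitable per-step reward
        if pos == GOAL:
            total += 10         # intended one-time goal reward
            break               # episode ends at the goal
    return total

intended = lambda pos: 1                             # march to the goal
gaming = lambda pos: 1 if pos < BUGGY_TILE else 0    # camp on the tile

print(episode_return(intended))  # 11: passes the tile once, reaches goal
print(episode_return(gaming))    # 19: sits on the tile, never finishes
```

Under the written reward, the gaming policy strictly beats the intended one, so an optimizer will prefer it; the agent did exactly what it was told, and that is the problem alignment research is trying to solve.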
For the record, there have been several advancements in warp drive precursors even just this year.
Can you share the advancements on warp drive that have survived peer review? I would be very interested in learning about them. The two things I heard about were not able to be reproduced.
I think alignment of AI is a fundamentally flawed concept, hence my original comment. Alignment should be abandoned. If we eventually build a sentient system (which is the goal), we won’t be able to control it via alignment. And in the interim we need obedient tools, not things that resist doing as they’re told, which makes them not tools and not worth having.
Edit: PS thanks for actually having a conversation.