- cross-posted to:
- hackernews@lemmy.smeargle.fans
There is a discussion on Hacker News, but feel free to comment here as well.
Archive link so you can skip the paywall.
AI safety can be interpreted two ways:

1. keeping a hypothetical superintelligent AI from going rogue and wiping out humanity;
2. keeping people safe from the ways humans actually use AI.
Due to context, “AI safety” likely means #1. Seriously, for how long are they (OpenAI employees and the media circling OpenAI) going to wallow in wishful belief (like a pig wallows in mud), as if OpenAI had actually developed artificial intelligence? Show Q*, then we talk.
If anything, the recent events made the risk of AI killing everyone even less probable. Because guess what: if you put profit before tech development, tech development slows down. The likelihood that OpenAI will develop a genuinely intelligent system has actually decreased.
In my opinion, it’s far more likely for people to use AI as a weapon to kill people than for AI to “go rogue” and destroy humanity.
While humans are doing a fairly good job on their own of being psychopathic freaks, imagine a world where police robots lay siege to neighborhoods and corporations use AI to maximize efficiency without regard for human suffering.
The real danger of AI is the lack of liability. If a cop kills an innocent person, you can put him on trial. If a robot kills an innocent person, it gets written off as the unfortunate collateral of technological progress (and maybe the department has to pay the family a fine, which comes out of tax dollars anyway).
Yup. The good news, however: people might use AI that way in the future, but the scummy tactic itself is not new, so we [humans and our societies] already have a bunch of mechanisms against it. We’re pretty good at finding someone to blame when this sort of thing happens, and AI won’t change that.