• Lvxferre
    1 year ago

    Archive link so you can skip the paywall.

    AI safety can be interpreted two ways:

    1. Preventing AI from killing everyone. In this sense, the safeguards are imagined, but so are the current risks.
    2. Preventing AI from saying stupid shit. The risk is real, but it is not as bad as it looks; we have had confidently incorrect people since the dawn of time, and yet we’re still here.

    Given the context, “AI safety” here likely means #1. Seriously, for how long are they (OpenAI employees and the media circling OpenAI) going to roll in wishful belief (like a pig rolls in mud), as if OpenAI had actually developed artificial intelligence? Show Q*, then we talk.

    If anything, the recent events made the risk of AI killing everyone even less probable. Because guess what: if you put profit before tech development, tech development slows down. The likelihood that OpenAI will develop an intelligent system has actually decreased.

    • Drewfro66@lemmygrad.ml
      1 year ago

      In my opinion, it’s far more likely for people to use AI as a weapon to kill people than for AI to “go rogue” and destroy humanity.

      While humans are doing a fairly good job on their own of being psychopathic freaks, imagine a world where police robots lay siege to neighborhoods, where corporations use AI to maximize efficiency without regard for human suffering.

      The real danger of AI is the lack of liability. If a cop kills an innocent person, you can put him on trial. If a robot kills an innocent person, it will get written off as the unfortunate collateral of technological progress (and maybe the department will have to pay the family a fine, a fine that comes out of tax dollars anyway).

      • Lvxferre
        1 year ago

        The real danger of AI is the lack of liability.

        Yup. However, good news: people might use AI for that in the future, but the scummy tactic itself is not new, so we [humans and our societies] already have a bunch of mechanisms against it. We’re pretty good at finding someone to blame when this sort of thing happens, and AI won’t change that.