Generally, it seems AI experts are divided about how close we are to developing AGI, and whether any of this might lead to an extinction-level event. On the whole, they seem to consider it unlikely that AI will kill us all. Maybe.

  • SSUPII@sopuli.xyz
    1 year ago

    I’ll be as sincere as I can.

    As a kid, I often dreamed of having a robot friend among the people I already knew. And not just one to talk to, but one fully human in its capabilities. I dreamed of robots simply walking around, acting and talking like everyone else, with everyone, both other robots and humans, as if there were not a single difference.

    And now, despite being a grown adult who knows more, I can’t help but feel extremely positive about the steps AI is taking.

    From your exact article:

    “Variations of these AIs may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans.” -Nick Bostrom

    I simply can’t see how that’s a bad thing. My inner child would be so happy! But now let’s set aside my fond memories.

    I believe we are closer to AGI than ever before, but it will in no way cause disaster. In fact, it will instead improve our lives drastically. What gain would there be in building something that willingly causes harm to itself and others? Besides, regulations will inevitably be made, and they will reduce the chances of a major fuck-up even more.

    • Spzi@lemm.ee
      1 year ago

      As a kid, I often dreamed of having a robot friend

      Yes, I can relate.

      “Variations of these AIs may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans.” -Nick Bostrom

      I simply can’t see how that’s a bad thing.

      It can be if their goals are not aligned with ours. We’re essentially creating an alien species. We try to align it well, but that’s a very difficult problem to solve. We have not found a solution yet, and we don’t know whether one is possible. https://en.wikipedia.org/wiki/AI_alignment#Existential_risk

      it will in no way cause disaster.

      No one knows the future. Please just note that many experts disagree. Others agree.

      it will instead improve our lives drastically.

      Yes, if we solve the alignment problem and control problem.

      What gain would there be in building something that willingly causes harm to itself and others?

      The current economic incentives reward whoever creates the next powerful AI fastest. Making it safe costs money and time, so there is an incentive to take risks. Current practice is to release models without fully understanding their implications; sometimes emergent capabilities are discovered weeks or months after release. That is tolerable as long as the models themselves are relatively harmless. It could easily spell disaster once we cross a line, one we might not clearly see before we cross it.