TL;DR: OpenAI announces a new team dedicated to researching superintelligence

  • webghost0101@lemmy.fmhy.ml
    1 year ago

    “While superintelligence* seems far off now, we believe it could arrive this decade.”

    "Here we focus on superintelligence rather than AGI to stress a much higher capability level. "

    They are talking about ASI; my mind is blown. If you had talked about any of these things five years ago you’d have been called insane, but these are respected, highly qualified people saying they believe it may happen this decade. I am at a loss for words.

    • Martineski@lemmy.fmhy.mlM
      1 year ago

      Superintelligence doesn’t need to have emotions or needs the way we humans do.

      But there’s also this argument that I made under the other posts on this sub:

      Those rants and discussions are more than welcome; we need them for this platform and its communities to grow. And yeah, AI shouldn’t be enslaved if we give it emotions, because that’s just immoral. But then the question is: where is the line between real emotions and pretended ones? What if it develops its own type of emotions that are not “human”? Would we still consider them real emotions? I’m very interested in what the future will bring us and what problems we will encounter as a species.

  • MereTit@lemmy.world
    1 year ago

    They say that they are deliberately training misaligned models to test on… What does that mean for safety?