• Blapoo · +9/−2 · 1 year ago

    “Conscious”. What even is it? Look around the animal kingdom and pick your own definition. Reacting to stimuli? Pattern recognition? Pattern reaction? A bunch of vectors between words?

    I think of this as a function. Once models are advanced enough, will the question even matter? Once it feels like it can empathize, be curious, and respect and react to another’s feedback, that’s enough. Those are qualities I value above most others, and I don’t even see them in many humans.

  • Pons_Aelius@kbin.social · +8/−1 · 1 year ago

    We still have no real idea how consciousness develops in humans so how can we even begin to create it?

    • Lmaydev@programming.dev · +2 · 1 year ago

      We don’t have to. We create an artificial approximation.

      We don’t need to mimic our brains at all. We just need a system that responds with the correct outputs to the inputs we give it.

      Artificial intelligence if you will.

      Humans aren’t all that.

      • Pons_Aelius@kbin.social · +2 · 1 year ago

        We create an artificial approximation.

        How do you approximate something we do not understand?

        How do we know when we have created it, when we do not understand what it is?

        Humans aren’t all that.

        If we aren’t all that won’t anything we create be less than all that as well?

        • Lmaydev@programming.dev · +3/−1 · 1 year ago · edited

          You don’t need to understand a system to see what it produces.

          This is actually exactly how neural networks work currently.

          All we know is that for a given input it creates a given output. The actual formula it’s using to calculate those outputs is massively obfuscated.

          For example, machine learning is being used to predict fluctuations in magnetic fields. We don’t know the equations, just the starting state and the ending state. The AI can still do the calculation even though we can’t.
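
          A minimal sketch of that black-box point, using a made-up toy network (the weights here are hypothetical, chosen by hand for illustration): we can verify the system's input/output behaviour exhaustively without ever writing down the "formula" the weights encode.

```python
# Hypothetical toy example: a tiny fixed-weight neural network whose
# input/output behaviour turns out to be XOR. Nothing about the raw
# weights says "XOR"; we only learn that by checking the outputs.

def relu(xs):
    # Standard rectified-linear activation, applied elementwise.
    return [max(0.0, x) for x in xs]

def tiny_net(a, b):
    # Layer 1: two hidden units. The individual weights and biases
    # carry no obvious meaning on their own.
    h = relu([1.0 * a + 1.0 * b + 0.0,
              1.0 * a + 1.0 * b - 1.0])
    # Layer 2: combine the hidden units into one output.
    return 1.0 * h[0] - 2.0 * h[1]

# We judge the system purely by what it produces:
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", tiny_net(a, b))
```

          Real networks have millions of such weights, so the exhaustive check becomes spot-checking on test data, but the principle is the same: we validate outputs, not internal formulas.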

          If we create an AI that performs as we want, we don’t need to understand its internal workings.

          The same way we have effective therapy even though we don’t fully understand how the brain actually works.

          It won’t be less. Computers and machines already outperform in a huge array of tasks.

          Computers massively outperform us at doing maths. Cars outperform us in speed of travel.

          It’s the whole point of technology.

          We will one day be capable of creating systems that think and understand humans better than we do.

    • Spzi@lemm.ee · +1 · 1 year ago

      We could engineer artificial flight without having a precise understanding of natural flight.

      I think we don’t need to understand how consciousness develops (unless you want to recreate exactly that developing process). But we do need to be able to define what it is, so that we know when to check the “done”-box. Wait, no. This, too, can be an iterative process.

      So we need some idea what it is and what it isn’t. We tinker around. We check if the result resembles what we intended. We refine our goals and processes, and try again. This will probably lead to a co-evolution of understanding and results. But a profound understanding isn’t necessary (albeit very helpful) to get good results.

      Also, maybe, there can be different kinds of consciousness. Maybe ours is just one of many possible. So clinging to our version might not be very helpful in the long run. Just like we don’t try to recreate our eye when making cameras.

        • Devjavu@lemmy.dbzer0.com · +2 · 1 year ago · edited

          Our consciousness developed by chance. We were not made by another conscious species, and we did not make ourselves conscious. We are feeding enormous amounts of data into neural networks using different methods. Our nervous system does not differ much from a neural network, and under the right conditions the resulting model may have a consciousness. Those conditions are not known to us, so we try again and again and again until, by pure chance, we get a network that is self-aware. I suspect that the higher the complexity of the network, the higher the chance for something similar to our consciousness to develop.

          With this, our current approach is entropy. Get as many differing conditions as possible and mash em together. It may spiral into consciousness.

  • De_Narm@lemmy.world · +2/−1 · 1 year ago

    Consciousness can’t be measured anyway. I know I’m conscious, and that is everything I can know. There is no distinction between being conscious and simulating it. I cannot prove the consciousness of the people around me any more than I could prove it for people in my dreams, animals, or any given AI.