I don’t really think so. I am not really sure why, but my gut feeling is that being good at impersonating a human being in text conversation doesn’t mean you’re closer to creating a real AI.

  • ☆ Yσɠƚԋσʂ ☆ · 2 years ago

    I actually think that analog computers would be more relevant here than quantum computers. The brain is an analog computer, and you could replicate what our neural connections do using a different substrate. This is an active area of research currently.

    • Sr Estegosaurio · 2 years ago

      Yeah, but replicating the brain is surely an incredibly complicated task. It would be pretty interesting, though.

      • ☆ Yσɠƚԋσʂ ☆ · 2 years ago

        Yeah, replicating the entire brain is a Herculean effort. However, it’s also important to keep in mind that the brain evolved for robustness and has a lot of redundancy in it. It turns out that we’d only need to implement a neural network that’s roughly 10% of the brain to get human-level intelligence, as this case illustrates. That seems like a much more tractable problem. It might be even smaller in practice, since a lot of the brain is devoted to body regulation and we’d only care about the parts responsible for thought and reasoning.
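        A rough back-of-envelope for that 10% figure (the neuron and synapse counts below are commonly cited rough estimates, used here only for illustration):

```python
# Back-of-envelope: scale of a "10% of the brain" network.
# Assumed figures: ~86e9 neurons, ~1e14 synapses (rough literature values).
total_neurons = 86e9
total_synapses = 1e14
fraction = 0.10  # only the parts relevant to thought/reasoning

needed_neurons = total_neurons * fraction
needed_synapses = total_synapses * fraction
print(f"{needed_neurons:.1e} neurons, {needed_synapses:.1e} synapses")
```

        Still billions of units, but an order of magnitude smaller than the whole organ.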

        I think the biggest roadblock is in figuring out the algorithm behind our conscious process. If we can identify that from the brain structure, then we could attempt to implement it on a different substrate. On Intelligence is a good book discussing this idea.

    • Zerush · 2 years ago (edited)

      It isn’t. The brain doesn’t work in an analog way; it processes many streams of data at the same time, and an analog process alone can’t create consciousness, which is a quantum process. Because of this, a digital or analog computer can never have real intelligence with its own consciousness, but it may be possible in a quantum computer in the not-so-distant future. https://www.psychologytoday.com/us/blog/biocentrism/202108/quantum-effects-in-the-brain https://www.nature.com/articles/440611a

      • ☆ Yσɠƚԋσʂ ☆ · 2 years ago

        Neurons are very much analog, as they do not send discrete signals to each other. While the brain exploits quantum effects, there is no indication that these are fundamental to the function of consciousness.
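        The "analog" point can be sketched with a leaky-integrator neuron model: the membrane potential is a continuous quantity that smoothly tracks input current rather than a binary value (a minimal toy sketch; the parameter values are arbitrary, not biological measurements):

```python
# Minimal leaky-integrator neuron: membrane potential v decays toward rest
# and continuously integrates input current -- an analog quantity, not a bit.
def simulate(input_current, dt=0.001, tau=0.02, v_rest=-65.0, r=10.0, steps=100):
    v = v_rest
    trace = []
    for _ in range(steps):
        dv = (-(v - v_rest) + r * input_current) / tau
        v += dv * dt
        trace.append(v)
    return trace

weak = simulate(0.5)    # small input  -> small continuous depolarisation
strong = simulate(1.5)  # larger input -> proportionally larger response
```

        The response varies smoothly with the input, which is the sense in which the unit is analog rather than digital.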

        • Zerush · 2 years ago (edited)

          You can create an artificial intelligence by digital or analog means, capable of solving problems through neural networks, but you cannot create a self-aware intelligence that way. Consciousness is only possible with quantum technologies. What we currently call artificial intelligence, sophisticated as it may be, has real intelligence no higher than that of a grasshopper: it can learn data-processing and acquisition pathways, but that is only a small part of how our brain works. It’s like someone with a photographic memory who can recite whole chapters of books in answer to a question, but without really understanding them. The Turing test has many deficiencies that keep it from clearly distinguishing human from machine; it dates back to the 1950s and did not foresee how computers would evolve. For this reason, several other conversational methods are used today, with possible responses that are not so easily definable, like the Markus test, the Lovelace test 2.0, or MIST (Minimum Intelligent Signal Test). Still a long way to a Daneel Olivaw.

          I asked Andi if he would pass the Turing test.

          (I love this “search engine”)

          • Ferk · 2 years ago (edited)

            No modern AI has been able to reliably pass the Turing test without blatant cheats (like posing as a foreign kid who can’t understand or express themselves fluently, instead of an adult). Just because the test dates back to the 1950s doesn’t make it any less valid, imho.

            I was interested by the other tests you shared, thanks for that! However, in my opinion:

            The Markus test is just a Turing test with a video feed. I don’t think this necessarily makes the test better; it adds more requirements for the AI, but it’s unclear whether those are actually necessary requirements for consciousness.

            The Lovelace test 2.0 is also not very different from a Turing test where the tester is the developer and the questions/answers are on a specific domain, with creativity being what’s tested. I don’t think this improves much over the original test either, since the Turing test already gives you the freedom to ask questions that might require innovative answers. Given the more restricted scope of this test and how far modern procedural generation and neural nets have come, it’s likely easier to pass the Lovelace test than the Turing test. At the same time, it’s also easier for a real human to fail it if they can’t be creative enough. I don’t think this test is really testing the same thing.

            The MIST is another particular case of a more restricted Turing test. It’s essentially a standardized and “simplified” Turing test where the tester is always the same and asks questions from a fixed set of ~80k. The only advantage is that it’s easier to measure and more consistent, since you don’t depend on how good the tester is at choosing questions or judging answers. But it’s also easier to cheat, since it would be trivial to make a program specifically designed to answer that set of questions correctly.
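            The cheating concern is easy to illustrate: against any fixed question set, a lookup table suffices, with no understanding involved (a toy sketch; the questions below are invented examples, not actual MIST items):

```python
# Toy illustration: a fixed question set can be "passed" by pure lookup.
# The questions here are invented examples, not the real MIST set.
canned_answers = {
    "Is Mount Everest taller than a shoebox?": "yes",
    "Can a crocodile run a marathon?": "no",
    "Do people sometimes feel sad?": "yes",
}

def mist_cheater(question):
    # Answers correctly for any question in the known set; guesses otherwise.
    return canned_answers.get(question, "yes")

print(mist_cheater("Is Mount Everest taller than a shoebox?"))  # yes
```

            A program like this scores perfectly on the known set while understanding nothing, which is exactly why a fixed, public question set is gameable.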

            • Zerush · 2 years ago

              The difficulty with these tests is that even we ourselves are still not clear about what consciousness, the ego, really is and how it works, so this type of test applied to an AI will always produce a subjective result. It’s quite possible that some people wouldn’t pass the Turing test either.

          • ☆ Yσɠƚԋσʂ ☆ · 2 years ago

            First, we don’t have a firm definition for what consciousness is or how to measure it. However, self awareness is simply an act of the system modelling itself as part of its internal simulation of the world. It’s quite clear that this has nothing to do with quantum technologies. In fact, Turing completeness means that any computation done by a quantum system can be expressed by a classical computation system.
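            That last claim can be made concrete: a classical program can simulate a small quantum computation exactly, e.g. a Hadamard gate on one qubit via plain state-vector arithmetic (a minimal sketch using only the standard library):

```python
import math

# Classically simulate one qubit: the state is a 2-element amplitude vector.
state = [1.0, 0.0]  # the |0> state

def hadamard(s):
    # Hadamard gate: puts a basis state into an equal superposition.
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[1]), h * (s[0] - s[1])]

state = hadamard(state)
probs = [a * a for a in state]  # measurement probabilities, both ~0.5
```

            The simulation is exponentially expensive in the number of qubits, but the point stands: nothing about the computation is inexpressible classically.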

            The reason the systems we currently build aren’t conscious in a human sense is that they’re working purely on statistics without having any model of the world. The system simply compares one set of numbers to another set of numbers and says yeah, they look similar enough. It doesn’t have any context for what those numbers actually represent.

            What we need to do to make systems that think like us is to evolve them in an environment that mimics how our physical world works. Once these systems build up an internal representation of their environment we can start building a common language to talk about it.
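            The idea of building an internal representation from interaction with an environment can be sketched as an agent that walks a tiny grid world and records a map of what it senses (a toy illustration of the concept, not a serious proposal):

```python
# Toy agent: senses each cell of a small grid world and builds an internal
# map of walls vs open space -- a crude "internal representation".
WORLD = [
    "....",
    ".##.",
    "....",
]

def explore(world):
    internal_map = {}
    for y, row in enumerate(world):
        for x, cell in enumerate(row):
            # The agent visits each cell and records what it senses there.
            internal_map[(x, y)] = "wall" if cell == "#" else "open"
    return internal_map

mental_model = explore(WORLD)
print(mental_model[(1, 1)])  # wall
```

            Once the agent has a model like this, it can plan routes by simulating moves against the map instead of re-sensing the world each time, which is the cheapness argument from the comment above.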

            • vitaminka · 2 years ago

              The reason the systems we currently build aren’t conscious in a human sense is that they’re working purely on statistics without having any model of the world. The system simply compares one set of numbers to another set of numbers and says yeah, they look similar enough. It doesn’t have any context for what those numbers actually represent.

              heh, in your perception, how do human brains work? 😅

              • ☆ Yσɠƚԋσʂ ☆ · 2 years ago

                In my perception, human brains are neural networks that evolved to create a simulation of the physical environment that the organism inhabits. Human brains are a result of tuning over billions of years of natural selection.

                Operating on an internal representation of the world is inherently cheaper than parsing out the data from the senses. This approach also allows the brain to create simulations of events that happened in the past or may happen in the future, allowing for learning and planning. There’s a lot more that can be said about this, but I think these are the key features that make complex brains valuable from a natural selection perspective. I generally agree with the ideas outlined in this book.

                • vitaminka · 2 years ago

                  i see 🤔

                  i’m still not totally convinced that there’s a fundamental division/difference between the criteria that constitute a brain/consciousness (many of which you mentioned) and just artifacts of learning algorithms at a scale we can’t model/execute on a computer

                  • ☆ Yσɠƚԋσʂ ☆ · 2 years ago

                    I don’t think there’s anything magical about consciousness that can’t be modelled and executed on a computer. I’m just saying that current approaches to AI are inherently limited because they’re not based on symbolic logic.

                    A neural network gets tuned based on some input data, and it has no understanding of what that data represents. It’s just a bunch of numbers without any context. All it can do is say that a particular numeric input matches one of the inputs it’s been trained on, within a certain confidence interval.
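                    The "just a bunch of numbers" point can be made concrete with a nearest-neighbour sketch: the program matches an input vector to stored vectors by cosine similarity without any notion of what the vectors mean (toy vectors, invented for illustration):

```python
import math

# Toy "recognition": match an input vector to stored vectors by cosine
# similarity. The labels mean nothing to the program; it only sees numbers.
stored = {
    "cat": [0.9, 0.1, 0.0],
    "car": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def classify(vec):
    # "Yeah, they look similar enough" -- pick the closest stored vector.
    return max(stored, key=lambda label: cosine(vec, stored[label]))

print(classify([0.8, 0.2, 0.1]))  # cat
```

                    Swapping the labels "cat" and "car" would change nothing about the computation, which is exactly the missing-context point.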

                    On the other hand, the neural network in the brain evolved to represent the physical environment, and that’s the shared context we have when we interact with one another. Our language relies on a lot of shared context based on this.

                    And I think that in order to make AI with human-style intelligence, we have to train it within the context of a physical environment that it learns to interact with, to create this shared context that we can relate to.