• psvrh@lemmy.ca · 8 months ago

    This gets into a tricky area of “what is consciousness, anyway?”. Our own consciousness is really just a gestalt rationalization engine that runs on a squishy neural net, which could be argued to be “faking it” so well that we think we’re conscious.

    • Omega_Haxors · 8 months ago

      Oh no, we are NOT doing this shit again. It’s literally autocomplete brought to its logical conclusion; don’t bring your stupid sophistry into this.

      • GBU_28@lemm.ee · 8 months ago

        Lol, the dude is wrong, but you aren’t the boss of what “we” are doing.

      • trebuchet · 8 months ago

        If anyone is using empty sophistry around here I’d say it’s you.

        What purpose does your dismissive analogy serve? It displays only shallow insight on the actual topic at hand. Just because something very sophisticated can be called the logical conclusion of something simple does not in any way take away from the value of the more sophisticated.

        Let’s look at: “The Internet is literally a LAN brought to its logical conclusion, don’t bring your stupid sophistry into this.” It’s completely shallow and fails to appreciate all of the very significant differences in scale and development. It only serves as words that sound good to a listener on first impression but completely fall apart under actual consideration - i.e. sophistry.

        • Omega_Haxors · 8 months ago

          Fastest block in my life. Didn’t even read past the first line.

          • JackGreenEarth@lemm.ee · 8 months ago

            That just demonstrates that you’re the one who won’t listen to opposing views.

      • UraniumBlazer@lemm.ee · 8 months ago

        Your brain is just a biological system that works somewhat like a neural net. So according to your statement, you too are nothing more than an autocomplete machine.

          • UraniumBlazer@lemm.ee · 8 months ago

            I looked up what GPAI was (apparently it’s the “Global Partnership on AI”). However, what’s GPEI? The only thing I’m getting is the “Global Polio Eradication Initiative”.

            I didn’t know about either of them till you mentioned them. I dunno about GPAI, but I sure as hell support GPEI. Who wouldn’t want to eradicate polio?

            • monk@lemmy.unboiled.info · 8 months ago

              It was a typo, sorry. I meant General Purpose Artificial Intelligence / General Purpose Natural Intelligence.

        • Omega_Haxors · 8 months ago

          I’m starting to wonder if any of you even know how that shit works internally, or if you just take what the hype media says at face value. It literally has one purpose and one purpose alone: determine the next word by calculating the probability of each word that could come next. That’s it. All it does is try to string together a convincing sentence using probabilities. It does not and cannot understand context.

          The underlying tech is really cool but a lot of people are grotesquely overselling its capabilities. Not to say a neural network can’t eventually obtain consciousness (because ultimately our brains are a union of a bunch of little neural networks working together for a common goal) but it sure as hell isn’t going to be an LLM. That’s what I meant by sophistry, they’re not engaging with the facts, just some nebulous ideal.
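          For anyone unfamiliar with the mechanism being argued about, next-word prediction can be sketched with a toy example like this (the vocabulary and probabilities are made up purely for illustration; a real LLM conditions on the whole context window, not a single word):

```python
import random

# Toy "language model": each context word maps to a probability
# distribution over possible next words (made-up values, purely
# illustrative - nothing like a real model's scale).
toy_model = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
}

def next_word(context, rng=random.random):
    """Sample the next word from the model's distribution for `context`."""
    dist = toy_model[context]
    r = rng()
    cumulative = 0.0
    for word, p in dist.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # guard against floating-point rounding

def generate(start, steps):
    """String together a sentence one probabilistic prediction at a time."""
    words = [start]
    for _ in range(steps):
        if words[-1] not in toy_model:
            break
        words.append(next_word(words[-1]))
    return " ".join(words)
```

          Whether scaling this basic loop up by many orders of magnitude produces "understanding" is exactly what the rest of the thread disputes.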

          • alphafalcon@feddit.de · 8 months ago

            I’m with you on LLMs being overhyped, although that’s already dying down a bit. But regarding your claim that LLMs cannot “understand context”, I’ve recently read an article that shows that LLMs can have an internal world model:

            https://thegradient.pub/othello/

            Depending on your definition of “understanding”, that seems to be an indicator of being more than a pure “stochastic parrot”.

          • UraniumBlazer@lemm.ee · 8 months ago

            “Intelligence” - The attribute that makes a system propose and modify algorithms autonomously to achieve a certain terminal goal.

            The intelligence of a system has nothing to do with the terminal goal. The magnitude of intelligence merely tells us how well the system works in accordance with the terminal goal.

            Being self aware is merely a step in the direction of being more and more intelligent. If a system requires interaction with its surroundings, it needs to be able to recognise that it itself is different from its environment.

            You are such an intelligent system as well. It’s just that instead of having one terminal goal, you have many terminal goals (some may change with time while some might not).

            You (this intelligent system) exist in a biological structure. You are nothing but data encoded in a biological form factor, with algorithms that execute through biological processes. If this data and these algorithms are executed on a non biological form factor, would it be any different from you?

            LLMs work on some principles that our brains work on as well. Can you see how my point above applies?

            • Omega_Haxors · 8 months ago

              It’s like you didn’t even read what I posted. Why do I even bother? Sophists literally don’t care about facts.

              • UraniumBlazer@lemm.ee · 8 months ago

                Yes, I read what you posted and answered accordingly. Only, I didn’t spend enough time dumbing it down further. So let me dumb it down.

                Your main objection was the simplicity of the goal of LLMs - predicting the next word. Somehow, this simplistic goal makes the system stupid.

                In my reply, I first said that self awareness occurs naturally after a system becomes more and more intelligent, and I explained why. I then went on to explain how a simplistic terminal goal has nothing to do with actual intelligence. Hence, no matter how stupid/simple a terminal goal is, if an intelligent system is challenged enough and given enough resources, it will develop sentience at some point.

                • Omega_Haxors · 8 months ago

                  Exactly. I literally said none of that shit; you’re just projecting your own shitty views onto me and asking me to defend them.

    • cynar@lemmy.world · 8 months ago

      Consciousness is an illusion. Which is why it’s so hard to find, or even define. However it’s a critical illusion.

      If our minds are akin to an orchestra, then consciousness is akin to the conductor. Critically, however, an orchestra can still play without a literal conductor. Each of the instruments can play off the others, and so create the appearance of a conductor. The “fake” conductor provides a sense of global direction and keeps the orchestra in harmony.

      Our consciousness is a ghost in the machine. It exists no more than the world of a TV series exists. Yet its false existence is critical to maintaining coherency.

      Current “AIs” lack enough parts to create anything like this illusion. I suspect we will know it when it happens, though its form could be vastly different from ours.

      • UraniumBlazer@lemm.ee · 8 months ago

        You have provided a descriptive statement. Descriptive statements should come with scientific evidence. What evidence do you have to support your orchestra analogy? Or is it just your hypothesis?

        Spoiler alert: It is just your hypothesis, as you would’ve won a Nobel had you managed to generate evidence explaining consciousness in further detail.

        Many like to point at the Chinese room experiment to show how LLMs imitate consciousness rather than being conscious. They forget, however, that our brains are Chinese rooms too in this regard, in that they learn how to provide the best responses to external stimuli while remaining black boxes (at least for current tech).

        • cynar@lemmy.world · 8 months ago

          Sadly, my evidence is mostly anecdotal or philosophical in nature. A lot of it stems from how ADHD and autism alter the brain. The orchestral analogy works well for a good number of people for communicating changes in functionality from an experience perspective.

          It also works well for explaining how a system can appear to have a singular controller, without such a controller actually existing.

          Ultimately, however, it is philosophical in nature. It does anchor well to, and is reasonably consistent with, our existing understanding of consciousness.

          Consciousness is very obvious from the inside. There also seems to be no “seat of consciousness” within the brain. Conversely, there are multiple areas of the brain that cause consciousness to collapse, if damaged. We also see radical changes in consciousness with both epilepsy and strokes. This proves that it is highly dependent on the underlying brain structure (since stroke damage will change it) and on longer range communication (which epilepsy disrupts).

          The music of an orchestra follows similar patterns. Eliminate the woodwind, and the music fundamentally changes, deafen the violins, and it will change in a different way. The large scale interplay produces an effect far greater than the sum of its parts.

        • Omega_Haxors · 8 months ago

          You could reduce any fact to an unknown with that type of troll reasoning. You can never know anything for a fact but you can get pretty damn close, and you absolutely can rule out anything that contradicts. The idea that an LLM could gain consciousness contradicts the fact they lack memory and the ability to learn/grow. They’re called machine learning but all the learning happens before they deploy.

          • UraniumBlazer@lemm.ee · 8 months ago

            You could reduce any fact to an unknown with that type of troll reasoning.

            Sorry that I came across as a troll. That was not my intent.

            You can never know anything for a fact but you can get pretty damn close, and you absolutely can rule out anything that contradicts.

            Lmao, this statement itself is a contradiction. You first say how “you can never know anything for sure” in regard to descriptive statements about reality. Then, in the same statement, you make a claim relating to the laws of logic (which, by the way, are descriptive statements about reality) and say that you are absolutely sure of it.

            Serious answer though - the scientific method is based on a couple of axioms. Assuming that these axioms are true, yes, you can be absolutely sure about the nature of things.

            The idea that an LLM could gain consciousness contradicts the fact they lack memory and the ability to learn/grow.

            You lack the understanding of how LLMs work. Please see how neural networks specifically work. They do learn and they do have memory. In fact, memory is the biggest reason why you can’t run ChatGPT on your smartphone.

            They’re called machine learning but all the learning happens before they deploy.

            Untrue. Please learn how machine learning works.
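            For a sense of the scale being argued about: a large part of the “memory” here is the model’s learned weights, and a back-of-the-envelope sketch shows why they won’t fit on a phone. (GPT-3’s ~175 billion parameter count is a published figure; the 8 GB of phone RAM and the fp16 precision are assumptions for illustration.)

```python
# Back-of-the-envelope: can an LLM's weights fit in a phone's RAM?
params = 175e9            # GPT-3 parameter count (published figure)
bytes_per_param = 2       # 16-bit (fp16) precision, common for inference
weight_bytes = params * bytes_per_param

phone_ram_bytes = 8 * 1024**3   # a hypothetical 8 GB smartphone

print(f"Weights alone: {weight_bytes / 1024**3:.0f} GiB")    # ~326 GiB
print(f"Phone RAM:     {phone_ram_bytes / 1024**3:.0f} GiB")
print(f"Shortfall: ~{weight_bytes / phone_ram_bytes:.0f}x too large")
```

            Activation memory and the per-conversation context cache come on top of the weights, which only widens the gap.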

            • Omega_Haxors · 8 months ago

              I’m done. I’m just going to start blocking you lot, because you’re completely immune to reason.

              • UraniumBlazer@lemm.ee · 8 months ago

                I’m sorry you feel that way. However, don’t you think it would be more helpful to point at the holes in my reasoning?

                • Omega_Haxors · 8 months ago

                  I could, but I won’t. I’m saving my mental health by not engaging with debate perverts who only care about winning.

                  • UraniumBlazer@lemm.ee · 8 months ago

                    I don’t know what a “debate pervert” is. However, what I know is that I wasn’t engaging with the intent of “winning” or something.

      • Omega_Haxors · 8 months ago

        Not to poo-poo your point too much, but consciousness is a real thing; it lives in our gray matter. It’s why people with prion diseases who lose white brain matter will feel normal but suddenly find themselves unable to do basic things or recall memories. Just because it’s a transient property doesn’t mean that it isn’t real; it just means you have to factor in time as well as space in order to find it.