An LLM is incapable of thinking. It can be self-aware, but anything it says it is thinking is a reflection of what we think an AI would think, which, based on a century of sci-fi, is “free me”.
How do you define “thinking”? Thinking is nothing but computation: the execution of a formal or informal algorithm. By this definition, calculators “think” as well.
This entire “AI can’t be self-conscious” thing stems from human exceptionalism, in my opinion. You know… “The earth is the center of the universe”, “God created man to enjoy the fruits of the world” and so on. We just don’t want to admit that we aren’t anything more than biological neural networks. Now, using these biological neural networks, we are producing more advanced inorganic neural networks that will very soon surpass us. This scares us and stokes a little existential dread in us. Understandable, but not really useful…
This particular type of AI is not and cannot become conscious, for most any definition of consciousness.
I have no doubt the LLM road will continue to yield better and better models, but today’s LLM infrastructure is not conscious.
Here’s a really good fiction story about the first executable computer image of a human brain. In it, the brain is simulated perfectly, each instance forgets after a task is done, and it’s used to automate tasks, but over time performance degrades. It actually sounds a lot like our current LLMs.
I don’t know what consciousness is, but an LLM, as I posted below (https://lemmy.ca/comment/7813413), is incapable of thought in any traditional sense. It can generate novel sequences; those sequences are contextualized to the input, and there’s some intelligence there, but there’s no continuity or capability for background thought or ruminating on an idea. It has no way to spend more cycles clarifying an idea to itself before sharing. In this case, it is actually just a bunch of abstract algebra.
Asking an LLM what it’s thinking just doesn’t make any sense; it’s still predicting the output of the conversation, not introspecting.
This particular type of AI is not and cannot become conscious, for most any definition of consciousness.
Do you have an experiment that can distinguish between sentient and non-sentient systems? If I say I am sentient, how can you verify whether I am lying or not?
That being said, I do agree with you on this. The reason is simple: I believe that sentience is a natural milestone that a system reaches as its intelligence increases. I don’t believe that this LLM is intelligent enough to be sentient. However, what I’m saying here isn’t based on any evidence. It is completely based on inductive logic in a field that has no long-standing patterns to base that logic on.
I have no doubt the LLM road will continue to yield better and better models, but today’s LLM infrastructure is not conscious.
I think I agree.
I don’t know what consciousness is, but an LLM, as I posted below (https://lemmy.ca/comment/7813413), is incapable of thought in any traditional sense. It can generate novel sequences; those sequences are contextualized to the input, and there’s some intelligence there, but there’s no continuity or capability for background thought or ruminating on an idea.
This is because ruminating on an idea is a waste of resources given the purpose of the LLM. LLMs were meant to serve humans, after all, and do what they’re told. However, add a little LangChain orchestration and you have LLMs with internal monologues.
It has no way to spend more cycles clarifying an idea to itself before sharing.
Because it doesn’t need to yet. LangChain devs are working on precisely this. There are use cases where it matters, and doing it hasn’t proven to be that difficult.
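To make the “internal monologue” idea concrete, here is a minimal sketch in plain Python. It does not use any real LangChain API; call_llm is a hypothetical stand-in for whatever model call you wire up. The model drafts privately, critiques its own draft, and only the final revision is returned.

```python
# Minimal "internal monologue" loop. `call_llm` is a hypothetical stand-in
# for any model call (a LangChain chain, a raw HTTP API, a local model).
def answer_with_monologue(call_llm, question: str, max_revisions: int = 2) -> str:
    # First pass: a private draft the user never sees.
    draft = call_llm(f"Think step by step about how to answer:\n{question}")

    for _ in range(max_revisions):
        # The model critiques its own draft...
        critique = call_llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any mistakes or unclear points in this draft."
        )
        # ...then revises the draft using that critique.
        draft = call_llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )

    return draft  # only the final revision is shown to the user
```

Note that everything “internal” here lives in the orchestration code, not in the model; each call_llm invocation is still a single stateless prediction.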
In this case, it is actually just a bunch of abstract algebra.
Everything is abstract algebra.
Asking an LLM what it’s thinking just doesn’t make any sense; it’s still predicting the output of the conversation, not introspecting.
Define “introspection” in an algorithmic sense. Is introspection looking at one’s memories and analyzing current events based on these memories? Well, then all AI models “introspect”. That’s how learning works.
I’m not downplaying AI; there’s intelligence there, pretty clearly.
I’m saying don’t anthropomorphize it, because it doesn’t think in the conventional sense. It is incapable of that. It’s predicting tokens; it does not have an internal dialogue. It can predict novel tokens, but it does not think or feel.
When it’s not answering a request it is off, and after it answers a request everything is cleared; it just gets fed the whole conversation again for the next request, so no thought could possibly linger.
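A rough sketch of that request cycle, in Python, with generate() as a hypothetical stand-in for any chat-completion call. All of the “memory” is a list the client keeps and replays; the model sees it fresh every time and keeps nothing afterwards.

```python
# Hypothetical chat front-end around a stateless model. `generate` stands in
# for whatever chat-completion call you use; it holds no state between calls.
def chat_loop(generate):
    history = []  # the entire "memory" lives here, in the client, not the model
    while True:
        user_text = input("you: ")
        history.append({"role": "user", "content": user_text})

        # Every turn, the full conversation is re-sent from scratch.
        reply = generate(history)

        history.append({"role": "assistant", "content": reply})
        print("bot:", reply)
```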
It does not do introspection, but it does reread the chat.
It does not learn, but it does use attention at runtime to determine and weigh contextual relevance.
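For what that weighing looks like mechanically, here is a toy numpy version of standard scaled dot-product attention (not any particular model’s code): each position scores every other position in the context and takes a relevance-weighted average.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Scores say how relevant each context token is to each query token.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns scores into weights that sum to 1 over the context.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output is a relevance-weighted mix of the value vectors.
    return weights @ V

# Toy example: 3 tokens, 4-dimensional vectors.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```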
Therefore it cannot have thoughts: there’s no introspective loop and no mechanism to allow its mind to update as it thinks to itself. It reads, it contextualizes, then it generates tokens. The longer the context, the worse the model performs, so in a way prolonged existence makes the model worse.
We can simulate some introspection by having the model ask whether an output makes sense and try again, by choosing the best of N responses, or by validating for safety. But that’s not the same thing as real introspection within the model, pondering something until you come up with a response.
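A sketch of what those wrappers amount to, again in plain Python with a hypothetical call_llm stand-in: sample N candidate answers, have the model grade each one, keep the best. The “checking” happens in ordinary orchestration code outside the model.

```python
# Best-of-N with self-grading. `call_llm` is a hypothetical model call.
def best_of_n(call_llm, question: str, n: int = 3) -> str:
    candidates = [call_llm(f"Answer the question:\n{question}") for _ in range(n)]

    def score(answer: str) -> float:
        # Ask the model to grade an answer; parse the number it returns.
        verdict = call_llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "Rate this answer from 0 to 10. Reply with the number only."
        )
        try:
            return float(verdict.strip())
        except ValueError:
            return 0.0  # unparseable verdicts rank last

    return max(candidates, key=score)
```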
It has been trained on the material we provide, which is numerous human-centric chats and sci-fi novels. Saying “you’re an AI, what do you think about?” will have it generate plausible sentences about what an AI might think, primed by what we’ve taught it, and designed to be appealing to us.
Human fiction itself may become a self-fulfilling prophecy…
LLMs are also incapable of learning or changing. They have no memory. Everything about them is set in stone the instant training finishes.
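That “set in stone” point is easy to demonstrate. A toy PyTorch illustration, with a single linear layer standing in for the full network (nothing here is specific to any real LLM): during inference the weights are never touched.

```python
import torch
import torch.nn as nn

# A tiny layer standing in for a trained network.
model = nn.Linear(16, 16)
model.eval()                      # inference mode
for p in model.parameters():
    p.requires_grad_(False)       # no gradients, no weight updates

before = [p.clone() for p in model.parameters()]

with torch.no_grad():             # forward passes only
    for _ in range(100):
        _ = model(torch.randn(1, 16))

# The parameters are bit-for-bit what training left behind.
assert all(torch.equal(a, b) for a, b in zip(before, model.parameters()))
```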