“LLMs don’t understand what they say, they just try to sound like they do” is a sentence that demonstrates a good understanding of how AI works, as one would expect from an “expert on AI”. However, it implies a comparison with human intelligence that either assumes we know how human intelligence works, or reveals a fundamental misunderstanding of it. For all we know, the brain is either a mystery (in which case we can’t really state whether an AI “understands” anything, since we can’t even define what that means), or, as research in neurobiology seems to indicate, essentially large-scale deep learning, with some extra ad-hockery inherited from evolution, and two orders of magnitude better energy efficiency.
True. Anyone who has studied AI even at a basic level knows that no form of AI we currently have is anywhere near sentient.
To be clear, I’m not trying to say that AI is sentient; I don’t believe that. My point is that, as far as understanding is concerned, we don’t know enough about the inner workings of either AI or ourselves to draw a meaningful distinction between its “understanding” of concepts and ours. We don’t understand how ChatGPT works internally, and we don’t understand how our own brains work, so asserting a fundamental difference between the two is no better grounded than any arbitrary claim, especially when the concept in question (“understanding”) isn’t even formally defined.