They did! Here’s a paper that proves basically that:
van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5
Basically it formalizes a proof that the following problem is NP-hard: given a finite set of examples of human responses to prompts, produce a black-box algorithm that can take any finite input and return an output that seems plausibly human-like. And NP-hard problems at that scale are intractable: they can’t be solved with the resources available in the universe, even with perfect/idealized algorithms that haven’t been invented yet.
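To get some intuition for why this kind of learning problem blows up, here’s a toy sketch (my own illustration, not the paper’s formal construction, which is about approximating human-like behavior rather than exactly identifying a function): even over tiny input spaces, the number of distinct input-to-output behaviors grows double-exponentially.

```python
# Toy illustration (mine, not the paper's construction): how fast the
# space of possible input->output behaviors grows with input size.

def num_behaviors(n_bits: int) -> int:
    """Count distinct mappings from n-bit inputs to a single output bit.

    There are 2**n_bits possible inputs, and each one can independently
    map to 0 or 1, so there are 2**(2**n_bits) distinct mappings.
    """
    return 2 ** (2 ** n_bits)

for n in range(1, 7):
    print(f"{n}-bit prompts -> 2^{2 ** n} = {num_behaviors(n)} possible behaviors")
```

Real prompts are vastly longer than 6 bits, which is the intuition behind “intractable at that scale”: a learner that has to pin down behavior across a space like that from finitely many examples is fighting a double exponential.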
This isn’t a proof that AI is impossible, just that any method for developing AI will need more than inferential learning from training data.
Thank you, it was an interesting read.
Unfortunately, as I was looking into it further, I stumbled upon a paper that points out some key problems with the proof. I haven’t dug into it more, and tbh my expertise in formal math ends at vague memories from a CS degree almost 10 years ago, but the points do seem to make sense.
https://arxiv.org/html/2411.06498v1
Doesn’t that just say that AI will never be cheap? You can still brute force it, which is more or less how backpropagation works.
I don’t think “intelligence” needs a perfect “solution”; it just needs to do things well enough to be useful. That’s how human intelligence developed evolutionarily, and it’s absolutely not optimal.
Intractable problems at that scale can’t be brute forced, because the brute-force solution couldn’t finish within the time scale of the universe, even using all the resources of the universe. If we’re talking about pointing all of humanity’s computing power at a solution and hoping to finish before the sun expands to engulf the Earth in about 7.5 billion years, then it’s not a real solution.
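A quick back-of-the-envelope to make that concrete (all figures here are my own rough assumptions, purely for orders of magnitude): grant an absurdly generous 10^30 operations per second sustained for those 7.5 billion years, and a merely modest exponential search space still dwarfs the budget.

```python
# Back-of-the-envelope check on "just brute force it".
# All figures are rough assumptions for illustration only.

SECONDS_PER_YEAR = 3.15e7
YEARS_UNTIL_SUN_ENGULFS_EARTH = 7.5e9  # ~7.5 billion years, per the comment above
OPS_PER_SECOND = 1e30                  # wildly generous: far beyond all current computing

budget = OPS_PER_SECOND * SECONDS_PER_YEAR * YEARS_UNTIL_SUN_ENGULFS_EARTH
print(f"Total operation budget: {budget:.1e}")        # ~2.4e47 operations

search_space = 2 ** 200                # a modest exponential search space
print(f"2^200 candidates:       {search_space:.1e}")  # ~1.6e60
print(f"Fraction searchable:    {budget / search_space:.1e}")  # ~1e-13
```

So even under those cartoonishly optimistic assumptions, you could check about one ten-trillionth of a 2^200-sized space, and realistic search spaces for “behave human-like on any prompt” are far larger than that.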