The Limits of AI: Why Human Intelligence and Ingenuity Remain Unparalleled

Lucille · Sci/Tech · 2025-06-20

In a recent article, Gary Marcus highlighted the limitations of simply scaling up compute in the pursuit of generative artificial intelligence. He pointed out that even with billion-dollar investments, AIs still fail to solve puzzles that a child can, and argued that the hype surrounding AI needs a reevaluation. However, Marcus did not address the real reason why a seven-year-old child can solve the Tower of Hanoi puzzle that defeated these models (a minimal recursive solution is sketched at the end of this piece): we are embodied animals, and we live in the world.

All living things are born to explore, and we do so with all our senses, from birth. This gives us a model of the world and everything in it. We can infer general truths from a few instances, which no computer can do. To teach a large language model "cat", you have to show it tens of thousands of individual images of cats. Even then, if it comes upon a cat playing with a bath plug, it may fail to recognize it as a cat. A human child, by contrast, can be shown two or three cats and, from interacting with them, will recognize any cat as a cat for life.

Apart from anything else, this embodied, evolved intelligence makes us incredibly energy-efficient compared with a computer. The computers that drive an autonomous car use anything upwards of a kilowatt of power, while a human driver runs on twentysomething watts of renewable power – and we don’t need an extra bacon sandwich to remember a new route. At a time of climate emergency, the vast energy demands of this industry might lead us to recognize and value the extraordinary economy, versatility, plasticity, ingenuity, and creativity of human intelligence – qualities that we all have simply by virtue of being alive.

It is not surprising, then, that Apple researchers have found "fundamental limitations" in cutting-edge artificial intelligence models. AI in the form of large reasoning models or large language models (LLMs) is far from being able to "reason." This can be tested simply by asking ChatGPT or similar: "If 9 plus 10 is 18, what is 18 less 10?" The response today was 8; other times, I’ve found that it provided no definitive answer. Note that if the premise is accepted, "18" is just a label for 9 plus 10, so taking 10 away should leave 9 – answering 8 means the premise was ignored altogether (both readings are worked through in the second sketch below). This highlights that AI does not reason: currently, it is a combination of brute force and logic routines whose main job is to prune the brute-force search.

A term that should be given more publicity is ANI – artificial narrow intelligence – which describes systems like ChatGPT that are excellent at summarizing pertinent information and rewording sentences but are far from being able to reason. Note, however, that the more often an LLM is asked similar questions, the more likely it is to provide a reasonable response. Again, though, this is not reasoning; it is model training.

As we continue to develop AI technologies, we must remember that true reasoning and understanding go beyond mere computational power and logic routines. We must strive for a new approach that harnesses the full potential of human intelligence and its unique qualities.
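To make the Tower of Hanoi point concrete: the puzzle has a textbook recursive solution that fits in a few lines, which is exactly why its reported failure in large reasoning models is so striking. Below is a minimal sketch in Python; the function name and move format are my own choices for illustration, not anything taken from Marcus's article or the Apple paper.

```python
def hanoi(n, source, spare, target):
    """Print the moves that transfer n disks from source to target."""
    if n == 0:
        return
    # Move the top n-1 disks out of the way, onto the spare peg.
    hanoi(n - 1, source, target, spare)
    # Move the largest remaining disk straight to the target.
    print(f"move disk {n}: {source} -> {target}")
    # Stack the n-1 smaller disks back on top of it.
    hanoi(n - 1, spare, source, target)

hanoi(3, "A", "B", "C")  # prints 2**3 - 1 = 7 moves
```

A seven-year-old discovers this strategy by handling the disks; a model trained on a large slice of the internet has seen the algorithm countless times, yet reportedly cannot execute it reliably once the number of disks grows.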
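As for the arithmetic test, the question hides a false premise, and the two possible readings diverge. Here is a hypothetical sketch of both readings (the variable names are mine):

```python
# "If 9 plus 10 is 18, what is 18 less 10?"

# Literal reading: ignore the premise and compute plain arithmetic.
literal = 18 - 10                    # -> 8, the answer ChatGPT gave

# Counterfactual reading: accept the premise, so "18" is merely a
# label for the quantity 9 + 10; removing the 10 should leave the 9.
premise_consistent = (9 + 10) - 10   # -> 9

print(literal, premise_consistent)   # 8 9
```

A system that was genuinely reasoning would arguably notice the clash between the two readings and flag it, rather than silently committing to one.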
