The daily use of ChatGPT by more than 900 million people has prompted scientific warnings about cognitive decline. Several studies caution that delegating intellectual tasks to artificial intelligence atrophies reasoning and impairs long-term memory.
AI reduces brain activation
A study by the Massachusetts Institute of Technology monitored fifty volunteers with electroencephalograms. The data showed that using AI tools decreases neural connectivity and reduces activation in specific brain areas. Participants who used ChatGPT regularly could not match the results of those who had worked without assistance when they later had to rely on their own resources.
Experts call this phenomenon cognitive debt, or cognitive surrender: the accumulated brain cost of entrusting problem-solving and critical thinking to chatbots. This dynamic weakens neural connections and limits the user's intellectual autonomy.
"Delegating all our thinking to a machine may seem like a good idea at first but, in the long run, it is a disadvantage for the brain." - Rodrigo Quian Quiroga, neuroscientist and ICREA professor at the Hospital del Mar Research Institute
A joint analysis by Microsoft and Carnegie Mellon University corroborates these risks. The report notes that misuse of systems such as Gemini, Claude, or Grok erodes essential cognitive faculties. Meanwhile, the technology industry ignores these immediate threats to mental health and focuses instead on future or dystopian risks.
The human brain loses critical thinking
Ignacio Morgado Bernal, emeritus professor of Psychobiology at the UAB, observes that the arrival of these tools will alter brain function. The specialist insists on the need to study how neural connections change, whether through disuse or through new cognitive demands.
Rodrigo Quian Quiroga notes that digital devices modify brain function even as they allow tasks to be delegated. The crux of the matter lies in distinguishing which functions can be outsourced and which cannot: it is one thing to rely on ChatGPT for mechanical tasks and quite another to let it make important decisions without supervision.
ChatGPT has no original ideas and no capacity for independent judgment. These systems are statistical models programmed to offer plausible responses aligned with the user's point of view; they do not function as oracles dictating absolute truth.
The educational sphere is already reacting. A study from Shanghai University recommends exercising great caution in schools so as not to harm the development of critical thinking, and more than nine hundred teachers in the Netherlands have called for a halt to the uncritical adoption of artificial intelligence in academia.
In Spain, many teachers are changing their assessment methods to prevent ChatGPT from replacing the learning process altogether. Research from Peking University finds that excessive dependence fosters passive, inert thinking: blind trust in the machine's answers drastically reduces autonomous cognitive activity.