Newsletter

AI in Education: Don't Panic, We Need Facts

Sensational headlines and questionable methodologies are distorting the debate on artificial intelligence in education. The question is not whether AI will transform education, but how we can guide this transformation responsibly. The answer lies in rigorous science, not sensational headlines.

"ChatGPT makes you stupid, " "AI damages your brain, " "MIT study: artificial intelligence causes cognitive decline. " In recent months, alarmist headlines like these have dominated the mainstream media, fueling unfounded fears about the use of artificial intelligence in education and work. But what does the science really say? A critical analysis of the literature reveals a much more complex and, above all, more optimistic reality.

The MIT Case: When Methodology Meets the Media

The MIT Media Lab study "Your Brain on ChatGPT" sparked a wave of alarmist media coverage, often based on distorted interpretations of the results. Published as a preprint (and therefore not peer-reviewed), the study involved just 54 participants from the Boston area, with only 18 completing the crucial fourth session.

Critical Methodological Limitations

Inadequate sample: With a total of 54 participants, the study lacks the statistical power necessary to draw generalizable conclusions. As the researchers themselves admit, "the sample is small" and "homogeneous: people in the vicinity of MIT certainly do not reflect the distribution of people in the world."
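To see why a 54-person sample is such a serious limitation, consider a standard power calculation. The sketch below is a rough normal-approximation estimate (assuming a two-group comparison at the conventional α = 0.05 and 80% power; the exact group split in the study differs), showing how many participants per group are needed to reliably detect an effect:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sample comparison,
    using the normal-approximation formula:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (Cohen's d = 0.5) needs about 63 participants per group;
# even a "large" effect (d = 0.8) needs about 25 -- more than the ~18 per
# condition available in the MIT study.
print(n_per_group(0.5))  # -> 63
print(n_per_group(0.8))  # -> 25
```

In other words, with groups this small, only very large effects could be detected reliably; subtler differences in cognition would be statistically invisible or unstable.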

Problematic experimental design: Participants had to write SAT essays in just 20 minutes—an artificial constraint that naturally pushes toward copy-pasting rather than thoughtful integration. This design "well mimics natural real-life constraints" such as "the deadline is tomorrow" or "I'd rather play video games," but it does not represent a pedagogically informed use of AI.

Confounding effect of familiarization: The "brain only" group showed progressive improvement in the first three sessions simply by becoming more familiar with the task. When the AI group had to write without assistance in the fourth session, they were tackling the task for the first time without the benefit of practice.

Contrasting Science: Strong Evidence of Cognitive Benefits

While the media focused on MIT's alarming findings, far more rigorous research was producing radically different results.

The Ghana Study: Superior Methodology, Opposite Results

Research conducted at Kwame Nkrumah University of Science and Technology followed 125 university students in a randomized controlled design for a full semester. The results directly contradict the MIT conclusions:

Critical Thinking: Students who used ChatGPT improved from 28.4 to 39.2 points (+38%), significantly outperforming the control group (from 24.9 to 30.6, +23%).

Creative Thinking: Even more dramatic increases, from 57.2 to 92.0 points (+61%) for the ChatGPT group, with improvements in all six dimensions measured: courage, innovative research, curiosity, self-discipline, doubt, and flexibility.

Reflective Thinking: Substantial improvements from 35.1 to 56.6 points (+61%), indicating greater capacity for self-reflection and metacognition.
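The percentage gains above follow directly from the reported pre/post means; a quick arithmetic check:

```python
def pct_gain(pre: float, post: float) -> int:
    """Percentage improvement from pretest to posttest mean, rounded."""
    return round((post - pre) / pre * 100)

# Pre/post means as reported for the Ghana study
print(pct_gain(28.4, 39.2))  # critical thinking, ChatGPT group   -> 38
print(pct_gain(24.9, 30.6))  # critical thinking, control group   -> 23
print(pct_gain(57.2, 92.0))  # creative thinking, ChatGPT group   -> 61
print(pct_gain(35.1, 56.6))  # reflective thinking, ChatGPT group -> 61
```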

Crucial methodological differences: The Ghana study used validated scales (Cronbach α > 0.89), confirmatory factor analysis, ANCOVA controls for pretest scores, and—crucially—integrated ChatGPT into a real educational setting with appropriate pedagogical scaffolding.
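The "Cronbach α > 0.89" figure refers to a standard measure of a scale's internal consistency; for readers unfamiliar with it, here is a minimal sketch (the `scores` data below is invented for illustration, not taken from the study):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    `items` is a list of columns, one per scale item; each column holds
    one score per respondent."""
    k = len(items)
    item_vars = sum(variance(col) for col in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Hypothetical 3-item scale answered by 5 respondents (invented data):
scores = [[3, 4, 5, 4, 5],
          [2, 4, 5, 3, 5],
          [3, 5, 5, 4, 4]]
print(round(cronbach_alpha(scores), 2))  # -> 0.9, i.e. high internal consistency
```

Values above roughly 0.8–0.9, as reported for the Ghana scales, indicate that the items of each scale measure the same underlying construct consistently.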

Harvard/BCG Study: The Gold Standard of Research

The most rigorous study available involved 758 Boston Consulting Group consultants in a pre-registered, controlled experiment. The results were unequivocal:

  • Productivity: +12.2% tasks completed, +25.1% speed of completion
  • Quality: +40% improvement in the quality of results
  • Democratization: Initially weaker performers saw increases of 43%, while those already strong saw increases of 17%.

As Ethan Mollick, co-author of the study, points out: "Consultants who used ChatGPT outperformed those who didn't, by a wide margin. On every dimension. In every way we measured performance."

Meta-Analysis: A Broader Perspective

A systematic review of research on AI in higher education has identified substantial benefits:

  • Personalized learning experiences
  • Improved support for mental health
  • Inclusion of diverse learning needs
  • Improved communication efficiency

A study of 401 Chinese university students using structural equation modeling confirmed that "both AI and social media have a positive impact on academic performance and mental well-being."

The Media Problem: Sensationalism vs. Science

The media coverage of the MIT study is a prime example of how sensationalism can distort the public's understanding of science.

Misleading Headlines vs. Reality

Typical headline: "MIT study shows ChatGPT makes you stupid"
Reality: Preliminary, non-peer-reviewed study with 54 participants finds differences in neural connectivity in artificial tasks.

Typical headline: "AI damages the brain"
Reality: EEG shows different activation patterns, which can be interpreted as neural efficiency rather than damage.

Typical headline: "ChatGPT causes cognitive decline"
Reality: A study with serious methodological limitations contradicted by more rigorous research.

The Irony of Anti-AI "Traps"

The lead researcher at MIT, Nataliya Kosmyna, admitted to inserting "traps" into the paper to prevent LLMs from summarizing it accurately. Ironically, many social media users then used LLMs to summarize and share the study, inadvertently demonstrating the practical usefulness of these tools.

The "Jagged Frontier": Understanding the True Limits of AI

Serious research on AI in education does not deny the existence of challenges, but frames them in a more sophisticated way. The Harvard study's concept of a "jagged technological frontier" illustrates that AI excels at some tasks while struggling with others that appear deceptively similar.

Key Factors for Success

Timing of introduction: Evidence suggests that developing basic skills before introducing AI can maximize benefits. As the MIT study itself notes, participants in the "Brain-to-LLM" group showed superior memory recall and activation of the occipito-parietal and prefrontal areas.

Pedagogical design: The Ghana study demonstrates the importance of integrating AI with appropriate educational scaffolding, well-designed prompts, and clear learning objectives.

Significant context: The use of AI in real educational settings, rather than in artificial tasks, produces dramatically different results.

Artificial intelligence can help you learn better and achieve your goals faster, if used correctly.

The Consequences of Alarmism

Biased media coverage is not just an academic problem—it has real consequences for the adoption of potentially beneficial technologies.

Impact on Educational Policies

As Kosmyna herself admits: "What motivated me to publish it now, rather than waiting for a full peer review, is that I'm afraid that in 6-8 months, some policy maker will decide 'let's do GPT kindergarten'. I think that would be absolutely negative and harmful."

This statement reveals an advocacy motive that should raise red flags about the scientific neutrality of the research.

Adoption Bias

A survey of 28,698 software engineers showed that only 41% had tried AI tools, with adoption even lower among women (31%) and engineers over 40 (39%). Alarmist headlines contribute to these biases, potentially depriving many workers of the proven benefits of AI.

Implications for AI Companies

Responsible Communication

AI companies must balance enthusiasm for the technology with honest communication about its limitations. Serious research findings suggest real benefits when AI is implemented thoughtfully, but also the need to:

  • Train users on best practices
  • Design systems that promote cognitive engagement
  • Monitor long-term outcomes

Beyond Sensationalism

Instead of reacting defensively to negative headlines, the AI industry should:

  1. Invest in rigorous research with large samples and robust methodologies
  2. Collaborate with educators to develop effective implementation frameworks
  3. Promote media literacy to help the public distinguish between serious research and sensationalism

Conclusions: A Call for Scientific Responsibility

The story of the MIT study and its media coverage offers important lessons for all stakeholders in the AI ecosystem.

For Researchers

The pressure to publish "newsworthy" results must not compromise methodological rigor. Preprints can be useful for scientific debate, but require careful communication about their limitations.

For the Media

The public deserves accurate coverage that distinguishes between:

  • Preliminary research vs. established evidence
  • Correlations vs. causations
  • Methodological limitations vs. general conclusions

For the AI Industry

The future of AI in education depends on thoughtful implementations based on robust evidence, not reactions to the latest sensational headlines.

The True Promise of Educational AI

While the debate rages in the headlines, serious research is revealing AI's true potential to democratize access to high-quality learning experiences. The Ghana study shows that when implemented appropriately, AI can:

  • Level the playing field for students from different backgrounds
  • Personalize learning in ways that were previously impossible
  • Free educators for more meaningful activities
  • Develop 21st-century skills that are crucial for the future

The question is not whether AI will transform education, but how we can responsibly guide this transformation. The answer lies in rigorous science, not sensational headlines.


To stay up to date on serious scientific research on AI (without sensationalism), follow our company blog and subscribe to our newsletter.

Data science has turned the paradigm on its head: outliers are no longer "errors to be eliminated" but valuable information to be understood. A single outlier can completely distort a linear regression model-change the slope from 2 to 10-but eliminating it could mean losing the most important signal in the dataset. Machine learning introduces sophisticated tools: Isolation Forest isolates outliers by building random decision trees, Local Outlier Factor analyzes local density, Autoencoders reconstruct normal data and report what they cannot reproduce. There are global outliers (temperature -10°C in tropics), contextual outliers (spending €1,000 in poor neighborhood), collective outliers (synchronized spikes traffic network indicating attack). Parallel with Gladwell: the "10,000 hour rule" is disputed-Paul McCartney dixit "many bands have done 10,000 hours in Hamburg without success, theory not infallible." Asian math success is not genetic but cultural: Chinese number system more intuitive, rice cultivation requires constant improvement vs Western agriculture territorial expansion. Real applications: UK banks recover 18% potential losses via real-time anomaly detection, manufacturing detects microscopic defects that human inspection would miss, healthcare valid clinical trials data with 85%+ sensitivity anomaly detection. Final lesson: as data science moves from eliminating outliers to understanding them, we must see unconventional careers not as anomalies to be corrected but as valuable trajectories to be studied.