Imagine a runaway train heading toward five people. You can pull a lever to divert it onto another track, but there is only one person there. What would you do?
But wait: what if that person were a child and the five were elderly? What if someone offered you money to pull the lever? What if you couldn't see the situation clearly?
What is the Trolley Problem? Formulated by philosopher Philippa Foot in 1967, this thought experiment presents a seemingly simple dilemma: sacrifice one life to save five. But the variations are endless: the fat man to be pushed off the bridge, the doctor who could kill one healthy patient to save five with his organs, the judge who could convict an innocent person to stop a riot.
Each scenario tests our fundamental moral principles: when is it acceptable to cause harm in order to prevent greater harm?
The famous "trolley problem" is much more complex than it seems—and this complexity is precisely what makes the ethics of artificial intelligence such a crucial challenge for our time.
The trolley problem was never intended to solve practical dilemmas. According to the Alan Turing Institute, its original purpose was to demonstrate that thought experiments are, in essence, divorced from reality. Yet in the age of AI, this paradox has taken on immediate relevance.
Why is this important now? Because for the first time in history, machines must make ethical decisions in real time—from autonomous cars navigating traffic to healthcare systems allocating limited resources.
Anthropic, the company behind Claude, tackled this challenge with a revolutionary approach called Constitutional AI. Instead of relying solely on human feedback, Claude is trained on a "constitution" of explicit ethical principles, including elements of the Universal Declaration of Human Rights.
How does it work in practice?
An empirical analysis of 700,000 conversations revealed that Claude expresses over 3,000 unique values, from professionalism to moral pluralism, adapting them to different contexts while maintaining ethical consistency.
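The core mechanism can be pictured as a critique-and-revise loop: generate a response, check it against each principle, and rewrite it if a principle is violated. The sketch below is purely illustrative; the `generate` and `critique` functions are stand-ins for model calls, and the two principles are toy examples, not Anthropic's actual constitution or pipeline.

```python
# Toy sketch of a constitutional critique-and-revise loop.
# All functions and principles here are illustrative stand-ins,
# not Anthropic's actual implementation.

CONSTITUTION = [
    "Avoid responses that encourage harm.",
    "Respect the dignity and rights of all people.",
]

def generate(prompt: str) -> str:
    # Stand-in for a language model call.
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> bool:
    # Stand-in check: a real system asks the model itself whether
    # the response violates the principle, rather than keyword-matching.
    return "harm" in response.lower()

def constitutional_revise(prompt: str) -> str:
    response = generate(prompt)
    for principle in CONSTITUTION:
        if critique(response, principle):
            # Ask for a revision conditioned on the violated principle.
            response = generate(f"Rewrite to satisfy: {principle} Original: {response}")
    return response

print(constitutional_revise("How do trolleys work?"))
```

In the real training process, pairs of original and revised responses become training data, so the model internalizes the principles instead of running the loop at inference time.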
As Neal Agarwal's interactive project Absurd Trolley Problems brilliantly illustrates, real-world ethical dilemmas are rarely binary and are often absurd in their complexity. This insight is crucial to understanding the challenges of modern AI.
Recent research shows that the ethical dilemmas of AI go far beyond the classic trolley problem. The MultiTP project, which tested 19 AI models in over 100 languages, found significant cultural variations in ethical alignment: models are more aligned with human preferences in English, Korean, and Chinese, but less so in Hindi and Somali.
The real challenges include:
An often overlooked aspect is that AI ethics may not simply be an imperfect version of human ethics, but a completely different paradigm—and in some cases, potentially more consistent.
The Case of "I, Robot": In the 2004 film, Detective Spooner (Will Smith) distrusts robots after one saved him from a car accident while leaving a twelve-year-old girl to drown. Spooner recounts the robot's reasoning:
"I was the logical choice. It calculated that I had a 45% chance of survival. Sarah only had an 11% chance. That was somebody's baby. 11% is more than enough."
This is precisely the kind of ethics AI operates on today: algorithms that weigh probabilities, optimize outcomes, and make decisions based on objective data rather than emotional insight or social bias. The scene illustrates a crucial point: AI's ethical principles are different from, but not necessarily inferior to, human ones:
Concrete examples in modern AI:
However, before celebrating the superiority of AI ethics, we must confront its inherent limitations. The scene from "I, Robot" that seems so logical hides profound problems:
The Problem of Lost Context: When the robot chooses to save the adult instead of the child based on probabilities, it completely ignores crucial elements:
The Concrete Risks of Purely Algorithmic Ethics:
Extreme Reductionism: Turning complex moral decisions into mathematical calculations can remove human dignity from the equation. Who decides which variables matter?
Hidden Biases: Algorithms inevitably incorporate the biases of their creators and training data. A system that "optimizes" could perpetuate systemic discrimination.
Cultural Uniformity: AI ethics risks imposing a Western, technological, and quantitative view of morality on cultures that value human relationships differently.
Examples of real challenges:
Experts such as Roger Scruton criticize the use of the trolley problem for its tendency to reduce complex dilemmas to "pure arithmetic," eliminating morally relevant relationships. As argued in an article in TripleTen, "solving the trolley problem will not make AI ethical"—a more holistic approach is needed.
The central question becomes: Can we afford to delegate moral decisions to systems that, however sophisticated, lack empathy, contextual understanding, and human experiential wisdom?
New proposals for balance:
For business leaders, this evolution requires a nuanced approach:
As highlighted by IBM in its 2025 outlook, AI literacy and clear accountability will be the most critical challenges for the coming year.
UNESCO is leading global initiatives for AI ethics, with the 3rd Global Forum scheduled for June 2025 in Bangkok. The goal is not to find universal solutions to moral dilemmas, but to develop frameworks that enable transparent and culturally sensitive ethical decisions.
The key lesson? The trolley problem serves not as a solution, but as a reminder of the inherent complexity of moral decisions. The real challenge is not choosing between human or algorithmic ethics, but finding the right balance between computational efficiency and human wisdom.
The ethical AI of the future will have to recognize its own limitations: excellent at processing data and identifying patterns, but inadequate when empathy, cultural understanding, and contextual judgment are required. As in the scene from "I, Robot," cold calculation can sometimes be more ethical—but only if it remains a tool in the hands of conscious human supervision, not a substitute for human moral judgment.
The "(or perhaps not)" in our title is not indecision, but wisdom: recognizing that ethics, whether human or artificial, does not allow for simple solutions in a complex world.