Newsletter

Machines that learn (even) from our mistakes

The boomerang effect: we teach the AI our faults and it gives them back to us... multiplied!

AI inherits our biases, and then amplifies them. We see the biased results, and we reinforce them. A self-feeding cycle. In a UCL study, a 4.7% bias in facial recognition rose to 11.3% after human-AI interactions. In HR, each cycle increases gender bias by 8-14%. The good news? The "algorithmic mirror" technique, which shows managers what their choices would look like if made by an AI, reduces bias by 41%.

Recent research has pointed out an interesting phenomenon: a "bidirectional" relationship between the biases in artificial intelligence models and those in human thinking.

This interaction creates a mechanism that tends to amplify cognitive distortions in both directions.

This research shows that AI systems not only inherit human biases from training data but can also intensify them once deployed, in turn influencing people's decision-making processes. The result is a cycle that, if not properly managed, progressively increases the initial biases.

This phenomenon is particularly evident in important areas such as:

  • Recruitment and personnel evaluation
  • Health care and clinical diagnosis
  • Lending and credit decisions
  • Education

In these areas, small initial biases can be amplified through repeated interactions between human operators and automated systems, gradually turning into significant disparities in outcomes.

The origins of prejudice

In human thought

The human mind naturally relies on "thinking shortcuts" that can introduce systematic errors into our judgments. Dual-process theory distinguishes between:

  • Fast, intuitive thinking (prone to stereotyping)
  • Slow, reflective thinking (able to correct biases)

For example, in the medical field, physicians tend to give too much weight to initial hypotheses, neglecting contrary evidence. This phenomenon, called "confirmation bias," is replicated and amplified by AI systems trained on historical diagnostic data.

In AI models

Machine learning models perpetuate biases mainly through three channels:

  1. Unbalanced training data reflecting historical inequalities
  2. Feature selection that incorporates protected attributes (such as gender or ethnicity) or close proxies for them
  3. Feedback loops resulting from interactions with already biased human decisions

A 2024 UCL study showed that facial recognition systems trained on emotional judgments made by people inherited a 4.7 percent tendency to label faces as "sad," and then amplified this tendency to 11.3 percent in subsequent interactions with users.
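To make the loop concrete, here is a minimal Python sketch of this kind of feedback dynamic. The multiplicative update rule and the per-round gain are illustrative assumptions, chosen only so that the trajectory runs from the 4.7 percent starting point to roughly the 11.3 percent endpoint reported above; they are not the mechanism estimated in the study.

```python
# Toy simulation of a human-AI bias feedback loop. The update rule and
# the per-round gain are illustrative assumptions, not the mechanism
# measured in the UCL study.

def run_feedback_loop(initial_bias: float, human_gain: float, rounds: int) -> list[float]:
    """Each round the model is retrained on decisions humans made while
    following its (already biased) suggestions, so a fraction of the
    current bias is fed back on top of itself."""
    bias = initial_bias
    history = [bias]
    for _ in range(rounds):
        bias *= 1 + human_gain  # humans reinforce what the model shows them
        history.append(bias)
    return history

if __name__ == "__main__":
    # A gain of ~24.5% per round takes a 4.7% bias to ~11.3% in four rounds
    # (parameters chosen only to match the reported endpoints).
    for step, b in enumerate(run_feedback_loop(0.047, 0.245, 4)):
        print(f"round {step}: bias = {b:.1%}")
```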

How they amplify each other

Data analysis of recruitment platforms shows that each round of human-algorithm collaboration increases gender bias by 8-14% through mutually reinforcing feedback mechanisms.

When HR professionals receive candidate lists from an AI that are already shaped by historical biases, their subsequent interactions (such as their choice of interview questions or performance evaluations) reinforce the model's distorted representations.

A 2025 meta-analysis of 47 studies found that three rounds of human-AI collaboration increased demographic disparities by 1.7-2.3 times in areas such as health care, lending, and education.
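As a back-of-the-envelope check, the snippet below simply compounds the 8-14 percent per-round increase cited earlier. Naive compounding over three rounds yields factors of about 1.26-1.48, below the 1.7-2.3 multipliers of the meta-analysis, a hint that per-round percentages and cross-domain multipliers are not directly comparable.

```python
# Pure arithmetic on the reported range, not a model of any study:
# compound an 8-14% per-round increase over several rounds.
for per_round in (0.08, 0.14):
    for rounds in (3, 5):
        factor = (1 + per_round) ** rounds
        print(f"{per_round:.0%}/round over {rounds} rounds -> x{factor:.2f}")
```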

Strategies for measuring and mitigating bias

Quantification through machine learning

The bias-measurement framework proposed by Dong et al. (2024) allows bias to be detected without the need for "ground truth" labels, by analyzing discrepancies in decision-making patterns across protected groups.
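The framework's actual metrics are beyond the scope of a newsletter, but the underlying idea, comparing decision patterns across protected groups without ground-truth labels, fits in a few lines. The demographic-parity gap below is a generic stand-in, not Dong et al.'s measure, and the data is hypothetical.

```python
# Label-free disparity check: compare decision rates across protected
# groups. A generic demographic-parity gap, NOT Dong et al.'s framework.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],  # protected group
    "selected": [1,    1,   0,   1,   0,   0,   1,   0],   # model/human decision
})

rates = decisions.groupby("group")["selected"].mean()
print(rates)                                  # per-group selection rates
print(f"selection-rate gap: {rates.max() - rates.min():.2f}")  # 0 = parity
```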

Cognitive interventions

The "algorithmic mirror" technique developed by UCL researchers reduced gender bias in promotion decisions by 41 percent by showing managers what their historical choices would look like if they were made by an AI system.

Training protocols that alternate AI assistance with autonomous decision-making are proving particularly promising, reducing bias-transfer effects from 17 percent to 6 percent in clinical diagnostic studies.

Implications for society

Organizations that implement AI systems without considering interactions with human biases face amplified legal and operational risks.

Analysis of employment discrimination lawsuits shows that AI-assisted hiring processes increase plaintiffs' success rates by 28 percent compared with traditional human-led cases, because algorithmic decision trails provide clearer evidence of disparate impact.

Toward an artificial intelligence that respects freedom and efficiency

The correlation between algorithmic bias and restrictions on freedom of choice requires us to rethink technological development from the perspective of individual responsibility and safeguarding market efficiency. It is crucial to ensure that AI becomes a tool for expanding opportunities, not restricting them.

Promising directions include:

  • Market solutions that incentivize the development of unbiased algorithms
  • Increased transparency in automated decision-making processes
  • Deregulation that promotes competition among different technological solutions

Only through responsible self-regulation of the industry, combined with freedom of choice for users, can we ensure that technological innovation continues to be an engine of prosperity and opportunity for all who are willing to put their skills to work.

Resources for business growth

November 9, 2025

Regulating what it does not create: does Europe risk technological irrelevance?

Europe attracts only one-tenth of global investment in artificial intelligence but claims to dictate global rules. This is the "Brussels Effect": imposing regulations on a planetary scale through market power without driving innovation. The AI Act takes effect on a staggered timetable through 2027, but multinational tech companies respond with creative evasion strategies: invoking trade secrets to avoid revealing training data, producing technically compliant but incomprehensible summaries, using self-assessment to downgrade systems from "high risk" to "minimal risk," and forum shopping for member states with less stringent controls. Then there is the extraterritorial copyright paradox: the EU demands that OpenAI comply with European law even for training done outside Europe, a principle never before seen in international law. A "dual model" is emerging: limited European versions versus advanced global versions of the same AI products. The real risk: Europe becomes a "digital fortress" isolated from global innovation, with European citizens accessing inferior technologies. The Court of Justice has already rejected the "trade secrets" defense in the credit-scoring case, but the interpretive uncertainty remains huge: what exactly does "sufficiently detailed summary" mean? No one knows. The final unresolved question: is the EU creating an ethical third way between U.S. capitalism and Chinese state control, or simply exporting bureaucracy to an industry where it does not compete? For now: world leader in AI regulation, marginal in its development. Vaste programme.
November 9, 2025

Outliers: Where Data Science Meets Success Stories

Data science has turned the paradigm on its head: outliers are no longer "errors to be eliminated" but valuable information to be understood. A single outlier can completely distort a linear regression model (changing the slope from 2 to 10), but eliminating it could mean losing the most important signal in the dataset. Machine learning brings sophisticated tools: Isolation Forest isolates outliers by building random decision trees, Local Outlier Factor analyzes local density, and autoencoders reconstruct normal data and flag what they cannot reproduce. There are global outliers (a temperature of -10°C in the tropics), contextual outliers (a €1,000 purchase in a low-income neighborhood), and collective outliers (synchronized spikes in network traffic indicating an attack). The parallel with Gladwell: the "10,000-hour rule" is disputed; Paul McCartney himself observed that many bands put in their 10,000 hours in Hamburg without success, so the theory is not infallible. Asian success in mathematics is not genetic but cultural: the Chinese number system is more intuitive, and rice cultivation rewards constant incremental improvement, whereas Western agriculture favored territorial expansion. Real applications: UK banks recover 18% of potential losses through real-time anomaly detection, manufacturing detects microscopic defects that human inspection would miss, and health care validates clinical-trial data with anomaly detection at over 85% sensitivity. The final lesson: just as data science has moved from eliminating outliers to understanding them, we should see unconventional careers not as anomalies to be corrected but as valuable trajectories to be studied.
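For readers who want to experiment with the detectors mentioned in the piece, here is a small sketch using scikit-learn's IsolationForest and LocalOutlierFactor on synthetic data:

```python
# Two of the detectors mentioned above, on synthetic 2-D data.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, size=(200, 2)),  # normal cluster
                    [[8.0, 8.0]]])                    # one global outlier

iso = IsolationForest(random_state=0).fit_predict(X)     # -1 flags outliers
lof = LocalOutlierFactor(n_neighbors=20).fit_predict(X)  # density-based

print("IsolationForest flagged:", np.where(iso == -1)[0])
print("LOF flagged:", np.where(lof == -1)[0])
```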

Data science has turned the paradigm on its head: outliers are no longer "errors to be eliminated" but valuable information to be understood. A single outlier can completely distort a linear regression model-change the slope from 2 to 10-but eliminating it could mean losing the most important signal in the dataset. Machine learning introduces sophisticated tools: Isolation Forest isolates outliers by building random decision trees, Local Outlier Factor analyzes local density, Autoencoders reconstruct normal data and report what they cannot reproduce. There are global outliers (temperature -10°C in tropics), contextual outliers (spending €1,000 in poor neighborhood), collective outliers (synchronized spikes traffic network indicating attack). Parallel with Gladwell: the "10,000 hour rule" is disputed-Paul McCartney dixit "many bands have done 10,000 hours in Hamburg without success, theory not infallible." Asian math success is not genetic but cultural: Chinese number system more intuitive, rice cultivation requires constant improvement vs Western agriculture territorial expansion. Real applications: UK banks recover 18% potential losses via real-time anomaly detection, manufacturing detects microscopic defects that human inspection would miss, healthcare valid clinical trials data with 85%+ sensitivity anomaly detection. Final lesson: as data science moves from eliminating outliers to understanding them, we must see unconventional careers not as anomalies to be corrected but as valuable trajectories to be studied.