Fabio Lauria

Machines that learn (even) from our mistakes

The boomerang effect: we teach the AI our faults, and it gives them back to us... multiplied!

April 13, 2025

Recent research has highlighted an interesting phenomenon: a "bidirectional" relationship exists between the biases in artificial intelligence models and the biases in human thinking.

This interaction creates a mechanism that tends to amplify cognitive distortions in both directions.

This research shows that AI systems not only inherit human biases from training data but, once deployed, can intensify them, in turn influencing people's decision-making processes. The result is a cycle that, if not properly managed, progressively amplifies the initial biases.

This phenomenon is particularly evident in high-stakes areas such as health care, lending, and education. In these fields, small initial biases can be amplified through repeated interactions between human operators and automated systems, gradually turning into significant disparities in outcomes.

The origins of prejudice

In human thought

The human mind naturally relies on mental shortcuts (heuristics) that can introduce systematic errors into our judgments. Dual-process theory distinguishes between:

  • Fast, intuitive thinking (System 1, prone to stereotyping)
  • Slow, reflective thinking (System 2, able to correct biases)

For example, in the medical field, physicians tend to give too much weight to initial hypotheses, neglecting contrary evidence. This phenomenon, called "confirmation bias," is replicated and amplified by AI systems trained on historical diagnostic data.

In AI models

Machine learning models perpetuate biases mainly through three channels:

  1. Unbalanced training data reflecting historical inequalities
  2. Selection of characteristics that incorporate protected attributes (such as gender or ethnicity)
  3. Feedback loops resulting from interactions with already biased human decisions

A 2024 UCL study showed that facial-recognition systems trained on people's emotional judgments inherited a 4.7 percent tendency to label faces as "sad," then amplified this tendency to 11.3 percent in subsequent interactions with users.

How they amplify each other

Data analysis of recruitment platforms shows that each round of human-algorithm collaboration increases gender bias by 8-14% through mutually reinforcing feedback mechanisms.

When HR professionals receive candidate lists from an AI that are already shaped by historical biases, their subsequent actions (such as the interview questions they choose or the performance evaluations they give) reinforce the model's distorted representations.

A 2025 meta-analysis of 47 studies found that three rounds of human-AI collaboration increased demographic disparities by 1.7-2.3 times in areas such as health care, lending, and education.
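The compounding dynamic described above can be sketched as a toy model. Everything here is illustrative: `human_gain` is an invented per-round reinforcement rate chosen so the outcome lands inside the 1.7-2.3x range the meta-analysis reports; it is not a measured quantity.

```python
# Toy sketch of mutual bias amplification (illustrative numbers only):
# each round, humans partially adopt the algorithm's skew, and the
# next model is effectively trained on those now-skewed decisions.

def run_rounds(initial_bias=0.05, human_gain=0.25, rounds=3):
    """Return the bias level after each human-AI collaboration round."""
    bias = initial_bias
    history = [bias]
    for _ in range(rounds):
        bias *= (1 + human_gain)  # feedback: each side reinforces the other
        history.append(bias)
    return history

trajectory = run_rounds()
amplification = trajectory[-1] / trajectory[0]
print(f"bias per round: {[round(b, 4) for b in trajectory]}")
print(f"amplification after 3 rounds: {amplification:.2f}x")
```

With a 25 percent per-round reinforcement, three rounds compound to roughly 1.95x the initial bias, which is why even a small per-interaction nudge matters: the growth is multiplicative, not additive.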

Strategies for measuring and mitigating bias

Quantification through machine learning

The bias-measurement framework proposed by Dong et al. (2024) allows bias to be detected without "ground truth" labels by analyzing discrepancies in decision patterns across protected groups.
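Dong et al.'s exact formulation is not reproduced here, but the general idea (detecting bias from discrepancies in decision patterns across groups, with no ground-truth labels) can be illustrated with a simple demographic-parity gap. The group names and decision logs below are invented for the example.

```python
def selection_rate(decisions):
    """Fraction of positive (e.g., 'hire' or 'approve') decisions."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest spread in positive-decision rates across protected
    groups; requires only the decisions, not correctness labels."""
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decision logs (1 = positive decision)
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],
}

gap, rates = parity_gap(decisions)
print(f"selection rates: {rates}")
print(f"parity gap: {gap:.3f}")  # 0 would indicate equal selection rates
```

The appeal of this family of metrics is exactly what the article notes: no one has to agree on which individual decisions were "correct" before the disparity becomes visible and trackable over time.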

Cognitive interventions

The "algorithmic mirror" technique developed by UCL researchers reduced gender bias in promotion decisions by 41 percent by showing managers what their historical choices would look like if they were made by an AI system.

Training protocols that alternate between AI assistance and autonomous decision-making are particularly promising, reducing bias-transfer effects from 17 percent to 6 percent in clinical diagnostic studies.

Implications for society

Organizations that implement AI systems without considering interactions with human biases face amplified legal and operational risks.

Analysis of employment discrimination lawsuits shows that AI-assisted hiring processes increase plaintiffs' success rates by 28 percent compared to traditional human-driven cases, as traces of algorithmic decisions provide clearer evidence of disparate impact.

Toward an artificial intelligence that respects freedom and efficiency

The correlation between algorithmic bias and restrictions on freedom of choice requires us to rethink technological development from the perspective of individual responsibility and safeguarding market efficiency. It is crucial to ensure that AI becomes a tool for expanding opportunities, not restricting them.

Promising directions include:

  • Market solutions that incentivize the development of unbiased algorithms
  • Increased transparency in automated decision-making processes
  • Deregulation that promotes competition among different technological solutions

Only through responsible self-regulation of the industry, combined with freedom of choice for users, can we ensure that technological innovation continues to be an engine of prosperity and opportunity for all who are willing to put their skills to work.

Fabio Lauria

CEO & Founder | Electe

CEO of Electe, I help SMEs make data-driven decisions. I write about artificial intelligence in business.
