Lesson 7


When AI Gets It Wrong

AI is not always right. It can be biased, confused, or just wrong. Understanding AI mistakes is as important as understanding AI successes — especially if you might use AI to make decisions.

10 minutes

Let's Learn

What you will learn today

Understand AI bias, how errors propagate from data to decisions, and what makes AI failures serious.

🔁

A True Story About AI Hiring

Amazon built an AI to screen job applications. It was trained on 10 years of successful hires at Amazon. The AI learned that male applicants were more likely to be hired — because historically, most successful hires had been male. Result: the AI consistently penalised CVs that included the word 'women's' (as in 'women's chess club'). It downrated candidates from all-women's colleges. Amazon shut the project down in 2018. The AI had not been programmed to be sexist — it had learned it from biased historical data. This is AI bias: when a system's outputs are systematically unfair to a group of people.

Where Bias Comes From

AI bias enters the system in several ways (a toy code sketch of the first mechanism follows the summary list below):

1. Biased training data: if historical data reflects past discrimination, the AI learns and perpetuates it.
2. Missing data: if some groups are underrepresented in training data, the AI works less well for them.
3. Wrong target variable: optimising for the wrong thing (e.g. 'past hires' instead of 'job performance').
4. Feedback loops: AI makes biased decisions → those decisions create new data → new data reinforces the bias.
5. Human labelling bias: if the humans who labelled training data had biases, those biases are encoded in the labels.

  • Biased historical data
  • Missing representation in data
  • Optimising the wrong goal
  • Feedback loops amplifying bias
  • Biased human labelling
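To make the first mechanism concrete, here is a toy sketch in plain Python. Everything in it is invented for illustration (the CV snippets, the scoring rule); no real system works this crudely, but the failure mode is the same one Amazon hit: a word associated with one group appears only in rejected historical CVs, so the 'model' learns to penalise it.

```python
from collections import Counter

# Invented historical data: CV snippets and whether the applicant
# was hired. Past hiring favoured one group, so a word associated
# with the other group ("women's") appears only in rejected CVs.
history = [
    ("captain chess club", True),
    ("captain debate team", True),
    ("women's chess club captain", False),
    ("women's debate society", False),
    ("chess club member", True),
]

hired_words, rejected_words = Counter(), Counter()
for cv, hired in history:
    (hired_words if hired else rejected_words).update(cv.split())

def score(cv: str) -> int:
    # Naive 'model': each word scores +1 for every hired CV it
    # appeared in and -1 for every rejected CV it appeared in.
    return sum(hired_words[w] - rejected_words[w] for w in cv.split())

# Two applicants with identical achievements:
print(score("chess club captain"))          # 3
print(score("women's chess club captain"))  # 1 -- "women's" is penalised
```

The point is not the scoring rule (a real model is far more complex) but the pattern: the bias was never programmed in; it was read out of the data.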

📐 Real Cases of AI Failure

These are documented, real cases:

  • Face recognition: MIT researcher Joy Buolamwini found that leading facial recognition systems had error rates of up to 34% on darker-skinned women, versus under 1% on lighter-skinned men.
  • Healthcare AI: an algorithm used in US hospitals to prioritise care assigned lower risk scores to Black patients than to equally sick White patients, because it used healthcare cost as a proxy for need (Black patients had historically spent less on healthcare due to economic inequality).
  • Predictive policing: AI used to predict crime locations was trained on historical arrest data, which reflected areas with more police patrols, not necessarily more crime. This led to over-policing of the same communities.
  • Loan approval: AI systems denied loans to ethnic minority applicants more often than to white applicants with similar incomes.

Why AI Errors Matter More Than Calculator Errors

When a calculator gives a wrong answer, you check it and move on. AI errors are different:

  • Scale: one biased AI system makes decisions for millions of people simultaneously.
  • Invisibility: algorithmic decisions can be opaque; you may not know AI decided something about you.
  • Authority: people trust 'the computer' and often do not question its output.
  • Feedback loops: biased decisions create biased data, which trains the next AI.

A single biased AI loan algorithm can affect more people in a week than a single biased loan officer would in a lifetime.
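The feedback-loop point can also be shown with a toy simulation. All numbers below are invented, and the 'retraining' rule is deliberately simplistic; it loosely mirrors the predictive-policing pattern above: both districts have identical real crime, but arrests are only recorded where patrols go, and each year's patrol allocation is retrained on the previous year's arrests.

```python
# Invented numbers only: both districts have the same real crime,
# but the starting patrol allocation is slightly biased.
true_crime = {"A": 10, "B": 10}
patrols = {"A": 6, "B": 4}

for year in range(1, 5):
    # Crime is only recorded where a patrol is present to see it,
    # so arrest counts reflect patrols, not underlying crime.
    arrests = {d: min(true_crime[d], patrols[d]) for d in true_crime}
    # The 'AI' is retrained on arrest data and concentrates patrols
    # on apparent hotspots (squaring exaggerates small differences).
    weights = {d: arrests[d] ** 2 for d in arrests}
    total = sum(weights.values())
    patrols = {d: round(10 * weights[d] / total) for d in weights}
    print(f"year {year}: arrests={arrests} -> next patrols={patrols}")

# The allocation drifts to {'A': 10, 'B': 0}: a small initial bias,
# fed back through the data, grows until one district is ignored
# entirely, even though real crime never changed.
```

Nothing in the loop ever looks at true_crime directly; the system can only see its own past decisions, which is exactly why the bias compounds instead of correcting.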

💡

AI Is Not Neutral Just Because It Is Automated

A common assumption: 'The computer decided, so it must be fair.' This is wrong. Every AI system inherits the values, assumptions, and biases of:

  • The data it was trained on
  • The people who chose that data
  • The objective it was told to optimise
  • The society that produced the original data

AI is a mirror — and sometimes a magnifying glass — of the biases already present in human systems.

⚠️

The Deepfake Problem

AI can now generate realistic fake images, videos, and voices of real people saying and doing things they never said or did. These are called deepfakes. Deepfakes are used for:

  • Spreading political misinformation
  • Creating fake evidence in legal cases
  • Fraud (fake CEO voice calls authorising payments)
  • Non-consensual intimate imagery of real people

As AI-generated media becomes indistinguishable from real media, verifying what is real becomes increasingly important. Always check multiple sources before believing dramatic or surprising footage.

🔍

Misconception: 'If Nobody Meant to Create Bias, It Is Not Really Bias'

Bias does not require intention. Amazon's engineers did not intend to build a sexist AI — but they built one. Bias in AI is about outcomes, not intentions. If an AI system consistently makes worse decisions for one group of people compared to another — regardless of why — that is bias, and it needs to be fixed. This is why AI fairness research is an entire field: testing AI systems rigorously across different demographic groups to find and fix disparate impacts before deployment.
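What does 'testing rigorously across demographic groups' look like in practice? At its simplest, it can start with a few lines of code. The sketch below (plain Python, invented decisions) compares a system's positive-decision rate per group and applies the 'four-fifths rule', a long-standing red-flag heuristic from US employment guidelines. Real fairness audits go much further than this.

```python
# Invented audit data: (group, did_the_model_say_yes)
decisions = [
    ("group_1", True), ("group_1", True), ("group_1", True), ("group_1", False),
    ("group_2", True), ("group_2", False), ("group_2", False), ("group_2", False),
]

# Positive-decision rate for each group.
rates = {}
for group in sorted({g for g, _ in decisions}):
    outcomes = [yes for g, yes in decisions if g == group]
    rates[group] = sum(outcomes) / len(outcomes)
print(rates)  # {'group_1': 0.75, 'group_2': 0.25}

# Four-fifths rule: flag the system if the lower rate is less than
# 80% of the higher rate. Here 0.25 / 0.75 = 0.33, so it is flagged.
ratio = min(rates.values()) / max(rates.values())
print(f"impact ratio: {ratio:.2f} -> flagged: {ratio < 0.8}")
```

A failed check like this does not by itself prove discrimination, but it tells reviewers exactly where to look before the system is deployed, which is the point of auditing.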

Challenge Round

You Are the AI Reviewer

A school wants to use AI to decide which students should receive extra tutoring support. The AI will be trained on past students' grades and tutoring outcomes. As the reviewer, identify:

1. What biases might creep into this system?
2. How would you test whether it is fair?
3. What safeguards would you require before it is used?
4. Should some decisions always involve a human, not just AI?

When AI Gets It Wrong

AI bias comes from biased data, wrong objectives, and missing representation. Real systems have caused real harm to real people in hiring, healthcare, policing, and loans. AI errors matter more than individual human errors because of scale and invisibility. And bias without intent is still bias — what matters is the outcome.

🌟

You now understand how AI bias works and why it matters — this makes you a critical, thoughtful user and citizen in an AI-shaped world.

Final lesson: you and AI — how to use it wisely, stay safe, and be a creator, not just a consumer.

Key Points


  • AI learns from data — if the data is biased, the AI is biased
  • Example: if a hiring AI is trained mostly on CVs from one group, it may unfairly reject others
  • AI can be confidently wrong — it doesn't know what it doesn't know
  • Critical thinking applies to AI output just as it applies to anything else
  • Good rule: AI is a helpful tool, not a final judge

Glossary


  • Bias: சார்பு
  • Error: தவறு
  • Trust: நம்பகம்
  • Critical thinking: திறனாய்வு சிந்தனை
  • Consequence: விளைவு

Practice Activities

Quiz

Answer each question to check your understanding.

Question 1 of 3

How did Amazon's AI hiring tool become biased against women?

Fill in the Blanks

Type the missing word and press Check or Enter.


1. When an AI system consistently produces worse outcomes for one group compared to another, this is called AI ____.
2. AI-generated realistic fake videos of real people are called ____.
3. When a biased AI makes decisions that create new data, which then reinforces the AI's original bias, this is called a feedback ____.