
You're a CS student working on your machine learning final project. Your team has built an AI system to help a local bank automate loan approval decisions. You trained your model on 10 years of historical loan data and achieved 92% accuracy in predicting successful loan repayments. Your professor and teammates are impressed with the technical performance.

However, when you analyze the results more carefully, you discover some troubling patterns. The AI consistently approves loans for applicants from wealthy neighborhoods at higher rates than equally qualified applicants from lower-income areas. It also shows bias against applicants with non-English names, even when their financial qualifications are identical.
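For concreteness, the kind of analysis that surfaces these patterns is a simple group-level audit of the model's decisions. The sketch below is a minimal, hypothetical illustration: it assumes the decisions sit in a pandas DataFrame with illustrative columns `neighborhood_income`, `name_origin`, and `approved`, none of which come from the scenario itself.

```python
import pandas as pd

# Hypothetical audit of model decisions. Column names are illustrative,
# not taken from the scenario described above.

def approval_rate_by_group(decisions: pd.DataFrame, group_col: str) -> pd.Series:
    """Fraction of applicants approved within each group."""
    return decisions.groupby(group_col)["approved"].mean()

def demographic_parity_gap(decisions: pd.DataFrame, group_col: str) -> float:
    """Largest difference in approval rate between any two groups."""
    rates = approval_rate_by_group(decisions, group_col)
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    # Toy data standing in for the model's real decisions.
    decisions = pd.DataFrame({
        "neighborhood_income": ["high", "high", "low", "low", "low", "high"],
        "name_origin": ["english", "non_english", "english",
                        "non_english", "english", "non_english"],
        "approved": [1, 1, 0, 0, 1, 1],
    })
    for col in ["neighborhood_income", "name_origin"]:
        print(col,
              approval_rate_by_group(decisions, col).to_dict(),
              "gap:", round(demographic_parity_gap(decisions, col), 2))
```

A model can score high on overall accuracy while still showing a large gap on a check like this, which is why the 92% figure on its own says nothing about whether the system treats groups fairly.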

Your initial reaction: "But we just gave the AI data and let it learn patterns. We didn't program it to be biased. How can a neutral algorithm be unfair?"

The challenge: This experience is forcing you to question your assumptions about AI being naturally objective and ethical.

Your teammate argues: "Algorithms are just math and code. They can't be biased because they don't have feelings or prejudices like humans do." What's wrong with this reasoning?
