4 Comments
Sam Illingworth:

Thank you, Marcela, for this excellent post. I totally agree that many companies are far too keen to get rid of decades' worth of experience that could and should inform how AI systems are built and implemented. It seems like such a destructive oversight. Also, thanks for providing all the sources and extra reading. 🙏

Stephen D. Carver:

You're absolutely right, Marcela. This is one of the most controversial and difficult aspects of AI to grasp: how bias gets embedded at every stage.

What makes it even more complex is that bias shows up in so many different forms.

A few examples:

- Gender bias in language models: AI trained on historical text often associates "doctor" with male pronouns and "nurse" with female ones, perpetuating stereotypes in supposedly neutral tools (see the sketch after this list for how that kind of association can be measured).

- Geographic bias in image recognition: Facial recognition systems perform significantly worse on faces from underrepresented regions, simply because training data was primarily collected in Western countries.

- Socioeconomic bias in credit scoring: Algorithms trained on historical lending data can deny loans to people from certain neighborhoods, not because of their actual creditworthiness, but because the model learned patterns from past discriminatory practices.

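As a small illustration of that first point, here is a minimal sketch of the kind of similarity probe used to surface gendered associations in word embeddings. The vectors below are fabricated purely for demonstration; a real probe would load embeddings from a trained model such as word2vec or GloVe.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings", made up for illustration only.
# In a real audit these would come from a trained embedding model.
emb = {
    "doctor": np.array([0.9, 0.1, 0.3, 0.7]),
    "nurse":  np.array([0.2, 0.8, 0.3, 0.7]),
    "he":     np.array([1.0, 0.0, 0.2, 0.5]),
    "she":    np.array([0.0, 1.0, 0.2, 0.5]),
}

for word in ("doctor", "nurse"):
    # A positive gap means the word sits closer to "he" than to "she".
    gap = cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])
    print(f"{word}: similarity(he) - similarity(she) = {gap:+.3f}")
```

With these toy vectors, "doctor" lands measurably closer to "he" and "nurse" closer to "she", which is exactly the pattern audits have found in embeddings trained on real text; debiasing work (e.g., Bolukbasi et al., 2016) starts from this kind of measurement.
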
The challenge isn't just fixing one type of bias; it's recognizing that AI reflects all of our societal inequalities back at us, amplified and automated.

Really important work. Thank you for writing this.

Marcela Distefano:

Thanks, Stephen, for reading and for those examples. We'll keep digging!

Marcela Distefano:

Thanks, Sam, it means a lot coming from you!
