Alexa, Fix This! Let’s Talk AI Biases

Flat World Partners
5 min read · Jan 7, 2025


AI can contain Siri-ously flawed data and biases — affecting hiring, credit, and justice. Let’s uncover the flaws and explore solutions.


These days, it feels like AI is everywhere — whether it’s a new app, virtual assistant, or some shiny gadget promising to “revolutionize” our lives. And for reasons no one can seem to explain, many of these tools are graced with female names. Nothing says technological advancement like reinforcing 1950s gender roles, right?

Call me old-fashioned, but I wasn’t thrilled about inviting another “helpful woman” into my home — especially one designed for domestic tasks. I even forbade my then-boyfriend, now-husband, from buying one. Of course, I eventually caved. My big rebellion? Switching Siri’s voice to a man’s. What can I say? I’m no revolutionary. But the deeper I dig, the more I think we might need one. The evidence is clear: the AI we’re growing increasingly dependent on is deeply flawed, riddled with hidden biases that, if not corrected, will haunt, and hurt, future generations.

Artificial intelligence has transformed industries, promising efficiency and objectivity. Yet beneath its polished algorithms lie hidden biases that disproportionately impact women, people of color, and other marginalized groups. These biases — often unintentional but damaging nonetheless — continue to perpetuate systemic inequities in hiring, credit approvals, healthcare, and even policing. The root of these problems lies in the data AI systems are trained on and the humans designing them.

Algorithms learn from existing data sets, but much of this data reflects historical inequities. For example, resume-screening algorithms trained on decades-old hiring practices often reinforce biases against women and minorities, favoring male candidates for leadership roles over equally qualified female candidates (Fisher Phillips). The results are not limited to hiring. Algorithms also rely on proxy variables, such as zip codes, which are often closely tied to race or socioeconomic status, unintentionally perpetuating discrimination in areas like credit scoring and loan approvals (Carnegie Mellon University).
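To see how a proxy variable does this, here is a toy sketch in Python — entirely made-up numbers and zip codes, not real data or any specific lender’s model. The “model” never sees group membership at all; it only looks at zip code. But because zip code correlates with group in the (synthetically biased) history it learns from, it reproduces the disparity anyway:

```python
# Toy sketch (synthetic data, hypothetical numbers): a model that never
# sees a protected attribute can still discriminate through a proxy.
import random

random.seed(0)

# Hypothetical setup: 90% of group A lives in zip 10001 and 90% of
# group B in zip 10002; historical approvals favored group A (70% vs 40%).
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    zip_code = "10001" if (group == "A") == (random.random() < 0.9) else "10002"
    approved = random.random() < (0.7 if group == "A" else 0.4)
    applicants.append((group, zip_code, approved))

# "Model": approve a zip code if its historical approval rate exceeds 50%.
# Note that group membership is never an input.
model = {}
for z in ("10001", "10002"):
    subset = [a for a in applicants if a[1] == z]
    model[z] = sum(a[2] for a in subset) / len(subset) > 0.5

# Yet the zip-only rule still approves group A far more often than group B.
approval_by_group = {}
for g in ("A", "B"):
    subset = [a for a in applicants if a[0] == g]
    approval_by_group[g] = sum(model[a[1]] for a in subset) / len(subset)
    print(g, round(approval_by_group[g], 2))
```

With these invented numbers, the zip-based rule approves roughly 90% of group A and roughly 10% of group B — discrimination laundered through geography, exactly the dynamic the Carnegie Mellon research describes.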

The diversity of the teams creating AI systems — or lack thereof — also plays a critical role in perpetuating bias. Many AI design teams remain predominantly white and male, meaning they are less likely to anticipate or address biases that disproportionately affect other groups (Reuters). This has real consequences — for instance, AI-powered facial recognition systems have been shown to misidentify people of color at a rate ten times higher than white individuals (Harvard JOLT). The lack of diversity is not just a tech industry problem — it’s a societal challenge that seeps into the very foundation of AI technology.

These systemic issues have real-world impacts. AI-driven recruitment tools have been shown to favor male candidates for leadership roles, further entrenching gender disparities in the workplace (The Times). Credit-scoring algorithms have disproportionately denied loans to women and minorities or offered them higher interest rates, reinforcing cycles of financial inequity (Forbes). In healthcare, AI systems have underdiagnosed women and people of color, leaving critical conditions untreated and exacerbating existing health disparities (Journalist’s Resource).

This reliance on flawed technology is alarming in a world striving for greater diversity and equity. AI is only as unbiased as the data and teams behind it. If we continue to embed our unresolved biases into these systems, we risk building a future that mirrors the inequalities of the past and present. We must set higher standards for ethical AI development, prioritize diversity in design teams, and demand greater transparency from the systems that increasingly shape our lives.

AI may not be human, but it is born of humans, flaws and all — and it’s on us to ensure those flaws don’t shape our future.

— Lillian Freiberg, Vice President, Business Development

  • Her (2013) — A poignant exploration of human relationships and emotional attachment to AI, starring Joaquin Phoenix and Scarlett Johansson as the voice of Samantha, a deeply intuitive AI and overall a fantastic film!
  • Ex Machina (2014) — A thought-provoking thriller about the ethical and moral dilemmas of creating sentient AI, showcasing a tense battle of wits between humanity and technology. Get ready for chills.
  • The Matrix (1999) — A groundbreaking sci-fi movie that questions reality and AI’s potential to dominate humanity, and a total classic in general.

This newsletter is intended solely for informational purposes and should not be construed as investment/trading advice; it is not a solicitation or recommendation to buy, sell, or hold any securities mentioned. Any reproduction or distribution of this document, in whole or in part, or the disclosure of its contents, without the prior written consent of Flat World Partners is prohibited.

Thank you for subscribing to our newsletter. Our privacy policy is available at any time for you to review in order to understand how we protect your personally identifiable information. By subscribing to the newsletter, you have consented to our policy.

Forwarded this message? Subscribe Here!

Copyright © 2024 Flat World Partners, All rights reserved.

Our mailing address is:
110 E 25th Street, 4th Fl
New York, New York 10011

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list
