Why the algorithm dislikes Generation X
and why it is mistaken
Image made with Nano Banana
If you were born between 1965 and 1980, you are part of the “sandwich generation.” You are probably caring for your teenage children and your aging parents at the same time. You have decades of experience, you’ve survived several economic crises, and you know how to fix a printer without calling tech support (well, not always). You should be at the peak of your career. But if you’ve tried to change jobs recently, you might have felt like you were hitting an invisible wall.
It’s not you. It’s the code.
There is a silent battle in the corporate world, and the battlefield is the servers of Human Resources departments. The enemy is not a recruiter with conscious biases (although those certainly exist), but a mathematical “black box” that has learned to discard you before any human even reads your name.
We already saw what happened to Derek Mobley in a previous post. In case you didn’t read it, Mobley is an experienced professional; he has degrees, skills, and a desire to work. He applied for over 100 positions at companies that use Workday software to manage their selection processes. He was rejected from all of them. Often, the rejection arrived in a matter of minutes, even at 2 in the morning. No human reviews resumes at that speed or at those hours. Mobley sued, alleging that Workday’s AI systematically discriminates against candidates over 40, African Americans, and those with disabilities. What makes this case historic is that a federal judge has allowed it to proceed as a class action lawsuit under the theory of “disparate impact.” This means there is no need to prove that Workday wanted to discriminate, only that their tool ended up discriminating.
This case has opened Pandora’s box. If the algorithm has a bias, we aren’t talking about an isolated error; we are talking about exclusion on an industrial scale.
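To make “disparate impact” concrete, here is a minimal sketch of how it is usually measured, using hypothetical numbers that are not taken from the lawsuit: you compare each group’s selection rate against the best-performing group’s, and a low ratio is a red flag regardless of anyone’s intent. The 0.8 threshold below is the EEOC’s informal “four-fifths rule,” a common benchmark rather than a legal bright line.

```python
# Minimal sketch of a disparate impact check. All numbers are hypothetical,
# invented for illustration; they are not figures from Mobley v. Workday.

applicants = {
    "under_40": {"applied": 1000, "advanced": 180},
    "over_40":  {"applied": 1000, "advanced": 70},
}

# Selection rate = share of applicants who advanced, per group.
rates = {group: d["advanced"] / d["applied"] for group, d in applicants.items()}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    # Informal "four-fifths rule": a ratio below 0.8 raises a flag,
    # no matter what the tool's designers intended.
    flag = "potential disparate impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

This is essentially the same arithmetic behind the bias audits that New York’s Local Law 144 now requires (more on that in the conclusion).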
The Amplification Mechanism: How AI Learns to Discriminate
Biased Data: The Seed of Injustice
The fundamental technical cause of algorithmic ageism is the underrepresentation of older age groups in training datasets. An AI model can only be as good as the data it is trained on, and the exclusion of specific demographic cohorts severely compromises its fairness and accuracy.
This problem is evident in widespread technologies like facial recognition. As documented in the study “AI ageism: a critical roadmap,” many databases used to train these systems impose arbitrary age cutoffs. For example, the FG-NET database contains images of subjects only up to age 69, and MORPH limits ages to a maximum of 77 (this may have changed by now, let me know if you have any updates).
This underrepresentation is not accidental, but the result of a data scarcity feedback loop. The lower digital presence of older adults, a product of the digital divide, generates less data about them. In turn, developers, faced with this scarcity, tend to exclude them from their models. This technical exclusion perpetuates the group’s invisibility, limiting the generation of new data and reinforcing the cycle of marginalization.
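As a rough illustration of that feedback loop, here is a toy sketch (entirely synthetic data I invented, using scikit-learn’s LogisticRegression) in which one group makes up only 5% of the training set: the model learns the majority group’s pattern and performs close to chance on the minority group, not out of malice, but out of statistical invisibility.

```python
# Toy sketch of the data-scarcity problem: when one group is barely present
# in the training data, the model simply never learns its pattern.
# All data here is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample(n, group):
    """Two features; the label depends on a *different* feature per group."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0) if group == "younger" else (X[:, 1] > 0)
    return X, y.astype(int)

# Training set mirrors a skewed dataset: 95% "younger" subjects, 5% "older".
Xy, yy = sample(1900, "younger")
Xo, yo = sample(100, "older")
model = LogisticRegression().fit(np.vstack([Xy, Xo]), np.concatenate([yy, yo]))

# Evaluate on balanced, previously unseen samples of each group.
for group in ("younger", "older"):
    X_test, y_test = sample(5000, group)
    print(f"{group}: accuracy = {model.score(X_test, y_test):.2f}")
# Typical result: well above 0.9 for the majority group, close to a coin flip
# for the minority group. If you aren't in the data, the model can't see you.
```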
Algorithmic Design: Indirect Discrimination via ‘Proxies’
Even when age is not an explicit variable (“reject if age > 50”), algorithms can learn to discriminate through “proxy variables”: data points that, while not being age itself, are highly correlated with it. Common examples include college graduation year, seniority in a role, or proficiency in older technologies (“Lotus proficiency,” for example). An extreme example, I know, but I’m Gen X.
The hiring realm offers a clear example. An algorithm can be programmed to prioritize profiles containing terms like “energetic” or “digital native” over others like “experienced” or “extensive track record.” Although the intention may not be to discriminate, the result is a systematic bias toward younger candidates.
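To see how an innocuous-looking rule turns into an age proxy, here is a deliberately simplified sketch. The keywords, weights, and resumes are all made up; no vendor’s actual scoring logic is implied.

```python
# Deliberately simplified sketch: a keyword filter that never looks at age,
# yet behaves as an age proxy. Keywords, weights, and resumes are invented.

BOOST = {"energetic": 2, "digital native": 3}          # "early-career" signals
PENALIZE = {"experienced": -1, "extensive track record": -2, "seniority": -1}

def score(resume_text: str) -> int:
    text = resume_text.lower()
    return sum(w for kw, w in {**BOOST, **PENALIZE}.items() if kw in text)

candidates = {
    "candidate_a (27, hypothetical)": "Energetic digital native, eager to learn fast.",
    "candidate_b (52, hypothetical)": "Experienced manager with an extensive track record and seniority.",
}

for name, resume in candidates.items():
    s = score(resume)
    print(f"{name}: score {s} -> {'advance' if s > 0 else 'reject'}")
# Age never appears in the rule, but the outcome splits cleanly along age lines.
```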
This phenomenon amounts to a trivialization of experience encoded into the algorithm. The system values metrics associated with the early stages of a career, such as speed, “hunger,” and low salary costs, over depth, stability, and institutional knowledge. The result is the systematic devaluation of an invaluable asset, transforming a cultural bias into an automated exclusion criterion.
Sector Impact: The Real Consequences of Algorithmic Ageism
The abstract problem of algorithmic bias materializes into concrete, quantifiable damages. From labor exclusion to health risks, AI is turning a social prejudice into a structural barrier.
Employment: The Systematic Devaluation of Experience
The job market is the bloodiest arena. Bias reaches extreme levels in the tech industry itself. Research by Generation and the SCAN Foundation revealed chilling figures: while an overwhelming 90% of hiring managers would consider candidates under 35 for AI roles, only a meager 32% would give the same opportunity to those over 60.
This points to a systemic preference for youth over experience. The trend not only excludes experienced workers from a booming sector but also creates a generational blind spot in the development of the technology itself. The teams designing AI lack age diversity (the median age at big tech companies hovers around 28-29), which makes it likelier that the resulting solutions will be less inclusive.
A real case illustrating this is iTutorGroup. The EEOC (U.S. Equal Employment Opportunity Commission) sued this company because its software was programmed to automatically reject women over 55 and men over 60. They had to pay $365,000 after the “error” was discovered when a rejected candidate reapplied with a fake birth date and got an interview immediately.
Health and Finance: A Risk to Safety and the Wallet
In the healthcare sector, algorithmic ageism isn’t just unfair; it’s dangerous. The World Health Organization (WHO) has expressed concern about how AI can exacerbate ageism in medicine. A diagnostic system trained on data that does not adequately represent older adults can lead to misdiagnoses or inadequate treatments for this group.
In financial services, risk scoring systems can unfairly restrict access to credit or insurance based on proxies. And in public discourse, algorithms like Meta’s (Facebook) have been accused of not showing job ads to older people, creating an invisible segregation where you don’t even find out about existing opportunities.
The Vicious Circle: The Algorithmic Gap and Bias Internalization
Sector impacts converge to create a perfect trap that perpetuates itself.
The Algorithmic Gap: Invisible to the System
The “algorithmic gap” is the dangerous evolution of the digital divide. It’s not just about having internet access, but about how the algorithms governing essential services “see” (or ignore) older adults. Because they are underrepresented in the data, their needs are invisible to AI-optimized planning for cities, transportation, and services. If you aren’t in the data, you don’t exist to the AI.
Psychological Damage: Self-Inflicted Ageism
The most insidious impact is when external bias becomes an internal inhibitor. When older people internalize the image that they are “obsolete” or “slow,” they begin to exclude themselves. They convince themselves they are “too old” to learn new AI tools, reducing their interaction with them.
This lack of interaction generates less data about them, which reinforces the algorithm’s bias and closes the circle. However, the data shows the opposite: the 15% of over-45s who already use AI at work tend to be self-taught “superusers.” The capacity is there; the barrier is the narrative.
Conclusion: The Revenge of the “Analogs”
Here is the grand final irony. In a world flooded with AI-generated content, the most valuable skills are precisely those AI does not possess: judgment, context, emotional intelligence, and critical thinking.
Generative AI can write an email in seconds, but it doesn’t know if sending it is a good strategic idea or political suicide. It can analyze data, but it cannot “read the room” in a tense negotiation. Those are “analog” skills that Generation X has honed for decades.
Legislation is starting to react. New York City’s Local Law 144 already requires bias audits for automated hiring tools. Europe is moving forward with its AI Act. But the real solution won’t come from the courts alone.
It will come when companies realize they are using immature technology to filter out their most mature talent. Generation X doesn’t need to be “protected” from AI; it needs to be handed the keys to drive it. Because, at the end of the day, the most advanced technology in the world still needs an adult in the room.
Just think, if the scenario from El Eternauta (a famous Argentine comic, available as a miniseries on Netflix) comes to pass, we’ll be able to say: “The old stuff works, Juan.”
Sources and Recommended Reading
1. The Legal Battlefield: Real Cases
The Workday Case (Mobley v. Workday, Inc.): Analysis of Judge Rita Lin’s ruling allowing the class action under the “disparate impact” theory.
(https://www.fisherphillips.com/en/news-insights/discrimination-lawsuit-over-workdays-ai-hiring-tools-can-proceed-as-class-action-6-things.html)
The iTutorGroup Case (Explicit Discrimination): Details on the $365,000 settlement with the EEOC regarding software that automatically rejected candidates by age.
(https://www.programaticaly.com/portada/demanda-prejuicios-juicio-contratacion-mediante-ia)
AARP vs. Meta (Infrastructure Bias): The AARP Foundation’s legal action against job ad delivery algorithms.
(https://press.aarp.org/2023-12-19-AARP-Foundation-Joins-Class-Action-Charge-Claiming-Metas-Job-Ad-Delivery-Algorithm-Discriminates-Against-Older-Workers-and-Women-Workers)
2. Hard Data: Generation X and the Labor Market
“Age-Proofing AI” Report: Joint research by Generation and The SCAN Foundation revealing that 90% of managers prefer candidates under 35 for AI roles.
(https://www.generation.org/news/age-proofing-ai-new-research-from-generation/)
(https://www.thescanfoundation.org/es/recurso/ia-a-prueba-de-edad-que-permite-que-una-fuerza-laboral-intergeneracional-se-beneficie-de-la-ia/)
3. The Technical “Black Box”: Databases and Theory
The Concept of “AI Ageism”: Justyna Stypinska’s academic paper, “AI ageism: a critical roadmap,” defining how technology renders old age invisible.
(https://pmc.ncbi.nlm.nih.gov/articles/PMC9527733/)
MORPH Database: The massive UNCW database derived from mugshots.
(https://uncw.edu/research/innovation/commercialization/technology-portfolio/morph.html)
4. Regulation and Future
NYC Local Law 144: The first legislation mandating bias audits for automated employment decision tools (AEDT).
(https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page)
Compliance Analysis:
(https://www.deloitte.com/us/en/services/audit-assurance/articles/nyc-local-law-144-algorithmic-bias.html)
Note: NotebookLM was used for the organization and reading of the bibliography (what a tool for good!!!).


