Ethics, life and AI
Trade-offs aren’t Just for Algorithms

What happens when optimization becomes infrastructure, and how to see what's being sacrificed

Marcela Distefano, Jessica Drapluk, Sam Illingworth, and Raghav Mehra
Jan 14, 2026

TL;DR: Algorithms don't create trade-offs—they automate the ones you already make, then remove your ability to argue. Education labels students permanently. Finance optimizes for certainty over access. Health prioritizes efficiency over context. Public systems? You can't opt out. The question is: who chose, and who pays?

Sometimes the most important conversations happen at the intersections. This piece emerged from a shared frustration: too much talk about AI ethics stays abstract, while the people experiencing algorithmic decisions every day (students, patients, employees) are left translating theory into their own reality.

So we decided to pool our expertise and do the translation ourselves. Sam Illingworth brings a practice of slowing down AI use to notice what we’re trading away. Raghav Mehra unpacks how financial algorithms redistribute access rather than democratize it. Jessica Drapluk examines healthcare systems where optimization meets human vulnerability. And my own work (Marcela’s) focuses on public administration, where algorithms operate under legal mandate and citizens have no exit door.

Each of us works in a different domain, but we kept finding the same uncomfortable pattern: the trade-offs that algorithms “solve” were never problems to be solved; they were ongoing negotiations. Automation doesn’t eliminate those negotiations; it just removes your seat at the table.

This collaboration is our attempt to bring these conversations out of our respective silos and into a shared space. As a community of practitioners, researchers, and people navigating these systems daily, we believe these concepts are too important to stay locked in academic papers or policy documents. They need to be accessible, concrete, and connected to lived experience.


And what is a trade-off?

People talk about “trade-offs” as if they were a flaw of AI: accuracy vs. fairness, privacy vs. personalization, fraud detection vs. friction. But the plot twist is more uncomfortable: those trade-offs already rule your day, except you call them “priorities,” “doing what I can,” or “I’m overwhelmed.” When an algorithmic system decides for you, all it does is formalize (and amplify) that constant negotiation.

A trade-off is a compromise: to improve one thing, you inevitably give up something else. In daily life, it usually shows up as small decisions (sleeping vs. finishing “one last thing”), but also as structures: how much actual time is left for caregiving, working, resting, or simply existing.

AI ethics enters the picture when that compromise stops being personal and becomes infrastructure: a score, a rule, a model, a form, or a platform that decides which value to prioritize “by default.” And that’s where the key question arises: who chose that trade-off, with what data, and who pays the price when things go wrong?

According to Sam, automation does not eliminate trade-offs; it hides them in the code. When we delegate a choice to an AI, we are not making a more efficient decision; we are abdicating the responsibility of negotiation. Efficiency is often a mask for a lack of care. A system that prioritises speed over deliberation inevitably treats humans as data points to be processed rather than individuals with context. By removing the ‘friction’ of human decision-making, we remove the opportunity for empathy and situational justice.


Trade-offs Before Algorithms

These negotiations operate across every scale of daily life. The OECD found that 92% of households turn off lights to save energy, but only 54% of drivers would reduce car use even if public transport improved: convenience beats environmental concern when infrastructure is lacking. The Bureau of Labor Statistics reports that adults in households with young children lose approximately 2.4 hours of daily leisure compared to childless households: time is zero-sum, and caregiving extracts its cost from somewhere.

Digital systems make this pattern explicit. Research on “social login” (signing in with Google/Facebook) shows users frequently accept data exchange for convenience, even when analysis suggests the privacy cost can outweigh the benefit. The trade-off persists because it’s time-lagged: you get in fast (immediate), but more data circulates across platforms for profiling and monetization (deferred and diffuse). Many AI-driven systems for segmentation and personalization lean exactly on this pattern: capturing data now to extract value later, even when users don’t perceive the cost at the moment of consent.


Analysis (From Daily Life to AI)

In everyday life, trade-offs are negotiated using heuristics: “this today, that tomorrow,” “the cheap option,” “the fast way,” “whatever lets me sleep.” When a decision is programmed, that negotiation becomes an objective function: a set of metrics the system optimizes and a set of costs left off the board.
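To make that shift concrete, here is a deliberately toy sketch of what “becoming an objective function” looks like in code. Everything in it is invented for illustration (the metric names, the weights, the numbers); the point is simply that the weights are a value judgment someone fixed in advance, and anything that never gets a weight is invisible to the optimizer.

```python
# A hypothetical objective function: the weights encode a value judgment
# made once, in advance, on behalf of everyone the system touches.
WEIGHTS = {
    "throughput": 1.0,   # cases handled per hour (rewarded)
    "error_rate": -5.0,  # mistakes caught later (penalized)
    # Note what never gets a weight at all: time spent listening,
    # context, fear, the cost of a false flag to the person flagged.
}

def score(option: dict) -> float:
    """Sum only the metrics the designer chose to measure."""
    return sum(weight * option.get(name, 0.0) for name, weight in WEIGHTS.items())

fast_and_loose = {"throughput": 40, "error_rate": 3}
slow_and_careful = {"throughput": 25, "error_rate": 1}

# The system will always pick whichever option scores higher;
# the real negotiation happened when the weights were written.
print(score(fast_and_loose), score(slow_and_careful))  # 25.0 20.0
```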

Four domains illustrate where this becomes politically sensitive:

Education, by Sam:

Predictive systems in schools and universities prioritise institutional efficiency over student dignity. If a system flags students for “dropout risk” to allocate resources, the trade-off is clear: the institution gains an early warning, but the student gains a permanent label. This creates a feedback loop where educators may subconsciously expect less from those flagged. Data-driven labels are difficult to erase and rarely account for the personal resilience that numbers cannot track.
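As a minimal, hypothetical sketch of how such a flag becomes a standing label (the field names, threshold, and score are invented, and real early-warning systems are far more elaborate, but the persistence problem is the same):

```python
from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    name: str
    risk_score: float                  # output of some predictive model
    flags: list = field(default_factory=list)

def flag_dropout_risk(student: StudentRecord, threshold: float = 0.7) -> None:
    # The institution gains an early warning...
    if student.risk_score >= threshold:
        # ...and the student gains a label that nothing in this code ever removes.
        student.flags.append("dropout_risk")

s = StudentRecord(name="A.", risk_score=0.72)
flag_dropout_risk(s)
# Years later the score that produced the flag may be stale,
# but the flag is what every subsequent query sees.
print(s.flags)  # ['dropout_risk']
```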

This pattern, where optimization creates invisible labels, isn’t limited to institutional systems. The same trade-offs operate in our personal use of AI tools. Sam, whose Slow AI practice focuses on deliberate engagement with AI, offers this reflection tool:

Before using the prompt below, pause and treat this as a short impact check rather than a thought exercise. The aim is to make trade-offs visible. Notice what you are optimising for when you use AI quickly and smoothly, and what you are willing to give up to get that benefit. Keep the focus narrow. Think about one or two everyday uses rather than your whole system. Do not look for a balanced answer. Look for what is being prioritised, what is being constrained, and where small limits or frictions might change the outcome.

Prompt: You are a Socratic coach. Your task is to help me think clearly about the trade-off between privacy and personalisation in my everyday AI use. Ask a short sequence of simple, direct questions in one paragraph only. Write in the second person and address me directly. Use a calm, sceptical tone. Ask which AI tools I use most often, what personal information or context I share to make them work smoothly, what I gain from that personalisation, what control or privacy I give up in return, and whether that convenience affects how much I trust or rely on the output. End by asking what one small piece of friction I could reintroduce now, and what I would lose and gain by doing so.

The purpose isn’t to abandon these tools, but to make the exchange visible before it becomes automatic.

Subscribe to Sam


Finance: When Optimization Becomes Exclusion by Raghav:

In 2019, if you’d asked how many Americans were “credit invisible,” the answer seemed clear: 26 million adults had no credit history. By 2025, the Consumer Financial Protection Bureau issued a quiet correction: closer to 7 million, a 73% overcount. The original figure had lumped together people with no records alongside those with “stale” files. This isn’t just statistical housekeeping; it reveals how even measuring financial exclusion involves hidden trade-offs in what counts as “invisible.”
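The arithmetic is worth spelling out: moving from 26 million to roughly 7 million removes about 19 million people from the count, and 19 ÷ 26 ≈ 73%, so nearly three quarters of the original estimate turned out not to belong there.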

The trade-off: certainty vs. access

Traditional credit scoring excludes recent immigrants, young adults, and cash-reliant consumers not by design, but as an acceptable cost of predicting default risk with maximum certainty. Modern machine learning models promise better: by analyzing cash flow and rent payments instead of just loans, they can score millions more people. A government pilot helped 110,000 “unscorable” Americans build credit scores around 680 within a year. The technology works.

The trade-off: accuracy vs. explainability

But these models are nearly impossible to explain. A traditional FICO score is transparent—if you’re denied, you know what to fix. A machine learning model might weigh 10,000 variables in ways even its designers can’t articulate. The law says banks must tell you why they denied you, not just “the computer said no.” Yet lenders struggle to translate algorithmic verdicts into legally required explanations.
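To see the gap, compare what an explanation looks like in each case. The sketch below is purely illustrative: a toy linear scorecard with made-up features and weights next to an opaque model treated as a black box. It is not how FICO or any specific lender actually computes scores.

```python
# Toy "scorecard": every factor's contribution is visible, so a denial can be
# translated into concrete reasons ("high utilization", "thin file", etc.).
SCORECARD = {"on_time_payments": 2.0, "utilization": -1.5, "account_age_years": 0.5}

def explainable_score(applicant: dict) -> tuple:
    contributions = {name: w * applicant.get(name, 0.0) for name, w in SCORECARD.items()}
    return sum(contributions.values()), contributions

total, reasons = explainable_score(
    {"on_time_payments": 0.6, "utilization": 0.9, "account_age_years": 2}
)
print(total, reasons)  # the reasons dict *is* the adverse-action explanation

def opaque_score(applicant: dict) -> float:
    # Thousands of interacting features, no per-factor story.
    # The law still requires reasons, but nothing in this interface produces them.
    return 0.41
```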

The trade-off: institutional security vs. individual friction

Fraud detection follows the same pattern. Picture this: your card gets declined at a gas station two states over—not because you lack funds, but because an algorithm flagged “unusual location” as suspicious. In 2024, nearly one in three fraud alerts were false positives. The algorithm protects the bank’s bottom line; you absorb the embarrassment and wasted time. Research shows 65% of falsely declined customers reduce future spending with that merchant, and 27% never return.
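A small, hedged sketch of the underlying tension (all numbers invented): lowering the decline threshold blocks more fraud but also declines more legitimate customers, and the institution’s loss model typically counts only one side of that ledger.

```python
# Hypothetical transactions: (model's fraud score, whether it is actually fraud)
transactions = [
    (0.95, True), (0.90, False), (0.80, True), (0.70, False),
    (0.60, False), (0.40, False), (0.30, True), (0.10, False),
]

def outcomes(threshold: float):
    declined = [(s, fraud) for s, fraud in transactions if s >= threshold]
    fraud_blocked = sum(1 for _, fraud in declined if fraud)
    false_positives = len(declined) - fraud_blocked
    return fraud_blocked, false_positives

for t in (0.9, 0.7, 0.5):
    blocked, fp = outcomes(t)
    # The bank's objective counts `blocked`; the declined trip to the gas station
    # and the customer who never comes back live only in `fp`.
    print(f"threshold={t}: fraud blocked={blocked}, legitimate customers declined={fp}")
```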

These trade-offs aren’t inevitable—they’re design choices. The question isn’t whether they exist, but who chose them, who benefits, and who pays the cost.

Subscribe to Raghav


Public administration by Marcela:

In tax or welfare systems, the trade-off between detection efficiency and false positives operates under a different constraint: citizens cannot opt out. When an algorithm flags someone for audit or benefit verification, there’s no alternative provider to switch to—the system’s optimization becomes their mandatory reality. This raises distinct questions about due process, proportionality, and the burden of proof that private sector trade-offs don’t face.
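The asymmetry is driven by base rates as much as by model quality. Here is a hedged, illustrative calculation (the numbers are invented, not drawn from any real tax or welfare system): if only 2% of cases actually involve fraud, even a flagging model that is right 90% of the time in both directions will generate far more wrongly flagged citizens than correctly flagged ones.

```python
# Illustrative base-rate arithmetic for a mandatory audit-flagging system.
population = 100_000
fraud_rate = 0.02           # assumed true prevalence of fraud
sensitivity = 0.90          # fraction of real fraud the model flags
false_positive_rate = 0.10  # fraction of honest citizens it also flags

true_fraud = population * fraud_rate            # 2,000 people
honest = population - true_fraud                # 98,000 people

flagged_fraud = true_fraud * sensitivity        # 1,800 correct flags
flagged_honest = honest * false_positive_rate   # 9,800 wrongly flagged

# Roughly 84% of everyone flagged is innocent, and none of them
# can opt out of proving it.
print(flagged_honest / (flagged_fraud + flagged_honest))
```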

If education and finance reveal how labels and scores follow people, healthcare shows what happens when those labels meet the body.


Health Sector: When Optimization Meets Vulnerability by Jessica

Healthcare is a domain where optimization meets people at their most vulnerable.

Patients don’t enter systems as abstract users; they arrive tired, frightened, in pain, or already overwhelmed. Yet many healthcare algorithms are designed as if decisions happen under neutral conditions, detached from the body that must live with the outcome.

Clinical trade-offs often appear as efficiency versus attentiveness. Algorithms are introduced to standardize care, reduce wait times, flag risk, and allocate limited resources. These goals are reasonable. The problem arises when optimization quietly replaces judgment, and speed is mistaken for safety.

Consider triage in an emergency department. A patient presents with vague symptoms: fatigue, chest tightness, a sense that “something isn’t right.” Their vital signs are stable. The algorithm assigns a low-acuity score. On paper, the system works. In practice, the patient waits. What the data can’t register is tone, hesitation, or fear—the quiet cues that prompt a clinician to pause and ask a second question. Throughput is optimized. Context is deferred.
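As a deliberately oversimplified sketch (invented fields and thresholds, not a real triage protocol such as ESI): the rule below can only rank what it can measure, so the patient’s “something isn’t right” has nowhere to land.

```python
def triage_acuity(vitals: dict) -> int:
    """Toy acuity score: 1 = most urgent, 5 = least urgent.
    Only numeric vitals exist here; tone, hesitation, and fear do not."""
    if vitals["heart_rate"] > 120 or vitals["systolic_bp"] < 90:
        return 2
    if vitals["oxygen_sat"] < 92:
        return 3
    return 4  # "stable" means low acuity, which means a long wait

patient = {
    "heart_rate": 88,
    "systolic_bp": 124,
    "oxygen_sat": 97,
    # there is no field for "the patient says something isn't right"
}
print(triage_acuity(patient))  # 4: throughput optimized, context deferred
```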

This pattern repeats across healthcare systems. Predictive models label patients as “high risk,” “noncompliant,” or “low likelihood of adherence.” These labels improve system certainty, but they persist beyond the moment they were generated. Once attached, they shape future encounters: how much time is offered, how symptoms are interpreted, how seriously concerns are taken. The system gains efficiency; the patient absorbs the cost.

Many of these trade-offs are, at their core, nervous system trade-offs. Under pressure, healthcare systems default to what is measurable, defensible, and fast. But bodies experience care differently. They register tone, timing, and whether someone feels seen. When systems move too quickly, patients often respond with anxiety, mistrust, or withdrawal — responses then misread as behavioral problems rather than predictable reactions to an overstimulating environment.

AI doesn’t create this dynamic, but it can solidify it. When decisions are automated, trade-offs become harder to interrupt. A clinician may sense that something is off, but the score still shapes the encounter. A patient may feel unsafe, but the system has already decided what matters.

Healthcare makes one thing clear: when friction is removed entirely, care often disappears with it. In medicine, friction can be the pause that allows a concern to surface or a story to change the course of treatment. If healthcare decisions are treated as problems to be solved rather than negotiations to be held, systems may function smoothly while patients quietly absorb the consequences.

Subscribe to Jessica


Framework for System Designers

When a trade-off is automated, it becomes faster, more consistent... and harder to argue with. Neutrality is often just the fancy name for a hidden priority. Three strategies can make these choices visible:

Make priorities explicit: Document what is being optimized and what is being sacrificed (and why) in language understandable to non-specialists. Don’t optimize a single metric; impose “floors” or “ceilings” (e.g., maximum tolerable false positives) before trying to improve everything else (see the sketch after this list).

Test for differential harm: Conduct impact assessments by group, adverse scenarios, and second-order effects (who loses time, who loses money, who loses access).

Right to explanation and appeal: If a decision affects you, there should be a human path to challenge it, especially when the same person always pays the price of the trade-off.
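Here is a minimal sketch of the “ceiling” idea from the first strategy, assuming a hypothetical binary classifier with a score threshold (the data and ceiling value are invented): instead of picking whatever threshold maximizes a headline metric, fix the maximum tolerable false-positive rate first and optimize within that constraint.

```python
def choose_threshold(scores, labels, fp_ceiling=0.05):
    """Pick a decision threshold subject to a documented ceiling on false positives.

    scores: model outputs in [0, 1]; labels: True for genuine positive cases.
    Returns the lowest threshold whose false-positive rate stays under the ceiling
    (the lowest admissible threshold catches the most true cases)."""
    negatives = [s for s, is_positive in zip(scores, labels) if not is_positive]
    for t in sorted(set(scores)):
        fp_rate = sum(s >= t for s in negatives) / max(len(negatives), 1)
        if fp_rate <= fp_ceiling:
            return t
    return None  # no threshold satisfies the ceiling; say so instead of shipping

scores = [0.10, 0.40, 0.55, 0.60, 0.80, 0.90, 0.95]
labels = [False, False, False, True, False, True, True]
print(choose_threshold(scores, labels, fp_ceiling=0.25))  # 0.6
```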


Conclusion

Trade-offs aren’t born in algorithms: algorithms are where trade-offs become irrevocable, repeatable, and scalable. Sam’s reflection prompts, Raghav’s financial exclusion patterns, Marcela’s public-sector due process, and Jessica’s health-systems analysis all point to the same uncomfortable truth: the ethical question isn’t “are there trade-offs?” but rather “who chooses them, who understands them, and who has to eat the cost?”


Thanks for reading Ethics, life and AI! This post is public so feel free to share it.


If you want more reading:

OECD (2023), How Green is Household Behaviour? Sustainable Choices in a Time of Interlocking Crises (PDF)

Bureau of Labor Statistics, Daily time use in households with young children in 2024

Trading off convenience and privacy in social login (ScienceDirect)

Data Slots: trade-offs between privacy concerns and benefits of data-driven solutions (Nature / HSSC)

https://infobytes.orrick.com/2025-06-27/cfpb-issues-correction-of-credit-invisibles-estim

https://www.federalreserve.gov/publications/2025-october-consumer-community-context.htm

https://explore.forter.com/2024-trust-premium-report/p/1

https://themortgagepoint.com/2025/07/09/cfpb-updates-data-on-credit-invisib

https://www.consumerfinance.gov/data-research/research-reports/

https://www.occ.gov/topics/consumers-and-communities/project-reach/project-reach-fact-sheet.pdf

A guest post by Jessica Drapluk
Who am I? The Best Writer Alive 💥 Nurse Practitioner 👩🏻‍⚕️ Money Flow Investor & Trader 📈 Professional Writer & Premium AI Ghostwriter 🤝

A guest post by Sam Illingworth
Professor & poet in Edinburgh who writes Slow AI, to help reflect and stop accelerating into the void. I reply to every comment.

A guest post by Raghav Mehra
Fintech & strategy enthusiast | Ex-Quant Trader | Writing on finance, tech, AI, and some personal musings - how we pay, invest & build
