The Faceless Hidden Persuaders: Who Decides What You Decide?
Subconscious persuasion at scale, and how to stop it
This post is a collaboration between Jose Antonio Morales, philosopher and researcher on fear, identity, and decision-making, and me. He brings the cultural and psychological depth. I bring the technical and governance angle.
Together, we look at a problem Vance Packard saw coming 70 years ago, and how it looks when the persuaders no longer have a human face.
The Hidden Persuaders: 1957 to now
In 1957, Vance Packard published a book that landed in a society already on edge. These were the Cold War years, and the West was obsessed with the idea of communist “brainwashing”, that external, hostile force capable of overriding one’s free will. But Packard’s provocation was to argue that the real danger didn’t come from a foreign power, but from the gleaming office buildings of Madison Avenue. His thesis in The Hidden Persuaders was that a cabal of psychologists and advertisers was using the very tools of psychiatry to manipulate what he called the “fabric of the human mind,” all to sell soap and political candidates.
Packard predicted that by the 21st century, his warnings about subconscious persuasion would seem amusingly quaint. He was right, though not because the problem disappeared. Today, we aren’t being targeted by an ad man with a couch; we face language models that, fed on our digital footprint, are 80% more persuasive than any human being. We are living out Packard’s prophecy with a radical twist: we are facing persuaders that no longer have a human face.
The difference between the fifties and our era is intent.
Packard was denouncing techniques designed to manipulate the masses. We, on the other hand, move within a reality of extreme personalization where algorithms learn our weaknesses in real time. The unsettling part is that these systems operate as “black boxes.” As research from UNU Macau and others warns, the most powerful models currently function without providing clear explanations, creating what academics call an “accountability gap.”
Three mechanisms of hidden persuasion in public systems
Even something as seemingly neutral as a default setting is a form of choice architecture. Research shows that defaults can influence behavior in 90% of cases. We don’t need someone to shout a message at us; it is enough to design the path of least resistance. Someone, somewhere, decided what is “standard” based on goals that rarely align with our autonomy.
When that “someone” is a government procurement officer and the “path” is a citizen’s access to a basic right, the stakes change.
Here are three mechanisms that appear systematically in public systems:
Hostile defaults: the system enrolls you; opting out requires 4 steps. Example: pre-consented data sharing in benefits portals.
Obstruction: the appeal exists but is buried. Example: “contest this decision” link on page 3, no search function.
Confirmshaming: declining is framed as accepting a threatened consequence. Example: “I don’t want to protect my rights” as the decline option.
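The three mechanisms above are concrete enough to check for mechanically. As a minimal, hypothetical sketch, here is what an audit of a consent flow might look like; the field names, step-count comparison, and shaming phrases are illustrative assumptions, not drawn from any real auditing standard or regulation.

```python
# Hypothetical sketch: flagging the three dark patterns described above
# in a consent-flow description. All field names and thresholds are
# illustrative assumptions.

def audit_consent_flow(flow: dict) -> list[str]:
    """Return a list of dark-pattern flags found in a consent flow."""
    flags = []
    # Hostile default: the user is enrolled unless they act.
    if flow.get("default_enrolled", False):
        flags.append("hostile_default")
    # Obstruction: opting out takes more steps than opting in.
    if flow.get("opt_out_steps", 0) > flow.get("opt_in_steps", 0):
        flags.append("obstruction")
    # Confirmshaming: the decline option reads as a self-accusation.
    decline = flow.get("decline_label", "").lower()
    if any(p in decline for p in ("i don't want", "i refuse", "no, i prefer to risk")):
        flags.append("confirmshaming")
    return flags

# A benefits portal with all three patterns at once:
portal = {
    "default_enrolled": True,
    "opt_in_steps": 1,
    "opt_out_steps": 4,
    "decline_label": "I don't want to protect my rights",
}
print(audit_consent_flow(portal))
# → ['hostile_default', 'obstruction', 'confirmshaming']
```

The point of the sketch is not the code but the asymmetry it exposes: each check compares what the system makes easy against what it makes hard, which is exactly where choice architecture hides.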
We see it in systems like India’s Aadhaar, where the identity of millions ceases to be a fact and becomes a statistical probability. When access to a basic right depends on a biometric matching algorithm deciding if you are who you claim to be, human dignity becomes secondary to data. It is the design of a reality where the meandering paths of human life (a scar, a mistake, a change) are punished by the rigidity of calculation.
This invisible persuasion carries over into everyday life. We are transitioning from the role of consumers to that of “algorithmic citizens.” This governance transforms the person into a profile, reducing our unpredictability, which is the bedrock of freedom, to a series of predictable responses. The system knows patterns of our behavior that we ourselves are unaware of, creating an asymmetry of power that Packard could barely have imagined.
Current governance, with frameworks like the European Union’s AI Act, attempts to set boundaries, but the reality is that regulating such sophisticated forms of influence is a Herculean task. Automated systems are efficient because they eliminate human intervention, but in doing so, they also eliminate our capacity for agency over processes that we can neither see nor challenge.
The question we must ask ourselves is not just whether artificial intelligence is useful, but: who decided the terms of that utility, and why have we accepted being locked into this architecture? At some point, we stopped asking if the recommended option is actually our own will or simply the easiest exit designed by a system that knows us too well, because we ourselves fed it with our fear of the void.
Jose’s Point of View
We can see it more clearly than ever. And yet, clarity alone doesn’t protect us.
I have been working on the topic of fear for more than a decade, and what I notice is that fear tactics are everywhere:
flash sales, job applications, social media, news, politics, fashion.
The architecture of modern life is soaked in them. But the more interesting question isn’t where the manipulation comes from. It’s why we are so available to it.
We are afraid of getting the shorter stick. We know the world is unfair, and we fear we won’t deserve better. We are afraid of not progressing, of being ignored, of failing the people we love. Someone who can make us feel that fear, even artificially, gains immediate access to our decisions.
And the cruelest part: acting on fabricated fear feels like agency. The infused emotion narrows our options while convincing us we are doing something.
This is how bubbles form.
They are psychological spaces where our personalities and identities live. We come to believe we are defined by them, and then we fear losing them, because that would mean losing part of who we think we are.
Have you noticed how most democratic elections present a bad candidate and a worse one? We feel there is no alternative other than choosing the lesser evil. In my country, Peru, we keep electing criminals to Congress and the Presidency. We know the system needs to change. But instead of making the necessary adjustments, after the always disappointing election results, we return to normal Monday mode and deprioritize what is essential. The urgent always wins over the important.
The exit isn’t about more knowledge, strategies or systems. The exit is a softer sense of self.
If our identities become more flexible, less fused with external conditions, we can tolerate adverse circumstances with less resistance.
I think of it this way: if I’m a Volvo kind of person, I resist any suggestion of a Citroën. But if being a Volvo person stops mattering quite so much to me, the Citroën becomes an option. That’s a bubble bursting, without drama.
When we develop the capacity to burst our own bubbles, manipulation loses its grip. We feel less fear, less urgency, less need to defend an identity that was never really ours to begin with. And we start to ask the obvious question: who cares about the 50% sale when you know they’ll run it again next week?
If you can’t stop rushing toward the future, if you can’t stop doing what you’re doing even when it hurts, it means you can’t yet see that you, or someone else, is manipulating you. You’ve convinced yourself that what you’re doing is the only viable option right now. That’s your bubble.
But if you can pause, and sit with the fact that the future is uncertain anyway, and realize you could do something different, even if some of your desires go unmet, you might find yourself feeling better. Less fearful. More flexible. That pause is how you burst bubbles, and reclaim the space and energy you’ve been spending on protecting, defending, and numbing yourself.
We have all been in both mindsets. We all know how to be flexible, and we all know how to be in a rush. That’s part of our nature. But switching between the two consciously is a superpower.
And it’s a skill we can all learn and strengthen.
The question we can’t stop asking:
If someone (or something) can make us feel fear, even artificial fear, they gain immediate access to our decisions.
The cruelest part: acting on that fear feels like agency.
A task for all of us: stop asking only why we are so available to it. Ask instead: who designed the path of least resistance in this system, and how do we change it?
Bibliography
Gubelmann, M. (2024). Responsibility gaps, LLMs & organisations: Many agents, many levels, and many interactions. Ethics and Information Technology, 26(4).
Packard, V. (1957). The hidden persuaders. David McKay.