What the Algorithms Found in Us
Beneath all the policy arguments about Section 230, antitrust enforcement, and content moderation lies a deeper and more unsettling truth: social media did not create a new kind of human weakness. It found the weaknesses we already had and exploited them with surgical precision.
The human brain developed in small groups of perhaps fifty to one hundred and fifty individuals, where social information was scarce, immediately relevant, and verified by personal experience. We evolved to be acutely sensitive to social signals — the approval and disapproval of our peers, threats to our status, the shifting dynamics of hierarchy. These sensitivities were adaptive. In a small group of hunter-gatherers, paying close attention to who was allied with whom, who was rising and falling, who might be a threat, was essential for survival. The dopamine hit that accompanied new social information was calibrated for an environment in which such information was rare and costly to obtain.
Social media hijacked this system with the precision of a drug dealer who has identified exactly which receptor to target. The infinite scroll, the pull-to-refresh, the variable-ratio reinforcement schedule of likes and comments: these are not accidental design features. They are the product of deliberate engineering by people who studied behavioral psychology and applied its insights to the problem of maximizing engagement. The result is a technology that delivers social information at a volume and velocity the human brain was never designed to handle. It is as if we evolved to process a glass of wine with dinner and were suddenly hooked up to an intravenous drip of pure ethanol. When Facebook's own internal researchers found that Instagram exacerbated body image issues, depression, and suicidal ideation in teenage girls, they documented what millions of parents already knew. The platform was not merely reflecting pre-existing vulnerabilities; it was actively exploiting them, because exploiting them was profitable.
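To make the mechanism concrete, here is a minimal sketch of a variable-ratio reinforcement schedule, the intermittent-reward pattern that slot machines use and that pull-to-refresh mimics: any given refresh might pay off, so the checking behavior never quite extinguishes. This is a toy illustration in Python, not any platform's actual code; the 10,000 refreshes, the one-in-eight payoff rate, and the name `simulate_refreshes` are assumptions invented for the example.

```python
import random

def simulate_refreshes(n_refreshes: int, mean_ratio: float, seed: int = 0) -> list[int]:
    """Toy variable-ratio schedule: each refresh pays off (a new like,
    comment, or notification) with probability 1/mean_ratio, so rewards
    arrive unpredictably rather than on a fixed cadence."""
    rng = random.Random(seed)
    gaps = []           # refreshes between consecutive rewards
    since_last = 0
    for _ in range(n_refreshes):
        since_last += 1
        if rng.random() < 1.0 / mean_ratio:
            gaps.append(since_last)
            since_last = 0
    return gaps

gaps = simulate_refreshes(n_refreshes=10_000, mean_ratio=8.0)
print(f"{len(gaps)} rewards; mean gap {sum(gaps) / len(gaps):.1f} refreshes, "
      f"shortest {min(gaps)}, longest {max(gaps)}")
```

The unpredictability is the point: because the next reward could arrive on the very next refresh, the behavior is far more resistant to extinction than it would be under a fixed, predictable schedule.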
Then came the shattering of shared reality, perhaps the most consequential and least discussed casualty. For most of American history, despite deep political disagreements, citizens inhabited roughly the same informational universe. They read the same newspapers, watched the same evening news, and shared a common set of facts, however imperfectly understood, about what was happening in their country. That common foundation made democratic deliberation possible, even when contentious: you could argue about inflation or crime or foreign policy and still be arguing about the same world. The algorithmic information environment has shattered this into a million fragments, each tailored to confirm the priors, activate the emotions, and reinforce the tribal identity of its individual consumer. Two neighbors in the same town now inhabit different factual universes: not just different interpretations of the same facts, but genuinely different facts. One lives in a world where climate change is an existential emergency. The other lives in a world where it is a hoax perpetrated by globalist elites. These are not different opinions about the same reality; they are different realities, constructed and maintained by systems optimized for engagement rather than truth.

The algorithms show you content that confirms your beliefs because that content generates more engagement. The more you engage, the narrower your information diet becomes. The narrower your diet, the more alien the other side appears, which drives you toward content portraying them as stupid, evil, or dangerous. The platforms did not create political polarization, which existed long before Facebook, but they have amplified it, accelerated it, and made it self-reinforcing in ways that are genuinely new. The feeling that the other side is not just wrong but insane, that its members inhabit a different reality, is not a natural feature of political disagreement. It is an artifact of systems designed to maximize engagement by maximizing emotional intensity.
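The narrowing loop just described can be shown with a toy simulation: a hedged sketch, not any real recommender system, in which a feed ranks two kinds of stories (congenial and challenging) by their estimated engagement for one user and updates those estimates from the clicks it observes. The engagement probabilities, the learning rate, and the name `simulate_feed` are assumptions chosen purely for illustration.

```python
import random

def simulate_feed(days: int = 60, seed: int = 1) -> list[float]:
    """Toy engagement loop: show stories in proportion to their estimated
    engagement, observe clicks, and nudge the estimates toward what was
    observed. Returns the share of congenial stories shown each day."""
    rng = random.Random(seed)
    # Assumed per-story click probabilities for this user.
    p_engage = {"congenial": 0.30, "challenging": 0.10}
    # The ranker's running estimates, both starting neutral.
    estimate = {"congenial": 0.5, "challenging": 0.5}
    shares = []
    for _ in range(days):
        share_congenial = estimate["congenial"] / sum(estimate.values())
        shares.append(share_congenial)
        for kind, share in (("congenial", share_congenial),
                            ("challenging", 1 - share_congenial)):
            shown = max(1, round(100 * share))   # stories of this kind shown today
            clicks = sum(rng.random() < p_engage[kind] for _ in range(shown))
            estimate[kind] += 0.1 * (clicks / shown - estimate[kind])
    return shares

shares = simulate_feed()
print(f"day 1: {shares[0]:.0%} congenial -> day {len(shares)}: {shares[-1]:.0%} congenial")
```

Even a modest difference in how readily the user clicks on congenial versus challenging material is enough to push the feed's composition steadily toward the congenial side. No one has to intend the filter bubble; the optimization produces it.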
And then there is the cruelest trick of all: the paradox of connection. Social media promised to bring us together, and in a superficial sense it has — we are in constant contact with more people than at any point in human history. But the quality of that contact, its capacity to satisfy the deep human need for intimacy, belonging, and mutual recognition, has proven catastrophically low. Sherry Turkle captured the dynamic in her phrase “alone together.” The reason is not mysterious. The kind of interaction social media provides — the broadcast of curated self-presentations to semi-strangers, the exchange of likes that simulate but do not constitute genuine emotional engagement — does not satisfy the primate brain’s need for face-to-face interaction, physical presence, and deep mutual attention. We evolved to read facial expressions, to detect the micro-movements of eyes that signal attention or deception, to respond to another human body in shared space. A heart emoji is not a substitute for a hug. A comment thread is not a substitute for a conversation. And yet, for an increasing number of people — particularly the young, who have grown up with these platforms as their primary social environment — digital interaction has not supplemented but replaced the embodied social life that human wellbeing requires.
The feeling of powerlessness compounds everything. In theory, every user is a free agent who can leave at any time. In practice, network effects — the fact that everyone you know is on Facebook, that your professional contacts are on LinkedIn, that the community groups you depend on exist only on these platforms — create a form of soft coercion that makes “just log off” roughly as helpful as telling someone in a company town to “just find another job.” The platforms have made themselves essential infrastructure for social, professional, and civic life, and then used that essentiality as leverage to impose terms that users have no practical ability to negotiate or refuse. The terms of service that nobody reads are not a contract between equals; they are a take-it-or-leave-it imposition by entities with overwhelming market power on individuals with no meaningful alternative.
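Part of that soft coercion is simple arithmetic, and a short sketch makes the coordination problem visible. The figures and the name `p_group_moves` are illustrative assumptions, not measurements; the point is only that the chance of an entire group reconstituting itself on another platform falls off geometrically with its size, even when each member is individually willing to move.

```python
def p_group_moves(group_size: int, p_individual: float) -> float:
    """Coordination-cost sketch (assumes members decide independently):
    the probability that an entire group reconstitutes on a new platform
    is p_individual raised to the group size."""
    return p_individual ** group_size

for size in (2, 5, 10, 25):
    print(f"group of {size:2d}, each member 50% likely to move -> "
          f"{p_group_moves(size, 0.5):.4%} chance the whole group moves")
```

The individual can leave whenever she likes; the groups she depends on, as a practical matter, cannot. That asymmetry is what gives "just log off" its company-town hollowness.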
And beneath all of it — beneath the policy debates and the platform wars and the algorithmic manipulation — there is the terror of parents. It cuts across every political and demographic line. The progressive parent in Brooklyn and the conservative parent in rural Texas share the same helpless dread as they watch their children disappear into screens, the same fury at companies that design products to exploit the developmental vulnerabilities of adolescent brains, the same guilt about their inability to compete with trillion-dollar corporations for their children’s attention. They know — because the research is overwhelming and because they can see it with their own eyes — that something is wrong. Their children are more anxious, more depressed, more fragile, more isolated, more cruel to each other, and more disconnected from the physical world than any previous generation. The parent’s terror is not a policy position; it is a primal response to a perceived threat to one’s offspring, and it has the potential to be the most powerful political force in the entire Big Tech debate — if it can be organized and directed toward specific, achievable reforms rather than diffused into generalized anxiety and mutual recrimination.
What makes all of this so tragic is that social media became dominant not because it was forced on people but because it offered something people genuinely wanted. Connection. Belonging. The sense of being part of something larger. The small-town social fabric that once provided these things has frayed. The civic institutions — churches, lodges, community organizations, the bowling leagues Robert Putnam mourned — have continued their decades-long decline. The workplace has become precarious, remote, and transactional. Into this void stepped social media, offering a simulation of community that was accessible, immediate, and free. The simulation turned out to be a hologram — it looked right from certain angles but dissolved when you tried to lean on it. By the time people discovered this, they had already abandoned or neglected the real-world social structures the simulation was supposed to replace. The challenge now is not simply to regulate the platforms, or even to break them up, but to rebuild the social infrastructure — the physical spaces, the face-to-face institutions, the embodied communities — that can provide what social media promised but cannot deliver.