The Dangers Hidden in Every Answer
Each of the five positions in this chapter carries within it the seeds of serious harm — and understanding these consequences is essential to understanding why none of the proposed solutions has commanded anything close to a national consensus.
Elena’s vision of publicly owned platforms sounds appealing in the abstract, but the historical track record of government-run media is not encouraging. State-owned outlets, even in democracies, tend toward blandness, bureaucratic inertia, and political capture by whichever party controls the government. A publicly owned social media platform would face enormous pressure to suppress content embarrassing to the administration in power — and the same progressives who cheer when a Democratic administration moderates right-wing content would be horrified if a Republican administration used the same tools against progressive speech. The nationalization of data raises privacy concerns that mirror those of corporate collection. Do we really want the government to have the same comprehensive picture of our online behavior that Facebook currently has? The surveillance state and the surveillance corporation are both threats, and replacing one with the other is not progress.
Marcus’s vision of utility-style regulation carries the classic risks of capture, ossification, and unintended consequences. Utility regulation works for industries with stable technologies and predictable demand — electricity, water, gas — but the tech industry has neither. Imposing it on an industry that changes fundamentally every five to ten years risks locking in today’s market structure at the expense of innovations not yet imagined. More fundamentally, his confidence in content moderation underestimates both the difficulty of moderating at scale and the value judgments embedded in any moderation system. “Misinformation” is not an objective category. The experts who were certain that COVID-19 could not have originated in a laboratory, that masks were ineffective, or that natural immunity was inferior to vaccine-induced immunity were wrong on each point — but content moderation systems, relying on expert consensus, suppressed dissenting views that turned out to be closer to the truth.
Sarah’s centrism may be insufficiently responsive to the urgency of the harms. If social media is genuinely damaging children’s mental health — and the evidence increasingly says it is — then cautious, incremental reform may be a form of complicity. The centrist’s reverence for innovation can serve as cover for defending the status quo. Not all innovation is good. The engagement-maximizing algorithms and dopamine-exploiting design patterns are innovations in the same sense that the cigarette filter was an innovation: they made a harmful product more effective and more addictive.
James’s vision of platform neutrality through Section 230 reform addresses a genuine concern but risks creating worse problems. If platforms that exercise any editorial judgment lose immunity and become liable as publishers, the predictable response is either extreme over-moderation — removing anything that could generate liability — or the complete abandonment of moderation, producing a cesspool of racism, harassment, and illegal content that drives away all but the most hardened users. The middle ground James envisions is far harder to occupy than it sounds. “Lawful speech” is not a bright line; defamation, incitement, and harassment all involve context-dependent judgments that platforms would have to make in real time at enormous scale.
Ruth’s vision of common carrier status and outright bans for minors has the virtue of clarity and the vice of inflexibility. Common carrier regulation would require platforms to host all lawful speech, including spam, scams, harassment, and hate speech — the last of which is constitutionally protected in the United States. The user experience would deteriorate rapidly. Platforms that have adopted minimal-moderation policies — Gab, Voat, the early days of Truth Social — tend to attract the worst elements of online culture while repelling ordinary users. As for banning minors outright, the enforcement challenges are formidable: robust age verification requires either government-issued digital identity systems, which raise surveillance concerns, or biometric verification, which raises privacy concerns.
The debate persists because there are no clean solutions — only tradeoffs. Regulate content, and you risk political censorship. Refuse to regulate, and you get harassment and disinformation. Break up the companies, and smaller ones may replicate the same problems. Leave them intact, and their power grows unchecked. Protect children, and you build surveillance infrastructure that threatens everyone’s privacy. Fail to protect children, and you accept documented mental health consequences for developing minds. The technology moves faster than law, faster than social norms, faster than human psychology can adapt. Everyone can see the problem. No one has a solution that does not create new problems at least as serious as the ones it solves. And meanwhile, the algorithms keep running, the data keeps flowing, the children keep scrolling, and the shared reality that democratic self-governance requires continues its slow, accelerating erosion.