The Oldest Power Play in Civilization

The desire to control the flow of information is as old as civilization itself. The Roman Empire understood this instinctively. The cursus publicus — the state postal and courier system established by Augustus — was not merely a logistical convenience but a deliberate instrument of imperial power. By controlling who could send messages, how quickly they traveled, and along which routes, Rome ensured that the emperor’s decrees reached the provinces before rumors or rival narratives could take root. Provincial governors who fell out of favor could find their dispatches delayed, their intelligence networks disrupted, their capacity to organize resistance quietly strangled. The roads themselves, those engineering marvels that still scar the European landscape, were built primarily for the rapid movement of legions, but they served equally well as arteries of information control. Whoever controlled the roads controlled the narrative. This was not a side effect of Roman infrastructure — it was the point.

The printing press shattered this model more thoroughly than any barbarian invasion ever could. When Gutenberg’s movable type made mass production of text economically viable in the mid-fifteenth century, the consequences were as catastrophic for established authority as they were liberating for individual expression. Martin Luther’s Ninety-Five Theses, which might have remained a local academic dispute nailed to a church door in Wittenberg, spread across the German-speaking world within weeks. The Catholic Church, which had maintained something close to an information monopoly for a millennium through its control of monastic scriptoria, suddenly found itself unable to contain dissent. The pamphlet wars of the sixteenth and seventeenth centuries — vicious, scurrilous, frequently pornographic attacks between Catholics and Protestants, royalists and parliamentarians — read remarkably like a modern Twitter timeline. The authorities responded exactly as authorities always respond: with censorship, licensing requirements, imprisonment of printers, and public book burnings. None of it worked.

Every subsequent communication technology has triggered the same cycle of utopian promise, moral panic, attempts at control, and eventual accommodation. The telegraph prompted fears that the speed of information would outpace the human capacity to process it wisely. Radio generated its own panic — the Nazis’ masterful exploitation of it for propaganda seemed to confirm that broadcast media was inherently authoritarian, a tool for demagogues to bypass rational deliberation and speak directly to the emotions of the mob. Television occasioned identical anxieties, from Newton Minow’s 1961 condemnation of the medium as a “vast wasteland” to Neil Postman’s 1985 argument that television was trivializing public discourse by subordinating everything to the demands of entertainment.

The internet emerged from this long history with utopian aspirations that seem, in retrospect, almost painfully naive. John Perry Barlow’s 1996 “Declaration of the Independence of Cyberspace” told the “Governments of the Industrial World” that they were “not welcome among us.” The assumption was that information wanted to be free, that the network’s decentralized architecture would prevent any single entity from gaining control. For a brief historical moment, this vision seemed plausible. The early web was a weird, ungovernable frontier that really did feel like it existed beyond the reach of traditional power structures.

What happened instead was social media. Facebook, Twitter, YouTube, and their successors did not merely provide new channels for existing forms of expression; they fundamentally restructured the economics and psychology of public discourse. The platforms discovered — or more precisely, their algorithms discovered — that engagement was maximized not by informing or enlightening users but by provoking them. The attention economy treated human consciousness as a resource to be extracted, refined, and sold to advertisers. Every click, every scroll, every pause, every like was recorded, analyzed, and fed back into algorithms designed to keep users engaged for as long as possible — which, as the platforms’ own internal research repeatedly showed, meant keeping them angry, anxious, and afraid.

Section 230 of the Communications Decency Act, passed in 1996, became the unlikely legal foundation on which this entire edifice was built. Just twenty-six words in its operative clause established that online platforms would not be treated as publishers legally responsible for user-generated content, while simultaneously giving them the right to moderate that content in good faith. This framework, originally designed for a world of small bulletin boards and nascent internet service providers, became the shield behind which trillion-dollar corporations operated as the de facto public square while insisting they bore no editorial responsibility for what appeared on their platforms. Content moderation, performed at scale by underpaid human reviewers and opaque algorithmic systems, became the new editorial gatekeeping — but without the professional norms, public accountability, or legal frameworks that had, however imperfectly, constrained newspapers and broadcasters.

The emergence of artificial intelligence and deepfake technology has added yet another layer to an already impossibly complicated landscape. When anyone with a laptop can generate photorealistic video of a public figure saying things they never said, the already fraying connection between information and reality threatens to snap entirely. The epistemological crisis that social media began, AI threatens to complete. The current moment is genuinely novel — no previous technology has combined the speed of the telegraph, the emotional power of television, the reach of the printing press, and the surveillance capacity of the secret police into a single device that people carry in their pockets and check, on average, ninety-six times a day. But the underlying dynamics — who controls information, who benefits from its flow, who is harmed by its suppression — are as old as the cursus publicus. We have been here before. We have never been here before. Both statements are true simultaneously, and the tension between them defines the debate.

That tension runs through every voice in what follows. Five Americans — separated by ideology but united by the sense that something has gone deeply wrong — try to name the problem. They do not agree on what it is. They agree even less on what to do about it. But listen closely, and you will hear a shared unease beneath the arguments: the suspicion that we have built something we do not understand, handed it power we cannot reclaim, and are only beginning to reckon with what it is doing to us.