The Longest Argument

For as long as human beings have suffered illness and injury, they have turned to one another for relief. Neolithic skulls bear the marks of trepanation — holes drilled into living bone by prehistoric healers whose patients sometimes survived, as evidenced by healing around the wound’s edges. In ancient Mesopotamia, the Code of Hammurabi set fees for surgery and punishments for malpractice, suggesting that who pays for healthcare and who answers for its quality are questions nearly as old as organized society. In Egypt, physicians served in temples supported by the state; in Greece, the Asclepieia functioned as early hospitals where the sick came to sleep and dream, hoping for divine cure.

Rome offered antiquity’s most instructive answer. Its greatest contribution was infrastructural rather than clinical — aqueducts carrying fresh water, sewer systems carrying waste away, public baths promoting hygiene. These were collective investments in population health, funded by the state, available to citizens and non-citizens alike. An individual citizen could not build his own aqueduct; a private entrepreneur would not have found it profitable to drain the marshes that bred malaria. Public health, in its most basic form, was a public good.

After Rome fell, organized care retreated into monasteries. Medieval hospitals were charitable institutions run by monks and nuns — places where the sick poor received care, or at least comfort, as an act of mercy. The wealthy hired their own physicians; the poor relied on charity, folk remedies, and prayer. Medicine itself remained largely ineffective — bloodletting, purging, elaborate herbal concoctions. A visit to the doctor was as likely to hasten death as prevent it. The question of who should pay for healthcare was less urgent when healthcare had little to offer.

Everything changed with modern medicine. The germ theory of disease, antiseptic surgery, anesthesia, vaccination, antibiotics, diagnostic imaging, organ transplantation — the cascade of discoveries transformed medicine from a largely futile art into a genuinely powerful science. But this transformation carried a profound consequence: effective medicine became expensive medicine. A medieval barber-surgeon’s tools cost a few coins; a modern hospital’s MRI machine costs millions. When medicine could do little, its cost was manageable. When medicine could save your life, its cost became a central political question.

The first major answer came from an unlikely source. In 1883, Otto von Bismarck — conservative, authoritarian chancellor of the newly unified German Empire — introduced the world’s first national health insurance program. His motivation was explicitly to undercut the growing socialist movement. The system required employers and employees to contribute to sickness funds that paid for medical care. It was not socialized medicine — doctors remained in private practice — but it established the principle that access to healthcare was a matter of national policy, not merely individual fortune. By the early twentieth century, most European nations had followed.

Britain went further after World War II. The National Health Service, established in 1948 under Clement Attlee and championed by Aneurin Bevan, nationalized hospitals, employed doctors directly, and provided healthcare free at the point of use, funded through general taxation. The NHS became a source of profound national pride — celebrated alongside Shakespeare and the Industrial Revolution during the 2012 Olympics opening ceremony. It has also been a source of persistent frustration: long waits, funding crises, rationing debates. But the fundamental principle — that no Briton should be denied care because they cannot pay — has never been seriously challenged by any major party.

The United States followed a path so peculiar, so shaped by accident and contingency, that it requires detailed examination. America’s employer-based system was not the product of deliberate design. It was, quite literally, an accident of World War II. Wartime wage controls prevented employers from competing for scarce workers with higher pay, but the War Labor Board ruled that fringe benefits, including health insurance, did not count as wages. Employers began offering coverage as a recruitment tool. The IRS then ruled it was not taxable income, creating a massive subsidy that made employer-based coverage far cheaper than individual purchase. After the war, this accident hardened into the default — and proved extraordinarily resistant to reform.

The gaps were apparent from the beginning. Employer-based insurance left out the self-employed, small-business workers, the unemployed, retirees, and the poor. Truman proposed national health insurance in 1945; the AMA, wielding the specter of “socialized medicine,” helped kill it. Not until 1965 did Medicare and Medicaid carve out coverage for the elderly and the poor — without addressing the fundamental structure for everyone else. The decades that followed brought a series of incomplete reforms: HMOs in the 1970s and 1980s, deeply unpopular for restricting choice; the Clinton Health Security Act of 1993, which collapsed under its own complexity and ferocious industry lobbying. Each failure demonstrated the enormous political difficulty of comprehensive reform and discouraged serious attempts for years.

The Affordable Care Act of 2010 was a deliberately moderate, incremental approach. It preserved employer-based coverage, expanded Medicaid, created insurance marketplaces with subsidies, and imposed an individual mandate. Modeled significantly on a Heritage Foundation plan implemented in Massachusetts by Mitt Romney, it nonetheless became the most polarizing domestic legislation in a generation — passed without a single Republican vote, targeted by more than seventy repeal attempts. The debate over Obamacare became a proxy war for deeper disagreements about government, liberty, and American identity itself.

Today, the United States spends roughly $4.5 trillion per year on healthcare — about 18 percent of GDP, compared to 10-12 percent in peer nations. Despite this, approximately 27 million Americans remain uninsured, tens of millions more are underinsured, and American health outcomes are mediocre: lower life expectancy, higher infant mortality, more chronic disease than in countries spending far less. The COVID-19 pandemic exposed every fault line — millions losing employer-based coverage with their jobs, the uninsured delaying care, racial and economic disparities in both infection and treatment, a fragmented public health infrastructure unable to mount a coordinated response.

The United States remains the only wealthy nation without universal coverage. This is not because Americans are uniquely indifferent to suffering. It is the product of a specific historical trajectory: the accident of employer-based insurance, the power of organized interests, political institutions that empower those who block change, racial politics that have undermined solidarity-based programs, and a genuine philosophical disagreement about whether healthcare is a right society must guarantee or a service individuals should obtain through the market. That disagreement reflects fundamentally different moral intuitions about the relationship between individual and community — and it will not be resolved by data alone.

Which is precisely why we must hear from the people who hold those intuitions most fiercely.