Digital quantification determines Americans’ quality of life. Algorithms select job applicants for interviews and employees for performance bonuses. They aggregate stories and products as we shop for news and goods, matching our preferences to the infinite bounty on offer. And they determine which homes we can buy, purchases we can make, and investments we can pursue. In love, the whims of Hinge’s matching algorithms will determine our romantic fate; in health, a nonprofit network will use its algorithm to allocate a kidney or liver donation—saving one life over another.

Algorithms dominate our lives because commerce dominates our lives. Competitive companies have a strong economic incentive to replace expensive and inattentive human decision-makers with reliable and cheap computational ones. For most borrowers, the weeks-long work of securing a mortgage, for example, has been replaced by faster digital approvals available through a website or app. The transition is so complete that the rapturous wonder of these new technologies has mostly subsided, replaced by astonishment when we stumble upon the old ways in which such things used to be done.

Government, ironically, is one place where direction by algorithm has barely made a dent. Even after decades of digitalization and “Government 2.0” initiatives, the plodding ways of yesteryear remain the ponderous processes of today. Examples abound. Social Security disability decisions take three to six months, with more than 1 million people waiting in the queue. Applying for a passport takes six to eight weeks, which the State Department recently described as returning to “our pre-pandemic norm”—as if that were cause for celebration. Immigration applications frequently take a year or longer to process, while Americans applying for the Global Entry travel program have seen wait times stretch to almost a year. Prospective defense and intelligence professionals saw security-clearance approvals extend to 170 days in 2023, almost double the previous year’s wait.

The yawning performance gap between the private and public spheres represents a crisis. State incapacity wastes taxpayer funds and time, while deepening pessimism about the U.S. government’s basic competence. Companies and governments have transitioned to digital record systems over the past few decades, but only the private sector has taken advantage of the speed and efficiency offered by artificial intelligence (AI) systems. Yet, despite the success of algorithms across the private sector, critics would all but ban their entry into government decision-making, worried that their “black box” nature makes them incompatible with democratic transparency.

Rather than enter a technological cul-de-sac, federal, state, and local governments must stay competitive with the private sector’s best practices. That means taking more humans out of the decision-making loop. Humans and machines are both ultimately black boxes, but decision systems can be intentionally designed for transparency and due process. Far from a computational dictator usurping the powers of free citizens, AI, properly implemented, is just another extension of a well-functioning republic.

The U.S. government pioneered the very digital technologies that it now fails to use. Internet history is well trod, but it’s worth recalling that digital computing arose to solve some significant Cold War problems. Until the 1950s, “computers” for calculating missile trajectories were humans (typically women). The potential scale of war with the Soviet Union, though, would outstrip even a large and well-trained human workforce, forcing a frantic search for automated solutions. That led the Pentagon to fund efforts like Project Whirlwind at MIT, which invented real-time computation and friendlier multiuser interfaces.

These early federal initiatives would expand rapidly in the 1960s, often when human labor proved insufficient to meet rising demands. Heavy mail volumes led the post office to start using automated sorting machines, powered by advances in optical character recognition of address labels. The Vietnam War would dramatically expand the demand for automated weapons targeting, pushing the Pentagon to supply heaps of research funds to such universities as Stanford, Carnegie Mellon, and MIT. The IRS, burdened by the increasing complexity of federal taxes and a postwar baby boom entering the workforce, bought its first computer in the early 1960s, using an “estimated 500 miles of tape” with “600 clerical workers punching out 50 million cards a year.”

Digitalization has been the watchword of civic technologists ever since. Their reasoning is simple: first come data, then come the applications. Governments already collect staggering quantities of information—but before digitalization, those data sat on paper. The technological transition has seen some spectacular failures, but more of government is digitized than ever before, and the job will soon be complete. For example, after decades of work, the IRS has a mostly digital system for processing tax data, with tens of millions of taxpayers using tax-preparation software and filing electronically.

As routine as such technologies now are in government, criticism of the transition was sometimes fierce. Digitalization foes rallied under two banners: resistance to certain applications and concern for privacy. The use of computers in Vietnam led student protesters to storm, and in some cases burn down, campus computing centers; most notoriously, the 1970 bombing of Sterling Hall at the University of Wisconsin–Madison sought to halt the work of the Army Mathematics Research Center. “I was a reserve officer in the Army Corps of Engineers and pursuing my doctoral degree in structural engineering,” recounts Richard Gutkowski in an alumni-magazine retrospective. “For months if not years after the bombing, we took our boxes of work home every night for fear of losing it to protesters.”

Critics also considered government digitalization to be the first step toward Orwellian totalitarianism. Such fears animated the cyber-libertarianism of the early hackers who built the Internet. That philosophy was best captured by John Perry Barlow in his influential “A Declaration of the Independence of Cyberspace,” where he proclaimed: “I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”

An earlier federal technology initiative: heavy mail volumes led the post office to start using automated sorting machines. (Mark Boulton/Alamy Stock Photo)

But both protests ultimately withered. On specific applications, the fight against the computer was really a fight against the means and not the ends. As for privacy, it remains a keen worry among critics, but the widespread proliferation of tracking, recording, and surveillance technologies by the private sector rather than government has pushed some, like social psychologist Shoshana Zuboff, to warn about “surveillance capitalism” more often than about surveillance government.

And crucially, one protest—that computers are black boxes incapable of being understood by humans—never landed. Early computers were relatively simple, and instructions coded into punch cards made their mechanical and deterministic nature obvious. Even as algorithms became more sophisticated, the constraints of computing power ensured that they remained well within the ken of human understanding. For decades, practically no cases arose across government where an artificial intelligence made an independent decision without a human involved. Today, that’s the threshold we are fast approaching—and potentially passing.

Though the U.S. government has fallen behind on digitalization (especially compared with many international peers), the transition from digitalization to algorithmic decision-making is nevertheless raising widespread anxieties. Collecting data is one thing; replacing government workers with an algorithm is quite another. Yet that is the necessary step to improve government efficiency. Whether man or machine, a black box surrounds any government decision—and data can evaluate which works better.

Over the years, as computational power grew exponentially, well-studied and deterministic algorithms increasingly gave way to AI systems impervious to rigorous human inspection. Much as the weather follows the laws of physics even when forecasters fail to make accurate predictions, the exact mechanics of how these systems work remain unknown even to the computer scientists who build them. Such machines are a form of mathematical chaos, where small perturbations can lead to widely divergent outcomes. For the first time in the evolution of computing, we witness the so-called black box. The input and the output both make sense; what happens between them remains a mystery.

Take Google’s search engine, one of the world’s most complex information-retrieval systems. Tens of millions of lines of code crawl trillions of webpages and data sets in a hunt for the best information to display for a search query. Evaluating search quality is too gargantuan a task for humans, so quality-assurance algorithms are constantly evaluating user behavior to determine whether Google is succeeding or failing. At this scale, algorithms govern algorithms, and humans are merely occasional observers.

Or take OpenAI’s ChatGPT, a large language model (LLM) that has received rapturous attention over the past two years. The largest LLMs are trained on trillions of pages of text, producing an AI model of vast complexity. OpenAI, like Google, regularly tests quality through algorithmic benchmarking, since no human can comprehensively evaluate the system. As prominent mathematician Stephen Wolfram put it: “In effect, we’re ‘opening up the brain of ChatGPT’ (or at least GPT-2) and discovering, yes, it’s complicated in there, and we don’t understand it—even though in the end it’s producing recognizable human language.”

It’s a black box—and an unenlightened one, to boot. (See “Something Like Fire,” Winter 2024.) These LLMs have no concept of “truth,” as humans would understand it. In a podcast with me last year, noted AI researcher-cum-critic Gary Marcus mentioned this lack of truth in his prediction that the first deaths due to chatbots are imminent. “Part of my premise was that a lot more people are going to use these AI systems,” he said. “They’re probabilistic. They’re not reliable. And so they are going to give some bad advice. Their cousins have killed people in driverless cars. And now, the text versions probably will do the same, and it’s a question of who and when and how many and so forth. . . . Don’t listen to your chatbots. They might kill you.”

“They might kill you” is an extreme scenario, but other analysts have more mundane concerns. Virginia Eubanks, a political scientist and the author of Automating Inequality, views these black-box systems as worsening society’s existing systemic biases. “Marginalized groups face higher levels of data collection when they access public benefits, walk through highly policed neighborhoods, enter the health-care system, or cross national borders,” she writes. “That data acts to reinforce their marginality when it is used to target them for suspicion and extra scrutiny. Those groups seen as undeserving are singled out for punitive public policy and more intense surveillance, and the cycle begins again. It is a kind of collective red-flagging, a feedback loop of injustice.” Indeed, these fears have spread so widely that a whole library of books has been written on the subject in recent years, with titles like The Black Box Society, Weapons of Math Destruction, Algorithms of Oppression, The Age of Surveillance Capitalism, and Artificial Unintelligence.

These critics of the quantification of life make an important point: our lives are dominated by algorithms, almost none of which is open to public inspection. Even if we could inspect them, their black-box complexity ensures that we will likely never understand them anyway. But all these decisions were made until quite recently by that other black box: the human mind. Not long ago, online mortgage applications processed by machine-learning models were instead evaluated by mortgage officers sitting in a client’s bank branch. Humans can be persuaded as much by a client’s demeanor and quality of clothing as by the credit score sitting on the page. Small-business lending was once driven by a loan director’s personal knowledge of the traffic of a sidewalk block, rather than an algorithm’s knowledge of every new business ever started in America and its probability of success. Are we sure that we’re better at making decisions like these?

“Algorithms already govern algorithms on the quality of their outputs, so why not extend that governance to the quality of their thinking as well?”

For all their imperfections and inconsistencies, algorithms replaced humans in certain tasks because of our own imperfections and inconsistencies. Automation is a function of algorithmic reliability as much as of human fallibility, and the threshold between the two continues to move.

Take driving. Human drivers have dominated the roads since the invention of the automobile, despite millions of vehicle-related deaths, including 42,795 people killed in car crashes in the United States in 2022. Waymo, the autonomous-driving subsidiary of Google’s parent company, Alphabet, published data last year showing that its vehicles now have statistically fewer accidents and crashes per mile driven than human drivers. (See “Accelerate Autonomy,” Winter 2024.) We may hold a psychological kinship with the taxi driver sitting in the front seat, but even here, we are crossing over to a new world where algorithmic performance will outclass the human.

Critics like Marcus and Eubanks point out that algorithms have serious flaws, and they’re right. Algorithms can have bugs, biases, and inefficiencies. They can be badly engineered, and updates can turn a well-functioning system into a malfunctioning one. Yet the human alternative has just as many downsides, if not more. Humans can be biased, get tired, lose focus, slow down, make mistakes, fill out the wrong box, and outright lose their minds. They also need to take breaks, eat, and sleep. Man or machine, the black box of decision-making is the same—and requires the same steps to mitigate.

Transitioning toward automation in government requires acknowledging the black box of both humans and computers and then selecting the best approach for implementation, all while buttressing these decisions with proper due process. Even more important, though, AI can’t just be the next step in the digitalization that government has pursued since the 1960s. It must improve the interactions between citizens and their government.

Humans make plenty of errors, which is why due process is so fundamental to liberal democracy. When a government official makes a decision that we disagree with, adjudicative remedies exist to ensure that the decision follows the law. Algorithms should not be an exception to such oversight. We can’t just transplant AI into government. Instead, we need to integrate it thoughtfully, in a way that ensures robust transparency and offers easy avenues to correct any of its mistakes. This is often where private-sector efforts at automation go wrong, causing such rage among consumers: due process is costly, and thus companies often give short shrift to building more robust systems. The government cannot act so cavalierly.

Even with proper due process, transparency remains an important value. Entrepreneurs and researchers are exploring a field dubbed “AI explainability,” which tries to open the algorithmic black box and describe what’s actually taking place. Maybe AI will always make mistakes, but what if another software program running in parallel could essentially cross-check the algorithm’s thinking and ensure that none of its choices is defective? Algorithms already govern algorithms on the quality of their outputs, so why not extend that governance to the quality of their thinking as well? It’s an area of interest that has received hundreds of millions in federal research grants and venture capital dollars, though outcomes so far have been limited.
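What might that cross-checking look like in practice? Here is a minimal sketch, using synthetic data and generic scikit-learn models; every name and threshold is illustrative, not any agency’s or vendor’s actual system. A complex primary model makes the call, an independent and simpler auditor re-scores the same input, and any disagreement is routed to a human reviewer instead of being decided automatically.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for application records: 20 features, binary approve/deny.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Primary decision-maker: a complex, hard-to-inspect model.
primary = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Independent auditor: a simple, inspectable model trained on the same data.
auditor = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

primary_votes = primary.predict(X_test)
auditor_votes = auditor.predict(X_test)

# Where the two black boxes disagree, no automated decision stands;
# those cases go to a human reviewer.
flagged = np.flatnonzero(primary_votes != auditor_votes)
print(f"{len(flagged)} of {len(X_test)} decisions flagged for human review")
```

The auditor never explains the primary model’s reasoning; it merely bounds where automated decisions are allowed to stand, which is the spirit of algorithms governing algorithms.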

A second approach is to design machine-learning models that trade precision for better explainability in the first instance. Computer scientists Michael Kearns and Aaron Roth, in their book The Ethical Algorithm, describe encoding a principled philosophy into these systems, writing that “we view these new goals as constraints on the learning process. Instead of asking for the model that only minimizes error, we ask for the model that minimizes error subject to the constraint that it not violate particular notions of fairness or privacy ‘too much.’ ” It’s not uncommon for a slight decline in accuracy to improve human comprehension of the model massively.
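Kearns and Roth describe the approach abstractly; the sketch below makes it concrete with nothing but numpy and synthetic data. A simple logistic model is trained to minimize ordinary prediction error plus a penalty on the gap in predicted approval rates between two groups, a soft version of a demographic-parity constraint. The data, the penalty weight lam, and the particular notion of fairness are all illustrative assumptions, not the authors’ own code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic applicants: two features plus a binary group label g.
n = 2_000
X = rng.normal(size=(n, 2))
g = rng.integers(0, 2, size=n)          # protected-group membership
y = (X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
lam = 2.0   # fairness-constraint strength; lam = 0 recovers plain accuracy

for _ in range(500):
    p = sigmoid(X @ w)
    # Gradient of the ordinary log-loss (the "minimize error" term).
    grad = X.T @ (p - y) / n
    # Gradient of lam * gap**2, a soft demographic-parity penalty pushing
    # the two groups' mean predicted approval rates together.
    gap = p[g == 1].mean() - p[g == 0].mean()
    dp = p * (1 - p)
    grad_gap = (X[g == 1] * dp[g == 1, None]).mean(0) \
             - (X[g == 0] * dp[g == 0, None]).mean(0)
    grad += lam * 2 * gap * grad_gap
    w -= 0.5 * grad

p = sigmoid(X @ w)
print("accuracy:", ((p > 0.5) == y).mean())
print("approval-rate gap:", abs(p[g == 1].mean() - p[g == 0].mean()))
```

Raising lam shrinks the gap at some cost in raw accuracy, which is exactly the trade that Kearns and Roth describe.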

Improving due process and model explainability is necessary, but not sufficient, for easing the transition to a more automated government. The final task: dramatically improving the experience of interacting with government. As any citizen knows, the government is filled with faceless decision-makers making life-changing choices in far-flung offices without explanation. This doesn’t have to be the case, and AI offers a path toward a faster, cheaper, and more interactive approach.

Take immigration. When applying for a benefit like a green card, an applicant collects all his information (residency, income, travel, taxes) into a package that can swell to hundreds of pages of documentation and then mails it to the federal government for evaluation. A year later, an examiner will open that bundle of papers and read through them to judge the application. An algorithm, by contrast, could instantly identify and verify the materials that a user uploads, and also tell him to stop providing excess information if it assesses with sufficient probability that the materials are accurate and conform to the nation’s laws. Then it could quickly assess the probability of success and assign a human evaluator or conditionally accept the application. A hand-wringing year of unknowing nervousness could be replaced with near-instantaneous feedback. Such technology is available today and is in widespread use in the private sector.
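None of this requires exotic technology. Here is a minimal sketch of that instant-feedback loop; the document types, confidence scores, and policy thresholds are hypothetical, and nothing here reflects an actual immigration system.

```python
from dataclasses import dataclass

# Hypothetical requirements for one benefit type; a real system would
# derive these from statute and regulation, not a hard-coded set.
REQUIRED_DOCS = {"proof_of_residency", "tax_return", "passport_scan"}

AUTO_ACCEPT = 0.95   # illustrative policy thresholds, not anything
NEEDS_HUMAN = 0.70   # an agency has actually published

@dataclass
class Submission:
    applicant_id: str
    documents: dict      # document type -> verifier's confidence score

def triage(sub: Submission) -> str:
    """Return instant feedback instead of a year of silence."""
    missing = REQUIRED_DOCS - sub.documents.keys()
    if missing:
        return f"incomplete: please upload {sorted(missing)}"
    notes = []
    extra = sub.documents.keys() - REQUIRED_DOCS
    if extra:
        # The inverse of today's hundreds-of-pages habit: tell the
        # applicant to stop sending documents the rules don't require.
        notes.append(f"you may omit {sorted(extra)}")
    confidence = min(sub.documents[d] for d in REQUIRED_DOCS)
    if confidence >= AUTO_ACCEPT:
        decision = "conditionally accepted pending final review"
    elif confidence >= NEEDS_HUMAN:
        decision = "complete: routed to a human examiner"
    else:
        decision = "please re-upload the low-quality scans"
    return "; ".join([decision] + notes)

print(triage(Submission("A-123", {"proof_of_residency": 0.99,
                                  "tax_return": 0.97,
                                  "passport_scan": 0.98})))
```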

Government embrace of AI could help reduce frustrating inefficiencies, such as long waits for services. (Angela Weiss/AFP/Getty Images)

This exact pattern—of compiling and submitting documentation while waiting for a decision—practically defines government. It’s not hard to imagine similar automated systems being used to improve applications for Social Security, tax benefits, housing vouchers, income-based repayment plans, federal mortgage programs, driver’s licenses, fishing and hunting licenses, gun licenses, zoning variances, veterans’ health coverage, and much more. Many states already offer websites to determine what documents a person must bring to the DMV to apply for a driver’s license. Adding automated intelligence would turn that static final list into a verifiable checklist where all documents could be uploaded and a final decision made practically instantaneously.

Pushing one tentative step forward, we could even use AI to conduct the first reading of cases filed in small-claims courts. In New York City alone, more than 40,000 small claims get filed each year, many of them reasonably simple matters for which today’s LLMs are already capable of drafting a correct response. In a bid to reduce the judicial system’s financial burden, companies have widely used controversial arbitration clauses in their customer agreements to direct complaints away from the courts. A compromise that reduces costs and risks for civil plaintiffs as well as defendants could involve carefully calibrated AI decision models, as in the sketch below.
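The first-read step could be as thin as this. The ask_model function is a deliberate placeholder for whatever LLM endpoint a court system might license (no real vendor API is assumed), and the model only drafts a triage summary; a human clerk still signs off on every case.

```python
import json

def ask_model(prompt: str) -> str:
    """Placeholder for a call to whatever LLM a court system licenses;
    this function is hypothetical, not a real vendor API."""
    raise NotImplementedError

FIRST_READ_PROMPT = """You are drafting a FIRST READ of a small-claims filing
for a human clerk. Do not decide the case. Respond as JSON with keys:
"claim_type", "amount_claimed", "within_monetary_limit", and
"suggested_next_step".

Filing:
{filing}"""

def first_read(filing_text: str) -> dict:
    raw = ask_model(FIRST_READ_PROMPT.format(filing=filing_text))
    draft = json.loads(raw)
    # The machine triages; a human decides. Every draft is attached to
    # the docket for a clerk to confirm, amend, or discard.
    draft["requires_human_signoff"] = True
    return draft
```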

Transitioning to a government with fewer humans involved will require careful due-process considerations, better model explainability, and an emphasis on improving the experience of citizens in their government interactions. We may never open the black box behind these decisions, but we can ensure that the decisions made by any black box—human or machine—are perceived as fair.

In a republic, citizens aggregate and assign some of their natural rights to others. We empower congressmen and judges to make decisions in our stead, aware that we are ceding some power in exchange for freeing ourselves from the burdens of direct democracy. Why should those assignments only involve other humans? Can’t we have a republic of algorithms, too—one that is faster, cheaper, and fairer?

We can, but it’s a treacherous path forward. We have invented an astonishing new set of capabilities, which evokes the wonder of the digitalization of the economy over the past few decades. Deploying these new capabilities in government will take patient but persistent effort against powerful forces, ranging from public-sector unions worried about jobs being automated away to technology critics apprehensive about algorithmic bias to voters fearful of change.

Nevertheless, we need to hold two thoughts in our heads simultaneously: algorithms are not perfect, but they can also be better than human decision-makers. As Marcus said, “I would really like people to understand that artificial intelligence is not a universal solvent, that really, what we have [is] a bunch of different mechanisms. They each have their own strengths and their weaknesses. None of them are that great right now. They each in certain contexts work very well.”

As the private sector has repeatedly shown, consumers overwhelmingly choose automated systems over the manual processes of the past. The formidable success of America’s digital economy is built on the formula that convenience married to accuracy works. Americans have built some of the world’s most valuable companies by following this ethos. Government shouldn’t be an exception to these improvements, which will leave citizens freer to pursue their happiness.

Top Photo: chombosan/Alamy Stock Photo
