Danny Crichton joins Brian Anderson to discuss how AI and algorithms can be applied to streamline government services. 

Audio Transcript


Brian Anderson: Welcome back to the 10 Blocks podcast. This is Brian Anderson, the editor of City Journal. Joining me on the show today is Danny Crichton. Danny's a Fellow at the Manhattan Institute where he analyzes technology growth and economics, and he's the head of editorial at the VC firm, Lux Capital, where he publishes the excellent Risk Gaming newsletter and podcast. He's written regularly for City Journal. Today, though, we're going to discuss the case for using artificial intelligence in government, a topic he explores in a fascinating essay in City Journal's summer issue entitled “United States of Algorithms.” So Danny, thanks very much for joining us.

Danny Crichton: Thank you, Brian.

Brian Anderson: So as you note in this essay, algorithmic decision-making touches virtually every aspect of our daily lives now, particularly in the private sector. Artificially intelligent systems, they recommend the news items people read, the products they buy. These days, even the people they date. And we've largely grown comfortable with that, though, some exceptions hold there. But as you've noted in this essay, government's been very slow to adopt these new technologies. So I wonder if you could outline, first, why that matters and what it has cost us.

Danny Crichton: Right. When you look at the private sector, there's obviously immense competition between firms in an industry. And so automation, the ability to lower labor costs and provide a better customer service experience for consumers, is obviously a very high priority. CEOs over the last 20 or 30 years have moved quickly to introduce automation, as you noted, across finance. When you apply for a mortgage, a lot of that is now automated. When you seek relief from a customer service agent while flying on an airline, almost all of that is automated behind the scenes these days. But then you get to government, and, obviously, the government has a monopoly; it has no competition. And so what we've seen is an extraordinarily slow process of adding automation to different systems.

Anyone who's been to a DMV, a Social Security office, basically any kind of consumer-centric experience with the government, understands how slow the process still is. And there's a huge divide now between the private sector and government: you often get instant relief in the private sector, and you wait months when you're working with the government. And so algorithms, to me, are an opportunity for the government to show that it can be responsive to people, that it can be more efficient, and that it can ultimately offer a better service for everyone.

Brian Anderson: I wonder, you run through some of these in your essay, what are some concrete examples of government functions that could be improved by AI? You just mentioned getting a new license or something like that. Certainly, DMVs have long been a source of frustration for residents of various towns and states.

Danny Crichton: There's a huge amount; it's almost overwhelming to think of the number of applications. Because if you think about government, a lot of it is just that you apply for a benefit or submit an application, it gets adjudicated by some examiner, and you get a decision back in the mail. So whether that's taxes, whether that's applying for Social Security, Medicare, or Medicaid, whether that's applying for a passport, or you're a national security professional looking for a security clearance, in all these cases, you're submitting documents, they're getting evaluated, and then a decision is rendered. And so in almost all of those places, algorithms can at least speed up the process. So for instance, when you apply for a passport, you can imagine that your documents are verified: it's looking at your birth certificate, it's doing anti-fraud detection. Today, a passport can take as long as six months to process, up from a one- or two-month delay five or six years ago, and that's a function of not enough staff and not enough automation to make it faster.

Brian Anderson: I wonder, this raises the question of why, why do you think there's been such a significant lag? You mentioned competition, or lack thereof, in government as one factor, I think, in this slowness in adopting algorithmic systems, but I wonder what can be done to address that lag? I recall an essay we published many years ago by the one-time mayor of Indianapolis, Stephen Goldsmith, on how computer technology could improve government services. This was probably 15, 20 years ago he was writing this. Even that promise only went so far and he was talking about more primitive computer technology.

Danny Crichton: Well, I think when you look at the history of automation, we use automation in a couple of contexts. When you send a letter through the post office, your address label is read by a computer; it's automated. That is an algorithm reading the address and routing the letter to the right place. And that came in the '60s because there was so much mail and not enough labor. At some point, we actually had to move to automated systems. And that's been the history of government all along: when the government is overwhelmed by a process, think even of COVID-19, where we saw some algorithms used to allocate medication and vaccines, there's just no labor force to actually handle it. And so the government is obligated to do something. It doesn't have the labor force, and so it finds a solution in the form of automation.

I think without that pressure, it gets really hard to build any momentum in a large bureaucracy. So to me, one of the most critical examples of this is security clearances. In the United States today, it can take up to 14 months for someone looking to join the Defense Department, the military, the CIA, or the intelligence community to get their clearance. And that's assuming you don't have international travel or extensive international contacts; that's just the baseline speed at which it goes. And that means you're actually held up in your career for more than a year trying to join the government and make America a safer place. But even with that pressure, even on something as important as patriotic citizens joining some of our most important national security institutions, there's been no kind of reform to say, how do we make this faster? How do we verify some of the documents and at least maybe shave a couple of months off?

This was actually only about five to six months a couple of years ago, pre-COVID, and it has inched all the way up to 14 to 15 months, and yet we still haven't seen a lot of change in the system. And so I think you have to focus on these pressure points. You have to get a little bit of focus on, obviously, the public-sector unions, where there's a lot of incumbency advantage, and there's a question of what happens to these jobs assuming there's automation. There has to be a focus on countering, I think, some of the anti-algorithmic bias. There's a whole slew of books, which I talked about in the essay, that have been written over the last couple of years, and I think there are really good arguments against them. But clearly, the cultural vanguard is to be fearful of AI, and I think there are very rational counterpoints to that fear. So I think you have to hit it intellectually, economically, politically, basically on all fronts, trying to move the government toward a more automated system.

Brian Anderson: Well, you mentioned some of these concerns about bias. Now, the idea of using algorithms in government decision-making is indeed worrying to many people. It raises questions about transparency and accountability. Others point to the question of bias and the reality that these AI systems are, in fact, black boxes: it's hard to even understand what's going on inside them. So how can governments, in your view, allay these concerns? How many of them are legitimate? Especially when you do acknowledge in the piece that the complexity of AI models like ChatGPT is so vast as to defy anyone's ability, really, to know what's going on under the hood.

Danny Crichton: Well, you're getting to the crux of the essay's thesis. And to me, I'm not a wild-eyed idealist about AI. I don't think AI should take over everything instantly, automatically, tomorrow. I'm a bleary-eyed skeptic and, certainly, something of a small-c conservative: we should make change deliberately and make sure these systems are robust. I think where the mistake in a lot of the language and politics around this issue lies is that algorithms are considered a sort of black box: information goes into them, a decision is rendered, and we don't really understand what's going on. And that's true; we do not have the means to understand the models that underlie LLMs like OpenAI's ChatGPT or any of these other AI systems.

But there's a caveat here, which is that when you apply for a benefit or a clearance or a passport from the US government, you submit those documents into a black-box bureaucracy. It goes to a human you don't know and don't recognize, you don't even know what office it's going to, and a decision in the form of a letter is sent back to you in the mail, and you have no way of knowing what actually took place. So we know that humans can be black boxes and bureaucracies can be black boxes, and it's true of algorithms as well. And the way you solve for that black box is you put due process measures in place. So if you get a decision, say your passport was denied, you can appeal it to an administrative court. If you apply for basically any benefit, there are means of redress for those problems. And those means should be applied just as much to new technological systems as to the human systems that have existed for the last 200 years of our country's history.

Brian Anderson: Let me just follow up a little bit on that. We're in an information environment in which new revelations are surfacing weekly about government efforts to stifle information, even shut down dissent. We saw Mark Zuckerberg, with his letter yesterday to the House, describe how he faced incredible pressure from Biden administration officials to censor certain content on the Facebook platform, especially as related to COVID-19. So how are Americans expected to feel comfortable about outsourcing government services to AI? Especially when something like ChatGPT, at least the last time I looked, wouldn't even admit basic realities like the fact that Donald Trump had been shot. So you can understand how this might encourage a certain paranoid response.

Danny Crichton: Well, if I had to position it again, I'm not wild-eyed; I'm a bleary-eyed skeptic. And so the way you introduce this is very slowly. This is not about completely transforming all of government, and we're not going to have an AI president run the country, right? There was actually news in the last week from Wyoming that a candidate is running as an AI mayor, promising that all decisions in the government will be made by AI. And that's not what this article was getting at. My focus is much more on that postal-address problem, which is to say: look, if you're evaluating passports, you're looking at millions of birth certificates a year, and that is currently done by a human, and there's no reason it should be done by a human. And sure, you can come up with a conspiracy, a politician trying to do something; you can come up with a lot. But the humans working in that office face the exact same pressures. And that is the focus of this article: whether it's a human or an AI, the same systems, the same politics, the same pressures apply to both. And so this differentiated argument, the idea that AI is particularly susceptible as opposed to, say, the federal workforce, is, to me, a complete misconception.

Brian Anderson: A final question, Danny, in terms of where various countries are in applying this advanced technology to government services, where do we stand? Has any country shown a lead in this area?

Danny Crichton: I don't think there's one you would point to and say, "Well, they're just light-years ahead of the United States." Interestingly, there are countries that are much more electronic in terms of medical records, passports, digital ID cards, et cetera. And that contrast comes from America's history with a decentralized identity system: we use driver's licenses at the state level; we don't have a national ID. But in terms of actual algorithms, using them in, say, the court system or in passports or elsewhere, it's still early, because the technologies that allow us to do this, the large language models, are very, very new; they've just come along in the last two to three years. And so we're right on the vanguard to potentially be a leader in this space. And given that most of the companies doing the most pathbreaking work are in the United States, to me, there's a huge opportunity to continue strategically using industry for the benefit of government and vice versa.

Brian Anderson: Well, thanks very much, Danny. That was an excellent walkthrough of your essay. Don't forget to check out Danny Crichton's work on the City Journal website, including this piece we've discussed today. It's called, “United States of Algorithms.” It's in our Summer 2024 issue. We'll link to Danny's author page in the description where you can find that piece and his other work for City Journal. You can find Danny on X @DannyCrichton, and you can find City Journal on X, as well, @CityJournal and on Instagram @cityjournal_mi. As usual, if you like what you've heard on the podcast, please give us a nice rating on iTunes. Danny Crichton, great to talk with you.

Danny Crichton: Thanks, Brian.

Photo: gorodenkoff / iStock / Getty Images Plus
