Nick Whitaker joins Jordan McGillis to discuss his Manhattan Institute report, “A Playbook for AI Policy,” and the future of artificial intelligence.

Audio Transcript


Jordan McGillis: Hello, and welcome to 10 Blocks. I'm Jordan McGillis, economics editor of City Journal. On the show today is Nick Whitaker. Nick is a Manhattan Institute fellow and he's just published a report on artificial intelligence titled “A Playbook for AI Policy.” Nick, thanks for joining me.

Nick Whitaker: Thanks for having me.

Jordan McGillis: First things first, explain AI to me like I'm five.

Nick Whitaker: Yeah, so look, AI as a term has been around for a long time, at least since the 1950s, and when you and I were growing up, we heard about AI in video games and in software applications, but these weren't really AI in any significant sense, which, strictly speaking, means automating cognitive labor. You know, the video game would have an NPC that would play against you, but it wasn't really learning, it wasn't able to operate on its own, it wasn't able to consider a new idea or do anything it hadn't been directly programmed to do.

That really started to change in the 2010s, when what's called the deep learning revolution kicked off, where models were made that could be trained over large quantities of information, and those models could actually engage with you in a more open and general way than anything previously called AI had been able to. You first saw this in computer vision and image-recognition networks. There was a program called AlexNet, which was able to recognize objects in images, like a cat or a dog, with far more accuracy than any previous program.

You then saw this in its most pronounced form in AlphaGo and its successor AlphaZero, where for the first time a computer was able not only to beat top players in Go and chess, but to do so by learning from games themselves, rather than being given brute instructions like "in the last four moves of a chess game, here's what you should do," the way earlier chess programs were. Those systems became the top Go and chess players, and then you had LLMs, which I think will be what we discuss today, where models were trained over large quantities of text and have grown to a point where they can actually give helpful information and are starting to show up in useful economic applications.

Jordan McGillis: On the point of AlphaGo and a computer that is learning, can you explain how it watches a game, as you say? How does that learning take place, exactly?

Nick Whitaker: As I understand it, there are data sets of tens of thousands, if not hundreds of thousands, of chess games. Basically, the computer is fed these games, the moves in each game and who won it, and the moves that result in a win get the thumbs up while the moves that result in a loss get the thumbs down. Through this process, it didn't solve chess in a deterministic way (chess can't be solved outright, given the complexity of the game), but instead acquired what we might describe, though that's probably not accurate in terms of how the machine actually works, as an intuition for chess: which move is most likely to result in a win, and which move would cause a loss?
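
To make that outcome-labeling idea concrete, here is a toy sketch in Python. It captures only the flavor of the approach described above, not AlphaGo's actual pipeline, which combined neural networks, self-play, and tree search, and the game records below are hypothetical.

```python
from collections import defaultdict

# Hypothetical game records: each lists (position, move) pairs and the winner.
games = [
    {"moves": [("start", "e4"), ("after_e4", "e5")], "winner": "white"},
    {"moves": [("start", "d4"), ("after_d4", "d5")], "winner": "black"},
]

value = defaultdict(float)  # running average outcome for each (position, move)
count = defaultdict(int)

for game in games:
    for i, (position, move) in enumerate(game["moves"]):
        mover = "white" if i % 2 == 0 else "black"
        reward = 1.0 if mover == game["winner"] else -1.0  # thumbs up / thumbs down
        count[(position, move)] += 1
        # Incremental mean: nudge the estimate toward the observed outcome.
        value[(position, move)] += (reward - value[(position, move)]) / count[(position, move)]

def best_move(position, legal_moves):
    """Pick the legal move with the highest learned value (the 'intuition')."""
    return max(legal_moves, key=lambda m: value[(position, m)])

print(best_move("start", ["e4", "d4"]))  # prints "e4", the opening that led to a win
```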

Jordan McGillis: Right now, in the summer of 2024, LLMs are getting all the publicity, but you're alluding of course to a separate type of AI. Can you distinguish between the different sorts of AIs that we should be aware of and how they can be applied in an economy?

Nick Whitaker: Yeah, to distinguish between LLMs, and what was the other type you're thinking of, just other types in general?

Jordan McGillis: Other types in general, but what would be the broad term for the sort of AI that is playing a game versus an LLM, or are those the same thing?

Nick Whitaker: Yeah, I think one distinction to make here is between narrow and general AIs. So for example, an AI like AlphaGo or AlphaZero that plays Go, chess, and a few other games is a relatively narrow AI, in the sense that all of its training has just been about how to win those games. Now, LLMs, in some sense, are about predicting text, being able to recreate text, but because the content of text is so general, they're able to engage in a wide variety of behaviors. So if you've used an application like ChatGPT, you might have asked it to write a poem, or to help you with a homework assignment, or to write a recipe.

I like to use it for cooking sometimes, yes, to do any number of things. But obviously text isn't the end of the story. In any kind of full sense of AI, we'd want systems that are able to see, to listen to audio, to go about in the world; these are sometimes called multimodal AI systems, and this is seen as one of the current areas of AI development. How close that will be to an LLM is, I think, still up for debate: whether we can use very similar techniques and feed in not just text but also images and end up with something much more capable, and a few of these already exist, of course, or whether it's going to be some newer architecture that's quite different from an LLM.
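
As a rough illustration of the "predicting text" objective mentioned above, here is a toy character-level model in Python. Real LLMs use deep neural networks trained over enormous corpora; the tiny made-up corpus and bigram counts below are stand-ins, meant only to show how a model trained purely to guess what comes next can then be sampled to produce new text.

```python
import random
from collections import defaultdict

# Tiny made-up training corpus; real models are trained on vast text collections.
corpus = "write a poem. write a recipe. help with homework. "

# Count how often each character follows each character.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(prev):
    """Sample the next character in proportion to how often it followed `prev`."""
    chars, weights = zip(*counts[prev].items())
    return random.choices(chars, weights=weights)[0]

# "Generate" 60 characters starting from 'w', one prediction at a time.
out = "w"
for _ in range(60):
    out += sample_next(out[-1])
print(out)
```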

Jordan McGillis: As someone who speaks, reads, and writes English, I'm using ChatGPT in this language. Is the capability available in a variety of languages, or is this right now primarily an English language phenomenon with something like a ChatGPT?

Nick Whitaker: Yeah, in a variety of languages, and one of the most surprising things is that these models are incredible translators. So without actually doing anything to the model, you're able to sort of fully correspond with it in a language like Spanish or French. There's a chance that it works less well in some East Asian languages, for example, but I can't speak to that.

Jordan McGillis: I presume that the totality of the internet is weighted toward English, because so many of our advances in these technologies have been in Anglophone countries. Am I correct in that presumption that there might be an English language bias in the internet at large, or am I completely wrong?

Nick Whitaker: You know, I think that's probably right to some extent, but we have seen that, for example, by giving a model additional data that pertains to computer science, extra lines of code, the model has actually gotten better at reasoning in other domains, simply because, in some sense, the logical structure of code is similar to the logical structure of, say, a philosophical argument. So the ability to generalize between languages and between domains is, I think, one of the things that makes me quite optimistic about future progress in the technology.

Jordan McGillis: And your report, of course, is about how policymakers should think about AI. So I want you to imagine a scenario: you have an audience with the President and his cabinet, everyone from the Secretary of Defense to the Secretary of Transportation to the US Trade Representative. What are the most important things you would want to convey to this group, and which policy areas would you put the greatest emphasis on?

Nick Whitaker: Yeah, there are two ideas that I'm really excited about right now within my report, and I think they go together really nicely: massively expanding energy production in the United States and increasing cybersecurity at the AI labs. So why these two things, and why do they go together? There's been a lot of reporting recently about AI companies considering investment abroad, particularly in MENA countries such as the UAE or Saudi Arabia. Now, I worry about this, because I think there's a chance that AI models get stronger quite quickly and, within the next few years, have substantial dual-use capabilities, meaning they can be used in both important civilian and military contexts, and perhaps even become a key weapons technology, in that they'd be able to make formerly unsophisticated actors sophisticated in their planning and execution of attacks.

So why not put them in the Middle East and North Africa? I think there are a lot of reasons. One is that some of the companies these labs are considering working with have ties both to the national governments in those regions and to the Chinese government. So I think, over a wide range of cases, it's an untenable security situation to be putting models that could become key to US national security in places where we can't control whether they're transferred to China or to other countries we're not exactly friendly with. But why are the labs doing this in the first place? I think they don't want to, but there's a problem: the energy demands of modern AI systems are increasing so rapidly, while the US has basically not been able to increase its energy production much in the last 10 years.

So what do we do about this? I think we need an energy compromise. I think we need to massively reform the permitting system in the US. I think we need to allow the federal government to license its lands to people who are able to expand energy production. I think this is very possible within the US, and I think we should approach the labs and say, "We're going to make it possible to do AI training in the US, but what we're going to ask of you is to take the national security implications seriously." That goes to the second thing. There's been widespread speculation that these labs are highly vulnerable both to cyber espionage and to personnel-based espionage, where foreign nationals at the company leak secrets.

There are very pronounced examples of both of these things happening. There was an engineer who worked at Google who was arrested for downloading key secrets about Google's AI technology, and, you know, this was not a James Bond criminal mastermind. He was just taking company documents, copying and pasting them into Apple Notes, exporting those notes as a PDF, sending it to his personal email, and sending it to friends back home in China. So I think this is really worrying, and I think there's sympathy from the labs too. And saying, "We're going to expand electricity production so this work can be done in the United States, and we're going to keep the American people safe while we're doing it," is something that I think a lot of people on all sides of the political spectrum can get behind.

Jordan McGillis: On the energy point, can you contextualize the energy needs that AI is presenting to us?

Nick Whitaker: Yeah, so roughly speaking, the power used in training a single state-of-the-art LLM is increasing by one order of magnitude every two years. So in 2022, about 10 megawatts of power was used to train a frontier LLM. And again, this is just the training, not the use of the model; this is training one model, like GPT-4. And these are just estimates, we don't know the exact numbers from the companies, but in 2024, it was something like 100 megawatts. By 2026, that means we would need something like one gigawatt, which is approximately the power generated by the Hoover Dam or a nuclear power plant.

And this isn't sheer speculation on my part. You know, Amazon recently bought a data center in Pennsylvania that's powered by a nuclear power plant, at I think about 0.6 gigawatts. But if you just keep extrapolating, that would be 10 gigawatts in 2028, the power required by a small state. In 2030, it would be 100 gigawatts, which is about 25 percent of US energy production. And I think even if you don't believe that AI will continue scaling this fast, it's clear, from so many parts of the economy and manufacturing in the United States, and simply from the cost of energy bills in places where I live, like New York, that more electricity would be a great thing for this country, and this is just a galvanizing force to help us get there, I think.
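
The extrapolation is straightforward to reproduce. Below is a minimal sketch assuming the round numbers cited above: roughly 10 MW for a frontier training run in 2022, growing tenfold every two years. These are rough public estimates, not company-reported figures.

```python
# Frontier-model training power, assuming ~10 MW in 2022 and a 10x increase
# every two years (the trend described above). Estimates only.
START_YEAR, START_MW = 2022, 10

for year in range(2022, 2031, 2):
    mw = START_MW * 10 ** ((year - START_YEAR) / 2)  # one order of magnitude per two years
    print(f"{year}: ~{mw:,.0f} MW (~{mw / 1000:g} GW)")

# Prints roughly: 2022 ~10 MW, 2024 ~100 MW, 2026 ~1 GW (about one large
# nuclear reactor), 2028 ~10 GW, 2030 ~100 GW.
```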

Jordan McGillis: Okay, so it's first the training energy demands that we're concerned about, and then the use of those models post-training also presents a potentially enormous energy strain, I assume, right?

Nick Whitaker: Yeah, of course. I mean, this is kind of a funny thing. We can infer the cost of usage from the price of the models. When GPT-4 first came out, it was, I think, on the order of 10x more expensive than it is today, which means the new algorithmic efficiencies that have been discovered at the labs haven't yet gone into training, or, I mean, they might be currently training GPT-5, but at the very least, they're using those algorithmic efficiencies to reduce the cost of inference on the GPT-4 family of systems, like GPT-4o. So for inference, power needs do come down gradually after a model is released, as continual discoveries make it run in a more energy-efficient way, but that means the next model cycle is going to bring all of the more intensive energy demands of training.

Jordan McGillis: And when you're talking about AI labs and the companies that would need to buy into this compromise you're describing, regarding energy usage, and security, and the geopolitical concerns, are you referring to big companies like Google? Are you referring to the more startup places like Anthropic, OpenAI? Are those all encompassed in the sorts of AI labs you're talking about?

Nick Whitaker: I think it's really those three, and to a lesser extent, Meta or a company like Mistral. Basically, and you hear this from people within the labs, the work they're doing has gotten so far ahead of academia, to the point that academic papers aren't really read at these places, and of many startups, and I don't mean Anthropic and OpenAI in this context, but newly formed YC startups that aren't able to make the capital expenditures to invest in these large training runs, that it's really just these three to five labs that are leading on the core algorithmic technology behind AI systems. So yeah, those would be the places that would need to join in on this compromise.

Jordan McGillis: From the consumer standpoint, someone such as myself who will use an OpenAI product, ChatGPT, to help me with research or maybe edit a portion of an article or something like that, what are the leading options I should be considering other than the most famous, ChatGPT?

Nick Whitaker: Yeah, I think we're in a place where all the leading systems, from OpenAI, from Anthropic, and from Google, are quite good. I've recently been very impressed by the new Anthropic model, which is called Claude 3.5 Sonnet. It's amazingly helpful. Some things are pretty remarkable: as someone who's not highly technical, you can turn raw text into programming commands very quickly, and you can actually program somewhat well using it.

But I think the offerings are currently relatively similar. And I don't mean they all work exactly as well for every task, but I think the leading models from OpenAI, Anthropic, and Google are within sort of 25 percent as good as one another. In part, this is because most research done by these labs was published, and often patented, up through 2022. I think in the next two years, you're going to see a lot more difference in product offerings and capabilities, as some labs discover key new algorithms and other labs fall further behind because of this. So I think in the next generation or two, the differences between models could become much more pronounced.

Jordan McGillis: You touched on something that I've been thinking about as these technologies develop, which is the economic opportunity that learning to code presented in the past, and may present to a lesser extent in the future, as AIs become so competent at writing code themselves. How do you see the future of programming as a career, especially for people who aren't at the top echelon of the industry?

Nick Whitaker: Yeah, I think it's really hard to say. I think if progress continues rapidly, it won't just be programmers who potentially have their jobs threatened. In a lot of ways, one of the interesting things about this technology is that people like skilled carpenters might be in the best shape of any of us. But for coding specifically, I think, and I hope, that within the next few years everyday users will be interacting with AIs that act much more like agents, such that they can say, "Hey, I need a program to do this, it would help my life," and within a few hours the AI will come back to them with the program. And I know that sounds quite fanciful, but I really think, based on conversations I've had and based on tracking the trajectory of these products, that we might see that within one to two years.

Jordan McGillis: Can you explain this running discourse between the AI safety proponents and the effective accelerationists?

Nick Whitaker: Yeah, of course. In some ways, I think this is a fake fight, more a fight between people of different dispositions than between people who really disagree on substance, but I do think it touches on some interesting issues. I think a lot of people in the effective altruism community, or the broader rationalist community, made interesting points about AI early on, back maybe in 2010. They said, "This is a technology that's going to be quite important, it's going to rapidly change the way the world works," and I think they were paying attention to it when a lot of other people weren't. The problem is that these people fixated on one key problem or potential hazard of the technology, which is the unintended consequences of what an AI could do.

And I do think those are real. In any automated system you're deploying, whether it's a heat-seeking missile or anything else, the possibility that the technology goes awry in ways you didn't predict is serious and should be taken seriously. But I think there's been a myopia in those communities around just the chance that the AI comes to attack us and we lose control of it. What they haven't seen is the broader geopolitical implications: how will AI challenge the balance of power in the world? They haven't thought about the economic implications, how AI will change employment and labor markets. And I think the effective accelerationist types thought this was all quite funny and really got into trolling these people. But what's funny is, you-

Jordan McGillis: And the name effective accelerationism is itself a play on effective altruism, which is-

Nick Whitaker: Yes, of course.

Jordan McGillis: ... which stands the other side of the debate, for listeners.

Nick Whitaker: Exactly, but a lot of the effective accelerationists, if you really ask them, "Do you think we could be living in a world where AIs are doing most of the cognitive labor, and being deployed in weapons systems, and steering every boat and piloting aircraft?" are often quite skeptical of this, and I don't think for good reasons. I think they believe AI will be something like an internet-sized event, a new platform upon which a new layer of software is created. I think they could be right about this, but I think they're a bit too closed to the possibility that AI is in fact quite a transformative technology, unlike anything we've seen before, or at least in the last few hundred years.

Jordan McGillis: Do these two angles on AI correspond or not correspond with any foreign policy views on the geopolitics with China you were describing earlier?

Nick Whitaker: I think they both don't, and I think both groups are sort of too inattentive to the foreign policy implications here. And I think one of the things that I wanted to do with my report is to sort of carve out a third way more sort of grounded in the politics of today, the systems of today, and what the technology actually looks like in 2024 and will look like in the near future. That's why I sort of emphasize the need for us to build in the United States, the need for us to protect the secrets contained within these labs, but also, in any discussion of policy, to be really attentive to how AI is reshaping the global landscape in terms of the balance of power.

Jordan McGillis: Something you specifically call for in your report is restricting the flow of models to adversarial countries. Can you explain how that's done and what some of the trade-offs are of making that kind of choice?

Nick Whitaker: Of models or of chips?

Jordan McGillis: Of models, stick with that for now.

Nick Whitaker: Yeah, so I don't quite advocate for that. What I advocate is that the Bureau of Industry and Security, which regulates the export of controlled items in the US, have the power, in principle, to restrict the export of models abroad. I don't think it should use that power today, even if it had it, I certainly don't. But I think it's important to have this power, so that in the future, if a model emerges that truly scares us in its ability to empower military actors, to empower non-state actors, to develop what's called CBRN, that is, chemical, biological, radiological, and nuclear weapons, we can act. I think that could be a huge deal.

And I think in that world, even people who are worried about the current efforts to ban open source, efforts I'm highly opposed to, would say that at the point where models can empower people this much in military capabilities, we've entered a different paradigm, and we can't just be putting that kind of model out on the internet. But I certainly don't think that's a point we've come to, and I just think it's an option that's important to have on the table in case things develop quickly, and obviously, the government will, in general, be slow to wake up to these things. And I think having these kinds of preparedness measures in place, and policy levers we can pull if things get crazy, is quite important in terms of what we do now.

Jordan McGillis: You used the term open source in your answer there. Can you return to that and explain what the debate is there?

Nick Whitaker: Yeah, so there's been a group of people who think that the current models, which can give fairly rudimentary answers about how one might construct a biological weapon, are too dangerous for the public to have. Now, I think they make some important points. The first is that when a model goes through a post-training process called reinforcement learning, where humans grade the model's outputs, they'll ask it, "Tell me how to make a chemical or biological weapon," and give those answers a thumbs down, to discourage the model from producing outputs on potentially hazardous subjects.

If a model is open source, you can remove that training fairly quickly and cheaply. I believe in one paper they did it for about $200, on Meta's Llama models. Now, they infer from this that we should ban open-source models entirely, either through a strict liability regime or through other legal mechanisms. I simply think the evidence doesn't add up. If you look at the sort of information you're able to get on Google about how to create a chemical or biological weapon, it's of a similar level of helpfulness to what a model will give you even after those protocols are removed.

And we've lived in a world with the internet for the last 20 years, and it doesn't seem to be a major problem. I think we should have some degree of trust in our people and trust in our law enforcement, that we're able to operate in a world with some level of danger. If we didn't believe that, we would have to prohibit access to information to a completely intolerable extent. Again, I think the question is whether, at some point down the road, there's a paradigm shift such that AI becomes something like a weapon of mass destruction, or can be used as one. And I think in that world, even the most ardent open-source advocates would not be calling for these models to be freely and openly distributed, just as the most ardent proponents of gun rights don't ask for nuclear bombs to be freely and openly distributed.

Jordan McGillis: Okay, I see. There's something that I'm very concerned about with AI, short of the cataclysmic weapons of mass destruction idea, and that is the risk of deepfakes and the resulting political destabilization they could cause. How do you think we'll be able to verify that what we're seeing on screens is real in the coming years?

Nick Whitaker: So a number of labs are looking into techniques to watermark images and text written by AIs. I think these efforts have some merit, but I do think, in almost any case, and I think there may even be proof of this, that these watermarks can be removed. So in the case of a middle school student cheating on their homework, there are going to be tools a teacher could use to catch them. Now, I think students should probably be able to use AI to do their homework, because they're going to be living in a world where they use AI to do all sorts of other things, but that's beside the point.

And then there's the second case, where you have something like the presidential campaign. You know, I'm not terrified of the use of deepfakes in this context. I think there have been a lot of great moments, a lot of great memes, in these campaigns so far through the use of deepfakes, when we all knew they were deepfakes. But I do think the use of deepfakes, whether by a super PAC or by a campaign, should be disclosed, and that's something I argue for in the report.

Now, there's the last case, of hostile actors using deepfakes to spread misinformation. I don't think there is a good solution here. If they're able to effectively remove watermarks, and they want to do it, and they don't care about violating US law, I think it's going to be really difficult to police that information. And, simply put, we're going to live in a world where we trust a lot less of what we see in video and hear in audio, in the same way that we learned not to trust everything we read on the internet, or at least some of us have learned that.
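
For readers curious how a text watermark can work at all, one published family of schemes (for example, Kirchenbauer et al., 2023) biases generation toward a pseudorandomly chosen "green list" of tokens and then checks how green a passage looks. The sketch below shows only the detection side, with a toy word-level tokenizer and a trivial hash; it illustrates the general idea, not any particular lab's system, and paraphrasing the text is exactly what washes the signal out.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign roughly half of all tokens to the 'green list',
    keyed on the previous token, so the list changes from position to position."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of tokens that land on the green list for their context."""
    tokens = text.lower().split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

# Ordinary text scores near 0.5; text generated with a green-list bias scores
# well above that. Editing or paraphrasing pushes the score back toward 0.5,
# which is why removal is considered easy.
print(f"green fraction: {green_fraction('the quick brown fox jumps over the lazy dog'):.2f}")
```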

Jordan McGillis: Undoubtedly, there are some downsides to AI, as you've discussed here, but there is so much upside. I want to close with this final question: what are you most optimistic about with AI?

Nick Whitaker: Look, I mean, I think a moment we've all had in life is wishing we had a personal assistant. Most of us can't afford one, but think of someone who could book a car reservation for you, or manage your schedule, or handle any number of little annoying life services; typically, that person costs $70,000 a year. And I'm going to leave you with a prediction that I really believe: within one to two years, you're going to see AI systems that aren't full agents, in the sense that they can operate out in the world, but will at least be able to operate on your computer screen, using your applications, with your internet browser, and will be able to act something like a personal assistant, in the ways that a smart 25-year-old being paid $70,000 a year to be a personal assistant can today.

Jordan McGillis: Well, God willing, we'll have that soon.

Nick Whitaker: I hope so.

Jordan McGillis: All right, we've been discussing Nick's report, A Playbook for AI Policy. Nick, thanks for joining us. Where can our listeners follow you on the internet?

Nick Whitaker: Yeah, I think just following me on Twitter, I'm Nick Whitaker, @ns_whit. I'd love to update with more of my work there.

Jordan McGillis: Fantastic. Thanks for joining us on the show, and you've been listening to 10 Blocks.

