Artificial intelligence may be the most consequential technology of this century, and decisions made in Washington, D.C., will have enormous effects on America’s geopolitical standing.
Thus far, however, White House policies have stifled AI innovation through overregulation. President Biden’s executive order on AI, for example, included several cumbersome DEI-style mandates, needlessly imposing a controversial ideology on the industry. These included prioritizing diversity and inclusion in AI roles in government and warning against threats of “bias”—never mind that AI systems are biased primarily against conservative views. Some Democrats, such as New York senator Chuck Schumer, have called for comprehensive AI legislation that would overburden the nascent industry and shut out new entrants. The European Union has already experimented with a heavy-handed regulatory approach through its AI Act and Digital Markets Act—and as a result, Apple’s new Apple Intelligence features and other AI applications may never roll out in Europe.
Yet while much regulation may be misguided, the creation and proliferation of technology carries important national security implications. Republican lawmakers recently asked the Biden administration to assess Microsoft’s $1.5 billion deal with the Emirati firm G42, the latest in a trend of AI companies weighing major infrastructure investments abroad. The lawmakers warned that the deal could result in the transfer of highly sensitive U.S. technology to China. To be sure, the U.S. must guard against deals with foreign companies that risk letting sensitive technologies fall into the hands of America’s adversaries.
To ensure its economic well-being and security, the U.S. must lead the world in AI. America can do this through its vibrant open-source ecosystem, led by U.S. companies and models that embody American values, but also through more careful handling of the most advanced AI technology. Highly general, powerful AI systems will quickly become dual-use technologies, with both civilian and military applications. It is imperative that the U.S. understand the military capabilities of AI models and control their proliferation when necessary.
Microsoft’s proposed G42 deal raises one of the most important national security issues: AI systems must be trained in the U.S. to ensure that the secrets underlying the technology remain secure. Companies such as Microsoft are tempted to work with Middle Eastern partners like G42 by promises of cheap power and speedy data-center construction, but data centers in these countries will not be secure. The same advantages could be had in the U.S., but that would require rapid energy deregulation and permitting exemptions for data centers, not additional regulation of the industry. Building critical AI infrastructure within U.S. borders would be a boon for our economy and security.
It’s not just data centers that must be secure but also the labs themselves. An essay by my colleague Leopold Aschenbrenner, a former OpenAI employee, made waves across the industry with its warnings of lax cybersecurity practices at AI labs. These labs have already fallen victim to corporate espionage: Linwei Ding, a Chinese national, was arrested for stealing, in the words of a prosecutor, the “building blocks of Google’s advanced supercomputing data centers.” In its own cybersecurity assessment, Google DeepMind graded itself as sub-SL3, referring to the security levels defined in a RAND report on AI cybersecurity. Per that report, it would take SL3-level security to stop cybercriminals, SL4 to stop North Korea, and SL5 to stop China. The status quo, then, is severely insufficient—and Google is widely believed to have the best security in the industry. The government must implement strict cybersecurity standards for labs building the most powerful AI models and help them ensure that their employees are not tied to foreign governments.
The Bureau of Industry and Security (BIS), which implemented export controls on advanced AI chips, has reckoned somewhat with AI’s role in global competition, but its efforts have not gone far enough. A report by House Foreign Affairs Committee chairman Michael McCaul notes that BIS “enabled a virtually unrestricted flow of American technology to CCP-controlled companies, facilitating China’s rapid rise as a technological, economic, and military superpower.” To retain America’s strategic lead in AI, the agency must aggressively prohibit the export of dual-use technology to Chinese firms. BIS also needs additional powers to fulfill its enforcement duties—conducting security checks on non-U.S. persons working at AI labs, regulating cloud computing, and blocking the export of model weights—powers that the ENFORCE Act would provide.
What the government really needs is a view into how AI systems are evolving and progressing. The America First Policy Institute, for example, is developing strategies for rapidly procuring and deploying AI systems for the nation’s defense. By procuring AI systems, the defense and intelligence communities could gain a hands-on understanding of their capabilities. Through the information-gathering authorities of the Defense Production Act of 1950, U.S. officials could also be briefed on new AI systems and cutting-edge developments, so that Washington could come to understand AI progress as well as San Francisco does.
Indeed, the U.S. government should focus primarily on understanding AI, not regulating it. Evaluations like those conducted at the AI Safety Institute and other government agencies can help officials stay abreast of developments at the frontier of this technology. These evaluations will help the government see how AI systems could empower hostile state and nonstate actors—but also how we can deploy such systems to enhance our national security.