As widely reported, Google recently paused its Gemini chatbot from generating images of people after the bot refused to depict whites. Gemini users saw results including Native American founding fathers, black and Asian Nazis, and black female Roman soldiers. Those and other absurd results went viral, prompting Google to apologize.
According to a paper Google published, the fine-tuning techniques behind the chatbot were intended to increase diversity. Those techniques, which the company uses to change the output of machine-learning models like Gemini, led the chatbot to avoid depicting whites and men in response to prompts that would stereotypically yield such results.
Google dedicated a section of the paper to how Gemini would avoid “representational harms”—that is, producing image results that reinforced stereotypes. The company detailed how it sought to find “new ways to measure bias and stereotyping, going beyond binary gender and common stereotypes.” These “bias” measurements apparently didn’t consider whether a given algorithm would make Gemini’s depictions historically accurate. Instead, they gauged how much an algorithm would reduce the likelihood that the chatbot would return stereotypical images of whites and men.
These metrics were responsible for Gemini’s ahistorical results. To Google’s algorithm, it didn’t matter if, as a matter of history, the Nazis or the Founding Fathers were 100 percent white. Google’s progressive-influenced code would flag such results as “stereotypical” and adjust them to be more “diverse.”
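To make the mechanism concrete, here is a minimal, hypothetical sketch (in Python) of how a prompt-level “diversification” step of this kind could work. The trigger phrases, the appended instruction, and the function names are illustrative assumptions, not Google’s actual implementation.

```python
# Hypothetical illustration only -- not Google's actual code.
# A crude prompt-level "diversification" filter: prompts judged likely to
# yield stereotypical demographics are rewritten with a diversity
# instruction, with no check for historical accuracy.

# Assumed trigger phrases that a bias metric might flag as "stereotypical."
STEREOTYPE_TRIGGERS = ("founding fathers", "roman soldier", "nazi soldier")

# Assumed instruction appended to flagged prompts.
DIVERSITY_SUFFIX = " Show a diverse range of genders and ethnicities."

def is_flagged(prompt: str) -> bool:
    """Return True if the prompt contains a phrase assumed to yield stereotypical results."""
    lowered = prompt.lower()
    return any(trigger in lowered for trigger in STEREOTYPE_TRIGGERS)

def diversify(prompt: str) -> str:
    """Append the diversity instruction to flagged prompts; leave others unchanged."""
    return prompt + DIVERSITY_SUFFIX if is_flagged(prompt) else prompt

if __name__ == "__main__":
    # A historically specific prompt is rewritten just like any other.
    print(diversify("Generate an image of the Founding Fathers signing the Constitution."))
```

The point of the sketch is the missing step: nothing in such a filter asks whether the requested subject is a specific historical group before overriding the prompt.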
Google insists that it will “do better,” but answers to its left-wing AI product may already be on the horizon. According to a report by data scientist David Rozado, machine-learning models such as Anthropic’s Claude, xAI’s Grok, and Zephyr 7B Beta are almost politically neutral. That developers are creating more centrist alternatives makes sense, given the incentives. A centrist model, after all, will align with more users’ beliefs (not to mention with objective reality) than will one built by Google’s “Responsible AI” team.
As long as AI remains relatively free of government interference and centralization, those who build machine-learning models will have an incentive to offer a less ideological product. Provided those incentives remain intact, engineers will be able to produce large language models, and AI systems of all kinds, that reflect the majority’s views. As Yann LeCun, one of the so-called godfathers of AI, put it:
We cannot afford those systems to come from a handful of companies on the West coast of the US. Those systems will constitute the repository of all human knowledge. And we cannot have that be controlled by a small number of people, right? It has to be diverse, for the same reason the press has to be diverse.
But government-industry collusion imperils this pluralism. As Google noted in its paper, the company consulted groups in areas “outlined within the White House Commitments, the U.S. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, and the Bletchley Declaration” in building its AI program. Google’s decision to conform Gemini to a far-left viewpoint was partially tied, in other words, to United States and United Kingdom policy.
To thwart such coordination, the House of Representatives should use its oversight powers to bring these efforts to public attention. The resulting litigation and public backlash could discourage similar collusion in the future. Transparency efforts, and a broader reform agenda, will enable developers to create better (and more accurate) forms of artificial intelligence.