Matt Mittelsteadt
At the recent AI Action Summit, Vice President JD Vance delivered a barnburner debut on the international policy stage. In his fifteen-minute address, the vice president distilled Trump-era AI policy: optimism about AI's potential, a dismissal of AI safety regulation, a muscular resistance to European rules, pro-worker policy, and the explicit sense that AI will be a potent tool to deter American adversaries.
The stand against European regulators and AI safety regulations made the speech an instant hit in free-market circles. Vice President Vance is right to call out such overregulation and its problematic consequences. However, the speech's overall nationalist tone and its many overlooked dog whistles for market intervention should raise concerns. For example, Vance explicitly promises to center labor unions in AI policy decisions—inviting the possibility of automation-halting red tape—and claims "the Trump administration will ensure that the most powerful AI systems are built in the US with American-designed and manufactured chips"—a gesture at large-scale tech tariffs and industrial policy. A pure hands-off approach, this is not.
Vance’s most concerning assertion was that:
“[T]he Trump administration will ensure that AI systems developed in America are free from ideological bias and never restrict our citizens’ right to free speech.”
This statement should set off alarm bells. In recent years, there have been countless Republican-led legislative attempts to steer digital content in the name of combating "conservative censorship." While the Supreme Court has pushed back on these bills and affirmed digital platforms' right to editorial discretion, these impulses live on and are now being applied to artificial intelligence.
Despite Vance's free speech overtures, a promise "to ensure" AI models are free from ideology implies intervention. If Vance wants to stamp out what he views as bias, he would need either an explicit AI content regulatory regime or a significant pressure campaign to compel Silicon Valley to follow his party line.
While some might claim such market intervention would clash with the primary thesis of the speech, worry over the administration's interventionist impulses is indeed well founded. On February 18, President Trump announced his administration would proceed with semiconductor tariffs—a truly earthshaking, explicit tech market intervention. This move on tariffs demonstrates that while AI safety intervention specifically might be off the table, intervention in general is still a go.
The Consequences of AI Ideological Purity Tests
Even if one has sympathy for Vance’s desire to stamp out AI bias, any direct rules or efforts to influence Silicon Valley design would carry powerful consequences.
1. Freedom
The most immediate consequence would be the impact on freedom. By restricting what AI models can generate, the administration would limit individuals’ ability to build what they want and interact with AI on their own terms.
Across ideological divides, the AI marketplace is awash with intentionally biased systems. In Lucerne, Switzerland, an AI-powered Jesus, naturally biased towards the Christian faith, was spun up to chat with curious parishioners. Meanwhile, on CharacterAI, one can find a Karl Marx bot who assuredly has a lot to say in support of communism. One can disagree with either of these bots, but people have a right to make them.
Such intentional biases also exist in leading models. For instance, xAI's Grok 3 expresses deliberate bias against legacy media while pointing users to x.com for news. OpenAI's ChatGPT is also biased, albeit more subtly, with ideological choices outlined in its public "Model Spec." Despite statements that OpenAI's technology will "assume an objective point of view," choices such as avoiding racial stereotypes and confidently condemning genocide as an evil are widely accepted, yet they are still ideological choices.
It is a fact that all AI models will be biased in some way and that true “objectivity” cannot exist. The best way to prevent an AI information monoculture and ensure that AI systems trend towards good information is not through strict government standards but through a strong diversity of market choices. Because organizations like OpenAI and xAI are free to compete on ideological lines, consumers now enjoy the freedom to choose the model that fits their views and even compare model responses to uncover biases.
2. Chilling Innovation
Demanding ideological orthodoxy would also chill innovation. Today, creative risks and fast model releases color the AI market and have enabled rapid discovery, learning, and course correction. Under an "objectivity" mandate, releases would slow, as any model not vetted for political correctness would be a liability.
This would sacrifice the open market's powerful learning opportunities. In 2016, Microsoft released Tay, an AI chatbot that quickly descended into generating offensive content because Microsoft had allowed it to learn dynamically from interactions with Twitter users. Tay was a PR debacle, but it was also a learning experience. Within 48 hours, Tay was pulled offline, and engineers quickly internalized the stark (and now obvious) lesson: don't let Twitter users dictate your AI's design. This was both innovation and the market at work. Since then, safeguards and best practices have developed to avoid a second Tay.
While mistakes as glaring as Tay are hopefully behind us, future missteps will happen. Only by enabling continuous iteration can we ensure such mistakes become increasingly rare and increasingly low stakes. Under government pressure, however, such improvement would take a back seat to the concern that a release might step on someone's toes.
3. Market Access and Global Competitiveness
Perhaps the most serious long-term risk of government-imposed AI ideology is its impact on international perceptions. If overt political influence poisons American AI models, they risk being seen not as tools but as instruments of US government propaganda.
We already see this phenomenon with China's DeepSeek R1, an AI model whose cloud release is rife with predictable government restrictions on topics like Tiananmen Square. While DeepSeek's release could have been a pure story of technical achievement, Xi Jinping's political taint saddled the company with immediate global skepticism and even national bans.
DeepSeek should serve as a stark warning: liberalism has market value. If American technology reflects the political preferences of its ruling party, global trust in US AI innovation will erode and markets will be lost.
We are already seeing early signs of this concern. Recently in the Financial Times, a European contributor wrote that entanglements between US tech firms and the Trump administration pose "a direct threat to European sovereignty and value." While I'd like to wave this off as par for the course in longstanding EU-US tech policy disagreements, I worry it reflects a growing global distrust of politicized American technology, a distrust that could easily spread beyond Europe to markets with longstanding anti-imperialist traditions.
For continued American innovation and success, foreign markets are essential. If our AI models become a tool of propaganda, no one will want to use them, and America will fall behind.
The Continued Need for Non-Intervention
A heavily regulatory approach to AI policy under Trump is not inevitable, yet it is concerningly possible given the anti-tech and pro-industrial-policy positions the administration has already pushed.
Just because the administration has criticized European AI regulations and rescinded President Biden's AI executive order does not mean it won't pursue problematic regulations of its own for these important technologies. Four years is a long time, and AI policy is still in its formative stages; regulatory intervention could change the technology's trajectory or eliminate beneficial uses along with the harms.
For those who value freedom, innovation, and global competitiveness, the message is clear: stay vigilant. The regulatory trajectory of AI in the US is far from settled, and the consequences could be profound.
