Without empowering universities and outcompeting authoritarian models abroad, we’re leaving key runners at the starting line.
This post is part of my Mind the Gap series, which examines critical disconnects between strategic ambitions and the tools we are giving ourselves to achieve them.
The Trump Administration’s newly unveiled AI Action Plan represents a significant step forward in America’s effort to lead in the global race for artificial intelligence. It takes both defensive and offensive measures—tightening export controls to slow authoritarian advances, strengthening alliances to shape global AI norms, and investing in the infrastructure—chips, fabs, and data centers—needed to compete at scale.
Several of these priorities closely track the recommendations I offered when the Administration sought input in shaping this action plan:
- Export Controls – Preserve core safeguards against authoritarian military AI use while avoiding overreach that alienates allies.
- Trusted Alliances – Build coalitions to share AI tools, coordinate governance frameworks, and lead standards development.
- Semiconductor Resilience – Expand domestic manufacturing and secure critical minerals to prevent chokepoints.
- Infrastructure Build-Out – Scale national compute capacity and modernize the grid to power it reliably.
- Data as a Strategic Asset – Builds AI-ready datasets and expands secure access to federal data while protecting privacy.
- Talent Development – Expands AI literacy, workforce training, and skilled-trades pipelines, but does not address the urgent need to grow PhD-level and university-based AI talent.
These are significant, forward-leaning moves that strengthen America’s hand in strategic competition. The Administration’s plan focuses on removing barriers that could slow our drive to win the AI race. Yet it incorporates only part of my recommended safeguards to keep AI under human control and mitigate societal risks. And while it eases constraints, the plan leaves two critical accelerators underdeveloped: empowering U.S. universities with the compute resources they need to lead, and outcompeting authoritarian AI models in the global marketplace.
Universities Built America’s AI Edge
The U.S. would not be an AI leader today without the foundational contributions of its research universities. Academic labs have been the birthplace of many of the breakthroughs that underpin current AI systems:
- The transformer architecture—the basis for modern large language models like GPT—emerged from Google Brain, with contributors trained at the University of Toronto, and built on deep learning work first incubated in university settings.
- ImageNet, which catalyzed the deep learning revolution, was created at Princeton and Stanford.
- BERT, a major leap in natural language processing, built directly on open research traditions fostered at top universities.
- Advances in reinforcement learning, now powering robotics and complex decision-making systems, have been led by teams at leading universities like UC Berkeley, Carnegie Mellon, and MIT.
Beyond individual breakthroughs, universities provide something no corporate lab can match:
- An open, collaborative ecosystem that trains future AI leaders—including ethicists, policymakers, and cross-disciplinary innovators.
- Long-horizon research that pursues bold, high-risk ideas unlikely to attract short-term corporate funding.
- Market challengers to today’s dominant players, keeping the AI ecosystem competitive and diverse.
The Missing Accelerator: University Compute Power
Despite their central role, universities face mounting barriers to conducting cutting-edge AI research—chief among them, access to top-tier compute infrastructure. Training state-of-the-art models can require millions of dollars in compute time, easily absorbed by big tech companies but prohibitive for most academic labs.
When I was president of a flagship research university, I saw firsthand how difficult it is for academic labs to obtain the compute power necessary to pursue groundbreaking research.
Public–private partnerships and industry consortia help, but if they remain the only source of academic compute, three dangers emerge:
- Access risk – Partnerships can dissolve with market shifts.
- Research capture – Corporate priorities may overshadow public-interest research.
- Openness erosion – Proprietary agreements can restrict transparency.
A National Academic Compute Fund
The U.S. should launch a National Academic Compute Fund to:
- Provide secure, federated access to universities nationwide.
- Prioritize open science, model transparency, and AI safety research.
- Link U.S. research institutions with allied networks (G7, Quad, etc.) to strengthen shared capacity.
- Support training and upskilling for the next generation of AI leaders.
In the 21st century, academic compute is as strategic as aircraft carriers or semiconductor fabs—and it deserves the same sustained public investment.
The Global Leadership Gap: A Digital Marshall Plan for AI
The other major omission in the AI Action Plan is a strategy to outcompete authoritarian AI models in the fastest-growing markets—emerging and developing economies.
AI is more than a lever of economic and military power. It is a platform for shaping the future—driving prosperity, building stability, and reinforcing democratic resilience. Yet China’s Digital Silk Road is rapidly wiring the developing world with AI-powered surveillance, censorship, and control—embedding authoritarian models into the core of partner nations’ governance systems. Once those standards are entrenched, reversing them will be far harder, if not impossible.
The U.S. cannot afford to sit back while Beijing builds a global sphere of influence one fiber-optic cable, data center, and AI platform at a time. We need the equivalent of a Digital Marshall Plan—a bold, sustained effort to provide the democratic alternative:
- Finance Trusted Infrastructure – Use the DFC, Ex-Im Bank, and public–private partnerships to help partner nations build secure AI-ready infrastructure.
- Build Global Talent Pipelines – Make U.S. universities and companies the premier destinations for AI training, certification, and research collaboration.
- Deploy AI for Global Good – Lead in applying AI to healthcare, agriculture, climate resilience, and logistics—delivering tangible benefits authoritarian models can’t match.
- Protect Digital Freedoms – Offer open, democratic AI tools that safeguard privacy, transparency, and free expression.
- Celebrate Cultural and Linguistic Diversity – Ensure AI supports, rather than erases, local languages and cultural heritage, building trust and relevance.
This is not charity—it is strategic competition. In the 20th century, America’s Marshall Plan rebuilt free nations after war. In the 21st, a Digital Marshall Plan can ensure the AI-powered world is one where freedom, not authoritarian control, defines the future.
The Strategic Imperative
The Trump plan moves the ball forward in important ways—both shoring up defenses and making offensive plays. But without equipping universities to lead and without mounting a global campaign to outcompete authoritarian AI models, America risks falling short.
Winning the AI race isn’t just about faster chips—it’s about who gets to use them, and how their benefits are shared. Democracies cannot afford to leave their universities on the sidelines or cede the developing world to authoritarian AI.
The U.S. must act with urgency—aligning innovation, diplomacy, and development—to ensure AI secures both American leadership and a future where technology advances freedom and prosperity worldwide.
Author
Mark Kennedy
WISC Director, DRI Senior Fellow