From Reading Men to Predictive Machines: AI, Tripolar Nuclear Rivalry, and the Future of Deterrence

Mark Kennedy

December 10, 2025

Speech as Delivered to the Strategic Deterrent Coalition - Falls Church, Virginia

[Photo: A B-2 Spirit stealth bomber, the Spirit of Oklahoma, parked on an airbase tarmac.]

It is an honor to address the Strategic Deterrent Coalition. Your commitment to informed public dialogue strengthens the most consequential mission in national security.

I. Why WISC Exists

Let me begin by anchoring today’s discussion in the mission of the Wahba Initiative for Strategic Competition, now at New York University.

At WISC, everything we do comes back to one core purpose—simple to say, enormously complex to achieve:

Ensure that Xi Jinping wakes up every morning, looks across the Pacific, and decides that today is not the day to start a conflict with the United States.

Not because he trusts us.
Not because he admires us.
But because when he runs the calculation—however he runs it—the costs look too high, the risks too great, and the chances of success too low.

Deterrence is, at its core, a decision not taken—a war that never begins.

The military instinctively thinks of the nuclear triad when it thinks of deterrence.

But when you consider the factors Xi is weighing, deterrence is nuclear and more.

Deterrence today in great part rests on our technological edge.

A hard truth is that the United States is on a trajectory that risks surrendering the technological edge that underwrites every other pillar of deterrence.

America does not appear to recognize that we are in a race and need to start running in all the theaters of competition.

In the first theater – inventing the best technologies – all of today’s tech giants were born from federally funded university research, yet we are cutting the very research budgets that would create tomorrow’s champions. The CEOs of NVIDIA, Microsoft, and Google, and much of their talent, are foreign-born, yet we are becoming less inviting to the world’s best talent.

In the second theater – deploying new discoveries – we must recognize that China does not need to invent better AI. Its advantage is scale. By graduating twice as many electrical engineering PhDs, it can adopt technologies faster. It deploys robots at twelve times the U.S. rate after adjusting for income.

China manufactures nearly twice what we do. Applying across-the-board tariffs on everyone and everything is a two-edged sword. It may lead to marginally more onshoring, but it keeps America from anchoring a competing manufacturing ecosystem.

You can’t compete globally when tariffs make your components cost more. It took a year to recognize that tariffs on coffee and bananas only cost consumers more. Yet we continue to put tariffs on the equivalents of coffee and bananas in every industry.

The third theater is installing the infrastructure through which digital flows occur globally. America may have gotten its closest allies not to use Huawei, but Huawei has installed 70% of the telecommunications backbone in Africa and holds about a 50% market share outside the U.S. alliance. The Chinese government’s logistics software is spread around the world, its cranes are in most ports, and it holds ownership interests in over a hundred ports outside China. It literally gives away scanners.

Beijing understands that whoever wires the world’s infrastructure shapes how the world communicates, trades, and aligns. America and its allies are fast becoming small islands surrounded by 80% of the world’s land mass and population wired by China. The administration’s focus on promoting AI stacks is an important but small step; much more needs to be done.

The fourth theater is retaining one’s digital sovereignty. Xi is committed to not being dependent on U.S. technology, turning away NVIDIA chips, while America remains dangerously dependent on Chinese components for all things digital, as well as for pharmaceuticals, rare earths, critical minerals, batteries, solar, and more.

This takes on more urgency as physical AI advances in the form of driverless cars, drone deliveries, and the ability to control more things from your smartphone. What you can control from your phone can be controlled by others.

Today as Xi considers whether he should act, he sees:

  • supply chains he can coerce,
  • ports and sea lanes he can obstruct,
  • industrial capacity he can outlast,
  • a growing number of devices that can be remotely controlled, and
  • alliances America itself is fracturing.

WISC is focused on all these vulnerabilities that need to be addressed if we are truly to deter aggression. But let us for the moment focus on nuclear deterrence.

II. What Deterrence Used to Be: Knowing Who Was in the Room

During the Cold War, deterrence had a strange, perilous simplicity.

We faced one nuclear peer: the Soviet Union.

Success depended on understanding a small group of human decision-makers. Kremlinologists studied:

  • whom Brezhnev trusted,
  • what Andropov feared,
  • who influenced Chernenko,
  • how Soviet leaders perceived risk.

We were reading people—their psychology, incentives, and red lines.

And we had guardrails:

  • a functioning hotline,
  • structured arms-control talks,
  • predictable signaling,
  • mutual recognition of catastrophic consequences.

And still we came far too close.

On September 26, 1983, a Soviet satellite misread sunlight on clouds as five U.S. missiles. Lieutenant Colonel Stanislav Petrov sat at the console. Protocol required him to report the alert—potentially triggering a retaliatory strike.

He paused.
He trusted his judgment.
He overrode the machine.

One human judgment separated a false alarm from nuclear war.

That was the Cold War—dangerous, unstable—but at least we knew who was in the room.

III. The New Reality: Three Nuclear Giants, One Unpredictable North Korea… and Machines We Cannot See

Today’s nuclear landscape is more complex than at any time in history.

  A. A Tripolar Nuclear System

We now face three major nuclear powers with expanding arsenals and diverging doctrines:

  • the United States,
  • Russia,
  • China.

Add North Korea—an unpredictable nuclear state with brittle command systems—and we are living through humanity’s first tripolar nuclear rivalry, plus a wild card.

  B. The Erosion of U.S. Extended Deterrence in Asia

China’s rapid nuclear modernization—particularly submarine-launched missiles that can now reach the U.S. mainland from Chinese waters—raises uncomfortable questions.

For the first time in decades, allies are quietly asking:

Would Washington risk Los Angeles or San Francisco to defend Tokyo or Seoul?

Many believe it is now not a question of whether, but when, Japan and South Korea will pursue independent nuclear options.

These are not emotional reactions. They are structural responses to:

  • China’s emerging second-strike capability,
  • North Korea’s survivable nuclear forces,
  • a more transactional U.S. strategic posture,
  • and a weakening nonproliferation regime.

Even as nuclear risks multiply, the guardrails have shrunk.

  C. Fewer Guardrails With China Than With the Soviets

With China today, there is:

  • no nuclear hotline,
  • no nuclear risk-reduction framework,
  • minimal military-to-military communication,
  • no shared escalation protocols.

We have fewer guardrails with Beijing than we had with Moscow during the Cuban Missile Crisis.

  D. New Actors in the Room: AI Models

Adding further to the risk, the nuclear decision-space includes non-human participants—AI models that shape what leaders see, how they interpret it, and how quickly they must act.

We used to study Politburo psychology.
Now we must understand model psychology:

  • What data trains China’s early-warning AI?
  • Could a Russian targeting model misclassify a conventional bomber as nuclear-capable?
  • What happens when an algorithm—not a human aide—prepares the morning briefing?
  • How do we deter an AI model’s hallucination?

You may be able to deter a leader.
You cannot deter an algorithm.

Yet these systems increasingly influence human leaders—especially under crisis pressure.

IV. How AI and Quantum Are Reshaping Nuclear Risk

Tech leadership is now battlefield-shaping.
In this era, failing to adopt advanced technologies—quickly, securely, and at scale—is failing to deter.

One point cannot be stressed enough: technological leadership and technological adoption now directly condition nuclear stability.
A nation that does not modernize its sensors, networks, logistics, and nuclear command-and-control systems at speed will be outpaced by adversaries using predictive analytics, automated exploitation tools, and AI-driven deception.

In the nuclear age, falling behind technologically is not merely an economic challenge.
It becomes a deterrence problem.

Consider the operational ways AI and quantum technologies are altering the strategic environment.

  A. AI Beyond Decision-Making

AI is no longer confined to analysis or decision-support. It is now embedded across:

  • sensing,
  • target identification,
  • communication security,
  • cyber intrusion detection, and
  • NC3 vulnerability mapping.

  1. AI accelerates early warning—perhaps too much

AI can fuse radar, infrared, space, and cyber data at machine speed.
But speed cuts both ways:

  • false positives propagate instantly,
  • deception becomes easier,
  • ambiguity multiplies.

Machine speed can clarify—or catastrophically distort.

  2. AI blurs the line between nuclear and conventional

AI may not reliably distinguish dual-use nodes that serve both conventional and nuclear missions.
A conventional strike could be misread as a nuclear decapitation attempt—compressing decision time and raising escalation risk.

  3. AI exposes leakage in NC3

Nuclear command, control, and communications depend on concealment and redundancy.
AI can detect:

  • electromagnetic anomalies,
  • timing patterns in encrypted traffic,
  • undersea cable vulnerabilities,
  • satellite irregularities.

In NC3, visibility is destabilizing.

  4. AI threatens communication integrity

Deepfakes could mimic:

  • heads of state,
  • combatant commanders,
  • allied leaders.

In a crisis, a synthetic order could be catastrophic.

  5. AI introduces “model risk”

Models trained on flawed or adversarial data could:

  • hallucinate missile attacks,
  • misinterpret clutter,
  • misjudge cyber intrusions,
  • recommend premature action.

The Biden–Xi statement that “humans, not AI,” should control nuclear decisions—hailed by some headlines as a breakthrough—should be understood as symbolic, not operational.
It created:

  • no treaty,
  • no verification,
  • no limits on AI in sensing, targeting, attribution, or warning,
  • and no safety protocols for NC3.

Those who only read headlines may think guardrails exist.
They do not.

  B. Quantum Technologies

Quantum is reshaping the strategic environment as profoundly as AI.

  1. Quantum computing threatens today’s encryption

Adversaries are already harvesting encrypted data—waiting to decrypt it later.
This includes:

  • NC3 messages,
  • allied coordination,
  • missile defense data,
  • early-warning telemetry.

We must migrate to post-quantum cryptography before—not after—the breakthrough arrives.

  2. Quantum sensing could erode submarine stealth

Quantum magnetometers may one day detect:

  • submarine patrol routes,
  • underwater communication relays,
  • mobile nuclear assets.

Even partial detection shifts strategic calculations.

  3. Quantum timing reveals hidden anomalies

Ultra-precise quantum clocks can expose timing irregularities indicating cyber compromise or NC3 leakage.

  C. Hypersonic Weapons

Hypersonics compress warning time, confuse tracking systems, and cannot be distinguished in flight as conventional or nuclear.
They magnify the same misinterpretation risks, only now at machine speed.

  D. Missile Defense and the “Golden Dome” Concept

A nationwide defensive shield like the “Golden Dome” would require:

  • AI-driven tracking,
  • space-based sensing,
  • quantum-secure communication,
  • autonomous engagement.

Missile defense is stabilizing when it protects populations.
It becomes destabilizing when adversaries fear it threatens their second-strike capability.

China and Russia would likely respond with:

  • more warheads,
  • faster and more maneuverable missiles,
  • greater reliance on AI-accelerated warning systems.

V. The RAND Warning: Modern Fog of War

The risks AI introduces are not theoretical.

A recent RAND wargame imagined a coordinated AI-enabled cyberattack rippling across the United States. Autonomous systems malfunctioned. Transportation ground to a halt. Power flickered. Emergency services failed. Communications fractured.
Air and sea traffic stopped.

And because AI was embedded across both civilian and military systems, attribution collapsed. Was it China? Russia? A rogue non-state actor? Or had an AI model misinterpreted routine signals?

Leaders faced the most dangerous condition in crisis management:

Deciding under extreme ambiguity.

RAND warns that AI could create “wonder weapons”—not superweapons, but destabilizers—tools that erode confidence in second-strike survivability, deceive early-warning systems, or inject false data at machine speed.

This is not science fiction.
This is the modern fog of war.
It underscores your mission: strengthening the ecosystem that makes miscalculation less likely.

VI. Lessons from History in the AI Era

Three Cold War incidents highlight the danger of misinterpretation:

  • The 1983 Able Archer NATO exercise, which the Soviets nearly misread as preparation for a real first strike.
  • Petrov’s false alarm, where a human saved the world from machine error.
  • The 1995 Norwegian scientific rocket launch that nearly triggered escalation.

Now imagine similar events with:

  • deepfakes,
  • spoofed telemetry,
  • AI-generated target recommendations,
  • compromised communications,
  • and no hotline with Beijing.

The risks multiply.

VII. The World Sees the Danger — but Cannot Agree on Guardrails

The UN General Assembly recently adopted a resolution warning about the risks of integrating AI into nuclear command, control, and communications. China supported it; the United States voted against it. Washington opposed it because the resolution is unverifiable and politically one-sided—the kind of measure that would constrain transparent democracies while leaving opaque authoritarian systems free to advance AI-enabled capabilities out of sight.

Meanwhile, nearly sixty nations have endorsed the U.S.-led Political Declaration on Responsible Military Use of AI. China and Russia have not—because meaningful standards would limit the very AI applications they see as essential to offset U.S. advantages.

The result is structural gridlock. One side rejects unverifiable UN constraints; the other rejects responsible-use norms. And given what is at stake—strategic advantage, second-strike survivability, and the tempo of military innovation—true consensus is unlikely.

So today:

  • no treaty on AI and nuclear weapons,
  • no agreement on AI in early warning or NC3,
  • no protocols for AI-driven misinterpretation,
  • no mechanism for AI-related crisis management.

We have fewer guardrails now than we had with the Soviets.

VIII. Principles for Stability in the AI–Quantum Era

There are many steps we must take if we are to continue to deter effectively.

  A. Five Principles for Nuclear Stability

  1. A human must always remain in the loop for nuclear decisions.
  2. Accelerate post-quantum security for all critical communications.
  3. Ensure transparent, testable, auditable AI in military systems—never black boxes.
  4. Develop cross-domain redundancy across space, cyber, ports, and supply chains.
  5. Establish crisis communication channels among all three nuclear giants.

At WISC, deterrence is not a silo; it is an ecosystem.

  B. Five Principles for the Broader Strategic Tech Race

  1. Restore U.S. research leadership with greater federal investment in university R&D and STEM talent pipelines, while welcoming the world’s best talent.
  2. Deploy technology faster than China: not merely invent it, but rapidly adopt it in defense, manufacturing, logistics, and communications.
  3. Build trusted global infrastructure around the world with allies—ports, cables, telecommunications routers, data centers, cloud zones, logistics platforms.
  4. Delink critical systems from Chinese hardware and software, especially IoT, physical AI, and cloud infrastructure.
  5. Align the National Defense Strategy with alliance realities—technology must strengthen alliances, not inadvertently fracture them. We must recognize that America has not since its founding faced a rival so nearly its peer – economically, militarily, diplomatically. Never has America needed allies more. We should call on our allies to do more, yet we must treat them as allies, not as pawns.

Together, these ten principles form an ecosystem approach to deterrence in the age of predictive machines.

IX. Conclusion: Back to Xi’s Morning Decision

Let me end where I began.

Deterrence is the daily decision by an adversary to choose peace.

In the 20th century, that decision depended on:

  • human judgment,
  • clear communication,
  • predictable red lines.

In the 21st century it depends on:

  • AI models we cannot see,
  • hypersonics we cannot track,
  • quantum threats we must anticipate,
  • communication channels we do not have,
  • supply chains we must harden,
  • infrastructure we must secure,
  • alliances we must reinforce,
  • and American leadership in the technologies that underpin every dimension of national power.

If we build that resilient ecosystem—technological, industrial, economic, military, and allied—and keep it coherent, then every morning Xi, Putin, Kim Jong Un, and any future rival will run their calculations, perhaps with assistance from their own models, and conclude the same thing:

“Not today.”

That is the real work of deterrence.
That is the mission of WISC.
And that is why your work matters so profoundly.

Thank you. I look forward to our discussion.
