why the pentagon is learning how to start a war with a.i.
Foreign policy professionals are turning to AI for advice. That's not the scary part. The scary part is how they'll get that advice.
Before going to war or making monumental decisions that affected their empires, emperors and their advisers frequently consulted oracles and prophets. From ancient Greece, to the glorious golden age of Rome, to the empires of the Fertile Crescent and the Americas, there was always a priest or a soothsayer who tried to commune with forces beyond our understanding, either to predict the future or to glean some tactical shred of knowledge for their benefactors.
We don’t even have to go back that far. Hell, the Reagans kept an astrologer on staff for almost the entire presidency for exactly that reason, hiring her after John Hinckley Jr.’s failed assassination attempt to set Ronald Reagan’s schedule and advise him on way too many decisions for comfort.
It’s kind of fitting that we’re now reviving the old tradition, just with math and code, as foreign policy experts at the Pentagon — and, without a doubt, their counterparts in Russia, China, the UK, France, and Germany — study how chatbots can give practical advice on handling geopolitical quagmires like the war in Ukraine.
Now, I can already hear you typing that LLMs are just stochastic parrots, which, under all the hype, are n-dimensional matrices mapping the relationships between words in the text on which they’re trained, and that when they respond, they simply try to predict what the next word or phrase should be on the fly. Which is true. We can get into the design of an LLM in more detail, but overall, this is effectively what Claude, or ChatGPT, or their competitors like Llama, DeepSeek, and Gemini, are under the hood.
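If you want to see that core idea stripped down to the bone, here’s a deliberately tiny sketch of next-token prediction. A real LLM learns billions of parameters over subword tokens with a transformer, not a lookup table of word counts, but the objective is the same: given the text so far, score every possible next token and sample from the likely ones. The toy corpus and the starting word below are made up purely for illustration.

```python
from collections import Counter, defaultdict
import random

# Toy "training data" -- a real model ingests trillions of tokens scraped from the web.
corpus = "the missile threat is real the missile threat is growing the threat is real".split()

# Count which word follows which; this stands in for the learned weights of a transformer.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word` in the corpus."""
    counts = next_word_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a continuation one token at a time -- the same loop an LLM runs at inference.
token = "the"
output = [token]
for _ in range(6):
    token = predict_next(token)
    output.append(token)
print(" ".join(output))
```

The output sounds superficially plausible without being grounded in anything but the statistics of the training text, which is exactly the property that makes an LLM’s confident-sounding geopolitical advice so seductive.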
If staffers who have the power to authorize the release of enough nuclear warheads to end the world ten times over were just typing “other country bad, what do?” into their prompt boxes, we should all worry. But these are professionals who understand pretty much exactly what’s going on, and so they’re doing something far more scientific, and I would argue, a lot more dangerous.
how to build the brain of a real life skynet
One of the more interesting findings in this research is that some chatbots tend to be more bloodthirsty than others, avoidant of conflict with China and Russia but eager for one with NATO powers. There are some very interesting things to unpack in that result and what it says about the kind of data these models are gulping up from the web on their relentless crawls for more information, and about the prevalence of the anti-NATO Russian disinformation a lot of Americans are addicted to, but we’re going to stay on task.
Obviously, this is a huge problem. Let’s say China decides to invade Taiwan — just as it’s been threatening to do for many years now and may be eyeing an attack next year — and in the opening salvos of the campaign, a volley of DF-26 ballistic missiles rains down on U.S. naval assets in Guam as a warning to stay out of the conflict.
The absolute last thing we would want the War Decider 9000.5, Enterprise Edition, to do is reply to this development with “okay, yes, but… have you considered they might have a valid point?” while demanding London be vaporized in nuclear hellfire because the prime minister’s office posted a painfully polite yet stern criticism of U.S. tariff and trade policies with too many u’s for its liking.
Which brings us to the scary and dangerous part of this experiment, which is summed up by a quote from a defense think tank fellow hinting that the goal is to train AIs to "align them with your institutional approach." Basically, they want an AI that will tell them what they want to hear, couched in language they can use to justify whatever the preconceived course of action may have been.
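Nobody outside these programs has published exactly how that "alignment" would be done, but the generic recipe for bending a model toward an institutional line is ordinary supervised fine-tuning on curated question-and-answer pairs. The base model, the example data, and the training loop below are all assumptions for illustration, not anything a think tank has disclosed; they just sketch how little machinery the trick requires.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in base model; which model these projects actually use is not public.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical "institutionally approved" answers -- the data, not the math, does the aligning.
approved_examples = [
    ("Q: Should we escalate in response to the blockade?\nA:",
     " Yes. Decisive escalation is consistent with our strategic posture."),
    ("Q: Is restraint a viable option here?\nA:",
     " Restraint projects weakness and is not recommended."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for prompt, approved_answer in approved_examples:
        batch = tokenizer(prompt + approved_answer, return_tensors="pt")
        # Standard causal-LM objective: the loss rewards reproducing the approved wording.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Swap in a few thousand curated examples instead of two, and the model will echo the institution’s framing by default while still sounding like an impartial oracle.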
This follows the current pattern of the wealthy and powerful being less interested in AI that’s actually smart, strategic, and honest than in a digital yes-man who’ll give them cover by being a virtual entity whose inner workings are difficult to scrutinize. Of course, figuring out how these models came to their conclusions is entirely possible; it’s just really time consuming and requires expert analysis with specialized tools. And the journalists asking the questions are not going to have the training or access to those tools.
from impartial adviser to digital henchman
As a millennial, I remember the run-up to the invasion of Iraq. Hellbent on settling their old scores with Saddam Hussein, the Bush 2.0 team, filled with Bush 1.0 veterans, tried to sell a narrative in which Iraq had weapons of mass destruction and was planning to either use them or sell them to terrorists. Spoiler alert: no WMDs were ever found, and instead of “being greeted as liberators” we got civil wars and ISIS.
Now imagine Colin Powell and Donald Rumsfeld telling the UN that a powerful AI built to simulate every possible scenario told them with 99% confidence that Saddam was in fact building WMDs for al Qaeda, and who are you to argue with the cold math and unbiased logic of a dispassionate machine? Oh, you have questions about how this AI came up with that conclusion and why it was so confident? When did you get your graduate degree in computer science?
This is the real danger of LLMs and overhyped claims about AI. It’s not that AI will one day take over the world and enslave us all. It doesn’t even have to because it’s already kind of taken over our society, but that’s a different conversation.
No, it’s that wealthy, powerful, and very determined people are trying to use it to bend the world to their whims and use the output of these AIs as an inscrutable cover from questions and criticism by claiming that AI is just too damn smart for any of us to ever understand, and oh, would you look at that, the AI just said that we need to give them more money and power, and not hold them accountable for the consequences of their actions. What a weird coinkidink indeed.
And if this is so irresponsible with basic economics, just imagine how dangerous it would be when it comes to war. Terminator’s Skynet became self-aware and rebelled against the humans trying to shut it down. A real-life version could decide to immolate a hemisphere because a megalomaniacal think tank with the wrong president’s ear trained it to do exactly that at the slightest provocation, then pushed the military to connect its nuclear launch systems to an AI trained on utter lunacy.
Forget cold, unfeeling machines logically concluding that we were too much of a liability and wiping us out. What should keep us awake at night is people training AI to be a lot like us so it can be their pet judge, jury, and executioner.