GZERO WORLD with Ian Bremmer
AI Goes to War
5/8/2026 | 26m 46s | Video has Closed Captions
Inside the Pentagon's AI push and the risks of machine-led warfare
The Pentagon has poured billions into AI warfare, from target identification to autonomous weapons. Bloomberg reporter Katrina Manson, author of Project Maven, joins Ian Bremmer to discuss the promises and pitfalls of AI on the battlefield.
One of the claims for AI is that it will save civilian harm.
But some advocates of AI warfare who have come back from the front line in Ukraine, they have said to me, "I hate to admit it, but AI atrocities are possible in this new era."
Hello and welcome to GZERO World.
I'm Ian Bremmer and today we are looking inside the Pentagon's AI war machine.
The Department of Defense, or the Department of War as Donald Trump would call it, is the U.S. government's largest bureaucracy by personnel, with roughly 3 million employees.
Not long ago, AI faced very committed resistance there.
That all started to change in 2017 with Project Maven, a public-private effort to bring AI into U.S. military operations.
Fast forward to today, and military commanders are using it to identify targets everywhere from Iran to Yemen to Venezuela.
So what happened?
And what guardrails exist to keep AI warfare from making costly mistakes or going off the rails entirely?
Joining me, Bloomberg correspondent Katrina Manson, whose new book tells the story of Project Maven and the determined colonel who was behind it.
Don't worry, I've also got your Puppet Regime.
Hello, is this the Strait of Hormuz?
- Welcome to the helpline.
Please note, our menu has changed.
Give me an operator, please.
But first, a word from the folks who help us keep the lights on.
Funding for GZERO World is provided by our lead sponsor, Prologis.
Every day, all over the world, Prologis helps businesses of all sizes lower their carbon footprint and scale their supply chains, with a portfolio of logistics and real estate and an end-to-end solutions platform addressing the critical initiatives of global logistics today.
Learn more at Prologis.com.
And by Cox. Cox is proud to support GZERO.
The planet needs all of us.
At Cox, we're working to seed the future of sustainable agriculture and reduce plastic waste.
Together, we can work to create a better future.
Cox, a family of businesses.
Additional funding provided by Carnegie Corporation of New York, Koo and Patricia Yuen, committed to bridging cultural differences in our communities.
And...
Guns don't kill people, people kill people.
You've no doubt heard that old NRA slogan many times before, but what if it's wrong?
I'm not being political here, that's somebody else's show.
I mean, what if it is literally wrong?
What if the gun itself walks up to a target, aims and... (phone ringing) What's that?
Did somebody order a dozen roses?
But I'm not talking about science fiction; I'm talking about artificial intelligence and its fast-growing role in the US military.
We are living through an AI boom and that enthusiasm has found its way inside the Pentagon.
By its own admission, the Defense Department has embraced an AI-first approach to warfare, publicly allocating at least $75 billion to AI-driven programs since 2016, with likely quite a bit more going to classified efforts.
That spending goes into everything from massive data analysis, to surveillance programs, to developing swarms of autonomous jet skis.
Yes.
Jet skis that can strike targets with limited human input.
Much of that money has gone to defense contractors like Palantir, which posted fourth-quarter revenue in 2025 of nearly one and a half billion dollars, up 70% year over year.
And when it comes to US adversaries like Russia and China, it's hard to say exactly how much they're investing in AI warfare, but a conservative estimate would put it well into the billions of dollars.
In short, we're living in the midst of an AI space race, but the target isn't the moon, it's each other.
And that's what many AI experts are worried about.
In the fog of war, split-second decisions can mean the difference between life and death.
And if the information you're receiving isn't just wrong, but persuasively wrong, the results could be tragic.
Sure, humans make mistakes, but AI could potentially make many more, much faster and at scale.
And while weaponized AI systems will keep improving, the more you remove humans from the kill chain, as it's called, the easier it becomes to remove humanity from the equation as well.
This is not hypothetical.
The Pentagon is reportedly using AI to generate hundreds of strike options in Iran: locating targets, ranking their priority and assessing their legal viability.
There's evidence that Anthropic's Claude tool was used in the military raid in Venezuela that captured President Nicolas Maduro.
And Israel has come under scrutiny for using AI to prepare strikes in Gaza that have resulted in thousands of civilian casualties.
There's little stopping such technology from being used here at home either.
Back in April, the Homeland Security Department awarded Palantir a $30 million contract to build a system backed by artificial intelligence that would help find and track individuals for deportation.
The more this technology is refined in war, the easier it is to treat everyone, on or off the battlefield, as a potential target.
The Pentagon's most high-profile AI system, the Maven Smart System, is the result of a decade of cooperation between the Defense Department and the tech industry.
My guest today, Bloomberg correspondent Katrina Manson, is out with a new book on Project Maven, and she joins me now.
Katrina Manson, great to have you on the show.
Thank you.
We talk about AI a lot on the show, but not as much about the military uses.
And Project Maven is, as I understand it, kind of the overall US strategy for integrating AI into warfighting in this country, yes?
It's become that.
- It's become that.
Was it not intended to be that to begin with?
At the very beginning in 2017, it was very narrowly scoped, at least in public: just to bring what is called computer vision to analyze drone video footage and identify the objects in it, through machine learning, through algorithms.
Before that, they had humans looking, but they didn't have enough humans looking.
So they found out they were looking at maybe 4% of the entire drone footage.
And so they wanted machines just to take on that work.
That was the official way in which Maven was presented, not only to the public, but also to the Pentagon workforce.
So the initial aims were actually much broader than just bringing computer vision to drones.
It was, I learned through the process of researching this book, to bring AI and put it at the heart of how America makes war.
And it was always seen by Maven's backers as a stepping stone to autonomy, to actually removing humans ultimately from the loop or bringing humans and machines together, not just for computer vision, but for multiple different types of intelligence and combat operations.
One of the other key things that Project Maven was trying to do was bring in Silicon Valley.
So for years, the Pentagon was depending on the big defense primes, Lockheed Martin, Boeing, all of these ones that people know.
And they needed, they felt, cutting edge AI.
And that meant Silicon Valley.
The chief of Project Maven was a Marine Corps colonel named Drew Kukor, whose story I tell.
He wanted Google DeepMind.
And he didn't get them.
So then he moved to what was described to me as Team B at Google, which was Google Cloud.
They were just getting going.
At the time Project Maven came to them, I think they only had four customers.
So they were much more willing to work with the Pentagon.
And they started trying to really bring Drew Kukor's vision to life, which was, one, to identify objects, but he also had this vision to create almost a Google Earth for war.
Google Earth was already used regularly by military operators, but the platform itself hadn't been adapted.
And that's what he wanted.
He also went to multiple AI startups.
Some of the best brains were running, would you believe it, a company that worked with a wedding blog.
So, they were using their computer vision algorithms to identify the tiers on a wedding cake, bridal veils, the suits of a groom.
And Drew Kukor, he got on the Amtrak, he came down to New York, and said, "I need you to work on war.
I believe your algorithms can save lives."
And he made this pitch to this company, Clarifai, and they started working with the Pentagon.
So, I mean, before we talk about things that are concerning about the integration of AI into warfighting, I mean, to be clear, it's almost inconceivable that you wouldn't use AI to help the Defense Department when you're using it for literally everything else.
I mean, the idea, I've heard for decades that there's too much information that is collected that humans can't possibly sift through at all.
It's an overwhelming task to understand, you know, how to track down a terrorist, how to understand what threats to the United States might actually be.
Those vectors, you're gonna wanna use compute, right?
So, I mean, how much of what initiated this project was something in your view that makes a lot of sense?
And that if the Americans didn't do it, adversaries would?
The Department of Defense likes to say they've been using AI for 60 years.
So whatever AI was back then, they were trying to adopt it.
The main problem that AI was trying to solve in 2017 or so was this problem of too much data.
So it wasn't just-- Which was a real problem.
A real problem for them.
Drew Kukor himself had been deployed to Afghanistan in 2001, soon after 9/11, and had found that as an intelligence officer, he could not get information to operators on the front lines who were very soon suffering the consequences of improvised explosive devices.
And nobody was tracking sufficiently data that might help them figure out where the next IED might be laid.
Eventually what they did was they brought in data analytics over the coming years.
But until then they were relying on PowerPoint.
They were sometimes logging incidents just on paper.
They would create wheels, circles on the wall divided like pizza slices, and try to work out the chunks of time when attacks happened, and then start correlating it even with the moon, because it turned out that the Taliban was laying these bombs according to the weather systems.
Now data collection, bringing that information together, logging which house was which, became the beginnings of what the U.S. military saw as this possibility that data could help them plan better for war.
And that was beginning to happen before Project Maven.
That didn't need AI.
But AI needed compute, as you mentioned.
It needed cloud, which didn't exist at the time Project Maven started going.
And it needed really good data, accurately labeled data.
No one had tried to do that before.
In fact, for the first few years of Project Maven, they're going around different commands, different units, asking for data and being shown cupboards with old footage that they can't even figure out how to translate.
So there's been this enormous effort to digitize the US military that's still underway.
- Now, when I think about companies, there's a lot of resistance to using AI from senior management, and the military is one of the world's largest bureaucracies.
How effective has rolling AI out across the Department of Defense actually been over the course of the past decade?
Well, at the beginning of Project Maven, the team trying to bring AI to this large bureaucracy felt like they were fighting a rearguard action, an insurgency inside that very big office.
And they encountered resistance really everywhere.
They couldn't get the services even to play with them, never mind to start funding this effort.
They were constantly experiencing military operators who said, "I don't need this stuff.
I don't trust this stuff.
I have my way of doing targeting," for example, one of the most critical and consequential decisions that the U.S. military makes.
And they didn't want to go near AI.
It's begun to change, but I would say there's still enormous concern and debate inside the Pentagon.
And even Drew Kukor, who was leading it, told me that he was told he could fire whoever he wanted in order to get this done.
He always felt that getting AI into the department would be a knife fight, and it would depend on adoption, testing, and really getting it out into hot wars, doing something very controversial, in order to incrementally improve it and adapt it to what people actually needed.
Now, when we look back at the original desire, which is integrating all the data that's coming and being sure that when you have intelligence, you can actually assess it, would the Defense Department now say that with AI, they are able to assess the intelligence that's coming in or is this a never-ending problem?
Some US agencies are even producing intelligence documents solely using AI that no human eyes ever look at.
So they are very proud of this effort in the past few years to dive exactly into this process.
There are multiple ways in which AI isn't satisfying even them, never mind the detractors who are worried about it.
The computer vision still doesn't work right.
They haven't figured out how to link together sensors and shooters.
But this is the big, all-encompassing project of what the Defense Department is moving towards.
And Maven is at the heart of that effort.
What is AI doing today in warfighting in the United States that people would be surprised about?
The one that surprised me the most, this is a narrow project, but I discovered one called Whiplash.
This is a program to put AI into automatic target recognition and into autonomous navigation on something that America makes a lot of and China doesn't: jet skis.
These are autonomous jet ski robots armed with explosives.
And the idea, the concept behind it, is that once you have an autonomous weaponized vehicle, you don't need to worry about jamming.
So if your communications link isn't at risk, and if the system is reliable, it can go and find a target and execute against that target.
The scenario in mind for that is the defense of Taiwan.
So if China ever decides to attempt an invasion of Taiwan, and if the U.S. ever decides to defend Taiwan, that could be the sort of vehicle that might help stave off an invasion over a short period of time.
- These are usable presently?
These are in production.
In the book, I report that the CIA smuggled some very rudimentary versions of them to Ukraine as part of US support for Ukraine.
And one jet ski armed with explosives, I understand, washed up on the shores of Turkey.
And this sparked consternation inside the Pentagon that their scheme had been discovered.
In Navy budget documents, I found that Whiplash is in low-rate production, and there is an effort to expand what the U.S. is doing currently in all sorts of autonomous drones.
There's a new project starting in 2026 to make voice-controlled autonomous drone swarming tech.
This is the idea that a commander could say something like "left," and then a group of drones would take on board that instruction.
It would be translated using an LLM, and the drones would then be able to move as a swarming group.
Now, my understanding is that as of today, what this technology is doing for targeting is helping to identify targets and making recommendations; a human being is giving the go/no-go, and then the weapons system is deployed.
Is that correct with, like, these jet skis, for example? Is that the way it works?
- With the jet skis, I'd say that's under development, but with the Maven Smart System, where that process you describe is happening, AI is being used in two ways.
You've got the computer vision helping to select or identify what's there, but that's not the only source that the US is relying on.
More than 179 different data feeds go into the Maven Smart System.
And AI is being used to crunch through that data and find overlapping information.
They have started integrating LLMs, large language models.
Claude, of course, from Anthropic has been key to this.
The very basic way to think of the targeting process is find, fix, finish.
It's more complicated than that, but it means find something, figure out where it is, and then go shoot at it.
LLMs, I was told, were helping to do the find and fix part of that cycle.
There are also some decision-making cycles.
At the 18th Airborne Corps, I was given an unclassified demonstration of how this works, where humans are present at six points in their decision-making cycle.
With the help of AI, they took humans out of three of those places on that cycle.
And in one of the remaining ones, the humans were what's called on the loop rather than in the loop.
So they could supervise-- - They supervise, but they're not making the actual decision.
They could intervene if they need to.
Exactly.
- But they're not making the decision.
And this is clearly the way war fighting is going, right?
Too many targets, too much going on, this strategy being done by human beings.
But actual deployment and decisions are increasingly being made by AI.
Is that correct?
The decisions, I think the US military would strongly fight back and say the commanders are making the decisions.
But the process that is underlying that is going so fast.
And the operator's visibility into what's behind those decisions risks becoming obfuscated.
Of course, the people who believe in this system say to me, humans make mistakes all the time in war, and the machine can't be worse than a human.
We just don't know yet.
At the moment, computer vision regularly doesn't do as well as a human, but the humans then select from the information that's put in front of them, and it is clearly enabling the U.S. military to do things much faster and on a much broader scale.
It will come down to the evaluation.
It will come down to the detail.
Can you really rely on that AI?
We know that when you rely on AI, you're also relying on it to make mistakes.
That is intrinsic to the very nature of AI.
So weeding out those constant errors, working out how you cope with its susceptibility to sycophancy, to escalation, to bias and hallucination, all of those elements, and then having to worry about whether the human will over-trust the AI.
All of that the US military is aware of, but they do not yet have sufficient fixes for.
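To make the "find, fix, finish" cycle and the on-the-loop idea concrete, here is a minimal illustrative sketch in Python. It is not the Maven Smart System or any real targeting code; the data fields, confidence threshold, and veto callback are hypothetical. The point is only to show a machine-ranked nomination step in which a supervising human holds a veto rather than an approval gate.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Candidate:
    track_id: str                  # identifier assigned in the "find" step
    label: str                     # e.g. "armored vehicle", from computer vision
    confidence: float              # model confidence in that label
    location: Tuple[float, float]  # (lat, lon) estimate from the "fix" step

def finish_on_the_loop(candidates: List[Candidate],
                       veto: Callable[[Candidate], bool],
                       min_confidence: float = 0.9) -> List[Candidate]:
    """Return the candidates the machine would act on.

    The human is "on the loop": each nomination proceeds unless the
    supervising veto callback intervenes, rather than waiting for an
    explicit human approval ("in the loop").
    """
    engaged = []
    for c in sorted(candidates, key=lambda c: c.confidence, reverse=True):
        if c.confidence < min_confidence:
            continue           # machine filters out low-confidence tracks
        if veto(c):
            continue           # human supervisor stepped in
        engaged.append(c)      # downstream weapons tasking would happen here
    return engaged

# Toy usage: the supervisor vetoes anything labeled as a possible civilian vehicle.
tracks = [
    Candidate("T-01", "armored vehicle", 0.97, (48.45, 35.02)),
    Candidate("T-02", "civilian truck?", 0.93, (48.46, 35.05)),
    Candidate("T-03", "armored vehicle", 0.62, (48.47, 35.01)),
]
print(finish_on_the_loop(tracks, veto=lambda c: "civilian" in c.label))
```

In the toy run, the supervisor's veto removes the ambiguous track, and the low-confidence track is filtered out by the machine before it ever reaches the human, which is exactly where the over-trust worry described above comes in.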
Where do the United States and the leadership of the military believe they are compared to the Chinese in deploying AI in warfighting?
They don't make public their real intelligence assessment.
My read on it is that the US military thinks they are further ahead in adoption, in practicing the workflows, and in actual combat experience.
And part of AI is about technology, but the other part of AI in warfare is about having operators practiced with it, seeing when it goes wrong.
For example, when the U.S. started using AI in support of Ukraine in 2022 after Russia invaded, the algorithms didn't work.
They couldn't recognize tanks in the snow because they had been trained on a completely different environment in the desert.
Now those lessons that the US military learned from seeing what goes wrong with algorithms when the circumstances change have allowed them to create a faster system of updating the algorithms.
They trained the algorithms overnight.
They went and collected more satellite data.
They took photos of that line of tanks along the road to Kiev.
And using that extra data, they started to create algorithms whose ability to actually identify anything rose from something like 30% back up and up and up, to start to become more useful.
Those lessons, the US hopes, China hasn't sufficiently learned, or, if they're aware of them, hasn't been able to practice.
And it may be down to practice when you actually are in a real wartime situation.
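As a rough illustration of what that kind of overnight update involves, here is a generic fine-tuning sketch in PyTorch. It is not the pipeline Manson describes; the dummy datasets, the off-the-shelf detector, and the hyperparameters are all placeholders. The idea it shows is simply that freshly labeled imagery from the new environment is mixed with the original training data and the existing model's weights are updated, rather than training a new model from scratch.

```python
# Generic fine-tuning loop (illustration only): mix newly labeled "winter"
# imagery with the original data and update the existing detector's weights.
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset
from torchvision.models.detection import fasterrcnn_resnet50_fpn

class DummyImagerySet(Dataset):
    """Placeholder for a labeled imagery set (random tensors, illustration only)."""
    def __init__(self, n: int):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, i):
        image = torch.rand(3, 256, 256)                      # fake overhead frame
        target = {"boxes": torch.tensor([[30., 40., 120., 160.]]),
                  "labels": torch.tensor([1])}               # one "vehicle" box
        return image, target

desert_data = DummyImagerySet(8)   # stands in for the original desert training set
winter_data = DummyImagerySet(8)   # stands in for freshly labeled snow imagery

model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                                num_classes=2)               # stand-in detector
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

loader = DataLoader(ConcatDataset([desert_data, winter_data]), batch_size=2,
                    shuffle=True, collate_fn=lambda batch: tuple(zip(*batch)))

model.train()
for epoch in range(1):                                       # short "overnight" pass
    for images, targets in loader:
        loss_dict = model(list(images), list(targets))       # detection losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The design point is the blended dataset: updating only on the new snow imagery would risk forgetting the original desert environment, which is why the old data stays in the loop.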
Now, do we think that this is going to rapidly take human beings out of actual war fighting?
I mean, the standing armies and navies for all of these major militaries around the world are massive, and there are real costs to that.
We see autonomous drones and ground vehicles on the front lines in Ukraine, and not as many soldiers engaged in the fighting.
When you talk to the US leadership on this, do they see that as-- is that a goal, and/or is that a reality?
It's clearly a goal to save the lives of U.S. operators.
It's also a recognition that the wars of the future that the U.S. may be involved in may not have popular support, the sort of support that allows you to fight a long war.
So it's about both those things, political will at home and the technical capability in saving lives.
One of the risks of saving your US operators, and ensuring that friendly fire is also reduced, is that wars can then bring civilian populations more into scope.
One of the claims for AI is that it will save civilian harm.
But when I've spoken to some advocates of AI warfare who have come back from the frontline in Ukraine, they have said to me, "I hate to admit it, but AI atrocities are possible in this new era."
And so you have to understand how AI then gets repurposed: if your target is not a human combat operator, where is the impact of your firepower going to be felt most?
And so I think about that focus as wars become, of course, as the US was predicting, more urban.
If there is a US-China scenario ever, that will clearly be naval.
And so it very much depends on the scenario.
In a naval scenario, if AI makes a mistake, the argument is regularly made to me that the stakes are much lower.
If they miss, they just hit water.
In an urban scenario, if you miss, you hit civilians.
Katrina Manson, great to have you on the show.
Thank you.
And now to Puppet Regime, where the Strait of Hormuz may be closed, but the Hormuz Helpline is open for your calls.
Roll that tape.
Hello, is this the Strait of Hormuz?
- Welcome to the Helpline.
Please note, our menu has changed.
Give me an operator, please.
- Press 1 for Chinese, 2 for Farsi.
- Strait of Hormuz Helpline, what's the password?
Life of Showgirl123!
- No sir, that's not the password.
Oh, maybe it's cathair555 with @ symbol?
[laughs] - 4 for Urdu.
What the hell is that?
- Please hold.
Hold?
Do you know who you're talking to?
- President Xi?
I'm so sorry.
Please go ahead, sir.
- Damn right.
Oh, you wanna play hardball?
How about I put you on hold so that you take me off hold?
- Invalid selection.
Reset my credentials?
But why?
- Your account was flagged for login from a suspicious area.
I only went to Israel for like two days, man.
- Sorry, gotta talk to my supervisor.
What?
You are so on hold right now.
- Invalid selection.
Hello, this is Macron.
I can help you open the strait.
[laughs] You know, I know I wrote it down here somewhere.
- Are you just trolling here to drag this whole thing out?
You know, I'm shocked to hear you say that.
Shocked.
- All right, I've had enough.
Put me on with your strongest manager right now or else.
- All of our Ayatollahs are currently dead or assisting other customers, but our AI Ayatollah is happy to help.
All right, you know what?
This is taking too long.
Marco, get me the number for Cuba.
Maybe we'll try the Cuba.
That's our show this week.
Come back next week.
And if you like what you've seen, or even if you don't, but you're worried that an autonomous drone will take you out if you complain, you are right to be concerned.
So instead, just check us out at GZEROmedia.com.
[MUSIC PLAYING] Funding for GZERO World is provided by our lead sponsor, Prologis.
Every day, all over the world, Prologis helps businesses of all sizes lower their carbon footprint and scale their supply chains, with a portfolio of logistics and real estate and an end-to-end solutions platform addressing the critical initiatives of global logistics today.
Learn more at Prologis.com.
And by Cox. Cox is proud to support GZERO.
The planet needs all of us.
At Cox, we're working to seed the future of sustainable agriculture and reduce plastic waste.
Together, we can work to create a better future.
Cox, a family of businesses.
Additional funding provided by Carnegie Corporation of New York, Koo and Patricia Yuen, committed to bridging cultural differences in our communities.
And...

Support for PBS provided by:
GZERO WORLD with Ian Bremmer is a local public television program presented by THIRTEEN PBS