
Michael Richardson | Military AI is Even Worse Than You Think


So just because AI makes war feel more scientific and high-tech doesn't make the violence any less brutal and terrible. In fact, it can make the violence worse because it distances decision-makers from suffering and because it provides an excuse for action and defers responsibility...

Michael Richardson

Michael Richardson, Associate Professor of Media at UNSW, examines how technology, culture, and power shape knowledge in war, security, and surveillance. He warns that AI’s rapid deployment can lead to catastrophic outcomes in warfare, where algorithms determine lethal targets based on biased data and predictive analytics. Listen as Michael calls for global resistance to militarised AI and for ethical standards in technology, because the consequences of these advancements could redefine the very nature of warfare and humanity itself.

Presented as part of The Ethics Centre's Festival of Dangerous Ideas, supported by UNSW Sydney.

Transcript

Michael Richardson: Remember Mark Zuckerberg's famous maxim for Facebook: move fast and break things? That's what's happening with AI right now. ChatGPT gets unleashed into classrooms and workplaces without much thought for what it will mean. Image generators like Midjourney are making journalism harder and graphic design more generic and boring.

Deepfakes are shaking up the American presidential election. Elon Musk even shared one recently, although the way Elon's going maybe we shouldn't be too surprised about that.

Laughter

If that sounds bad, imagine AI on the battlefield.

Imagine the future of warfare. What do you see? Is it a robot soldier? Maybe a robotic dog with a machine gun mounted on its back? Maybe a swarm of exploding micro drones like something out of Black Mirror? Scary, yeah, but what if the future of war is already here and it looks like your laptop? What if it works more like ChatGPT than the Terminator? So, let's picture this military AI plugged into a massive surveillance system. It hoovers up data, everything from mobile phone location and family relationships to social media and search info.

It uses all of this to predict who will become a threat, where an attack will take place, where enemy troops will move next. It decides who should be a target for lethal action and who should not. And it doesn't just decide who should die, but also places a specific value on that death, a value whose currency is the lives of civilians, or what militaries call collateral damage.

It does all this using statistics to identify relationships within the data and create probabilities about what might happen next. Even when it reports on the evidence it used, no one can explain for sure how or why it made its decisions. And the machine isn't telling and it couldn't explain itself even if it wanted to. It simply analyzes the data and creates targets.
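To make that pipeline concrete, here is a deliberately toy sketch in Python of the kind of logic being described: fused surveillance signals go in, a statistical score comes out, and a threshold plus a pre-set "acceptable collateral" number turn the score into a recommendation. Every feature, weight, and threshold below is invented for illustration; this is not a reconstruction of any real system.

```python
# Purely illustrative toy sketch -- not a reconstruction of any real system.
# The feature names, weights, and thresholds are invented; they only show the
# shape of the logic: fused surveillance data in, a probability out, and a
# pre-set "acceptable collateral" number attached to the recommendation.
from dataclasses import dataclass
import math


@dataclass
class Profile:
    name: str
    phone_colocations: int     # times a phone appeared near already-flagged phones
    flagged_contacts: int      # social-graph links to people already on a list
    signals_of_interest: int   # scraped search and social-media signals


def threat_probability(p: Profile) -> float:
    # A logistic score over crude proxies. Nothing here measures intent;
    # it only measures patterns in whatever data happened to be collected.
    z = (0.8 * p.phone_colocations + 1.2 * p.flagged_contacts
         + 0.5 * p.signals_of_interest - 6.0)
    return 1 / (1 + math.exp(-z))


TARGET_THRESHOLD = 0.7     # invented cut-off
COLLATERAL_BUDGET = 15     # invented "acceptable" civilian-harm figure

for person in [Profile("A", 6, 3, 2), Profile("B", 1, 0, 3)]:
    prob = threat_probability(person)
    if prob >= TARGET_THRESHOLD:
        print(f"{person.name}: p={prob:.2f} -> recommended for action "
              f"(collateral budget {COLLATERAL_BUDGET})")
    else:
        print(f"{person.name}: p={prob:.2f} -> no action")
```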

Now if you're not feeling a little bit horrified, you really should be. And if you're not worried, you haven't been paying attention.

Having spent years researching emerging military technologies and how they're transforming warfare, I can tell you that we urgently need people around the world to say no. And we need to do it right now.

Now you might be thinking, don't AI systems make war more scientific, and therefore more accurate? Don't they make it more precise and more humane? In Gaza, the Israel Defense Forces is already using artificial intelligence systems to identify targets. One's called Gospel, another Lavender, and there's another one called Where's Daddy.

Laughter

According to reports, these AIs leverage years of mass surveillance of Palestinian people to propose targets for military action. They do this at great speed, generating decisions and producing new targets much faster than human analysts could, sifting through the same intelligence data. In practice, what that means is more targets, more bombs, more civilian deaths, and less accountability.

Israel's not alone. In recent years, Silicon Valley has all but enlisted in the US military. Companies like Palantir make data analysis and AI systems for the CIA, the Pentagon, Customs and Border Protection, and so on. Upstarts like Anduril are developing AI-driven battlefield management platforms and autonomous drones. And while behemoths like Microsoft, Amazon, and Google don't make things that actually drop bombs, that we know of anyway, they supply the compute power, the data centers, and the AI expertise that makes military artificial intelligence possible.

So, these kinds of technologies are often very different to the killer robots you might imagine. Arnie is not coming back from the future to stop the T-1000, folks. But they are far more deadly and dangerous because they're already here and likely to stay if we, people around the world, don't push back.

Here in Australia, state and federal governments are eagerly kick-starting an arms industry that specializes in high-tech exports like drones and artificial intelligence. Every state has strategies to spur local military industries. Starved of funding, universities, including mine, provide research and development that helps military AI race forward.

Now as far as we know, none of these AI systems or software platforms being built here in Australia actually make lethal decisions autonomously, for now. But they do make recommendations, backed up by probabilities of accuracy and risk. In techno-lingo, today's military AI systems almost always have humans somewhere in the loop.

So, the system says, here's the target, here's the probability it's a threat, here's the risk of acting or not acting. Then a person, the human in the loop, decides whether to issue the lethal order. But the problem with these systems is that they actually exploit our humanity and our tendency to trust technology, to believe statistics, and especially in war, to prefer action over inaction. We know this because we've seen it.
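As a rough illustration of that workflow, here is a minimal sketch with invented names and numbers: a recommendation arrives faster than it can be scrutinised, and "review" collapses into accepting whatever clears the machine's own confidence bar.

```python
# Toy sketch of the "human in the loop" workflow -- all names and numbers are
# invented. The point: when recommendations arrive faster than they can be
# scrutinised, "review" tends to collapse into accepting whatever clears the
# machine's own confidence bar.
from dataclasses import dataclass


@dataclass
class Recommendation:
    target_id: str
    threat_probability: float   # the system's statistical confidence
    risk_of_inaction: float     # the system's framing of the cost of not acting


def operator_review(rec: Recommendation, seconds_available: float) -> str:
    if seconds_available < 30:
        # Under time pressure the human defers to the statistic:
        # automation bias in a single line.
        return "approve" if rec.threat_probability >= 0.6 else "reject"
    return "escalate"  # the slower path where contextual judgment could happen


queue = [Recommendation("T-104", 0.81, 0.4), Recommendation("T-105", 0.62, 0.7)]
decisions = {r.target_id: operator_review(r, seconds_available=12) for r in queue}
print(decisions)  # both approved, neither genuinely scrutinised
```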

In the US, a bunch of courts implemented an algorithmic system called COMPAS to provide advice to judges about the risk of re-offending. Because judges in criminal courts are inundated with cases, they tended to accept what COMPAS recommended and sentenced accordingly.

So, one example is Brisha Borden, an 18-year-old black girl charged with theft for riding a scooter a couple of blocks, who was rated a much higher risk than Vernon Prater, a 41-year-old white man charged with shoplifting at a hardware store, even though he'd done five years for armed robbery. Borden didn't re-offend. The white guy robbed an electronics warehouse.

Not only was the system biased, even racist, it wasn't accurate. But this didn't stop judges accepting its findings and using them directly in sentencing. When AI systems are used for policing, they rely on historical crime data to predict where future crimes will take place.

That historical data is skewed by the fact that police have always targeted marginalized communities, black communities in the United States, indigenous communities here in Australia. So, AI will predict more crime in historically over-policed communities because all that policing has produced lots of data that says there's lots of crime. Those same communities are targeted further, more crimes are identified, more data is created, and the cycle continues.
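That feedback loop is easy to see in a small simulation. The sketch below assumes two neighbourhoods with identical underlying offending that differ only in how heavily they start out patrolled; all the numbers are invented for illustration.

```python
# A minimal simulation of the feedback loop described above. Both
# neighbourhoods have the same underlying rate of offending; the only
# difference is that one starts out more heavily patrolled.
TRUE_OFFENCE_RATE = 100                  # identical in both neighbourhoods
patrols = {"A": 60.0, "B": 40.0}         # "A" is historically over-policed

for year in range(1, 8):
    # Recorded crime tracks how hard you look, not what is actually there:
    # offences are "found" in proportion to patrol presence.
    recorded = {n: TRUE_OFFENCE_RATE * patrols[n] / 100 for n in patrols}
    # The predictive model flags the neighbourhood with more recorded crime as
    # the bigger risk, so the department shifts patrols towards it next year.
    hot = max(recorded, key=recorded.get)
    cold = min(recorded, key=recorded.get)
    shift = 0.1 * patrols[cold]
    patrols[hot] += shift
    patrols[cold] -= shift
    print(f"year {year}: recorded A={recorded['A']:.0f} B={recorded['B']:.0f} | "
          f"patrols A={patrols['A']:.0f} B={patrols['B']:.0f}")
# Despite identical offending, the model's picture of "where crime is"
# drifts further towards the over-policed neighbourhood every year.
```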

Now, these are law and order examples, so you might be thinking that they don't apply to you. But we're actually surrounded by systems like this. Like, who among us hasn't watched an episode or two of Bridgerton just because Netflix said it was for you?

So, if we go back to those examples of Gospel and Lavender, the target recommendation systems used by Israel, you can see why AI isn't reining in war or making it more humane or even more accurate.

By proposing more targets more quickly, AI is actually expanding the genocide to more and more of Gaza. It helps the IDF bomb more hospitals, schools, universities, and homes, not fewer. It adds so many people to kill lists that human oversight goes out the window.

Reporting tells us that recommendations from these systems are being followed without any scrutiny or questioning by the humans in the loop. One source told the Israeli investigative magazine +972 that, quote, at 5 a.m. the Air Force would come and bomb all the houses that we had marked. We took out thousands of people. We didn't go through them one by one. We put everything into automated systems, and as soon as one of the marked individuals was at home, he immediately became a target.

So just because AI makes war feel more scientific and high-tech doesn't make the violence any less brutal and terrible. In fact, it can make the violence worse because it distances decision-makers from suffering and because it provides an excuse for action and defers responsibility away from the people who are meant to exercise judgment. Now, these questions of the effect of AI recommendations on human decision-makers are important, but we should also be very concerned about how military AI makes its decisions. Decisions require knowledge, and so for an AI system, we need to ask how it knows and what it knows and how it uses that information.

Now, knowing things about the enemy is fundamental to warfare, but different technologies shape what and how we know. When you look through binoculars, you see things far away, but the view is narrow. You get distant detail, but you lose the wider context.

If you want that context back, you just take the binoculars off, but AI isn't like that. Understanding how an AI system makes and shapes knowledge is harder. First, what data is being used to build and train the models that power the system? Second, how do the models actually work? Third, how are they actually being implemented? Adding to the challenge of understanding this, military technologies are typically sealed up behind layers of secrecy and national security legislation.

So, let's start with the data sets, the inputs into the system. As we know from investigations into things like credit scores and predictive policing, the data that feed AI systems are plagued by bias and racism and sexism and other forms of discrimination, the things that are all around us today. If you consider all or even many Palestinians to be potential terrorists, that shapes the data that's collected, how it's interpreted, and how the system itself is trained.

So, the AI you build with that data can't escape fundamental biases and injustices. But military data has another problem. In many cases, there isn't enough data to train the AI in the first place.

For example, when the US military develops systems for drone warfare, it doesn't have enough images or video of the many varied and often creative ways that opponents might attack bases or other targets. So, it creates synthetic versions. It uses video game software to invent different types of threats, and it uses automated tools to create video and still imagery, as well as other data, like smartphone metadata, to train the system.
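A hedged sketch of that workaround, with an invented generator standing in for the game-engine and metadata tools: when real examples are scarce, almost everything the model learns comes from the simulator's assumptions rather than from the world.

```python
# A hedged sketch of the synthetic-data workaround -- the generator, labels,
# and proportions are invented. When real examples are scarce, most of what
# the model learns comes from assumptions baked into the simulator.
import random

random.seed(0)

real_examples = [
    {"source": "real", "label": "attack", "features": [0.9, 0.2, 0.4]},
    {"source": "real", "label": "no_attack", "features": [0.1, 0.8, 0.3]},
]  # in practice: far too few to train a model on

def simulate_attack_scenario() -> dict:
    # Stand-in for a game-engine render or metadata generator: it can only
    # produce the kinds of "attacks" its authors thought to script.
    return {
        "source": "synthetic",
        "label": "attack",
        "features": [random.uniform(0.7, 1.0), random.random(), random.random()],
    }

training_set = real_examples + [simulate_attack_scenario() for _ in range(5000)]
synthetic_share = sum(x["source"] == "synthetic" for x in training_set) / len(training_set)
print(f"{synthetic_share:.1%} of the training data was never observed in the world")
```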

So how can we trust these AI tools that make life or death decisions, or even recommendations, that are built on discriminatory and synthetic data? And even if we could improve the data, we should be skeptical about the way these systems work. They do their analysis inside a technological black box that we humans can't open and understand. Neural networks use hidden layers of processing that even their makers can't explain in detail.

Different machine learning techniques work in different ways, but they all share a fundamentally similar approach. They turn the world into statistical data that can be processed by machines. They know and decide by numbers.
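To see what "knowing by numbers" looks like from the inside, here is a toy forward pass through a single hidden layer with random stand-in weights. Even with full access to every value, what you can inspect is arithmetic, not reasons.

```python
# A toy forward pass through a single hidden layer, just to make the point
# about opacity concrete. The weights are random stand-ins: even with full
# access to every value, what you can inspect is arithmetic, not reasons.
import math
import random

random.seed(1)
inputs = [0.3, 0.7, 0.1]  # some datafied slice of the world
W_hidden = [[random.gauss(0, 1) for _ in inputs] for _ in range(4)]
W_out = [random.gauss(0, 1) for _ in range(4)]

hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in W_hidden]
score = 1 / (1 + math.exp(-sum(w * h for w, h in zip(W_out, hidden))))

print("hidden activations:", [round(h, 3) for h in hidden])
print("decision score:", round(score, 3))
# None of these numbers explains *why*; a deployed system has millions of them.
```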

And that's radically different from how you or I perceive the world. We understand it through our own rich experiences. We feel as we make judgments, and we feel the consequences of what we decide.

Do we want to decide who lives and dies using statistics, even in war? I hope I'm not alone in finding that idea beyond the pale. People are impressed by AI because it's really good at something we're pretty bad at: calculations with big numbers. But statistical pictures of the world are always reductive; they're always less rich and contextual than the way we humans experience things.

Right now, we're seeing AI hype all around us, and militaries are as susceptible as anyone. That hype is at a fever pitch at the moment, and it's a big part of the problem. It used to be that militaries drove technological change.

Think of the internet or digital computers. But now it's big tech changing the way militaries buy, develop, and use technology. Silicon Valley and tech companies around the world push experimental technologies into the field and worry about their impacts later. Move fast and break things. Think about what that might mean when AI war is unleashed.

AI technologies are transforming how we know the world and increasingly shaping what we do with that knowledge. In some cases, that's exciting, and the benefits could be huge. But when the way AI knows and acts in the world means more destruction and death, then I think we should reject that in the strongest possible terms.

To make a better future, we need a collective resistance to AI technologies, especially at war. We have to expose the dangers. We have to demand better from our leaders. We have to make AI an issue in our communities and at the ballot box, not just because it matters for education or work or science, but because it matters for the future of life.

Centre of Ideas: Thank you for listening. This event is presented by the Festival of Dangerous Ideas and supported by UNSW Sydney. For more information, visit unswcentreforideas.com and don't forget to subscribe wherever you get your podcasts.

 

Speakers

Michael Richardson

Michael Richardson is a writer, researcher, and teacher living and working on Gadigal and Bidjigal country. He is an Associate Professor in Media and Culture at UNSW Sydney, where he co-directs the Media Futures Hub and the Autonomous Media Lab, and an Associate Investigator with the ARC Centre of Excellence on Automated Decision-Making + Society. His research examines technology, power, witnessing, trauma, and affect in contexts of war, security, and surveillance. His latest book is Nonhuman Witnessing: War, Climate, and Data After the End of the World.