
Why military ‘drone swarms’ raise ethical concerns in future wars


As researchers apply artificial intelligence and autonomy to lethal aerial machines, their systems pose new questions about how much humans will remain in control of modern combat.


The proliferation of cheap drones in conflicts in Ukraine and the Middle East has sparked a scramble to perfect uncrewed vehicles that can plan and work together on the battlefield. 

These next-generation, intelligent “swarms” would represent a breakthrough in warfare. Rather than soldiers piloting individual uncrewed vehicles, militaries could deploy air and seaborne swarms on missions “with limited need for human attention and control,” according to a recent U.S. government report. It’s the “holy grail” for the military, says Samuel Bendett, an adviser to the Center for Naval Analyses, a federally funded research and development center.

It’s also an ethical minefield. As researchers apply artificial intelligence and autonomy to lethal machines, their systems raise the specter of drone armies and pose new questions about the role human control should play in modern combat. And while Pentagon officials have long promised that humans will always be “in the loop” when it comes to decisions to kill, the Defense Department last year updated its guidance to address AI autonomy in weapons. 

Why We Wrote This

Artificial intelligence-powered drone technology could eventually change warfare. But the autonomy of lethal machines raises serious ethical dilemmas around how, and whether, to regulate development, deployment, and use of AI.

“It’s a very high level of approval to even proceed with testing of a fully autonomous weapons system,” says Duane T. Davis, a senior lecturer in the computer science department at the Naval Postgraduate School in Monterey, California. But it does “provide for the possibility of completely autonomous weapons systems.”  

That’s largely because much U.S. military research is driven by fears of how adversaries may exploit their own swarm technology in a future conflict with the United States or its allies. The question going forward is whether the Pentagon can overcome the myriad technological challenges of drone warfare while also maintaining the ethics of a democratic state.

The concern is that China “is not going to wrestle with these same ethical decisions in the way that we will,” says Dr. Davis. 

What makes a swarm

Today’s group attacks by uncrewed military aircraft over battlefields – as well as the drone light shows now popping up as entertainment in night skies over the U.S. – are not intelligent swarms. The former are essentially salvos of slow-moving aerial “missiles,” each one operated by a human, with no machine-to-machine coordination or communication. The latter – a high-tech alternative to fireworks – are preprogrammed displays flown in near-ideal conditions, which aren’t particularly useful in a military setting, since an adversary can figure out how to counter them.

“For an enemy, that just means I’ve got a pattern of things I can shoot at, or they’re operating similarly, so it’s easier to predict what they’re going to do,” notes Bryan Clark, senior fellow at Hudson Institute. 

Swarms instead use an array of sensors to communicate drone to drone – and then rely on AI to plan and collaborate on attacks on the fly. They’re programmed to create a siege of overwhelming force from “a bunch of different angles – the way ants crawl all over a beetle, or whatever, to eat it,” says Zachary Kallenborn, a fellow at George Mason University’s Schar School of Policy and Government.
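As a purely illustrative sketch – not any real military system, and with every name and number invented for the example – the “many angles” idea can be reduced to a few lines of Python: each drone is assigned its own bearing around a shared target and steers itself there independently, with no central pilot.

```python
import math

# Toy sketch of the "ants crawling over a beetle" picture: N drones converge
# on one target from evenly spaced bearings. Purely illustrative; all names,
# parameters, and behaviors here are invented for this example.

TARGET = (0.0, 0.0)      # shared objective, agreed before launch
STANDOFF = 5.0           # final ring radius around the target
STEP = 1.0               # distance each drone moves per tick

def approach_point(drone_index: int, num_drones: int) -> tuple[float, float]:
    """Each drone gets its own angle on the target, so the swarm arrives
    from many directions at once instead of in a predictable line."""
    angle = 2 * math.pi * drone_index / num_drones
    return (TARGET[0] + STANDOFF * math.cos(angle),
            TARGET[1] + STANDOFF * math.sin(angle))

def step_toward(pos, goal):
    """Move one increment toward the goal; no central controller needed."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= STEP:
        return goal
    return (pos[0] + STEP * dx / dist, pos[1] + STEP * dy / dist)

# Simulate a 6-drone swarm launched from the same point.
positions = [(-30.0, 10.0)] * 6
goals = [approach_point(i, len(positions)) for i in range(len(positions))]
for _ in range(40):  # a few dozen ticks is plenty for this toy example
    positions = [step_toward(p, g) for p, g in zip(positions, goals)]

print(positions)  # the drones now ring the target from six different bearings
```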

The artificial intelligence-enabled drone from Swarmer flies in the region near Kyiv, Ukraine, June 27, 2024. U.S. defense planners say the use of drones in “swarms” that rely on AI to complete their missions would be a breakthrough that raises ethical questions about reduced human control of combat.

A big challenge for current drone operators on Ukraine’s battlefields is Russian jamming technology, which can block communication between operators and drones – and thus between the drones themselves. To address this challenge, some researchers are working on ways for drones to observe and infer what other drones are doing.

The fog of war complicates visual observation. That’s why Theodore Pavlic of Arizona State University recently began studying weaver ants in Australia at the behest of U.S. Special Operations Command. As the ants swarm and transport their prey up trees, they sense each other’s presence without constantly looking around. 

They also cooperate and make decisions as a team. “If we can replicate that [with drones], you can basically hit go, and they will plan their own way,” says Dr. Pavlic, who also studies stingless bees and other types of ants. “If new challenges occur, then they can [set] temporary short-term goals to get around those challenges.” 
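A rough, hypothetical sketch of that ant-inspired approach – invented here for illustration, not drawn from Dr. Pavlic’s actual models – might have each drone adjust its position using only locally sensed neighbors, and adopt a temporary detour waypoint when its path is blocked:

```python
import math

# Illustrative only: a drone keeps spacing using just what it can sense
# locally (neighbor positions), with no radio link and no central operator.
# A drone that detects an obstacle sets a temporary detour waypoint, echoing
# the "short-term goals" idea described above. All details are invented.

SPACING = 4.0   # desired distance between neighbors
GAIN = 0.2      # how strongly a drone corrects toward the desired spacing

def spacing_correction(my_pos, sensed_neighbors):
    """Nudge toward neighbors that are too far, away from ones too close."""
    cx = cy = 0.0
    for nx, ny in sensed_neighbors:
        dx, dy = nx - my_pos[0], ny - my_pos[1]
        dist = math.hypot(dx, dy) or 1e-9
        error = dist - SPACING          # positive: too far; negative: too close
        cx += GAIN * error * dx / dist
        cy += GAIN * error * dy / dist
    return (my_pos[0] + cx, my_pos[1] + cy)

def next_waypoint(goal, obstacle_ahead: bool, detour=(2.0, 2.0)):
    """If the direct path is blocked, adopt a temporary short-term goal."""
    if obstacle_ahead:
        return (goal[0] + detour[0], goal[1] + detour[1])
    return goal

# One tick for a single drone that senses two neighbors and a blocked path.
pos = (0.0, 0.0)
neighbors = [(6.0, 0.0), (0.0, 2.0)]
pos = spacing_correction(pos, neighbors)
print(pos, next_waypoint((10.0, 10.0), obstacle_ahead=True))
```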

Bang for the buck

Building smart drones, with more onboard intelligence and computing power, means bigger and more expensive machines, and that has a downside. “Computers can only be so small, and you can only put so much power and payload onto a drone,” says Nisar Ahmed, director of the Research and Engineering Center for Unmanned Vehicles at the University of Colorado Boulder. 

For starters, just getting a drone off the ground requires roughly 10 times the energy that a world-class sprinter expends to run a 100-meter race, says Vijay Kumar, dean of the University of Pennsylvania’s engineering school. The result: Missions with aerial drones are currently limited in distance and time. Since longer-range drones are expensive, cheaper drones that can stay aloft for an hour – or even 30 minutes – offer more bang for the buck.

Despite the challenges, researchers are making progress. Red Cat Holdings, a drone technology company in Puerto Rico, announced last year a system in which one person could operate four of its Teal drones, as opposed to today’s one-operator-per-drone standard. The company aims to increase that ratio by pushing even more autonomy onto the machines themselves.

Embedding such autonomy in lethal machines, however, also poses ethical challenges about maintaining human oversight – particularly as the speed and complexity of drone decision-making increases. Humans, after all, don’t process information as quickly as machines, which may increase pressure to take humans out of the loop if, say, China or another adversary deploys AI-equipped drones capable of full autonomy.

The Pentagon hired an ethics officer in 2020 to grapple with precisely such challenges. Still, “We need more people thinking about them in the context of the military, in the context of international law, in the context of ethics,” says Margaret E. Kosal, a professor at the Sam Nunn School of International Affairs at the Georgia Institute of Technology and former science and technology adviser at the Defense Department. 

A machine gun analogy

What is clear is that the technology will continue to develop at breakneck speed, even as researchers wrestle with challenges specific to the battlefield of the day. Drones will change war the way the machine gun did more than a century ago, says George Matus, chief technology officer of Red Cat and founder of its Teal subsidiary. 

“Back then, a handful of gunners could defeat large numbers of even the mightiest cavalry. [Sometimes, even] today, a handful of drones can defeat a battalion of the mightiest armored vehicles before they even reach the front line.” In the future, intelligent swarms will prove even more effective, he adds.

While many researchers worry the technology is one more step toward all-out swarm warfare, Mr. Matus embraces the vision.

“The front line is going to become majority automated, if not fully automated,” he says. “There’s no doubt in my mind at least for the next couple of decades, this is going to be a very large part of the future of war.”

Others see it as an evolutionary step with more limited battlefield applications. “It is not fundamentally going to be a revolution in military affairs,” says Dr. Kosal. “That doesn’t mean we shouldn’t be worried.”
