In Iran war, AI and drones are outpacing global rules of war

The ongoing conflict in the Middle East has brought artificial intelligence and the technology behind unmanned aerial vehicles, or drones, into sharp focus. The U.S. military is using the most advanced AI it has ever used in warfare, with Anthropic’s Claude AI reported to be assessing intelligence, identifying targets, and simulating battle scenarios, even as the Pentagon said it would terminate its contract with the company over a disagreement about its use. 

Iran has launched thousands of drones across the Persian Gulf that have hit civilian, commercial, and military targets, upending global oil supplies and grounding thousands of aircraft in one of the busiest transport hubs in the world. These cheaply made and easily deployed UAVs are currently flown remotely by human pilots, but as AI becomes more integrated into militaries, the advancements will become even more pronounced, with “unpredictable, risky, and lethal consequences,” Steven Feldstein, a senior fellow at the Carnegie Endowment for International Peace think tank, told Rest of World.

The biggest role that AI now has in U.S. military operations in Iran, as well as Venezuela, is in decision-support systems, or AI-powered targeting systems, Feldstein said. AI can process reams of surveillance information, satellite imagery, and other intelligence, and provide insights for potential strikes. The AI systems offer speed, scale, and cost-efficiency, and “are a game-changer,” he said.

“My concern is that untested systems with high degrees of lethality will be relied upon and can potentially lead to catastrophic results — e.g., strikes on civilian structures like hospitals and schools,” Feldstein said. “Additionally, I’m concerned that human accountability will be deemphasized, meaning that human operators will only have a limited means to ensure targeting recommendations are accurate before giving assent to proceed. This will harm accountability and lessen command and control oversight for militaries.”


Drones are increasingly seen in conflicts worldwide, from Lebanon to Myanmar to Sudan. Iran developed the Shahed attack drones, which are used extensively by Russia in Ukraine, while Ukraine produced some 4.5 million drones last year alone. Turkey, Israel, and the United Arab Emirates also make UAVs; China is a major producer of not just UAVs, but also uncrewed ground vehicles and underwater drones.

The low cost of UAV production and easy access to off-the-shelf models that can cost as little as $2,000, or be assembled with a 3D printer, means they are also used by non-state actors, including criminal gangs. The next generation of drones is expected to be AI-enhanced, capable of autonomous navigation and precision targeting. 

These inexpensive, commercially available tools can undermine even the most advanced military systems, accelerating a shift toward “forever wars,” the Institute for Economics and Peace think tank said in its most recent Global Peace Index. “Technological innovation, particularly in drone warfare and AI, is making conflict more accessible and more asymmetric — and also more difficult to resolve.”

AI has long been used to analyze satellite imagery and guide missile-defense systems, but the use of chatbots such as Claude in decision-support systems is new. There is no clarity yet on how accurate these systems are and how they make decisions. In a recent study, AI models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95% of cases. Lavender, an AI-powered database used by Israel to analyze surveillance data and identify potential targets in Gaza, was wrong at least 10% of the time, resulting in thousands of civilian casualties.

That is not stopping countries from rushing to integrate AI into military systems. China is prototyping AI capabilities that can pilot unmanned combat vehicles, detect and respond to cyberattacks, and identify and strike targets on land, at sea, and in space, researchers at Georgetown University’s Center for Security and Emerging Technology said. While the U.S. has declared Anthropic a supply-chain risk, the Chinese army “is fostering an ecosystem for rapid AI development that connects novel research with frontline operations,” they said.

Militaries are not yet relying entirely on automated systems, but neither are they relying solely on human operators, Feldstein said. And while AI systems for fully autonomous weapons, a red line for Anthropic, are yet to be developed, there is a growing gap between deployment capabilities and governance, he said.

“Do we have the right rules in place and accountability norms to handle the exponential growing use of these tools? My answer would be no,” Feldstein said, pointing to Ukraine President Volodymyr Zelenskyy’s address at the United Nations in September on AI-powered weapons. Zelenskyy warned that AI had triggered the “most destructive arms race in human history,” and made a plea for urgent global rules on how AI can be used in weapons.

“Sadly, I’m not convinced other leaders have taken his warning to heart,” Feldstein said. “I think we are setting ourselves up for major problems in this arena in the coming years.”
