Artificial Intelligence and the Rule of War

Helmut Sorge | Posted: April 03, 2020

His delicate, almost floating touches created a hitherto unseen aesthetic perfection: the enigmatic face of the Mona Lisa. The mysterious woman, captured on a panel of poplar wood, is the work of a genius, possibly one of the greatest minds in human history: Leonardo da Vinci.

But Da Vinci also had another obsession: the art of war. In a letter to the court of Ludovico Sforza, then ruler of Milan, the celebrated maestro wrote, “I will make covered vehicles, safe and unassailable which will penetrate the enemy and their artillery, and there is no host of armed men so great that they would not break through it… I have also types of cannon… with which to hurl small stones almost like a hail storm; and the smoke from the cannon will instill a great fear in the enemy on account of the grave damage and confusion.”

Da Vinci even designed a mechanical knight. The papers and drawings of his robot were displayed at a Louvre exhibition from October 2019 until February of this year, placing the transcendental Mona Lisa, shown in the Louvre since 1797, and the automaton under one roof: two symbols of our world, the dazzling portrait and the machinery of war.

“LIKE A METEORITE, LIKE A FIREBALL”

Dictators, kings, emperors, field marshals, and scientists have always sought the perfect weapons of their time, ignoring the consequences for the people and societies on which those weapons are used. In December 2019, Vladimir Putin shocked the world with his announcement that Russia was now in possession of a hypersonic glide vehicle that can fly at 27 times the speed of sound and heads toward its targets “like a meteorite, like a fireball.”

The Russian leader declared that this weapon, the Avangard, was invulnerable to current defense systems, and that no other country possessed a hypersonic weapon, let alone one with intercontinental range. In these days of horrendous global economic disruption, with health systems mobilized for sheer survival, the struggle for dominance may have disappeared from the front pages of newspapers, but the advances in war technology continue, and artificial intelligence (AI) in particular challenges the world powers.

“ON THE EDGE OF A NEW FUTURE”

Cyborg or android soldiers, quasi-human beings that are themselves weapons, and robots rejecting human control, pushed by their algorithm-driven vision to destroy the world, were the fantasies of science-fiction authors decades ago, but reality has caught up with science fiction. In 2017, Vladimir Putin recognized that “whoever becomes the leader in this sphere will become the ruler of the world,” a statement Chinese leader Xi Jinping agrees with: he intends to transform his nation into an AI superpower by 2030. The United States, meanwhile, has established the headquarters of the Army Futures Command in Austin, Texas. “Modern advancements in artificial intelligence, machine image recognition and robotics,” noted the New York Times, “have poised some of the world’s largest militaries on the edge of a new future, when weapon systems may find and kill people on the battlefield without human involvement.”

Britain, Israel, and others are already using weapons with autonomous characteristics: missiles and drones that can seek out enemy targets and attack without a human command triggering the immediate decision to fire. Christian Brose, Senior Fellow at the Carnegie Endowment for International Peace, wrote in a paper for the Aspen Strategy Group that “the emergence of [such] technologies is so disruptive that they overtake existing military concepts and capabilities and necessitate a rethinking of how, with what, and by whom war is waged. Such a revolution is unfolding today.” AI, autonomous systems, ubiquitous sensors, advanced manufacturing, and quantum science will transform warfare, Brose predicts. According to Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, “One of the ways we are seeing warfare evolve is people being pushed back from the edge of the battlefield—not just physically but cognitively, as more and more decisions are being made by these systems operating at machine speed.”

In November 2018, the New York Times reported that Milrem Robotics, a company based in Estonia, had developed a war-ready robot: THeMIS (Tracked Hybrid Modular Infantry System). It consists of a mobile body mounted on small tank treads, topped with a turret that can be equipped with a small- or large-caliber machine gun. The robot system includes cameras and target-tracking software, and can pursue people or objects as programmed. “The components are there for a robot that can interpret what it sees, identify likely combatants and target them, all on its own,” the New York Times observed. Da Vinci’s vision had become reality.

“SURRENDERING INTELLIGENCE AND EXPERIENCE”

Terry Cerri, a scientist formerly at the US Army Training and Doctrine Command, said: “Imagine that we are fighting in a city and we have a foe that is using human life indiscriminately as a human shield… You can’t deal with every situation; you are going to make a mistake, unlike autonomous weapons.” Cerri insisted: “A robot, operating in milliseconds, looking at data you can’t even begin to conceive, is going to say, this is the right time to use this kind of weapon to limit collateral damage.” Pentagon planners are convinced that AI is likely to prove useful in improving soldiers’ situational awareness on the battlefield and the ability of commanders to make decisions and communicate orders. AI can process more information than humans, and can do it more quickly, making it a useful tool for assessing chaotic battles in real time. On the battlefield itself, machines can move faster and with greater precision and coordination than soldiers, AI proponents say. Others argue, however, that American commanders would never accept fully autonomous systems, “because it would mean surrendering the intelligence and experience of highly trained officers to machines” (New York Times, June 28, 2019).

Scharre of the Center for a New American Security remembered one of his military assignments on the Afghanistan-Pakistan border, observing a young girl supposedly watching goats. GIs discovered she was on a cell phone, apparently informing the enemy of their position. The rules of war would have labeled her a combatant who could lawfully be shot. Had an artificially intelligent robot programmed to comply with those rules been in his place, the outcome could have been a tragedy: the girl would have been fired on. Scharre said the situation raised a question: “How would you design a robot to know the difference between what is legal and what is right?”

“STOP KILLER ROBOTS”

For Scharre, there is “an important asymmetry between humans and machines in the rules of war, which is that humans are legal agents and the machines are not.” An autonomous weapon is no more a legal agent than an M16 rifle. Some nations insist there must be meaningful human control of all weapons; the US Defense Department asks only for “appropriate human judgment.” Just 30 of the almost 200 United Nations member states have supported an international ban on so-called lethal autonomous weapons (LAWs). Supporters include Morocco, Uganda, Algeria, Djibouti, Ghana, Iraq, Egypt, and Palestine. UN Secretary-General Antonio Guterres publicly declared that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law.” More than 100 nongovernmental organizations agree. Their opposition is coordinated by the Campaign to Stop Killer Robots, but their actions are consistently stymied by a minority of military powers (Israel, South Korea, Russia, the United States, Britain, and France) that are developing, for example, experimental autonomous stealth combat drones to operate in an enemy’s heavily defended airspace. As the ability of systems to act autonomously increases, those who study the dangers of those weapons fear that military planners might be tempted to eliminate human controls altogether.

Five years ago, Apple co-founder Steve Wozniak, Elon Musk, Stephen Hawking, and more than 1,000 robotics and AI researchers signed an open letter warning that “autonomous weapons will become the Kalashnikovs of the future, ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” But such warnings seem destined to go unheeded. They compete “with the growing acceptance of this technology,” according to the New York Times. Another attempt to gather forces to combat killer robots, scheduled for April 2020 at the University of Ottawa, Canada, was canceled: the WeRobot conference on law and policy relating to robotics, which was supposed to alert the world to lethal machines, was sabotaged by the coronavirus.

The opinions expressed in this article belong to the author.
