‘Machines set loose to slaughter’: the dangerous rise of military AI

The video is stark. Two menacing men stand next to a white van in a field, holding remote controls. They open the van’s back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The killer robots flood in through windows and vents. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. Terrorists could easily deploy them. And existing defences are weak or nonexistent.

Some military experts argued that Slaughterbots – which was made by the Future of Life Institute, an organisation researching existential threats to humanity – sensationalised a serious problem, stoking fear where calm reflection was required. But when it comes to the future of war, the line between science fiction and industrial fact is often blurry. The US air force has predicted a future in which “Swat teams will send mechanical insects equipped with video cameras to creep inside a building during a hostage standoff”. One “microsystems collaborative” has already released Octoroach, an “extremely small robot with a camera and radio transmitter that can cover up to 100 metres on the ground”. It is only one of many “biomimetic”, or nature-imitating, weapons that are on the horizon.

A still from Slaughterbots. Photograph: Future of Life Institute/YouTube

Who knows how many other noxious creatures are now models for avant-garde military theorists. A recent novel by PW Singer and August Cole, set in a near future in which the US is at war with China and Russia, presented a kaleidoscopic vision of autonomous drones, lasers and hijacked satellites. The book cannot be written off as a techno-military fantasy: it includes hundreds of footnotes documenting the development of each piece of hardware and software it describes.

Advances in the modelling of robotic killing machines are no less disturbing. A Russian science fiction story from the 60s, Crabs on the Island, described a kind of Hunger Games for AIs, in which robots would battle one another for resources. Losers would be scrapped and winners would spawn, until some evolved to be the best killing machines. When a leading computer scientist mentioned a similar scenario to the US’s Defense Advanced Research Projects Agency (Darpa), calling it a “robot Jurassic Park”, a leader there called it “feasible”. It doesn’t take much reflection to realise that such an experiment has the potential to go wildly out of control. Expense is the chief impediment to a great power experimenting with such potentially destructive machines. Software modelling may eliminate even that barrier, allowing battle-tested virtual simulations to inspire future military investments.

In the past, nation states have come together to prohibit particularly gruesome or terrifying new weapons. By the mid-20th century, international conventions banned biological and chemical weapons. The community of nations has forbidden the use of blinding-laser technology, too. A robust network of NGOs has successfully urged the UN to convene member states to agree to a similar ban on killer robots and other weapons that can act on their own, without direct human control, to destroy a target (also known as lethal autonomous weapon systems, or Laws). And while there has been debate about the definition of such technology, we can all imagine some particularly terrifying kinds of weapons that all states should agree never to make or deploy. A drone that gradually heated enemy soldiers to death would violate international conventions against torture; sonic weapons designed to wreck an enemy’s hearing or balance should merit similar treatment. A country that designed and used such weapons should be exiled from the international community.

In the abstract, we can probably agree that ostracism – and more severe punishment – is also merited for the designers and users of killer robots. The very idea of a machine set loose to slaughter is chilling. And yet some of the world’s largest militaries seem to be creeping toward developing such weapons, by pursuing a logic of deterrence: they fear being crushed by rivals’ AI if they can’t unleash an equally potent force. The key to solving such an intractable arms race may lie less in global treaties than in a cautionary rethinking of what martial AI may be used for. As “war comes home”, deployment of military-grade force within countries such as the US and China is a stark warning to their citizens: whatever technologies of control and destruction you allow your government to buy for use abroad now may well be used against you in the future.


Are killer robots as horrific as biological weapons? Not necessarily, argue some establishment military theorists and computer scientists. According to Michael Schmitt of the US Naval War College, military robots could police the skies to ensure that a slaughter like Saddam Hussein’s killing of Kurds and Marsh Arabs could not happen again. Ronald Arkin of the Georgia Institute of Technology believes that autonomous weapon systems may “reduce man’s inhumanity to man through technology”, since a robot will not be subject to all-too-human fits of anger, sadism or cruelty. He has proposed taking humans out of the loop of decisions about targeting, while coding ethical constraints into robots. Arkin has also developed target classification to protect sites such as hospitals and schools.

In theory, a preference for controlled machine violence rather than unpredictable human violence might seem reasonable. Massacres that take place during war often seem to be rooted in irrational emotion. Yet we often reserve our deepest condemnation not for violence done in the heat of passion, but for the premeditated murderer who coolly planned his attack. The history of warfare offers many examples of more carefully planned massacres. And surely any robotic weapons system is likely to be designed with some kind of override feature, which would be controlled by human operators, subject to all the normal human passions and irrationality.

A French soldier launches a drone in northern Burkina Faso. Photograph: Michele Cattani/AFP/Getty

Any attempt to code law and ethics into killer robots raises enormous practical difficulties. Computer science professor Noel Sharkey has argued that it is impossible to programme a robot warrior with reactions to the infinite array of situations that could arise in the heat of conflict. Like an autonomous car rendered helpless by snow interfering with its sensors, an autonomous weapon system in the fog of war is dangerous.

Most soldiers would testify that the everyday experience of war is long stretches of boredom punctuated by sudden, terrifying spells of disorder. Standardising accounts of such incidents, in order to guide robotic weapons, might be impossible. Machine learning has worked best where there is a massive dataset with clearly understood examples of good and bad, right and wrong. For example, credit card companies have improved fraud detection mechanisms with constant analyses of hundreds of millions of transactions, where false negatives and false positives are easily labelled with nearly 100% accuracy. Would it be possible to “datafy” the experiences of soldiers in Iraq, deciding whether to fire at ambiguous enemies? Even if it were, how relevant would such a dataset be for occupations of, say, Sudan or Yemen (two of the many nations with some kind of US military presence)?
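The contrast drawn above, between abundant, cleanly labelled transactions and ambiguous battlefield judgments, can be made concrete with a toy sketch. Everything below is invented for illustration: the data is synthetic and the nearest-centroid “model” is a deliberately crude stand-in for what a real fraud system would use. The point it shows is that when every example carries a known label, false positives and false negatives can be counted exactly:

```python
# Toy illustration of supervised learning from labelled examples.
# Each transaction carries a known "fraud"/"legit" label, so the
# model's errors (false positives/negatives) are exactly countable.

def train_centroids(examples):
    """Compute the mean feature value for each label (nearest-centroid model)."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(centroids, value):
    """Assign the label whose centroid lies closest to the feature value."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# Synthetic labelled training data: (transaction amount, ground-truth label).
training = [(20, "legit"), (35, "legit"), (50, "legit"),
            (900, "fraud"), (1200, "fraud"), (1500, "fraud")]
centroids = train_centroids(training)

# On held-out cases with known labels, mistakes are easy to tally.
holdout = [(40, "legit"), (1100, "fraud"), (700, "fraud")]
false_neg = sum(1 for v, lbl in holdout
                if lbl == "fraud" and classify(centroids, v) == "legit")
false_pos = sum(1 for v, lbl in holdout
                if lbl == "legit" and classify(centroids, v) == "fraud")
print(false_pos, false_neg)  # → 0 0
```

No comparable ground truth exists for a soldier’s split-second judgment in an ambiguous firefight, which is the gap the paragraph above describes.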

Given these difficulties, it is hard to avoid the conclusion that the idea of ethical robotic killing machines is unrealistic, and all too likely to support dangerous fantasies of pushbutton wars and guiltless slaughters.


International humanitarian law, which governs armed conflict, poses even more challenges to developers of autonomous weapons. A key ethical principle of warfare has been one of discrimination: requiring attackers to distinguish between combatants and civilians. But guerrilla or insurgent warfare has become increasingly common in recent decades, and combatants in such situations rarely wear uniforms, making it harder to distinguish them from civilians. Given the difficulties human soldiers face in this regard, it’s easy to see the even greater risk posed by robotic weapons systems.

Proponents of such weapons insist that the machines’ powers of discrimination are only improving. Even if this is so, it is a massive leap in logic to assume that commanders will use these technological advances to develop just principles of discrimination in the din and confusion of war. As the French thinker Grégoire Chamayou has written, the category of “combatant” (a legitimate target) has already tended to “be diluted in such a way as to extend to any form of membership of, collaboration with, or presumed sympathy for some militant organization”.

The principle of distinguishing between combatants and civilians is only one of many international laws governing warfare. There is also the rule that military operations must be “proportional” – a balance must be struck between potential harm to civilians and the military advantage that might result from the action. The US air force has described the question of proportionality as “an inherently subjective determination that will be resolved on a case by case basis”. No matter how well technology monitors, detects and neutralises threats, there is no evidence that it can engage in the type of subtle and flexible reasoning essential to the application of even slightly ambiguous laws or norms.


Even if we were to assume that technological advances could reduce the use of lethal force in warfare, would that always be a good thing? Surveying the growing influence of human rights principles on conflict, the historian Samuel Moyn observes a paradox: warfare has become at once “more humane and harder to end”. For invaders, robots spare politicians the worry of casualties stoking opposition at home. An iron fist in the velvet glove of advanced technology, drones can mete out just enough surveillance to pacify the occupied, while avoiding the kind of devastating bloodshed that would provoke a revolution or international intervention.

In this robotised vision of “humane domination”, war would look more and more like an extraterritorial police action. Enemies would be replaced with suspect persons subject to mechanised detention instead of lethal force. However lifesaving it may be, Moyn suggests, the massive power differential at the heart of technologised occupations is not a proper foundation for a legitimate international order.

Chamayou is also sceptical. In his insightful book Drone Theory, he reminds readers of the slaughter of 10,000 Sudanese in 1898 by an Anglo-Egyptian force armed with machine guns, which itself suffered only 48 casualties. Chamayou brands the drone “the weapon of amnesiac postcolonial violence”. He also casts doubt on whether advances in robotics would actually result in the kind of precision that fans of killer robots promise. Civilians are routinely killed by military drones piloted by humans. Removing that possibility may involve an equally grim future in which computing systems conduct such intense surveillance on subject populations that they can assess the threat posed by each person within it (and liquidate or spare them accordingly).

Drone advocates say the weapon is key to a more discriminating and humane warfare. But for Chamayou, “by ruling out the possibility of combat, the drone destroys the very possibility of any clear differentiation between combatants and noncombatants”. Chamayou’s claim may seem like hyperbole, but consider the situation on the ground in Yemen or the Pakistani hinterlands: is there really any serious resistance that the “militants” can sustain against a stream of hundreds or thousands of unmanned aerial vehicles patrolling their skies? Such a controlled environment amounts to a disturbing fusion of war and policing, stripped of the restrictions and safeguards that have been established to at least try to make these fields accountable.


How should global leaders respond to the prospect of these dangerous new weapons technologies? One option is to try to come together to ban outright certain methods of killing. To understand whether or not such international arms control agreements could work, it is worth looking at the past. The antipersonnel landmine, designed to kill or maim anyone who stepped on or near it, was an early automated weapon. It terrified combatants in the first world war. Cheap and easy to distribute, mines continued to be used in smaller conflicts around the globe. By 1994, soldiers had laid 100m landmines in 62 countries.

The mines continued to devastate and intimidate populations for years after hostilities ceased. Mine casualties commonly lost at least one leg, sometimes two, and suffered collateral lacerations, infections and trauma. In 1994, 1 in 236 Cambodians had lost at least one limb from mine detonations.

US soldiers alongside a landmine detection robot in Afghanistan. Photograph: Wally Santana/AP

By the mid-90s, there was growing international consensus that landmines should be prohibited. The International Campaign to Ban Landmines pressured governments around the world to condemn them. The landmine is not nearly as deadly as many other arms, but, unlike other applications of force, it could maim and kill noncombatants long after a battle was over. By 1997, when the campaign to ban landmines won a Nobel peace prize, dozens of countries signed on to an international treaty, with binding force, pledging not to manufacture, stockpile or deploy such mines.

The US demurred, and to this day it has not signed the anti-landmine weapons convention. At the time of negotiations, US and UK negotiators insisted that the real solution to the landmine problem was to assure that future mines would all automatically shut off after some fixed period of time, or had some remote control capabilities. That would mean a device could be switched off remotely once hostilities ceased. It could, of course, be switched back on again, too.

The US’s technological solutionism found few supporters. By 1998, dozens of countries had signed on to the mine ban treaty. More countries joined each year from 1998 to 2010, including major powers such as China. While the Obama administration took some important steps toward limiting mines, Trump’s secretary of defense has reversed them. This about-face is just one facet of a bellicose nationalism that is likely to accelerate the automation of warfare.


Instead of bans on killer robots, the US military establishment prefers regulation. Concerns about malfunctions, glitches or other unintended consequences from automated weaponry have given rise to a measured discourse of reform around military robotics. For example, the New America Foundation’s PW Singer would allow a robot to make “autonomous use only of non-lethal weapons”. So an autonomous drone could patrol a desert and, say, stun a combatant or wrap him up in a net, but the “kill decision” would be left to humans alone. Under this rule, even if the combatant tried to destroy the drone, the drone could not destroy him.

Such rules would help transition from war to peacekeeping, and finally to a form of policing. Time between capture and kill decisions might enable the due process necessary to assess guilt and set a punishment. Singer also emphasises the importance of accountability, arguing that “if a programmer gets an entire village blown up by mistake, he should be criminally prosecuted”.

Whereas some military theorists want to code robots with algorithmic ethics, Singer wisely builds on our centuries-long experience with regulating persons. To ensure accountability for the deployment of “war algorithms”, militaries would need to ensure that robots and algorithmic agents are traceable to and identified with their creators. In the domestic context, scholars have proposed a “license plate for drones”, to link any reckless or negligent actions to the drone’s owner or controller. It makes sense that a similar rule – something like “A robot must always indicate the identity of its creator, controller, or owner” – should serve as a fundamental rule of warfare, and its violation punishable by severe sanctions.
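What such a “licence plate” rule might look like in software can be sketched in a few lines. Everything here is hypothetical: the registry, the drone identifier, and the message format are invented for illustration, and a real scheme would use public-key signatures and a trusted international registry rather than shared secrets. The sketch only shows the core idea, that every broadcast is cryptographically tied to a registered owner:

```python
# Hypothetical sketch of a drone "licence plate": every message a drone
# emits is signed with a key registered to its owner, so actions can be
# traced back. Uses HMAC (shared secret) purely for brevity; a real
# scheme would use public-key signatures and a trusted registry.
import hmac
import hashlib

# Invented registry mapping a drone's ID to its owner's registered key.
REGISTRY = {"drone-0042": b"owner-registered-secret-key"}

def sign_beacon(drone_id, payload):
    """Produce an identity beacon: the message plus a tag tied to the owner's key."""
    key = REGISTRY[drone_id]
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"id": drone_id, "payload": payload, "tag": tag}

def verify_beacon(beacon):
    """An authority holding the registry can confirm who controls the drone."""
    key = REGISTRY.get(beacon["id"])
    if key is None:
        return False  # unregistered robot: already a rule violation
    expected = hmac.new(key, beacon["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, beacon["tag"])

beacon = sign_beacon("drone-0042", b"patrol sector 7")
print(verify_beacon(beacon))  # a tampered or unregistered beacon would fail
```

The design choice matters: verification must be possible for outside authorities, not just the operator, which is why the rule envisages traceability as a condition of lawful deployment rather than an internal logging feature.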

Yet how likely is it, really, that programmers of killer robots would actually be punished? In 2015, the US military bombed a hospital in Afghanistan, killing 22 people. Even as the bombing was occurring, staff at the hospital frantically called their contacts in the US military to beg it to stop. Human beings have been directly responsible for drone attacks on hospitals, schools, wedding parties and other inappropriate targets, without commensurate consequences. The “fog of war” excuses all manner of negligence. It does not seem likely that domestic or international legal systems will impose more responsibility on programmers who cause similar carnage.


Weaponry has always been big business, and an AI arms race promises profits to the tech-savvy and politically well-connected. Counselling against arms races may seem utterly unrealistic. After all, nations are pouring massive resources into military applications of AI, and many citizens don’t know or don’t care. Yet that quiescent attitude may change over time, as the domestic use of AI surveillance ratchets up, and that technology is increasingly identified with shadowy apparatuses of control, rather than democratically accountable local powers.

Military and surveillance AI is not used only, or even primarily, on foreign enemies. It has been repurposed to identify and fight enemies within. While nothing like the September 11 attacks has occurred in the US in almost two decades, homeland security forces have quietly turned antiterror tools against criminals, insurance frauds and even protesters. In China, the government has hyped the threat of “Muslim terrorism” to round up a sizeable percentage of its Uighurs into reeducation camps and to intimidate others with constant phone inspections and risk profiling. No one should be surprised if some Chinese equipment powers a US domestic intelligence apparatus, while massive US tech firms get co-opted by the Chinese government into parallel surveillance projects.

A police robot enforcing coronavirus rules in Shenzhen, China, in March 2020. Photograph: Alex Plavevski/EPA

The advance of AI use in the military, police, prisons and security services is less a rivalry among great powers than a lucrative global project by corporate and government elites to maintain control over restive populations at home and abroad. Once deployed in distant battles and occupations, military methods tend to find a way back to the home front. They are first deployed against unpopular or relatively powerless minorities, and then spread to other groups. US Department of Homeland Security officials have gifted local police departments with tanks and armour. Sheriffs will be even more enthusiastic for AI-driven targeting and threat assessment. But it is important to remember that there are many ways to solve social problems. Not all require constant surveillance coupled with the mechanised threat of force.

Indeed, these may be the least effective way of ensuring security, either nationally or internationally. Drones have enabled the US to maintain a presence in various occupied zones for far longer than an army would have persisted. The constant presence of a robotic watchman, capable of alerting soldiers to any threatening behaviour, is a form of oppression. American defence forces may insist that threats from parts of Iraq and Pakistan are menacing enough to justify constant watchfulness, but they ignore the ways such authoritarian actions can provoke the very anger they are meant to quell.

At present, the military-industrial complex is speeding us toward the development of drone swarms that operate independently of humans, ostensibly because only machines will be fast enough to anticipate the enemy’s counter-strategies. This is a self-fulfilling prophecy, tending to spur an enemy’s development of the very technology that supposedly justifies militarisation of algorithms. To break out of this self-destructive loop, we need to question the entire reformist discourse of imparting ethics to military robots. Rather than marginal improvements along a path of competition in war-fighting ability, we need a different path – to cooperation and peace, however fragile and difficult its achievement may be.

In her book How Everything Became War and the Military Became Everything, former Pentagon official Rosa Brooks describes a growing realisation among American defence experts that development, governance and humanitarian aid are just as important to security as the projection of force, if not more so. A world with more real resources has less reason to pursue zero-sum wars. It will also be better equipped to fight natural enemies, such as novel coronaviruses. Had the US invested a fraction of its military spending in public health capacities, it almost certainly would have avoided tens of thousands of deaths in 2020.

For this more expansive and humane mindset to prevail, its advocates must win a battle of ideas in their own countries about the proper role of government and the paradoxes of security. They must shift political aims away from domination abroad and toward meeting human needs at home. Observing the growth of the US national security state – what he deems the “predator empire” – the author Ian GR Shaw asks: “Do we not see the ascent of control over compassion, security over support, capital over care, and war over welfare?” Stopping that ascent should be the primary goal of contemporary AI and robotics policy.

Adapted from New Laws of Robotics: Defending Human Expertise in the Age of AI by Frank Pasquale, which will be published by Harvard University Press on 27 October
