It’s a constant theme of defense technology coverage, including this column: autonomy will fundamentally change the dynamics of warfare. Smaller, faster computers and the ability to split sensing and processing between different nodes open up all kinds of novel possibilities for weapons and warfare. And parallel technologies, such as more efficient batteries and smaller, more capable sensors, make it possible to endow ever smaller and cheaper systems with the ability to navigate the world and act within it.
And yet, true autonomy remains an extremely hard problem. Machines are much better than humans at certain subsets of tasks: storing large amounts of information, for instance, recalling specific pieces of it on command, or pattern-matching. But the nimbleness of human minds and their facility with problem-solving have yet to be duplicated in artificial form. And that is before considering security: an autonomous system is inherently no safer from infiltration or sabotage than our notoriously insecure existing computer systems.
Self-driving cars offer a taste of the difficulties involved. For the past two decades, they have been hailed as the next big thing in urban design, personal mobility, automotive safety, and even the fight against climate change. Major tech and automotive companies have raised and spent billions of dollars on developing, testing, and refining them. And yet, in the real world, autonomous vehicles have suffered one setback after another, to the point where companies that have staked their strategies – and huge amounts of financial capital – on their near-term viability are beginning to reconsider.
It might be argued that building autonomous vehicles that operate in civilian contexts with a lower accident rate than their human-operated counterparts is a fundamentally different task from the security or military operations that might be entrusted to robots. But military applications are likely to involve either repetitive tasks in similarly complex environments – say, delivering supplies – which would face many of the same challenges, or unpredictable, high-speed combat tasks that would demand a high degree of adaptability from the machine. In either case, there is no reason why a higher failure rate would be more acceptable to security decision-makers than to civilian regulators.
So, what happens to the future of military technology if our broader projections of computerized autonomy turn out to be overly optimistic? What happens if the doors that seem to be opening right now lead to dead ends, or to long, winding passageways with no obvious destination?
For one thing, it would force militaries and defense establishments to fundamentally reconsider their long-lead-time procurement strategies. Air forces hoping to offset the growing expense of tactical aircraft by reinforcing their numbers with cheaper, expendable drones might instead need to procure cheaper but still effective crewed alternatives if autonomous technology isn’t equal to the task of managing air combat. And navies planning to turn over the hunt for enemy submarines to extremely long-range autonomous ships or submarines might have to reinvest in the old concept of simple, durable vessels designed for long-duration, low-speed patrols if autonomous systems prove inadequate to that task.
Autonomy has also been held out as a means of developing weapons systems that can exceed the limits imposed by the fragility of the human body. Fighter jets, for example, have for decades been capable of maneuvers so intense that they risk knocking out their pilots, a constraint that would vanish if the pilot were removed from the equation. But the big bet behind the F-35 – which is famously less maneuverable than some of the planes it’s designed to replace – is that stealth, sensor fusion, and better weapons will allow it to fight effectively without close-quarters maneuvering. If that bet pays off, superhuman maneuverability, one of the chief advantages of taking the pilot out of the cockpit, may matter less than autonomy’s advocates assume.
Ironically, if military autonomous technology fizzles, it might produce a rare moment of convergence between those who seek a ban on autonomous weapons and those in the defense-industrial and military futurist communities who have been pushing for their adoption. Systems that don’t work reliably are likely to be both militarily ineffective and incapable of complying with the laws of war, a combination that would render them, for very different reasons, ripe to be banned outright or widely shunned on both normative and operational grounds.
Of course, the future of any given technology is rarely as simple as “it doesn’t work.” Autonomous systems are already capable of some tasks and will likely gain further relevant capabilities over time. But technology rarely progresses in a straight line, and it behooves military theorists to think seriously about the ways technologies could fail, as well as succeed, before investing too much in them.