
Thursday, February 15, 2024

Robotic kill: coming soon to an autonomous battlefield

Ali Naqvi discusses how warfare technology has evolved, with greater emphasis now placed on lethal autonomous weapon systems. While such weapons can increase efficiency, they can also be highly unpredictable.

The history of war is as old as human existence on Earth. Every era brought new technologies and, with them, heightened expectations. In the past, predictions about future warfare placed too much emphasis on new technologies.

The 19th century brought rapid troop mobilization, supported by massed artillery and a focus on swift offensives. These ideas were proved wrong during WWI, which convinced the European powers to invest more heavily in developing new technologies.

Read more: AI and national security; a new arms race?

Later on, in the 1930s, it was believed that the aerial bombardment of cities would be devastating enough to prompt immediate surrender, but this was proven wrong again when Britain refused to give in to the Luftwaffe. The strategy failed to bring a quick and decisive victory.

At the end of the 20th century, the Americans demonstrated newly developed technology in the first Gulf War. The combination of precision-guided munitions, surveillance, space-based communication, and stealth technology achieved such results that many assumed future wars would always be swift and decisive.

Read more: Present and Future of the American F-35

However, the aftermath of the 9/11 attacks and the decades-long conflicts that followed once again proved how difficult it is to predict war. Warfare technology has improved a great deal, but one aspect has remained the same: war's unpredictability. It is fair to say that the more things change, the more they stay the same.

Warfare in the 21st century

With the rapid spread of the internet and fast advances in smart electronics, the dynamics of warfare are transforming at a fundamental level. Faster processing speeds and smaller form factors make weapons easier to deploy and measurably more efficient.

The leading military powers of the world are competing to develop robotic weapon systems that can operate autonomously. Although the advantages of such weapons are plentiful, the anxieties they raise are just as important to consider.

Read more: Can & Will Robots render Humans useless?

There is an arms race underway that focuses on autonomous killer machines. These machines do not resemble the Terminator or other fictional humanoid robots; building such robots and fielding them on the battlefield is not yet possible.

Instead, military thinkers are looking to develop lethal AI algorithms that support autonomy and to install such code into existing weapon systems.

The process needs to be cost-effective so that governments do not have to replace entire fleets. Nearly every military drone on today's battlefield is a prime example: drones transmit video and data to human operators in control stations, who then analyze the target and decide whether to fire.

Read more: How to hide from drones in the age of rampant surveillance

So while aerial drones hovering over hostile territory can search for and find targets independently, the decision to engage still requires a human. Clearly, there is room for greater efficiency.

Lethal autonomous weapon systems (LAWS) are the next step. LAWS are military applications that remove the human from the loop, allowing AI algorithms to decide independently whom to target and destroy.

So, in a nutshell, a package of code would have the ultimate say over the lives of targeted human beings. Although this seems like the stuff of science fiction, it is also a real military technology, likely to be deployed on actual battlefields in the coming decades.

Read more: Amazon, Microsoft is making robots that could kill humans: study.

A weapon equipped with lethal AI code could carry images of people to identify and use deadly force to destroy its target. Similarly, based on prior experience and installed data, it could predict enemy behavior and carry out preemptive attacks. Suffice to say, lethal autonomous weapons would have near-endless configurations, making them more efficient on battlefields.

Pros and cons

The modern battlefield has evolved from swords and spears to guns, tanks, and now long-range missiles and aerial drone warfare. One of the main concerns throughout history has been to minimize human loss; the need to lessen the human cost is a cornerstone of every modern military doctrine.

Seen through this lens, LAWS are the next step in the evolution of warfare. They not only minimize the human cost but also reduce the chances of human error while increasing efficiency.

Read more: EXPLAINER: What is hybrid warfare?

Similarly, autonomous weapons are less likely than humans to commit war crimes – unless they are programmed to commit them. With the right weapon and the right programming, these weapons can prove more effective and more accurate. The military organizations that adopt an autonomous robotic doctrine early on would gain a substantial military edge.

Once autonomous and robotic warfare goes mainstream, the drawbacks will also play out. Jamming, hacking, and electromagnetic pulses are some of the weaknesses of an autonomous robotic fleet. To counter such interference, greater emphasis would need to be placed on developing cyber capabilities.

Read more: AI is here to stay. Now we need to ensure everyone benefits

Secondly, the application of new technologies is sometimes unpredictable. Fully autonomous weapons would make it easier, and frankly cheaper, to eliminate people: as the technology improves, manufacturing will become easier, faster, and more cost-effective.

Autonomous weapons falling into the wrong hands is a catastrophe waiting to happen. Just as commercial drones have become relatively inexpensive and easily accessible over time, the same will apply to autonomous weapons. This will create new strategic, technical, and moral complications.

Read more: Arming against a new era of threats by cyber-warriors

International response

A growing number of legislators, policymakers, strategists, lobbying groups, IGOs, and NGOs are endorsing the call to ban fully autonomous weapon systems. Many stakeholders are lobbying the UN to consider a preemptive treaty banning such lethal weapons.

These groups highlight precedents such as the 1972 UN ban on the use of biological weapons and the 1992 ban on the use of chemical weapons. The same approach is now being cited to ban lethal autonomous weapon systems.

Read more: The U.S-Russia blame game over chemical weapons in Syria

Although the details of such a treaty are still being negotiated, the lack of public awareness and debate has slowed the process. UN members have yet to reach a consensus on what constitutes an autonomous system.

Additional problems at the UN stem from opposition to a treaty banning LAWS by the US, UK, Israel, South Korea, and Australia. The opposing parties consider prohibition premature and argue that there are humanitarian advantages still to explore.

While the argument for civilian applications is valid, it would be impossible to keep LAWS exclusively in the civilian domain. As soon as these weapons are developed, non-state actors will find ways to exploit them for nefarious purposes.

Read more: Chemical terrorist attack ‘huge concern’ says London fire chief

It is easier to ban things before anyone has them than after they have gone mainstream and are used by militaries across the world. It is only a matter of time before a serious discussion of the ethics of autonomous warfare takes place; by then, nations may already be fielding fleets of autonomous robots.

As war is an enduring human habit that has always shifted with conditions, the introduction of autonomous weapons is an inescapable evolution of warfare. The question now is: will we again retreat to our old habits?

The author has graduated in International Relations from NUML, Islamabad. He works as a sub-editor at Pakistan Strategic Forum, an online defense portal. He tweets @smalinaqvi05. The views expressed in the article are the author’s own and do not necessarily reflect the editorial policy of Global Village Space.