Author: Layal Alghoozi | University of Glasgow
Lethal autonomous weapons systems have been at the center of discussion at the Convention on Certain Conventional Weapons (CCW) meetings, held under the auspices of the UN since 2014. The meetings have been unproductive, painfully slow, and piecemeal at best. State practice and opinion diverge on whether autonomous weapons should be deployed, with some states pushing for a pre-emptive ban and others arguing it is too soon for one. There is, however, widespread agreement on maintaining ‘meaningful human control’ over targeting and selection decisions in the use of force, which entails a human not merely overseeing processes but applying cognitive understanding to killing decisions. Although the specific parameters of ‘meaningful human control’ are themselves contested, the concept is used as a baseline for autonomous weapons. This article examines the state practice of Pakistan, India, Lebanon, Jordan, and Kuwait, tying together the emerging consensus towards maintaining meaningful human control.
Pakistan is in favour of a pre-emptive ban on autonomous weapons systems (AWS) and was the first state to declare the need for a prohibition. Pakistan believes that AWS, because they delegate life-and-death decisions to machines, are inherently immoral and in conflict with international humanitarian law and human rights law. Their lack of human emotions such as compassion means they cannot respect human life or dignity, nor can they interpret human behaviour to avoid wrongful targeting. For example, depending on an AWS’s sensor capabilities, a child playing with a toy gun may be misinterpreted as a legitimate military target.
Despite its reluctance to legitimize AWS, India is developing such weapons in line with its own security interests, including for potential use along the Line of Actual Control bordering China. The technological gap between economically advanced states and less-developed countries presents a threat when it comes to the development of autonomous weapons. India worries that this gap will further incentivize asymmetric war, in which belligerents facing militarily superior states may feel compelled to respond through unlawful conduct. India is also concerned with whether such weapons have the capacity to distinguish combatants from protected persons and civilians, as required by the principle of distinction under international humanitarian law. It therefore insists on maintaining ‘meaningful human control’, with a human operator present during deployment, activation, target selection, and the use of force.
Although Lebanon has not explicitly elaborated its position on the development of AWS, it has certainly expressed the need to uphold both human rights law and humanitarian law as ‘guiding principles’ to further the debate. According to the ICRC, humanitarian law would be undermined if new weapons were deployed without human control, that is, without a ‘human in the loop’ over killing decisions. This is because the fundamental rules of humanitarian law, namely distinction, proportionality, and precaution, require human judgment and contextual assessments that cannot simply be delegated to machines. For example, the principle of proportionality requires common sense and good faith. Further, proportionality considers not only the immediate effects of attacks on civilians but also their long-term impacts, including in the wider political sphere. Moreover, AWS may struggle to select, among similarly advantageous military targets, the one posing the least danger to civilians, or to make the collateral damage estimates that proportionality requires. Likewise, the US Code of Federal Regulations guideline describes ‘gut feeling’ as an element of the judgment required for proportionality which cannot be incorporated into algorithms.
Furthermore, the principle of distinction, which requires that “attacks must not be directed against civilians”, is hardly reconcilable with AWS, as these weapons are unlikely to distinguish between legitimate targets and protected persons. While this assessment ultimately depends on a given system’s sensor capabilities, the environment in which it is deployed, and the level of verification required before deployment, AWS are nevertheless expected to be contrary to humanitarian law in the absence of meaningful human control.
Jordan is amongst the few Arab states to have expressed an opinion on AWS and the importance of retaining human control over the use of force, and it has called for an unequivocal pre-emptive ban on AWS. It warns that developing these weapons will initiate an arms race, with states rushing to attain the most technologically advanced weaponry with little regard for its lawfulness. Indeed, an arms race appears imminent, as militarily powerful states like the US, UK, Russia, and China are rapidly advancing their AI and AWS capabilities in anticipation of other states doing the same.
Kuwait has expressed concern that the arbitrary nature of AWS, absent any clear guidelines on permissible uses, risks undermining peace and security, and it warns of their destructive impact. Although Kuwait has not explicitly called for a ban, its statements make clear its reluctance to accept AWS. Indeed, the Group of Governmental Experts to the CCW, of which Kuwait is part, has adopted 11 guiding principles to further the debate on AWS, one of which is retaining human-machine interaction.
Ultimately, there remains work to be done at the CCW. The consensus-based nature of the CCW and divided state opinion have naturally made it hard to translate debate into concrete state obligations. If debates continue to be stalled by futile tasks and procedural obstacles, AWS threaten to undermine the rules of armed conflict and usher in distanced killing with unclear accountability measures, implicating the rule of law. The emerging trend towards maintaining meaningful human control does, however, point to some agreement and progress. Indeed, notwithstanding the debilitating impact Covid-19 has had on meetings, states are currently preparing working papers for the next Review Conference in 2021.