Technology is very much part of what we do. It is an integrated part of how we think today, not something that one can push away. The question is: how do we make sure that we manage it to serve our purposes from a human rights point of view?
The question of human control over technology is becoming much more visible. By saying that, I am not saying we should fear technology; I do not think one should reject it. The question is: how do we make sure that it serves the underlying values of human rights, the emphasis on human dignity?
One of the issues raised is that of autonomous weapons. They have sensors, so they can see the ground and who is there; a processor, a computer; and then an effector, which is a missile. Typically these are aircraft, but they can also be land vehicles, vessels on the water, and even underwater. With drones, somebody takes the critical decision, the release of force: there is somebody on the ground who presses the button and decides that lethal force will be released. With autonomous weapons, an onboard computer takes that decision. There is no one on board, and no one on the ground pressing the button and saying this Hellfire missile will now be released. This of course raises immensely interesting and challenging questions from the point of view of humanitarian law, human rights law, ethics, politics, and all those sorts of perspectives.
The most extreme questions about the interface between human rights and technology arise here because, in a way, what is at stake is handing over the power of life and death over human beings to technology. The very same questions that come to the fore in other areas are on the table in a much more dramatic way when we deal with autonomous weapons.
Surveillance and human biases
Another area in which technology raises human rights concerns is surveillance, for example facial recognition. One interesting and relevant question here is: to what extent will human biases be reflected in the artificial intelligence and the algorithms that are developed? Take facial recognition in a crowd: that has to be programmed, and it is humans who programme it. The biases of those humans go into the programme, but so do the limitations of facial recognition itself, for example in its ability to distinguish between people with lighter skins and darker skins. There is a clear difference in outcomes: the systems may be less accurate as far as people with darker skins are concerned, and that can lead to injustice.
So that is in the context of law enforcement, but the very same issue arises in the context of armed conflict: whether biases will be programmed into the artificial intelligence there as well. Of course, the unpredictable part is machine learning: the system starts with what was programmed into it, but it also learns from its own experience, and where will it go? So one of the potential dangers is that biases, even in the recognition of a particular person as an enemy combatant, may be programmed into the artificial intelligence itself. There is also the question that, in some cases, the artificial intelligence may simply not be able to make the distinctions.
From the point of view of the right to life, can artificial intelligence take better decisions during armed conflict? And during law enforcement, for example at a demonstration, can computers make better decisions?
So that is the substantive question concerning the right to life, or the right against cruel, inhuman or degrading treatment or punishment. Many people arguing about this say computers may not be able to distinguish between a man with a beard carrying a gun who is hunting and one who is an enemy combatant. They may not be able to make these distinctions in many cases, but it is clear that in many cases they will probably be able to make faster decisions. But then there is the question: if they still make mistakes, who is going to be held accountable? Normally, with human beings, there is at least the potential that the human being can be held accountable. With autonomous weapons, if a computer makes the decision, what do you do? If we as human beings make mistakes, say, 40 per cent of the time, and computers make mistakes in targeting 10 per cent of the time, what do you do about those 10 per cent? Or do we give up on accountability, which is an essential part of the right to life?
However, I have to say that, again, there is complexity. If human beings ultimately take the decisions without the assistance of artificial intelligence, then of course biases are often even more unfiltered. In the context of armed conflict, for example, decisions are in many cases taken based on bias: bias rooted in hatred, in revenge, in an exaggerated masculinity. All kinds of human frailties and weaknesses can find their way in, and those can at least be filtered out if it is a computer that takes the decision. That is one of the advantages promoted for bringing artificial intelligence into targeting: human beings get tired, and then the underlying biases come out; you do not have that with computers. It is a difficult thing to measure. But the point I want to emphasise is that there are advantages to using technology. And there can be disadvantages as well, because we may think a system is now free of human biases, when nothing should be seen as completely free of human biases.
The idea of a free-ranging robot that roams the earth and decides whom to kill is not accepted at all. You see that in James Bond movies and so on, but in real life that is not what is being proposed. It is much more contained, within a particular area and within a particular time. Even within that, it remains important to say that ultimately there must be meaningful human control. And that, as every lawyer will know, is the big question: what exactly is the definition of meaningful human control?
But I personally am convinced that one needs technology up to a certain level. At some point, if you lose meaningful human control, you are into full autonomy, and that is not an area you can go to in the first place, because of the lack of accountability.
Note: This post has been prepared by Pranjali Kanel and Thérèse Murphy from a recording of a lecture given by Professor Christof Heyns (1959–2021) during a session on ‘Human Rights in Light of Technology’ at the 2020 International Confluence of Academicians, jointly organised by asiablogs and Kathmandu School of Law. Professor Heyns left a deep imprint on the field of human rights, including via his tenure as United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions from 2010 to 2016, during which he took the initiative to explore issues such as the use of drones and autonomous weapons in armed conflict or counter-terrorism operations.