Artificial Intelligence, Healthcare, and Human Rights During the COVID-19 Pandemic

Authors: Geeta Pathak Sangroula | Puja Silwal | Sanchit Singh

COVID-19 laid bare both the prospects and the challenges that come with digital technologies. Going forward, the development of healthcare standards built on such technologies calls for a mindful approach that recognises the digital divide between countries.

The COVID-19 pandemic exposed the healthcare sector to unprecedented burdens. One consequence has been increased interest in technologies such as artificial intelligence (AI) and advanced robotics. The aim is for such technologies to help address inefficiencies across the sector, from diagnosis and drug and vaccine distribution to tele-healthcare and robot-assisted surgery, and to reduce the burden on frontline workers in particular. The current moment is thus a pivotal one in the establishment of new standards in healthcare, both in dealing with pandemics and more generally.

For many countries, including India, Singapore and South Korea, technology has played a significant role in limiting transmission of the virus. It quickly became clear that developing efficient early detection systems would be crucial to limiting the spread of the contagion. Here, AI algorithms were able to prioritise faster appointments for suspected patients and ramp up testing with minimal manpower.

Countries like South Korea also developed data hubs that drew on the precision of AI-based technology. One such tool was Risklick AI, a management platform for processing and analysing large amounts of data relating to COVID-19 scientific evidence; it has been described as critical to both pharmacology and therapy development. Machine learning (ML) also drove prediction modelling tasks, including identifying individuals at increased susceptibility to the virus, assessing the severity of initial symptoms, and forecasting the spread of the virus across regions by using non-linear regressive networks to analyse time series. These efforts reached the general citizenry through contact-tracing mobile applications able to monitor risk and the spread of the virus on the basis of interactions; in particular, individuals could receive notifications if they had been in close proximity to infected persons.
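
To illustrate the kind of time-series forecasting described above, the sketch below trains a small non-linear regressor on lagged windows of daily case counts. It is a minimal, illustrative example in Python: the synthetic data, the seven-day window and the choice of model are assumptions made for demonstration only, not a description of the systems actually deployed during the pandemic.

# Minimal sketch: forecasting daily case counts with a non-linear
# regressor trained on lagged time-series windows (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic daily case counts standing in for real surveillance data.
rng = np.random.default_rng(0)
days = np.arange(120)
cases = 50 * np.exp(0.03 * days) * (1 + 0.1 * rng.standard_normal(days.size))

# Build lagged windows: predict today's count from the previous `lags` days.
lags = 7
X = np.array([cases[i - lags:i] for i in range(lags, cases.size)])
y = cases[lags:]

# A small feed-forward network serves as the non-linear regressor.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X[:-14], y[:-14])  # hold out the last two weeks

# One-step-ahead forecasts over the held-out fortnight.
forecast = model.predict(X[-14:])
print(np.round(forecast).astype(int))

In a real deployment such a model would be trained on official case data and validated against held-out regions or time periods before informing any public health decision.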

The centrality of AI technologies to healthcare during the pandemic points to a key issue going forward: the importance of international cooperation and assistance in AI-technology sharing, particularly as an international human rights obligation.

International cooperation and the technology gap

For several years now there has been a growing discourse on the need to strengthen international cooperation in AI. A 2021 Brookings Report, for example, discussing reasons to sustain and enhance international cooperation, emphasises its importance to research, innovation and standardisation. The report points out the need to reaffirm democracy, freedom of expression and human rights, and stresses the need to keep such technologies out of the hands of techno-authoritarian regimes. However, it fails to consider inequity in access to AI technologies among developing nations.

‘It is dangerously destabilising to have half the world on the cutting edge of technology while the other half struggles on the bare edge of survival.’

William J. Clinton, Former US President

Such exclusion from access to information and digital technology adversely impacts the individual, political, social and economic capabilities of nations, especially in a critical sector such as healthcare. COVID-19 only amplified the disadvantaged position of developing countries: states with greater economic capacity were able to equip themselves with the most advanced technologies in the interest of their citizens, while the least developed nations, left low on the list of priorities, suffered gravely.

So how does the current international human rights regime approach this disparity?

The issue of the technological divide is not a new one; the present AI context is only an extension of what has been dealt with before. The Universal Declaration of Human Rights (UDHR) and Article 19 of the International Covenant on Civil and Political Rights (ICCPR) clearly enunciate the freedom to enjoy access to information without discrimination. Expression as a right extends to the 'freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice'. The right is thus guaranteed irrespective of frontiers, and even though AI and related technologies are recent inventions, they are not immune from the provisions of these instruments.

Since AI is a collaborative domain, any roadmap to bridge the gap in access to AI and healthcare in developing countries will require the inclusion of such countries in multilateral partnerships and collaborative initiatives that equip smaller economies with the means to integrate lifesaving AI technologies into their healthcare sectors. For low- and middle-income countries, the primary issue is inadequate infrastructure to support medical AI. In particular, low internet penetration and limited access to electricity make efforts to improve digital health infrastructure far more difficult.

Overcoming this will require centralised efforts like the COVID-19 Vaccines Global Access Facility (COVAX) to ensure equal and proportionate access to medical technologies. COVAX came in response to unequal access to and distribution of vaccines: major economies had moved up the priority list because of their substantial investments in research and development, an imbalance COVAX sought to correct by focusing on countries with limited access. The vaccine initiative has been consequential and exemplary in conducting critical operations while adhering to robust human rights due diligence, as provided in the UN Guiding Principles on Business and Human Rights and the 2008 Human Rights Guidelines for Pharmaceutical Companies in Relation to Access to Medicines. It could be a desirable model for medical AI initiatives to follow.

The way forward

The goal set out in SDG 3, 'universal health coverage (UHC) and access to quality health care. No one must be left behind', can only be achieved by addressing the disparity in access to AI technologies. COVID-19, coupled with a maturing globalised economy, has helped leading economies realise the implications of failing to bridge the technology gap. States are duty-bound to facilitate international economic and social cooperation, as emphasised by Article 55 of the UN Charter, which also refers to 'solutions of international economic, social, health, and related problems'. Cognisance of these skewed capabilities therefore demands immediate and concerted attention, so that the vulnerabilities binding low- and middle-income countries can be adequately addressed in the interest of the global community and international law.

Geeta Pathak Sangroula is a Professor of Human Rights and International Humanitarian Law at Kathmandu School of Law. Geeta is a Senior Advocate at the Supreme Court of Nepal. She is a Steering Committee member of the Asia-Pacific Master’s programme on Human Rights and Democratisation (APMA) at Kathmandu School of Law.
Puja Silwal is an Advocate and a Teaching Assistant at Kathmandu School of Law. She is working as Programme Secretary in the Global Campus Asia-Pacific Programme at Kathmandu School of Law.
Sanchit Singh is an editor at asia blogs. Sanchit is associated with an initiative that works in partnership with the European Parliament in promoting dialogue and combating extremism.
