The success of pandemic management is measured by the efficiency of our response, as well as our preparedness for the day that will come after. Amid the fight, while ensuring that any adverse effects are mitigated, we must not forget the necessity of being able to return to normal, nor compromise our readiness to define a new normal. We advocate that it is exactly now that we must learn from the current events and prepare for the future, before the risk of complacency sets in.
A Balanced, not Dichotomous Approach
An important lesson learnt from the current crisis, and one to carry into preparing for the next, is the pragmatic necessity of constructing policies that facilitate balanced operation in a contextualised environment. There is a need to accommodate certain surveillance capabilities while concurrently retaining citizen empowerment.
For example, if ministries of the interior have surveillance powers to prevent murder, why shouldn’t health ministries have similar powers to prevent deaths from viruses? These powers, when granted, should come with clear processes to prevent potential abuse. A balance needs to be struck among the shifting trade-offs of privacy, data sharing, intellectual property, safety, transparency, and trust, to name a few.
Policies that enable Global Data Coordination and Model Sharing
The COVID-19 pandemic has highlighted a major need for globally coordinated health data management that can enable fast, scalable, and digital approaches. Such approaches based on ex-ante data-driven prediction models could complement those based on ex-post medical testing. In an ideal world of standardised and globally shared COVID-19 related models and data, many countries could have identified the most vulnerable, say 1%, of their population at a much earlier stage using global prediction models.
This would have enabled governments to better protect this 1% with personalised policies that would deviate from standard treatment or approaches for the majority. Studies indicate that personalised prediction-based policies show promise while offering better privacy than, for example, test and trace policies.
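To make the idea concrete, the selection step could look like the following minimal sketch: given per-person risk scores from a (shared, pre-trained) prediction model, flag the most vulnerable 1% of a population for targeted protection. The scores, population size, and 1% threshold here are all hypothetical and purely illustrative.

```python
import numpy as np

# Hypothetical risk scores for a population of 100,000, as might be
# produced by a shared global prediction model (random here for illustration).
rng = np.random.default_rng(42)
risk_scores = rng.random(100_000)

# Threshold at the 99th percentile, then flag everyone at or above it --
# i.e. the most vulnerable ~1% who would receive personalised policies.
cutoff = np.quantile(risk_scores, 0.99)
vulnerable = np.flatnonzero(risk_scores >= cutoff)

print(f"{len(vulnerable)} people flagged for targeted protection")
```

Note that only aggregate counts and a threshold need to leave the model; no test-and-trace style location or contact data is involved, which is the privacy advantage the studies above point to.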
In the case of pandemics, given that human biology is largely identical across nations, there is no need for individual nations to repeat local data exercises. Rather, common global health data standards could enable sharing of trained models based on data from the epicentre nation(s), which would then be used and refined by all nations. Machine learning researchers are also developing technical approaches that support this, such as federated learning and transfer learning.
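The federated approach mentioned above can be sketched in a few lines. In this toy version of federated averaging (FedAvg), each of three hypothetical "nations" trains a simple logistic-regression-style model on its own simulated health data, and only the model weights, never the raw records, are shared and averaged into a global model. All data, the model, and the hyperparameters are illustrative assumptions, not a production protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One nation's local training: gradient descent on the logistic loss.
    Only the resulting weights leave the nation; X and y stay local."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
        w -= lr * grad
    return w

# Simulated data: three nations whose populations share one true risk pattern.
true_w = np.array([1.5, -2.0, 0.5])
nations = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (X @ true_w > 0).astype(float)         # labels from the shared pattern
    nations.append((X, y))

# Federated rounds: broadcast the global weights, train locally, average back.
global_w = np.zeros(3)
for _ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in nations]
    global_w = np.mean(local_ws, axis=0)       # FedAvg aggregation step

print(global_w)  # recovers the direction of the shared risk pattern
```

The design choice that matters for policy is in the aggregation line: coordination happens over model parameters, which is what the privacy, intellectual-property, and competition trade-offs discussed below would need to govern.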
Policies that enable such technologies are currently lacking and need to be urgently developed while considering trade-offs in privacy, intellectual property and competition.
Flexible and Context-Specific Data Policies
The European Commission, in a February 2020 white paper on AI, differentiates between low- and high-risk technologies – for example, certain online marketing applications versus certain healthcare-related ones. Policy contextualisation is a necessity that needs to be further developed.
For example, policies regarding data sharing or tracking of individuals may need to be different during times of WHO-declared “pandemic wars”, to facilitate fast and effective responses to save lives. Such flexibility needs to be properly defined in advance to avoid transitional risks, for example due to delays from debates during critical times.
An Opportunity not to be Missed
For all the misery the COVID-19 pandemic caused, it highlighted issues that otherwise might have been difficult to openly debate, and revealed lessons that may have otherwise taken years to learn – a silver lining not to be missed. The pre-COVID thinking and policies may need to be reviewed soon, before the next pandemic occurs.
Notions of surveillance and empowerment should not be regarded as incongruous, and global data coordination can complement primary approaches of medical testing through personalised prediction-based policies. The time to move away from dichotomous thinking toward contextualisation is now.
This blog was co-authored by Theodoros Evgeniou, David R. Hardoon, and Anton Ovchinnikov.
Theodoros Evgeniou is a professor at INSEAD. He has been working on AI and Machine Learning for more than 25 years (at MIT before INSEAD, from where he also received 4 degrees) and is the author of some of the most cited articles in machine learning. He is currently teaching, presenting and consulting on data-driven and AI strategies for global organisations.
David R. Hardoon is a Senior Advisor for Data and Artificial Intelligence at UnionBank Philippines. He was formerly the Monetary Authority of Singapore’s first appointed Chief Data Officer and Special Advisor (Artificial Intelligence). He has extensive machine learning and AI innovation experience in academia and industry alike.
Anton Ovchinnikov is Distinguished Professor and Scotiabank Scholar of Customer Analytics at Smith School of Business, Queen’s University, Canada, and a Visiting Professor at INSEAD. He is researching, teaching, and consulting on the topics of data-driven decision making in business, government, and non-profits.