In countries like South Korea, Poland and Taiwan, AI has played a role in slowing the spread of the coronavirus, but that success has often come at the cost of something healthcare stakeholders will need to guard carefully moving forward: personal privacy.
That’s according to a recent commentary by Philip N. Howard and Lisa-Maria Neudert, both from Oxford University’s Oxford Internet Institute.
In their view, technical solutions such as contact tracing, symptom tracking and immunity certificates may very well have a significant role to play in reducing or controlling future pandemics, but “they must be implemented in ways that do not undermine human rights.”
For example, they point out, South Korea “has collected extensively and intrusively the personal data of its citizens, analyzing millions of data points from credit card transactions, CCTV footage and cellphone geolocation data. South Korea’s Ministry of the Interior and Safety even developed a smartphone app that shares with officials GPS data of self-quarantined individuals. If those in quarantine cross the ‘electronic fence’ of their assigned area, the app alerts officials. The implications for privacy and security of such widespread surveillance are deeply concerning.”
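The app’s internals have not been published, but the ‘electronic fence’ the authors describe is a standard geofencing check. Below is a minimal sketch of how such a test might work, assuming a fixed quarantine center and radius; the coordinates, radius and function names are hypothetical illustrations, not details of the actual Korean app.

```python
import math

# Hypothetical quarantine zone: center coordinates and allowed radius in
# meters. These values are illustrative, not taken from the real app.
ZONE_CENTER = (37.5665, 126.9780)
ZONE_RADIUS_M = 200.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def breaches_fence(lat, lon):
    """True if a GPS fix falls outside the assigned quarantine zone."""
    return haversine_m(lat, lon, *ZONE_CENTER) > ZONE_RADIUS_M

if __name__ == "__main__":
    # A fix roughly 1 km from the zone center would trigger an alert.
    print(breaches_fence(37.5756, 126.9769))  # True
```

The privacy concern follows directly from the design: for the check to run at all, a continuous stream of each person’s GPS fixes has to reach whoever operates the server.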
Similarly, other countries have used location data from cellphones for various applications tasked with combating coronavirus. “Supercharged with artificial intelligence and machine learning, this data can not only be used for social control and monitoring, but also to predict travel patterns, pinpoint future outbreak hot spots, model chains of infection or project immunity.”
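To make that claim concrete, here is a toy sketch of the simplest of the uses the authors mention, pinpointing hot spots: binning location pings into coarse grid cells and ranking cells by density. The cell size, data and function names are invented for illustration; real epidemiological systems are far more sophisticated.

```python
from collections import Counter

# Snap (lat, lon) fixes into roughly 1-km grid cells and rank cells by
# visit count. All coordinates below are made up for illustration.
CELL_DEG = 0.01  # about 1 km of latitude per cell

def cell_of(lat, lon):
    """Map a coordinate to its grid cell."""
    return (int(lat // CELL_DEG), int(lon // CELL_DEG))

def hot_spots(pings, top_n=3):
    """Return the most-visited grid cells among the given (lat, lon) pings."""
    counts = Counter(cell_of(lat, lon) for lat, lon in pings)
    return counts.most_common(top_n)

if __name__ == "__main__":
    pings = [(37.566, 126.978), (37.567, 126.979), (37.565, 126.977),
             (37.498, 127.027)]
    print(hot_spots(pings))  # densest cells listed first
```

Even this crude density count shows why the data is so sensitive: the same aggregation that flags an outbreak cluster also reveals where specific groups of people spend their time.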
Part of the problem, they note, is that AI is pretty much as new to public officials as it is to everyone else, so few of the necessary long-term safeguards have been put in place. “Put on the spot, regulators struggle to evaluate the legitimacy and wider-reaching implications of different AI systems for democratic values. In the absence of sufficient procurement guidelines and legal frameworks, governments are ill-prepared to make these decisions now, when they are most needed.”
Moreover, once an AI system is unleashed, it’s difficult to reel it back in if problems are subsequently identified. At the very least, they say, in the absence of better solutions, “all AI applications developed to tackle the public health crisis must end up as public applications, with the data, algorithms, inputs and outputs held for the public good by public health researchers and public science agencies.”
For the moment, the writers concede that playing fast and loose with regulatory structures may be justified in the name of getting the pandemic under control, “but when coronavirus is under control, we’ll want our personal privacy back and our rights reinstated. If governments and firms in democracies are going to tackle this problem and keep institutions strong, we all need to see how the apps work, the public health data needs to end up with medical researchers and we must be able to audit and disable tracking systems.”