Using AI to fight COVID-19 may harm disadvantaged groups, experts say
The university’s researchers also warned that AI technologies can discriminate when they build symptom profiles from medical records, reflecting and exacerbating biases against minorities
Companies worldwide have devised methods in the past year to harness the power of big data and machine learning (ML) in medicine. A model developed by Massachusetts Institute of Technology (MIT) uses AI to detect asymptomatic COVID-19 patients through coughs recorded on their smartphones. In South Korea, a company used cloud computing to scan chest X-rays to monitor infected patients.
Artificial intelligence (AI) and ML have been extensively deployed during the pandemic, in uses ranging from data extraction to vaccine distribution. But experts from the University of Cambridge have raised questions about the ethical use of AI, warning that the technology tends to harm minorities and those of lower socio-economic status.
“Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic,” said Stephen Cave, Director of Cambridge’s Centre for the Future of Intelligence (CFI).
Clinical decisions, such as predicting which patients may deteriorate and need ventilation, can be flawed when the AI model is trained on biased data. Such training datasets and algorithms are inevitably skewed against groups that access health services infrequently, including minority ethnic communities and those of lower socio-economic status, the Cambridge team warned.
Another issue lies in the way algorithms are used to allocate vaccines locally, nationally and globally. Last December, Stanford Medical Center’s vaccination plan algorithm left out several young front-line workers.
“In many cases, AI plays a central role in determining who is best placed to survive the pandemic. In a health crisis of this magnitude, the stakes for fairness and equity are extremely high,” said Alexa Hagerty, research associate at the University of Cambridge.
The use of contact-tracing apps has also been criticised by several experts around the world, who argue that such apps exclude those without internet access and those who lack digital skills, in addition to raising user privacy concerns.
In India, biometric identity programmes can be linked to vaccination distribution, raising concerns about data privacy and security. Other vaccine allocation algorithms, including some used by the COVAX alliance, are driven by privately owned AI. These proprietary algorithms operate like ‘black boxes’, Hagerty noted.