What’s AI’s role in the fightback against coronavirus?
Angela Eager, Research Director, TechMarketView
Angela Eager joined TechMarketView in June 2011 and is a Research Director for Tech Insights, focused on the Software market & Emerging Technologies. A respected industry analyst with over 20 years’ experience assessing the IT market, she is known for her in-depth knowledge of the business enterprise applications sector, most notably the ERP and CRM segments, and as an early commentator on SaaS and cloud developments.
The coronavirus outbreak is at the front of everyone’s minds right now. We are all searching for answers and thinking about how we can fight back against something which poses so great a threat to our society, family and friends, and way of life. It’s been said that we haven’t seen a pandemic like this for over 100 years, since the global flu outbreak of 1918. The technological advances that have been made since then are immeasurable, so many are now asking how cutting-edge tech can work with us to face down the coronavirus pandemic.
Artificial intelligence would have been thought of as a pipe dream in 1918, but today it is helping us do everything from driving our cars to answering our phones. Could this clever technology help us in the fight against coronavirus too, or has the pandemic come too soon and spread too quickly for the nascent technology? And what are the ethical considerations in its application?
When the novel coronavirus (COVID-19) started to emerge from Wuhan City, China in late 2019, public health data monitoring companies were amongst the first to notice. These organisations interrogate large datasets using AI algorithms to automate infectious disease surveillance, helping medical experts recognise anomalies that may indicate emerging epidemics. But despite this early warning, the disease spread rapidly across the world.
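The surveillance approach these companies use can be sketched, in highly simplified form, as statistical anomaly detection over reported case counts. The snippet below is a toy illustration only, not any vendor’s actual system: the function name, window size, threshold and data are all invented for the example, and real platforms draw on far richer models and data sources.

```python
# Toy illustration (hypothetical): flag anomalous daily case counts
# with a rolling z-score against the preceding week's baseline.
from statistics import mean, stdev

def flag_anomalies(counts, window=7, threshold=3.0):
    """Return indices of days whose count deviates sharply
    (z-score > threshold) from the preceding `window` days."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero on a flat baseline
        if (counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical daily respiratory-illness reports: steady, then a spike.
daily_reports = [12, 11, 13, 12, 14, 12, 13, 12, 13, 40]
print(flag_anomalies(daily_reports))  # → [9]: the spike on day 9
```

Even this crude detector shows the principle: an algorithm watching many such feeds can surface an unusual cluster days before it would stand out to a human reviewer.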
Advancing public health approaches
In recent weeks we’ve seen examples of AI being used to track and conceptualise the spread of the disease, help us better understand the structure of the virus, improve diagnosis and identify potential remedies. Fighting the disease in the UK during the early stages of its spread has relied predominantly on established public health approaches, but in the coming weeks we can expect AI to play an increasingly important role.
Defeating the virus will be a long-term effort and it will accelerate adoption of digital technologies in the NHS, including AI. To date, we have seen NHS Digital make rapid changes to the clinical algorithms in the Pathways triage system and introduce COVID-19 routing on 111 Online and the NHS App. The next steps will require effective collaboration between government, academia, health systems and business: AI has the potential to assist with resource management and treatment prioritisation, help scientists interrogate the huge quantity of data being published every day, help identify effective treatments and almost certainly play a role in the development of a vaccine. But this journey requires us to consider issues of bias, privacy and ethics in a way we never have before.
Opening up a data treasure chest
The UK Government is asking leading tech firms how they could help the NHS. Health information combined with details from mobile devices and networks, digital platforms and social media would create an ultra-rich data set that could be reasoned over with AI – and used for tasks beyond COVID-19, raising all manner of privacy considerations. If authorities opened this treasure chest, would they close it post-pandemic? Indeed, should they, if it could be utilised to mitigate future emergencies?
Algorithmic redlining – the unacceptable last mile?
Another ethical consideration is AI redlining. Redlining is the denial of services due to perceived high risk or discrimination. AI redlining results from incomplete data sets and poorly trained, poorly understood algorithms producing unfair output. With demand for healthcare resources expected to overwhelm supply, could we see algorithms with unconscious redlining determining the allocation of resources based on who the models indicate is most likely to recover from the virus?
Algorithmic resource allocation might improve efficiency, but algorithm-driven life-and-death decisions could be the unacceptable last mile. Algorithms can reason over a volume and range of connected data points that no human clinician could, but it is all too easy for bias to creep in from poorly sourced or incomplete data, leading to inaccurate output. There is not currently enough COVID-19 data for unconscious AI redlining to be a factor, but it is an important consideration for the future.
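A toy sketch can make the incomplete-data problem concrete. Everything below is hypothetical: a naive “recovery model” trained only on records from one patient group has nothing grounded to say about a group it never saw, yet it still returns a confident-looking number — exactly the kind of silent gap that can produce unfair output.

```python
# Toy illustration (hypothetical data): how an incomplete training set
# yields skewed predictions for an unrepresented group.
def train_recovery_model(records):
    """Learn per-group recovery rates from (group, recovered) pairs.
    Groups absent from the training data silently fall back to the
    overall average -- a guess dressed up as a prediction."""
    by_group = {}
    for group, recovered in records:
        by_group.setdefault(group, []).append(recovered)
    overall = sum(r for _, r in records) / len(records)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    def predict(group):
        return rates.get(group, overall)
    return predict

# Training data drawn only from group "A"; group "B" was never sampled.
train = [("A", 1), ("A", 1), ("A", 1), ("A", 0)]
model = train_recovery_model(train)
print(model("A"))  # → 0.75, grounded in observed data
print(model("B"))  # → 0.75, an unsupported guess for the unseen group
```

The danger is that both outputs look equally authoritative; nothing in the number itself reveals that one group was simply missing from the data.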
Everyone has a role in securing ethical AI
There are no easy answers to the ethical questions posed by using AI in the fightback against coronavirus. What is clear is that AI does have a role to play, and it falls to all of us to take on the ethical challenge before us. Doing so will increasingly allow us to integrate this super-smart technology into our battle against the pandemic in a way that delivers results and upholds the high standards of trust and accountability we expect from our government and health service.
Head of AI Lab, UK & Ireland