By: David Geller, CPCU
If necessity is the mother of invention, then many artificial intelligence tools may very well be born of the COVID-19 crisis.
Technology Review has reported that, in order to address the onslaught of COVID-19 cases that are overwhelming various health-care facilities across the globe, nascent AI technologies are being deployed as possible tools to help bridge a gap in manpower.
One tool profiled in the article was developed by a radiologist at a hospital run by the United Kingdom’s National Health Service (NHS). It was originally built in 2019 to address the lengthy period (about 6 hours) it took for a specialist to examine x-rays. The hope was that the tool would provide an effective initial reading, after which a specialist could follow up with a more in-depth review.
A clinical trial began in September 2019 that would reportedly prove successful if the tool’s readings consistently matched the radiologists’. After four months of review by various hospital and NHS forums, the system was approved to permanently double-check the work of the radiologist’s trainees.
However, per the article, the COVID-19 outbreak changed those plans while also creating an opportunity for more critical use of the tool. When preliminary research indicated that “the most severe covid cases displayed lung abnormalities associated with viral pneumonia,” chest x-rays emerged as a highly effective means of triaging patients.
Technology Review notes that it took mere weeks to retool the system to detect COVID-19-induced pneumonia. And due to the dire circumstances posed by the virus, the approval process that would typically have been required for this pivot was bypassed, and the medical director immediately approved the system for use.
Other systems are reportedly being deployed to help manage this crisis as well, including:
- Genetic testing typically takes at least twelve hours to return results, a lengthy period of uncertainty when deciding whether a patient needs to be isolated. But according to Technology Review, a tool being used in France, Italy, Mexico, and Portugal conducts this test and determines the probability of infection in ten minutes.
- In Israel, according to a different article by Technology Review, one of the country’s largest health organizations is using AI to help identify a subset of the population that could be most prone to the worst effects of COVID-19. This assists Israel’s healthcare system in prioritizing and fast-tracking the most at-risk individuals; the system has already identified 40,000 such individuals.
- An opinion piece from The Hill also reports that a New York-based company built an AI platform that cuts polymerase chain reaction (PCR)-based testing, which normally takes 48 hours, to 2 hours for 1,000 patient samples processed in parallel.
As AI Tools Are Used by Healthcare Facilities, What Risks Should Be Considered?
While the potential efficiencies and breakthroughs that AI can offer in healthcare’s fight against COVID-19 may be tantalizing, some experts have reportedly expressed concerns that desperate health care facilities may be vulnerable to incorporating tools that aren’t proven to be effective. For example, the UK radiologist mentioned earlier explained to Technology Review that he corresponded with 36 companies at the outset of this crisis, and 24 of them were promoting AI-based screening tools that were “‘utter junk’” and that the companies were “‘trying to capitalize on the panic and anxiety.’”
Additionally, similar concerns that have arisen with AI use in other fields, such as bias and privacy, are present in the healthcare field as well, especially as new tools are reportedly being approved rapidly through processes that are typically quite lengthy.
In order to refine and train AI tools, a vast amount of data is reportedly needed. If the amount of personal data that is exchanged and used to build these tools surges, could healthcare facilities, which have already been targeted by hackers during the COVID-19 crisis, be in even greater danger of having patient information breached? What about the AI software providers that could potentially hold the personally identifiable information (PII)?
Lastly, and perhaps most significantly, Technology Review also reported that successful showings by AI tools in test runs do not necessarily translate into strong performance in real-life contexts, meaning that some facilities that use these tools could potentially be trading accuracy for speed.