
COVID-19 measures are shining a spotlight on the pros and cons of AI implementation

by Katie Gibbs | 4 mins read

Coronavirus has highlighted the importance of modern technologies. From Zoom to Miro, many of us wouldn’t be able to work remotely without these tools in place, and it’s been great to see their makers step up to the mark and make them readily available to support people and organisations during this time. Effectively, technology has enabled the largest telecommuting shift in history. As Emergence explored in last week’s webinar with Everest, the crisis gives leaders the opportunity to implement – and accelerate – visions that will future-proof and re-imagine their operating models and businesses.

However, when it comes to Artificial Intelligence (AI), it’s less clear-cut whether it is adding value or creating more confusion and panic. On the one hand, we’ve seen the healthcare industry exploring how AI can help in the fight against the pandemic. Ordinarily it would be considered extremely risky to experiment with new technologies so rapidly, yet in these urgent times the experimental approach seems to be paying off. AI has been employed to search for new molecules capable of treating coronavirus: a potential treatment for COVID-19 identified using BenevolentAI’s drug discovery and development platform entered randomised clinical trials earlier this month - an important step in getting medicines to market that can prevent life-threatening respiratory and other serious complications of COVID-19 while we wait for a vaccine to be developed. AI was also used in China to search lung CT scans for symptoms of coronavirus and accelerate diagnosis.
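To make the CT-scan example more concrete, here is a minimal sketch of how that kind of image-triage model might be wired up. It is illustrative only - the systems actually deployed are proprietary, so the PyTorch architecture, input size and labels below are assumptions, and random tensors stand in for real scan data.

```python
# Illustrative sketch only: a tiny binary classifier over CT slices,
# assuming a PyTorch-style pipeline. Real diagnostic systems are far
# larger, trained on curated clinical data, and clinically validated.
import torch
import torch.nn as nn

class TinyCTClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # collapse to one value per channel
        )
        self.head = nn.Linear(32, 1)          # one logit: "suspicious" vs "not"

    def forward(self, x):
        feats = self.features(x).flatten(1)
        return self.head(feats)

model = TinyCTClassifier()
slices = torch.randn(8, 1, 128, 128)          # stand-in for 8 greyscale CT slices
logits = model(slices)
probabilities = torch.sigmoid(logits)
print(probabilities.squeeze(1))               # scores a radiologist would still review
```

The scores such a model produces are a prioritisation aid, not a diagnosis - which is exactly the point made below about keeping humans in the loop.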

AI even played its part in detecting the spread of COVID-19 early on; BlueDot developed an algorithm that scours global news reports and airline ticketing data to predict the spread of the disease. With Chinese health officials slow to disclose information, and the World Health Organisation (WHO) relying on those officials for disease monitoring, BlueDot sent word of the outbreak over a week ahead of the WHO. In another important example, AI is being used to build tools that analyse and categorise large volumes of academic papers, helping COVID-19 researchers piece together a growing puzzle that spans thousands of peer-reviewed journals.
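The literature-triage use case is easier to picture with a toy example. The sketch below, which assumes scikit-learn and a handful of invented abstracts, simply groups papers by TF-IDF similarity; the real tools work across thousands of full-text, peer-reviewed papers and are considerably more sophisticated.

```python
# Illustrative sketch only: grouping paper abstracts by textual similarity.
# Assumes scikit-learn; the toy abstracts below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "Transmission dynamics of the novel coronavirus in urban populations",
    "Modelling airline travel patterns and the spread of respiratory disease",
    "Antiviral compound screening for inhibition of viral replication",
    "Repurposing existing drugs as candidate treatments for viral infection",
]

# Represent each abstract as a TF-IDF vector, then group similar ones.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, abstract in sorted(zip(labels, abstracts)):
    print(label, "-", abstract[:60])
```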

Humans are still the heroes

These use cases are sprouting up because human resources are stretched, overworked and under huge amounts of pressure. A helping hand is required, be it human or otherwise. However, the key to the success of such AI tools is that they are not operating unsupervised – humans analyse the outputs of the algorithms, overlay their own expertise to derive value and, ultimately, save time that lets them make the most of their uniquely human skills.
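That supervision pattern has a simple shape in code. The sketch below is hypothetical (the threshold values, record structure and queue names are all assumptions): the model handles only the confident cases automatically and routes everything uncertain to a human reviewer.

```python
# Illustrative sketch of a human-in-the-loop triage step. The thresholds,
# record structure and queue names are hypothetical, not from any real system.
from dataclasses import dataclass

@dataclass
class Prediction:
    item_id: str
    score: float        # model confidence that the item needs attention, 0..1

def triage(predictions, lower=0.2, upper=0.8):
    """Split model outputs into auto-handled cases and a human review queue."""
    auto_clear, auto_flag, needs_human = [], [], []
    for p in predictions:
        if p.score <= lower:
            auto_clear.append(p)     # confidently negative: close automatically
        elif p.score >= upper:
            auto_flag.append(p)      # confidently positive: escalate, still logged
        else:
            needs_human.append(p)    # uncertain: a person makes the call
    return auto_clear, auto_flag, needs_human

preds = [Prediction("scan-001", 0.05), Prediction("scan-002", 0.55), Prediction("scan-003", 0.93)]
cleared, flagged, review_queue = triage(preds)
print(len(cleared), "cleared,", len(flagged), "flagged,", len(review_queue), "sent for human review")
```

The design choice worth noting is that the uncertain middle band is deliberately wide: narrowing it saves human time, but only at the cost of trusting the model further than its track record may justify.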

The pandemic has served as a reminder of how important humans are to the continued delivery of vital services, be that delivering stock to supermarkets or treating patients in hospital. For years the debate over whether AI will replace humans has raged on, yet COVID-19 has refocused attention on what’s important and what makes organisations stand out: the effort and empathy of their staff. In other words, humans are the heroes again, and it’s vital that AI technologies are deployed to support people in doing their jobs to the best of their ability, rather than the other way around.

A pandemic is not an excuse for AI to breach civil liberties

These are exceptional times, but that does not mean we should allow exceptions to the rules that govern society. Certain applications of AI that have cropped up in response to the pandemic are cause for concern. Facial recognition companies are touting their tools as the most hygienic way to validate a person’s identity with biometrics, and have taken advantage of the situation to adapt and expand their software to identify people even when they are wearing face masks. This feels like a worrying step towards breaching civil liberties - an issue the general public was already increasingly uneasy about as police forces expanded their use of facial recognition prior to lockdown.

Some companies have taken this a step further by combining facial recognition with thermal imaging to determine whether someone is likely to have been infected with coronavirus. This raises concerns for several reasons. Rushed adoption of a relatively immature technology in response to the current crisis means organisations won’t have the time or capability to put in place the safeguards needed to prevent bias and discrimination, or to consider the risks the technology poses. Governance of facial recognition is still in its very early stages, as evidenced by the news that Washington has become the first US state to issue regulation covering it.

China has already reportedly been using facial recognition for racial profiling to target and discriminate against ethnic minorities, facilitated by new requirements that everyone in the country must scan their face in order to access the internet. One can only assume that coupling thermal imaging with facial recognition that works despite face masks will lead to a marked increase in discrimination, with anyone ‘detected’ as having symptoms - however questionable the technology - subjected to penalties, social exclusion and job losses. With facial recognition and its regulation still maturing, and considering the privacy rights it has the potential to breach, this doesn’t feel like the right time to experiment with the technology.

Don’t be blinded by discounted prices; think long-term

There are plenty of AI companies eager to showcase their value during this crisis and come out the other side with a ‘war story’ of how they contributed to the fight. I’ve seen an influx of AI product companies offering their software for free during the crisis to make it accessible. On the face of it this is an admirable gesture, yet it raises a longer-term concern: what will the charging model be once you’ve implemented their AI platform free of charge during the pandemic? These companies usually charge on a subscription basis and are experts at landing and then expanding their presence. It’s likely they will start charging after the first year and continue to seek out opportunities to expand, as their sales teams have been hardwired to do.

This may seem like a good deal, and their approach may seem reasonable given that they have their own targets to meet to keep the lights on, yet it adds even more noise and hype to an already noisy and overhyped marketplace. Tempted by slashed prices, organisations could end up rushing into adopting new technologies without carrying out the due diligence to validate whether they are the right immediate solution, and whether they will deliver value in the long term.

Despite the urgency of finding solutions in these uncertain times, and the head start that AI can offer organisations when it is used to support human expertise, AI still poses the same risks it always has around hype, bias and privacy. We should be wary of adopting AI technologies without the clear understanding of how they work, the value they bring and the risks they pose that informed decisions require. After all, the technology we implement during this crisis will stay with us into the future, and we have a responsibility to ensure we’re adopting the right technologies in the right way - to protect customers and citizens, and to deliver lasting value that builds our technological resilience for future crises.