
Creating Trust in AI

by Katie Gibbs | 3 mins read

When a picture of the Pope in a distinctive white puffer jacket went viral recently, many people were fooled. Perhaps not surprisingly. Despite the incongruous image, the picture – created in the AI (Artificial Intelligence) program Midjourney – looked authentic enough. A week later, further AI images emerged, this time of Rishi Sunak riding a Deliveroo bike and Suella Braverman clutching a child aboard a small boat, illustrating what ‘Tories without privilege’ might be doing if they weren’t running the country.

These examples of ‘deepfakes’ are nothing new, of course. Easily available AI programs that doctor images and videos have been around for several years. However, as the technology becomes more and more advanced, telling reality from fake is becoming increasingly difficult. Recently, key figures in AI, including Elon Musk (co-founder of ChatGPT’s parent company OpenAI), Apple co-founder Steve Wozniak and several researchers at Google-owned DeepMind, called for training of powerful AI systems to be suspended.

A letter from the Future of Life Institute, signed by the luminaries, warned of the risk posed by more advanced systems. "AI systems with human-competitive intelligence can pose profound risks to society and humanity," it said, adding that "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one - not even their creators - can understand, predict, or reliably control". All very concerning, like the scene in 2001: A Space Odyssey where the calm-sounding AI character HAL decides to kill the astronauts to protect and continue its programmed directives.


Increasing AI regulation?

But is a halt in AI development really feasible, especially at a time when big tech companies such as Microsoft and Google are looking to gain competitive advantage? One solution may be greater regulation but, as is so often the case, legislation lags well behind technological capability.

In Europe, the EU’s AI Act – expected to become law in the next few months – aims to regulate AI systems to ensure they are trustworthy and human-centric. It also proposes regulatory sandboxes: controlled environments in which to develop, train, test and validate innovative AI systems in real-world conditions, which should help tackle any biases before AI is deployed.

Meanwhile, in the UK the government recently set out plans to regulate AI with new guidelines on ‘responsible use’ but ruled out a separate regulator, instead calling on existing bodies such as the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority to come up with their own solutions. It’s an approach that, like many, we don’t think goes far enough. "Initially, the proposals in the white paper will lack any statutory footing," Michael Birtwistle, associate director of the Ada Lovelace Institute, told the BBC. "The UK will struggle to effectively regulate different uses of AI across sectors without substantial investment in its existing regulators," he added.


Creating trustworthy AI

So what should companies about to embark on rolling out AI do? Clearly, establishing trust with both staff and customers is essential. The European Commission’s Ethics Guidelines for Trustworthy AI outline three components necessary for trustworthy artificial intelligence: it should be lawful, it should be ethical and it should be robust.

Importantly, the guidelines claim, AI systems need to be human-centric, resting on a commitment to their use in the service of humanity and the common good. While technology such as AI can bring vast efficiency gains by enabling firms to make sense of large amounts of data, those gains can soon be undone by reputational damage and regulatory breaches if ethical considerations are overlooked.

And we agree, which is why we created our Ethics in Technology Assessment (ETA), a human-centric way to benchmark the current state of digital ethics within an organization, identifying strengths and weaknesses to help it become a more digitally ethical brand. For example, we recently worked with NICE on an ETA for how it designs and deploys AI, including a Robo-Ethical Framework that underpins every interaction with process robots, from planning to implementation. Says Barry Cooper of NICE’s Workforce and Customer Experience Group: "NICE is proud to take the lead in ensuring the use of robots for the betterment of humankind, articulating the ethical principles that act as guidelines for the development of our own AI-driven innovations and, through this (ETA) framework, across the RPA field."

Like the world wide web in its early days, AI is currently viewed by many as the ‘wild west’. Only when there is greater regulation and greater control to help people distinguish fake from real can there be real trust in how the technology is used and in its potential benefits. Organizations have a crucial part to play too, ensuring that the ‘black box’ programs used in AI systems don’t discriminate against certain groups of people and that both staff and customers have complete trust in how the technology is deployed.