Balancing AI innovation with regulation

Written by Katie Gibbs | Jul 24, 2023

Scare stories about the threats of AI are leading governments to act, but AI can be a force for good too, says Global Head of Consulting Katie Swannell-Gibbs – provided the right regulations are put in place.

AI horror stories have dominated the headlines of late. In May, leaders from OpenAI (the company behind ChatGPT), Google’s AI lab and Anthropic signed an open statement from the Center for AI Safety claiming that “mitigating the risk of extinction from AI should be a global priority alongside…pandemics and nuclear war,” while Dr Geoffrey Hinton – widely regarded as the godfather of AI – quit his role at Google, describing AI chatbots as ‘quite scary’.

More recently, MoneySavingExpert founder Martin Lewis told the BBC that he was left ‘feeling sick’ by a deepfake video featuring a realistic computer generation of him supposedly promoting an investment scheme. Lewis, who sued Facebook in 2018 over fake ads using his name, called for the government to press ahead with AI legislation, stating that “we are scared of big tech in this country and we need to start regulating them properly.”

Separately, US comedian and author Sarah Silverman, along with two other writers, recently announced she is suing ChatGPT creator OpenAI and Mark Zuckerberg’s Meta over claims that their AI models were trained on her work without permission.

Force for good

It is against this backdrop that the UK government has finally decided to act. In June prime minister Rishi Sunak said in a speech at London Tech Week that he wanted ‘to make the UK not just the intellectual home but the geographical home of global AI safety regulation,’ before announcing that the first global AI safety summit would take place in the UK later this year.

Alongside this news, Ian Hogarth, co-founder of concert discovery service Songkick, was named head of the Government’s new AI Foundation Model Taskforce; a month later he warned that it was ‘inevitable’ more jobs would become increasingly automated.

Yet dig beneath the headlines and you soon realise it’s not all bad news. Sure, there are understandable concerns around how the technology is being misused by criminals (the creation of hyper-realistic videos of children being abused being one extreme and upsetting example).

However, as with all previous technologies (including the internet itself), AI can be used for nefarious purposes if suitable controls aren’t put in place. Against this we need to balance all the good AI can do, such as helping doctors diagnose cancers more accurately and enabling pharmaceutical companies to develop new medicines more quickly than before.

Creating new jobs

Regarding AI’s impact on the workforce, automation will inevitably replace some jobs – blue collar and white collar alike – but it will also create many more. As economists have pointed out, jobs ‘displaced by automation have been offset by the creation of new jobs’, with around 60% of today’s workers employed in occupations that didn’t exist in 1940.

Earlier this week, the BCS – the Chartered Institute for IT – issued an open letter signed by over 1,300 experts, saying that AI is a ‘force for good, not a threat to humanity’. One of the signatories, Richard Carter, who founded an AI-powered cybersecurity startup, told the BBC he thought the dire warnings were unrealistic: “Frankly, this notion that AI is an existential threat to humanity is too far-fetched. We’re just not in any kind of a position where that’s even feasible.”

Ethical framework

Clearly, what’s required is industry regulation and/or legislation to govern how AI is used. Last month saw the EU pass the final draft of its AI Act, which will require developers of generative AI to disclose the sources of their training data in order to protect IP and privacy. The World Ethical Data Foundation has also just issued a framework of 84 questions for developers to consider at the start of each AI project, including how they can prevent an AI product from incorporating bias and how they would deal with a situation in which the output generated by a tool results in law-breaking.

For organisations looking to implement automation and AI solutions to reduce cost and drive efficiencies, the benefits of the new technology are clear. However, it will become increasingly important to put systems in place to ensure that it is rolled out as ethically and fairly as possible. As Clare Walsh, Director of Education at the Institute of Analytics, recently told AI Business magazine: “My advice to SMEs is to assume that the law will eventually catch up with negative practices. The Information Commissioner is very clear there will be no place to hide from reckless innovation.”

To find out how Cognition can help with your AI strategy, contact katie.gibbs@cognitionhq.com or fill in the form here.