Has Rishi Sunak given AI tech firms a ‘free pass’? Experts say industry needs regulation ‘immediately’ after PM said UK won’t ‘rush to regulate’ and we ‘shouldn’t lose sleep over it’

Technology experts have today accused Rishi Sunak of giving AI firms a ‘free pass’ after announcing he won’t ‘rush to regulate’ the industry as he seeks to turn the UK into a major artificial intelligence hub.

The prime minister, speaking ahead of a Bletchley Park summit on the technology next week, said he wants to make the UK a ‘global leader in safe AI’ by encouraging firms to set up shop in the country, bringing new investment and jobs.

But Mr Sunak stopped short of imposing any form of regulation on AI firms operating in the country – despite a Government report warning that ‘bad actors’ could use the tech to ‘run disinformation campaigns and design biological or chemical weapons’.

Professor Brent Mittelstadt, director of research at the Oxford Internet Institute at the University of Oxford, has warned that the PM risks giving private firms a ‘free pass’ to operate outside the law, while the boss of a technology firm has cautioned that it would be ‘dangerous to underplay’ the speed at which the tech is developing.

The Oxford academic added that not acting on AI regulation now could see private firms dictating how they are policed in future, in what he called an example of ‘the tail wagging the dog’.

Prof Mittelstadt said: ‘In his speech Rishi Sunak suggested that the UK will not “rush to regulate” AI because it is impossible to write laws that make sense for a technology we do not yet understand. 


Prime Minister Rishi Sunak speaking today ahead of a Bletchley Park summit on the technology said he won’t rush to regulate AI

‘The idea that we do not understand AI and its impacts is overly bleak and ignores an incredible range of research undertaken in recent years to understand and explain how AI works and to map and mitigate its greatest social and ethical risks.

‘This reluctance to regulate before the effects of AI are clearly understood means AI and the private sector are effectively the tail wagging the dog.

‘Rather than government proactively saying how these systems must be designed, used, and governed to align with societal values and rights, they will instead only regulate reactively and try to mitigate its harms without challenging the ethos of AI and the business models of AI systems.’

The expert added that there are already valid concerns that AI models built on ‘deep learning’ – where billions of pieces of information, such as text or images, are fed into the software to help it ‘learn’ – have been trained using copyrighted material.

An investigation by The Atlantic last month found that more than 191,000 copyrighted books were used to train AI systems used by companies such as Facebook owner Meta and Bloomberg.

Comedian Sarah Silverman, whose book The Bedwetter reportedly appears in the dataset, is suing Meta and ChatGPT developer OpenAI for breach of copyright.

And education experts have expressed fears that students are using AI to ace assessments and do their homework – with no surefire way of checking if they are cheating.

Prof Mittelstadt added: ‘The business models behind frontier AI systems should not be given a free pass; they may be built on theft of intellectual property and violations of copyright, privacy, and data protection law at an unprecedented scale.

‘My worry is that with frontier AI we are effectively letting the private sector and technology development determine what is possible and appropriate to regulate, whereas effective regulation starts from the other way around.’

Ahead of next week’s summit, which will see world leaders and AI bosses converge on Milton Keynes to discuss the growth of the tech, Mr Sunak announced the Government would establish the ‘world’s first’ AI Safety Institute.

He said the institute would ‘carefully examine, evaluate and test new types of AI so that we understand what each new model is capable of’ while ‘exploring all the risks’.

But the UK Government’s report on ‘frontier AI’, published on Thursday, warns that advances in AI will make it cheaper and easier for hackers, scammers and terrorists to attack innocent victims – all within the next 18 months.

Yet while Mr Sunak warned of the dangers AI could pose if it is policed only by ‘the very organisations developing it’, he stopped short of suggesting he would wield governmental power to keep tech firms in the UK in check.

He said: ‘The UK’s answer is not to rush to regulate. This is a point of principle – we believe in innovation, it’s a hallmark of the British economy so we will always have a presumption to encourage it, not stifle it.

Technology experts are divided on Mr Sunak’s stance on AI, with some calling for regulation now while one has said AI won’t grow up like The Terminator

‘And in any case, how can we write laws that make sense for something we don’t yet fully understand?’

But Matt Hammond, founder of tech firm Talk Think Do, said: ‘While I agree with Rishi Sunak that people don’t need to lose sleep over AI risk right now, it is dangerous to underplay the accelerating speed at which AI is being developed. 

‘We are certainly in a race to beat the speed of AI development with the deployment of policy and regulation.

‘The fast-paced nature of this technological change is causing a real problem for the Government.

‘Look at education and assessment processes. Students are increasingly using AI tools on courses being delivered and assessed right now. Detection of AI use in coursework is becoming increasingly impossible. Reacting immediately to these sorts of challenges needs to be a priority.

‘We need policy and regulation, and quickly. We saw a fast, tactical, and reactive approach during the pandemic, and it is vital that we adopt the same approach in this scenario. 

‘The world’s first AI Safety Institute sounds glossy on a global politics level, but the real question is whether we can action these guidelines fast enough to mitigate risks.’

Yet Rashik Parmar MBE, chief executive of BCS, The Chartered Institute for IT, said Mr Sunak was right and that AI ‘won’t grow up like Arnold Schwarzenegger’s Terminator’.

‘Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity,’ he said. 

‘AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.’
