UK AI Safety Summit is targeting evil sentience, but there are bigger problems to solve
World leaders and technology bosses debate the risks of next-generation AI models
World leaders are meeting with some of the biggest tech companies to debate how to protect the world from a future sentient artificial intelligence, followed by an 'in conversation' event with X (Twitter) owner Elon Musk.
The UK AI Safety Summit focuses on next-generation models from the likes of OpenAI, Anthropic and Google, which may have the ability to reason rather than simply regurgitate data.
The event is being held at Bletchley Park in the southeast of England, the home of the WW2 codebreakers and one of the birthplaces of modern computing. Its laudable objectives are primarily focused on forming international agreements on how to collaborate, report and minimize risks posed by future AI tools. But some experts have said more attention needs to be paid to current models.
Countries around the world are exploring the best way to regulate AI, covering both models currently in use and those in the far future with a brain of their own. Most recently, President Joe Biden signed an executive order this week setting out detailed plans for the technology.
"In conversation with @elonmusk after the AI Safety Summit. Thursday night on @x" pic.twitter.com/kFUyNdGD7i (October 30, 2023)
UK AI Safety Summit: What’s the focus?
Announced by U.K. Prime Minister Rishi Sunak in June, the summit aims to bring governments, tech companies, academics and third-sector organizations together to discuss how best to collaborate on regulation, guardrails and standards.
Initially, it was assumed this would cover all aspects of AI. But in response to lobbying from the likes of OpenAI and Google, the focus shifted to so-called frontier models: those with human and post-human capabilities, up to and including Artificial General Intelligence (AGI).
The fear of AGI going rogue, or being used in ways that are harmful to humanity as a whole, is behind the summit's narrow focus. In its guide to the summit, the U.K. government Department for Science, Innovation and Technology wrote that the “capabilities of these models are very difficult to predict – sometimes even to those building them - and by default they could be made available to a wide range of actors, including those who might wish us harm.”
It goes on to say that the pace of change in AI development, particularly with the models expected to launch next year with video, audio, image and text capabilities, is so rapid that immediate action is needed on AI safety. The government argues that this needs to be a global effort.
Previous studies into the impact of misaligned AGI models, such as the frontier AI models covered by the summit, warn they could be deployed to take control of weapons systems or to spread targeted misinformation during an election. But some risks are more immediate. A recent study by MIT found that releasing the weights of current models such as Meta’s Llama 2 could give criminals unrestricted access to tools that can design new viruses, along with information on how to spread those viruses most efficiently.
With access to the weights, the parameters that tell a model how to use the information it was trained on, a model like Llama 2 can be run on local hardware or in data centers controlled by a criminal organization.
Some of these risks will be addressed at the summit, but the primary focus will be on the big AI models of the future. It will also apparently ignore the risks of copyright infringement and bias in training data, as well as questions over the ethical use of narrow models in CV sifting, facial recognition and education.
AI dangers: There are bigger things to worry about
Ryan Carrier, CEO of the AI certification and training organization ForHumanity, told me there were plenty of other pressing issues to address before AI becomes sentient.
Carrier went on to outline some of the more pressing issues, including ensuring the ethical use of data and reducing the risk of embedded discrimination in training datasets. Other issues include the “failure to uphold IP rights, failure to protect data and privacy, insufficient disclosure of risk, insufficient safety testing, insufficient governance, and insufficient cybersecurity to name a few.” All of this, he says, adds up to a pressing problem that needs attention today, ahead of tomorrow's hypothetical risks.
Some experts, including Stanford University machine learning professor Andrew Ng, who taught OpenAI CEO Sam Altman, argue that the focus on the existential threat of AI is a ploy by Big Tech to shut down competition. In an interview with the Financial Review, Ng warned that the idea artificial intelligence could lead to the extinction of humanity is a lie being promulgated by big tech in the hope of triggering heavy regulation that would shut down competition in the AI market.
He expressed concern that regulation along the lines of the Biden executive order and the EU AI Act will be more harmful to society than no regulation at all. Ng said: “AI has caused harm. Self-driving cars have killed people. In 2010, an automated trading algorithm crashed the stock market. Regulation has a role. But just because regulation could be helpful doesn’t mean we want bad regulation.”
It is likely the regulatory train has already gained too much speed to stop or even slow down. While events like the UK AI Safety Summit are just a place to talk, the focus on frontier models, an invite list that leans heavily towards Big Tech, and the exclusion of open source suggest minds have already been made up in the corridors of power.