Ex-Google CEO warns of AI 'reckoning' amid fears it could undermine democracy

Back in the olden days, Google’s mantra was “don’t be evil”, and now former CEO Eric Schmidt is asking big tech to adopt a similar philosophy when it comes to AI. 

Speaking to ABC News, the former head of Google said he is struggling to keep up with the deployment of AI technology, and thinks the tech sector has to do more to ensure the systems are used for good — not evil.

We’ve already seen a great many examples of AI being exploited in recent months, whether that’s ChatGPT and Google Bard being caught up in plagiarism disputes or used to craft malware, deepfakes, unconscious bias in AI programming, or any number of issues that stem from handing control over to a machine or algorithm that lacks the common sense and reason of a human being.

“We, collectively, in our industry face a reckoning of, how do we want to make sure this stuff doesn’t harm but just helps?” Schmidt said, pointing to the way social media has been used to influence elections and lead to people’s deaths. “No one meant that as [the] goal, and yet it happened. How do we prevent that with this [A.I.] technology?”

Schmidt isn’t all doom and gloom, however, and noted that there’s an awful lot of good that can come out of AI. He highlighted health and education in particular, and how AI could improve access to resources, especially if it means having AI tutors capable of teaching in every global language.

AI is a challenging topic for many reasons

Then again, Schmidt countered this by noting the challenges that could come with such uses of AI, such as students falling in love with AI tutors, and other ways the technology can be used to “manipulate people’s day-to-day lives, literally the way they think, what they choose and so forth, it affects how democracies work.” 

Schmidt does raise some excellent points, though few of them are completely new. There have been warnings about AI technology for some time now, and the recent popularity of tools like ChatGPT has brought those concerns to the forefront, especially given all the ways people have been exploiting ChatGPT’s abilities for their own gain.

To its credit, OpenAI has done a lot to try and prevent ChatGPT from being exploited for nefarious purposes. But, as we’ve seen, there are loopholes and ways around those safeguards. There are ways to get ChatGPT to answer any question, even the banned ones, and of course there’s the “evil” ChatGPT alter ego called DAN, which has no such limitations. 

And, just recently, one YouTuber was able to trick ChatGPT into generating Windows 95 activation keys, despite the fact the AI is forbidden from doing so directly.

The future of AI is still rather uncertain, and some governments are already passing legislation to ban certain uses. But Schmidt is right: the companies responsible need to make sure AI is being used responsibly and for the greater good. Because the last thing we need is for people to start exploiting AI tools to drum up unrest the same way that’s happened on social media.

Tom Pritchard
UK Phones Editor

Tom is Tom's Guide's UK Phones Editor, tackling the latest smartphone news and vocally expressing his opinions about upcoming features or changes. It's a long way from his days as editor of Gizmodo UK, when pretty much everything was on the table. He’s usually found trying to squeeze another giant Lego set onto the shelf, draining very large cups of coffee, or complaining about how terrible his Smart TV is.