Samsung bans employees from using ChatGPT and Google Bard — here's why
Previous ChatGPT gaffes have now led the company to ban the chatbot
ChatGPT can be a great work tool, especially if you know the best ChatGPT tips and tricks. But unfortunately, feeding it work data can have some unintended consequences. Samsung employees found this out the hard way last month when they accidentally leaked Samsung’s secrets to ChatGPT multiple times.
Now it looks like Samsung is taking steps to ensure this never happens again. According to Bloomberg’s Mark Gurman, Samsung has now banned employees from using generative AI tools such as ChatGPT or Google Bard. This comes from a leaked memo to Samsung staff that laid out new policies on AI use in the workplace last week, which Samsung has since confirmed.
This likely wasn’t a shock to Samsung employees — in fact, it may have even been a welcome development for some. Following the unintended data leaks, Samsung reportedly ran an internal survey and found that 65% of respondents agreed that generative AI and similar tools pose a serious security risk.
Samsung ChatGPT leak: What happened?
Back in April, we and other outlets reported that Samsung employees had been using the popular AI chatbot to (among other things) fix coding errors. Specifically, members of the semiconductor division used the AI tool to identify faults in the company's chips. Unfortunately for Samsung, that information became part of the vast trove of user data that OpenAI can use to train ChatGPT's models — though the leaked data has yet to surface publicly.
But that wasn’t the only Samsung leak. In a separate instance, a Samsung employee used ChatGPT to turn meeting notes into a presentation — a common use of generative AI, and even a highlighted feature of tools such as Microsoft 365 Copilot. Again, this data then became part of the user data OpenAI collects (which it explicitly states in its terms of service) and is now at risk of being divulged to the public. Luckily for Samsung, it seems that this data has also evaded the public eye — for now.
How to stay safe using ChatGPT
If you want to stay safe using ChatGPT, Google Bard or Microsoft’s Bing with ChatGPT — really any AI tool — the key is to remember that this data is almost always stored somewhere. There are some AI tools that store data locally, but for the most part, the data is stored on a server somewhere once it’s entered into the chatbot.
The good news is that companies are starting to change how they handle some of this data. ChatGPT in particular now lets you disable chat history and training, which deletes your conversations after 30 days. Still, the best method is simply to never tell (or type into) the chatbot something you’d be uncomfortable with other people knowing. In fact, that’s just a good rule for the internet in general.
Malcolm McMillan is a senior writer for Tom's Guide, covering all the latest in streaming TV shows and movies. That means news, analysis, recommendations, reviews and more for just about anything you can watch, including sports! If it can be seen on a screen, he can write about it. Previously, Malcolm had been a staff writer for Tom's Guide for over a year, with a focus on artificial intelligence (AI), A/V tech and VR headsets.
Before writing for Tom's Guide, Malcolm worked as a fantasy football analyst writing for several sites and also had a brief stint working for Microsoft selling laptops, Xbox products and even the ill-fated Windows phone. He is passionate about video games and sports, though both cause him to yell at the TV frequently. He proudly sports many tattoos, including an Arsenal tattoo, in honor of the team that causes him to yell at the TV the most.