Samsung accidentally leaked its secrets to ChatGPT — three times!
Samsung learns a lesson in security
It appears that, like the rest of the world, Samsung is impressed by ChatGPT. But the Korean hardware giant trusted the chatbot with far more sensitive information than the average user does, and it has now been burned three times.
The potential for AI chatbots in the coding world is significant, and Samsung has, until now, allowed staff in its Semiconductor division to use OpenAI’s bot to fix coding errors. After three information leaks in a month, expect Samsung to cancel its ChatGPT Plus subscription. Indeed, the firm is now developing its own internal AI to assist with coding and avoid further slip-ups.
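To make the risk concrete, here is a minimal sketch of what that kind of workflow looks like. The article describes staff pasting code into the ChatGPT interface; this example uses OpenAI's Python API instead, purely to make the mechanics visible, and the model name, prompt and sample function are illustrative assumptions rather than anything from Samsung. The point is that whatever goes into the request, including proprietary source code, is transmitted to OpenAI's servers and handled under OpenAI's data policies.

```python
# Illustrative sketch only (not Samsung's internal tooling): anything placed
# in `messages`, including proprietary source code, is sent to OpenAI's
# servers as part of the request.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical confidential snippet an engineer might paste in for debugging
proprietary_snippet = '''
def passes_yield_check(lot):
    # confidential test logic copied straight from an internal codebase
    return lot.yield_rate > 0.95
'''

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a senior engineer. Fix bugs in the code you are given."},
        {"role": "user", "content": "Why might this function misbehave?\n" + proprietary_snippet},
    ],
)

print(response.choices[0].message.content)
```

Whether the text goes through the web chat or the API, the effect is the same: it leaves the company's environment, which is exactly the slip-up Samsung is now trying to engineer away with an in-house tool.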
One of the leaks reportedly concerns an employee asking ChatGPT to optimize test sequences for identifying faults in chips, an important process for a firm like Samsung that could yield major savings for manufacturers and consumers. Now, OpenAI is sitting on a heap of Samsung’s confidential information — did we mention OpenAI is partnered with Microsoft?
While this is quite a specialized case, another instance is something ordinary folk should be wary of. One Samsung employee asked ChatGPT to turn notes from a meeting into a presentation, a seemingly innocuous request that has now leaked that information to several third parties. It's something we should all bear in mind when using ChatGPT or Google Bard, and with AI's rapid rise, there is little legal precedent to fall back on.
In its Privacy Policy (which Samsung hopefully read in full), OpenAI notes that “when you use our Services, we may collect Personal Information that is included in the input, file uploads, or feedback that you provide to our Services”. OpenAI also reserves the right to use personal information gathered “for research purposes” and “to develop new programs and services.”
How secure is ChatGPT?
OpenAI makes no secret of the fact that ChatGPT retains user input data — it is after all one of the best ways to train and improve the chatbot.
While most of us are unlikely to leak confidential information from a multi-billion-dollar company, there are individual privacy concerns too. AI chatbots have grown so fast that regulation has yet to catch up. That's all the more worrying given Microsoft's ambitions to integrate ChatGPT into Office 365, a platform millions of people use at work every day.
There are also concerns in the EU that ChatGPT falls foul of GDPR, and Italy has already banned it outright, although that has largely just driven Italians to VPNs. For now, users will have to use their own judgment and avoid disclosing personal information wherever they can.
Andy is a freelance writer with a passion for streaming and VPNs. Based in the U.K., he originally cut his teeth at Tom's Guide as a Trainee Writer before moving to cover all things tech and streaming at T3. Outside of work, his passions are movies, football (soccer) and Formula 1. He is also something of an amateur screenwriter having studied creative writing at university.