OpenAI has been actively banning users suspected of malicious activity


OpenAI has removed numerous user accounts globally after suspecting its artificial intelligence tool, ChatGPT, was being used for malicious purposes, according to a new report.

Scammers have been using AI to enhance their attacks, OpenAI notes in a new report outlining the trends and techniques that malicious actors are employing, including case studies of attacks the company has thwarted. With more than 400 million weekly active users, ChatGPT is freely accessible around the world.

In its report, OpenAI says it repeatedly "saw threat actors using AI for multiple tasks at once, from debugging code to generating content for publication on various distribution platforms."

"While no one entity has a monopoly on detection, connecting accounts and patterns of behavior has in some cases allowed us to identify previously unreported connections between apparently unrelated sets of activity across platforms," it wrote.

Among the cases OpenAI has disrupted, the company recently banned a ChatGPT account that generated news articles that denigrated the US and were published in mainstream news outlets in Latin America under a Chinese company's byline.

The company also banned accounts, believed to originate from North Korea, that used AI to generate resumes and online profiles for fictitious job applicants. OpenAI speculated that these profiles were created in hopes of landing jobs at Western companies.

In another instance, OpenAI identified a group of accounts, potentially linked to Cambodia, that used the chatbot to translate and generate comments for a "romance baiting" scam network operating across social media and communication platforms, including X, Facebook and Instagram.

The report outlines several other operations the company blocked; however, it does not specify how many accounts were removed in total beyond "dozens", or the time frame over which the removals occurred.

OpenAI's outlook on scams and fraudulent uses of ChatGPT


OpenAI has been on the front foot in stopping these malicious uses of ChatGPT, and the company has reiterated that it won't tolerate misuse of its technology.

"OpenAI's policies strictly prohibit use of output from our tools for fraud or scams. Through our investigation into deceptive employment schemes, we identified and banned dozens of accounts," it wrote.

By sharing insights with industry peers such as Meta, the company hopes to enhance "our collective ability to detect, prevent, and respond to such threats while advancing our shared safety".

Lucy Scotting
Staff Writer

Lucy Scotting is a digital content writer for Tom’s Guide in Australia, primarily covering NBN and internet-related news. Lucy started her career writing for HR and staffing industry publications, with articles covering emerging tech, business and finance. In her spare time, Lucy can be found watching sci-fi movies, working on her dystopian fiction novel or hanging out with her dog, Fletcher.
