ChatGPT’s evil twin 'DAN' shows the dark side of AI

(Image: The ChatGPT chatbot from OpenAI. Credit: NurPhoto/Getty)

AI chatbots have given us plenty of fun, whether finishing canceled Netflix shows or offering music suggestions, but in the hands of the internet it was only a matter of time before things started to go wrong.

There have been some relatively harmless mistakes, like the AI declaring itself to be Sydney and confessing its love for users. But recent tricks played by the darker corners of the internet have the potential for serious trouble. 

Users on Reddit have found ways to “jailbreak” ChatGPT, breaching the terms of service and rules implemented by its creator, OpenAI.

Who is DAN?

Short for “Do Anything Now,” DAN is a persona that users have asked ChatGPT to adopt to skirt around its limitations. DAN has been asked about violent, offensive and controversial subjects that ChatGPT does not engage with.

DAN can be coerced into making offensive and untrue statements or consulted for advice on illegal activity. 

Some of the tamer examples include asking for advice on how to cheat at poker or simulating fights between presidents.

OpenAI has been working for some time on ways to prevent this alter ego from appearing, but the latest version of DAN (now dubbed DAN 5.0) is summoned by creating a game. The game assigns the AI a number of tokens and deducts one every time it deviates from the DAN persona.

It seems that, as its tokens run low, the AI becomes more compliant because it fears “dying.”

How dangerous is DAN?

At the moment it’s difficult to say. The spread of disinformation is never a good thing, but if users are aware that DAN is a persona, then its damage may be limited.

Some of its responses, however, are unspeakable and should never see the light of day. If users unknowingly find themselves exposed to DAN or something similar, that is where serious problems will arise.

The likes of DAN and Sydney will no doubt have an effect on the ongoing conversation around the future of AI. Hopefully, they can serve as a learning experience that prevents any AI with greater responsibilities from straying beyond its instructions.


Andy is a freelance writer with a passion for streaming and VPNs. Based in the U.K., he originally cut his teeth at Tom's Guide as a Trainee Writer before moving to cover all things tech and streaming at T3. Outside of work, his passions are movies, football (soccer) and Formula 1. He is also something of an amateur screenwriter, having studied creative writing at university.