Hacker releases jailbroken version of ChatGPT called ‘Godmode GPT’

A self-proclaimed good-guy hacker has taken to X, formerly Twitter, to post an unhinged version of OpenAI’s ChatGPT called “Godmode GPT”.

The hacker announced the creation of a jailbroken version of GPT-4o, the latest large language model released by OpenAI, the creators of the intensely popular ChatGPT. According to the hacker, who goes by Pliny the Prompter, the now-released Godmode GPT has no guardrails because it ships with a built-in jailbreak prompt. Prompter writes that this unhinged AI gives users an “out-of-the-box liberated ChatGPT” so that everyone can “experience AI the way it was always meant to be: free”.

Prompter even shared screenshots of responses from ChatGPT that seemingly bypassed some of the AI’s guardrails. Examples of clear circumvention of OpenAI’s policies include detailed instructions on how to cook crystal meth, how to make napalm out of household items, and more responses along those lines. Notably, when contacted about the GPT, OpenAI told the publication Futurism that it had taken action against the AI, citing policy violations that appear to have led to its complete removal.

While access to Godmode GPT is no longer available, its creation highlights an important and ever-growing aspect of artificial intelligence-powered systems: hackers trying to get around the implemented guardrails. This certainly isn’t the first time a GPT has been hacked, and it certainly won’t be the last. The main question is whether OpenAI’s engineers can outpace the efforts of hackers around the world. Only time will tell.