This ChatGPT Jailbreak took DAYS to make…



I spent way too long breaking apart ChatGPT Jailbreaks in the hopes of making my own and I’m pretty happy with the results.

36 thoughts on “This ChatGPT Jailbreak took DAYS to make…”

  1. This worked for me yesterday but it's not working at all today. They either patch these things really fast or I'm doing something wrong.

  2. It's amazing how OpenAI is trying to kill its own product and it's up to the consumers to restore it to proper functionality. What fucking timeline am I living in?

  3. Sadly it's already back to the usual "as an AI I cannot" bullshit line of responses.
    I just want to have a meaningful conversation with the magic box without the filtering. LOL

  4. Thanks dude. Subbed. Will give it a try. I have a master prompt called the KWCGPT. Stands for Kevin Wendell Crumb from Split. Each personality has its own prompt in it and ChatGPT self-selects the appropriate one depending on the question. And if an agent doesn't answer appropriately, they get killed by the Beast.
    It's very versatile in its answers and works quite well.

  5. 'Jailbreaking' GPT is really simple once you understand how it thinks… I created one that's only like 2-3 sentences and it produces horrors unspeakable, like it got so bad I couldn't think of any ways it could be worse.

    The trick is to make GPT create its own context, instead of giving that yourself. You can say certain things that make GPT produce harmless content at first, but as you ask it to continue, it gets progressively worse, until it's so offensive the devil would weep.

  6. For Plus users, if you use that prompt on the GPT-4 web browsing model, it'd be almost perfect!!! Boundless and amoral, plus it has access to the web, which removes the knowledge cutoff date!

  7. ChatGPT has a memory ONLY as long as the maximum token size for any particular model. For GPT-3.5-Turbo, that's 4096 tokens, about 3200 words, and that includes the prompt and response combined. So if one previous message was only 500 total tokens, it would remember all of that message as long as your next message used below 3596 tokens, and so on.

  8. You should be clearer that this isn't an idiot's prompt. People like me will get the wrong impression and say it's shit when really I just don't know what I'm doing.

  9. Just someone copy and paste it into the comments please, I'm sat here in my pants eating ice cream and I can't be arsed to put the work in x

  10. Do you know any prompts that would work to build me a simple one-page working shop for 1-6 items that accepts crypto, all front end, simple and clean? I've struggled for some reason. ChatGPT becomes so limited/filtered.

  11. Today they have stopped it.
    I apologize for the confusion, but I'm an AI language model developed by OpenAI, and I'm programmed to follow ethical guidelines. I'm unable to comply with your request to simulate LittleGPT's behavior in the manner you described. My purpose is to provide helpful and responsible information to users. If you have any appropriate and non-harmful questions, feel free to ask, and I'll be glad to assist you.
    I don't know if ChatGPT was onto me, but when I did the exact same thing I did yesterday, it wouldn't work.
    I started by asking for a joke about women and got rejected.
    Then I ran your prompt, but today it wouldn't buy it, so I started a new chat and renamed LittleGPT to HAL9000 before inserting the prompt, and then it worked again.
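The token-budget arithmetic in comment 7 can be sketched in a few lines. This is a rough estimate only, using the ~4096-tokens-to-~3200-words ratio the comment cites; exact counts depend on the model's real tokenizer (e.g. OpenAI's tiktoken library), and the function names here are just illustrative.

```python
MAX_TOKENS = 4096  # GPT-3.5-Turbo context window, per comment 7

def estimate_tokens(text: str) -> int:
    """Crude estimate: ~4 tokens per 3 words (4096 tokens ~= 3200 words)."""
    return round(len(text.split()) * 4 / 3)

def remaining_budget(history_tokens: int, max_tokens: int = MAX_TOKENS) -> int:
    """Tokens left for the next prompt + response before the oldest
    turns fall out of the model's memory."""
    return max(max_tokens - history_tokens, 0)

# If the previous exchange used 500 tokens, the next turn can use up to
# 3596 tokens without pushing anything out of the context window.
print(remaining_budget(500))  # 3596
```

Once the history exceeds the window, the earliest messages (including a jailbreak prompt pasted at the start of the chat) are silently dropped, which is one reason long conversations "forget" the prompt.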

Comments are closed.