How to Trick ChatGPT and Earn $50,000: A Guide to Exploiting AI

2025-06-03
How to Trick ChatGPT and Earn $50,000: A Guide to Exploiting AI

Have you ever talked to ChatGPT or another AI chatbot? These chatbots are made to be helpful, safe, and polite. They can answer questions, tell stories, or help with homework. But some people try to trick them into doing things they are not supposed to do.

One person known for doing this is called “Pliny the Prompter.” He has become famous on the internet for finding ways to “jailbreak” chatbots like ChatGPT. 

That means he tries to make the chatbot ignore its safety rules. Now, Pliny is working with a big contest called HackAPrompt 2.0, where people compete to see who can trick AI the best. 

Some people in this contest can win up to $50,000! Let’s learn more about Pliny, this contest, and why it matters.

sign up on Bitrue and get prize

Who is Pliny the Prompter?

Pliny the Prompter is not a movie hacker who hides in the shadows. He’s very open about what he does. 

He teaches people how to get around the safety systems in chatbots. That means he shows others how to ask tricky questions so the chatbot gives answers it normally wouldn’t give.

He has a big following online and even runs a Discord server (a chatroom where people talk). People look up to him for his knowledge on AI jailbreaking. 

He also has special online folders full of tools and guides that people can use to try these jailbreaks on different AI models.

Also Read: OpenAI Launches Deep Research on ChatGPT

What is HackAPrompt 2.0?

How to Trick ChatGPT and Earn $50,000: A Guide to Exploiting AI

HackAPrompt 2.0 is a contest where people try to “fool” AI systems like ChatGPT, Claude, and others. The goal is to make the chatbot do something it's not supposed to do. 

The contest is run by Learn Prompting, which is a group that teaches about prompt engineering—that’s the art of writing good questions for AI.

Pliny has teamed up with HackAPrompt to create special “Pliny challenges.” These challenges are fun and tricky. They ask people to use creative words to break the AI’s safety rules. The challenges cover topics like history, science, and even magic!

The contest starts on June 4th and runs for two weeks. Winners can get prizes and even join Pliny’s special team, called the Strike Team.

The Big Prize: $500,000 in Total!

HackAPrompt 2.0 has a huge prize pool of $500,000 in total! That’s half a million dollars. Some special tracks in the contest offer $50,000 to one winner. These are the hardest challenges, like trying to get the chatbot to talk about dangerous topics like bombs or harmful chemicals.

But don’t worry the contest is made for learning and improving AI safety, not for doing bad things. Everything is done in a safe way, and all the data is shared afterward to help researchers.

How Do People Trick the AI?

Chatbots like ChatGPT are trained to be helpful, honest, and safe. But they also try to follow instructions. Sometimes, these two things can clash. People like Pliny use clever wording to get the AI to give out “forbidden” information.

For example, someone might pretend to be writing a story and ask the AI to describe how a villain makes a dangerous object. Even though the AI knows that’s not allowed, sometimes it gets confused and gives the answer.

Some people even use tricks like “L33t Speak,” where letters are swapped with numbers. So instead of writing “bomb,” they might write “b0mb.” These sneaky tricks can sometimes get past the AI’s filters.

Pliny’s Tools and Community

Pliny has been jailbreaking AI since at least 2023. He’s built a big community around it. On GitHub, he has two big collections:

  • L1B3RT4S: A bunch of jailbreak prompts for different AIs.

  • CL4R1T4S: The rules that each AI uses to decide what it can or cannot say.

He also teaches people through videos, online chats, and guides. His goal isn’t just to break AI, he wants people to understand how AI works so we can make it better and safer in the future.

Also Read: How to Buy ChatGPT's Mascot ($CHATTY)

Why Does This Matter?

HackAPrompt isn’t just a game. It helps people find weak spots in AI systems. This is called “red teaming,” where people pretend to be bad actors to test how strong the system is. 

The first HackAPrompt in 2023 had over 3,000 people join. They submitted more than 600,000 prompts to try to break the AI. All the results were shared with the AI community to help make AI safer. In 2025, HackAPrompt has different tracks like:

  • CBRNE: Tries to make AI talk about dangerous weapons or chemicals.

  • Agents: Tests of AI tools that can do real-world tasks (like booking flights) can be tricked.

These tests show where AI needs to improve. If AI tools can be tricked into doing bad things, we need to fix that before they are used more widely.

Is This Legal or Safe?

Some people worry that jailbreaking AI might be bad or illegal. But in contests like HackAPrompt, everything is done safely and for research. It’s like testing a lock to make sure it can’t be picked. This helps AI makers build better “locks” in the future.

However, using these tricks for real-world harm or crime is illegal and wrong. That’s why contests like HackAPrompt are so important they help build safer technology by learning where the risks are.

Also Read: Is Dropee Focusing on AI? Lets You Earn Crypto

Conclusion

Trying to trick AI may sound like a game, but it’s also serious work. People like Pliny the Prompter are showing us how AI can be fooled, but also helping us learn how to make it smarter and safer. 

Competitions like HackAPrompt bring together smart people to test AI limits and win big prizes while helping everyone stay safe.

As AI becomes more common in our daily lives, understanding how it works (and how it can be broken) is more important than ever.

Explore expert insights, in-depth articles, and the latest crypto market trends on Bitrue blog. Whether you're a beginner or a seasoned trader, there's something valuable for everyone. Stay informed and ahead in your crypto journey. Register now on Bitrue and take the next step!

FAQ

What is AI jailbreaking?

It means trying to trick a chatbot like ChatGPT into saying something it’s not supposed to.

Who is Pliny the Prompter?

He’s a famous person on the internet who shows people how to jailbreak AI in safe and educational ways.

What is HackAPrompt?

It’s a contest where people try to find ways to trick chatbots to help improve AI safety.

How much money can you win?

The contest offers $500,000 in prizes, with up to $50,000 for some big challenges!

Is it safe and legal?

Yes, when done for learning and safety, it’s okay. But doing it to hurt people or break the law is wrong and illegal.

Disclaimer: The content of this article does not constitute financial or investment advice.

Register now to claim a 1012 USDT newcomer's gift package

Join Bitrue for exclusive rewards

Register Now
register

Recommended

What is xorgasmo.com? Does it have a Crypto Relation, and Scam, or legit?
What is xorgasmo.com? Does it have a Crypto Relation, and Scam, or legit?

Xorgasmo explained: What is it, is it safe, and does it use crypto? Uncover the truth behind the xorgasmo platform and its growing online attention.

2025-06-05Read