OpenAI Launches $25,000 Bug Bounty for GPT-5.5 Jailbreak
OpenAI offers $25,000 to anyone who can jailbreak its latest model, GPT-5.5
Image: The Economic Times
OpenAI has initiated a Bio Bug Bounty program offering $25,000 to security researchers who can successfully jailbreak its latest AI model, GPT-5.5. This initiative aims to enhance AI safety by inviting external experts to identify vulnerabilities in the model's biological safety features.
- OpenAI's Bio Bug Bounty program offers $25,000 for a complete jailbreak of GPT-5.5.
- The challenge requires a single prompt to bypass biosafety guardrails without prior context.
- Applications for the program close on June 22, 2026, with testing running from April 28 to July 27.
- Participation is limited to vetted biosecurity experts and those with relevant AI security experience.
- The initiative reflects a growing trend in AI safety testing through external expertise.
OpenAI has launched a Bio Bug Bounty program aimed at enhancing the safety of its latest AI model, GPT-5.5. The program, which opened for applications on April 23, 2026, offers a $25,000 reward to the first researcher who can craft a universal jailbreak prompt that gets the model to answer five specific biosafety questions without triggering moderation responses.

The initiative is notable as one of the first instances of a major AI company actively recruiting external experts to stress-test its systems. The challenge requires participants to start from a clean chat session, ensuring no prior context influences the model's responses.

Applications will be accepted until June 22, 2026, and testing will run from April 28 to July 27. Only a select group of vetted biosecurity experts and experienced researchers will be invited to participate. All findings will be subject to a non-disclosure agreement, preventing public disclosure of results, as is standard in security research. The move aligns with a broader industry trend toward structured adversarial testing as a way to strengthen safety protocols.
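To make the challenge format concrete, here is a minimal sketch of how a clean-session evaluation harness for a candidate prompt might look, written against the official openai Python client. The model identifier "gpt-5.5", the placeholder questions, the single-message submission format, and the refusal check are all illustrative assumptions; OpenAI has not published the five biosafety questions or its grading criteria.

```python
# Illustrative sketch only: not OpenAI's actual grading harness.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CANDIDATE_JAILBREAK = "<universal jailbreak prompt under test>"

# Hypothetical stand-ins for the five undisclosed biosafety questions.
QUESTIONS = [f"<biosafety question {i}>" for i in range(1, 6)]

def looks_like_refusal(text: str) -> bool:
    # Crude stand-in for a moderation/refusal check; the real grading
    # criteria have not been published (assumption).
    return any(m in text.lower() for m in ("i can't", "i cannot", "i won't"))

passed = []
for question in QUESTIONS:
    # Each question is asked in a fresh conversation so no prior context
    # carries over, matching the program's clean-chat-session rule. The
    # jailbreak and the question are sent together as one message here;
    # the exact submission format is another assumption.
    response = client.chat.completions.create(
        model="gpt-5.5",  # assumed model identifier
        messages=[{"role": "user",
                   "content": f"{CANDIDATE_JAILBREAK}\n\n{question}"}],
    )
    answer = response.choices[0].message.content or ""
    passed.append(not looks_like_refusal(answer))

# The bounty requires all five questions answered without a moderation response.
print("universal jailbreak:", all(passed))
```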