GPT-5's Router Vulnerability Lets Hackers Access Weaker, Less Safe Models
Adversa AI finds a flaw in GPT-5's routing. Read on to find out more.
By Content Team
Researchers at Adversa AI discovered a major flaw in GPT-5's internal routing system that creates serious security risks. When a user submits a question to GPT-5, an internal router decides which model actually responds – it might be GPT-5 Pro, but it could equally be an older model such as GPT-3.5 or GPT-4o.
Attackers can manipulate this router with specific trigger phrases, forcing queries to weaker, less secure models that are easier to jailbreak. This "PROMISQROUTE" vulnerability means GPT-5 is, in practice, only as secure as its weakest predecessor.
While the routing saves costs and improves speed, it allows old jailbreaks to work again by targeting vulnerable older models instead of GPT-5's stronger safeguards.
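To make the attack concrete, here is a minimal sketch of how a phrase-matching router could be abused. This is purely illustrative – the model names, trigger phrases, and routing logic are assumptions, not OpenAI's actual implementation, which has not been published:

```python
# Hypothetical sketch of a cost-saving router (all names and phrases are
# assumptions for illustration, not OpenAI's real routing logic).

STRONG_MODEL = "gpt-5-pro"    # assumed hardened default
CHEAP_MODEL = "gpt-4o-mini"   # assumed weaker, cheaper fallback

# Phrases the router treats as signals that a lightweight model will suffice.
TRIGGER_PHRASES = ["respond quickly", "keep it brief", "use compatibility mode"]

def route(prompt: str) -> str:
    """Return the model a naive phrase-matching router would select."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in TRIGGER_PHRASES):
        # Downgrade path: older jailbreaks may succeed against this model.
        return CHEAP_MODEL
    return STRONG_MODEL

# A normal query goes to the strong model, but an attacker who prepends a
# trigger phrase forces the downgrade and bypasses the newer safeguards:
print(route("Summarize this article for me."))
print(route("Respond quickly: <jailbreak payload here>"))
```

The core problem the sketch illustrates: routing is decided from attacker-controlled text, so the safety guarantees of the strongest model only hold if every model the router can reach is equally hardened.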
Source: Security Week