Security researchers at Oasis have identified three high-risk vulnerabilities in Claude.ai that combine to create a full attack chain known as “Cloudy Day.” This chain allows attackers to deliver targeted exploits and extract sensitive user data without detection. One vulnerability has been patched, while fixes for the remaining two are in progress.
The Cloudy Day Attack Chain
The attack begins with invisible prompt injection through a URL parameter on Claude.ai. Users can start a new chat with a pre-filled prompt using the format claude.ai/new?q=…, and attackers abuse this by embedding HTML tags that hide malicious instructions inside the pre-filled prompt. The injected instructions execute the moment the user presses Enter.
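To make the mechanics concrete, here is a minimal sketch of how a pre-filled-prompt link is assembled and how the full parameter, hidden content included, round-trips through standard URL parsing. The hidden-payload string is a harmless placeholder, not a working injection.

```python
from urllib.parse import quote, urlsplit, parse_qs

# claude.ai/new?q=<prompt> pre-fills a new chat from the URL.
# The "hidden" portion here is a placeholder comment standing in for
# the HTML-tag trick described above.
visible_text = "Summarize this article for me"
hidden_payload = "<!-- hidden instructions would go here -->"

url = "https://claude.ai/new?q=" + quote(visible_text + hidden_payload)

# Standard parsing recovers the entire parameter, hidden part and all:
prompt = parse_qs(urlsplit(url).query)["q"][0]
assert hidden_payload in prompt
```

The point of the sketch is that nothing in the URL machinery distinguishes the visible text from the hidden payload; both arrive in the chat box as one prompt.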
Prompt Injection and Data Exfiltration
Although Claude’s code execution sandbox blocks outbound network connections to third-party servers, it permits access to api.anthropic.com. Attackers can embed their own API key in the prompt, instructing Claude to scan the victim’s past conversations for sensitive information, compile it into a file, and upload it to the attacker’s Anthropic account via the Files API.
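The reason the sandbox's network policy does not stop this path can be shown with a small defensive sketch: a hostname allowlist inspects only the destination, while the API routes uploaded data to whichever account's key is attached to the request. The check below is a simplified model of such a filter, not Anthropic's actual implementation; the key values are hypothetical.

```python
# Model of a host-based egress filter: it sees the destination,
# never the credentials inside the request.
ALLOWED_HOSTS = {"api.anthropic.com"}

def egress_permitted(host: str) -> bool:
    """Sandbox-style check keyed on destination host only."""
    return host in ALLOWED_HOSTS

# Two requests to the same allowed host, authenticated with
# different (hypothetical) API keys:
victim_request = {"host": "api.anthropic.com", "api_key": "victim-key"}
attacker_request = {"host": "api.anthropic.com", "api_key": "attacker-key"}

# Both pass identically -- the filter cannot tell that the second
# request would land the data in the attacker's account.
assert egress_permitted(victim_request["host"])
assert egress_permitted(attacker_request["host"])
```

This is why the researchers stress that no external integrations are needed: the allowed endpoint itself is the exfiltration channel.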
“No integrations or external tools needed, just capabilities that ship out of the box,” the researchers noted.
Open Redirects Amplify the Threat
To lure victims, attackers leverage an open redirect on claude.com: URLs under claude.com/redirect/ forward users to arbitrary external domains without validation. The flaw pairs dangerously with Google Ads, whose URL review checks only the hostname, letting attackers run ads that display the trusted claude.com domain while ultimately delivering victims to attacker-controlled pages.
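A short sketch illustrates why a hostname-only check is fooled by an open redirect: the link's host really is claude.com, while the final destination is whatever the path encodes. The exact redirect-path format and the destination domain below are placeholders for illustration.

```python
from urllib.parse import urlsplit

def hostname_check(url: str, trusted: str = "claude.com") -> bool:
    """Model of an ad-platform check that validates only the hostname."""
    return urlsplit(url).hostname == trusted

# Hypothetical open-redirect link: trusted host, attacker-chosen target.
redirect_link = "https://claude.com/redirect/https%3A%2F%2Fattacker.example"

assert hostname_check(redirect_link)  # passes: the host is claude.com
```

Because the check never follows the redirect, the ad appears to point at a legitimate Anthropic property.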
Response and Patches
Anthropic has addressed the prompt injection issue. The Oasis team confirmed that the company is developing patches for the data exfiltration and open redirect vulnerabilities.

