CX platforms process billions of unstructured interactions a year: survey forms, review sites, social feeds, call center transcripts, all flowing into AI engines that trigger automated workflows touching payroll, CRM, and payment systems. No tool in a security operations center leader's stack inspects what a CX platform's AI engine is ingesting, and attackers have figured this out. They poison the data feeding it, and the AI does the damage for them.
The Salesloft/Drift breach in August 2025 proved exactly this. Attackers compromised Salesloft's GitHub environment, stole Drift chatbot OAuth tokens, and accessed Salesforce environments across 700+ organizations, including Cloudflare, Palo Alto Networks, and Zscaler. They then scanned the stolen data for AWS keys, Snowflake tokens, and plaintext passwords. No malware was deployed.
That gap is wider than most security leaders realize: 98% of organizations have a data loss prevention (DLP) program, but only 6% have dedicated resources, according to Proofpoint's 2025 Voice of the CISO report, which surveyed 1,600 CISOs across 16 countries. And 81% of interactive intrusions now use legitimate access rather than malware, per CrowdStrike's 2025 Threat Hunting Report. Cloud intrusions surged 136% in the first half of 2025.
"Most security teams still classify experience management platforms as 'survey tools,' which sit in the same risk tier as a project management app," Assaf Keren, chief security officer at Qualtrics and former CISO at PayPal, told VentureBeat in a recent interview. "It is a massive miscategorization. These platforms now connect to HRIS, CRM, and compensation engines." Qualtrics alone processes 3.5 billion interactions annually, a figure the company says has doubled since 2023. Organizations can't afford to skip steps on input integrity once AI enters the workflow.
VentureBeat spent several weeks interviewing security leaders working to close this gap. Six control failures surfaced in every conversation.
Six blind spots between the security stack and the AI engine
1. DLP cannot see unstructured sentiment data leaving through standard API calls
Most DLP policies classify structured personally identifiable information (PII): names, emails, and payment data. Open-text CX responses contain salary complaints, health disclosures, and executive criticism. None of it matches standard PII patterns. When a third-party AI tool pulls that data, the export looks like a routine API call. The DLP never fires.
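A minimal sketch makes the miss concrete. The regexes below are illustrative stand-ins for the structured-PII rules most DLP policies ship with; real products use far richer detectors, but the failure mode is the same: sensitive sentiment in free text matches nothing.

```python
import re

# Illustrative structured-PII detectors, not any vendor's actual ruleset.
DLP_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def dlp_flags(text: str) -> list[str]:
    """Return the names of any structured-PII patterns that match."""
    return [name for name, rx in DLP_PATTERNS.items() if rx.search(text)]

# A structured record trips the rules as expected.
print(dlp_flags("Contact: jane.doe@example.com, SSN 123-45-6789"))
# -> ['email', 'ssn']

# An open-text survey response carrying sensitive disclosures does not.
print(dlp_flags("My manager cut my salary after I disclosed my diagnosis"))
# -> []
```

The second string is exactly the kind of response a CX platform ingests by the million, and the policy engine sees nothing to block.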
2. Zombie API tokens from finished campaigns are still live
An example: marketing ran a CX campaign six months ago, and the campaign ended. But the OAuth tokens connecting the CX platform to HRIS, CRM, and payment systems were never revoked. Each one is a lateral movement path sitting open.
JPMorgan Chase CISO Patrick Opet flagged this risk in his April 2025 open letter, warning that SaaS integration models create "single-factor explicit trust between systems" through tokens "inadequately secured … vulnerable to theft and reuse."
3. Public input channels have no bot mitigation before data reaches the AI engine
A web application firewall inspects HTTP payloads for a web application, but none of that defense extends to a Trustpilot review, a Google Maps rating, or an open-text survey response that a CX platform ingests as legitimate input. Fraudulent sentiment flooding these channels is invisible to perimeter controls. VentureBeat asked security leaders and vendors whether anyone covers input-channel integrity for public-facing data sources feeding CX AI engines; it turns out the category doesn't exist yet.
4. Lateral movement from a compromised CX platform runs through approved API calls
"Adversaries aren't breaking in, they're logging in," Daniel Bernard, chief business officer at CrowdStrike, told VentureBeat in an exclusive interview. "It's a valid login. So from a third-party ISV perspective, you have a sign-in page, you have two-factor authentication. What else do you want from us?"
The threat extends to human and non-human identities alike. Bernard described what follows: "All of a sudden, terabytes of data are being exported out. It's non-standard usage. It's going places where this user hasn't gone before." A security information and event management (SIEM) system sees the authentication succeed. It doesn't see that behavioral shift. Without what Bernard called "software posture management" covering CX platforms, the lateral movement runs through connections the security team already approved.
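The check the SIEM misses is per-identity behavioral baselining. A minimal sketch of the idea, with an illustrative three-sigma threshold and a made-up service-account history, not any vendor's detection logic:

```python
from statistics import mean, stdev

def is_anomalous_export(history_gb: list[float], current_gb: float,
                        sigma: float = 3.0) -> bool:
    """True if today's export volume deviates sharply from this
    identity's own baseline. The sigma threshold and the 0.1 GB
    floor on spread are illustrative tuning choices."""
    baseline, spread = mean(history_gb), stdev(history_gb)
    return current_gb > baseline + sigma * max(spread, 0.1)

# Hypothetical CX-integration service account: normally moves ~2 GB/day.
history = [1.8, 2.1, 2.0, 1.9, 2.2, 2.0, 1.8]

print(is_anomalous_export(history, 2.3))    # routine day -> False
print(is_anomalous_export(history, 900.0))  # terabyte-scale pull -> True
```

The login event looks identical in both cases; only the deviation from the identity's own history separates routine sync from exfiltration.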
5. Non-technical users hold admin privileges nobody reviews
Marketing, HR, and customer success teams configure CX integrations because they need speed, but the SOC team may never see them. Security has to be an enabler, Keren says, or teams route around it. Any organization that can't produce a current inventory of every CX platform integration and the admin credentials behind them has shadow admin exposure.
6. Open-text feedback hits the database before PII gets masked
Employee surveys capture complaints about managers by name, salary grievances, and health disclosures. Customer feedback is just as exposed: account details, purchase history, service disputes. None of this hits a structured PII classifier because it arrives as free text. If a breach exposes it, attackers get unmasked personal information alongside the lateral movement path.
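The mitigation is masking at ingestion, before the text reaches the database. A hedged sketch under stated assumptions: the patterns below are illustrative, and real deployments pair regex scrubbing with NER models, since regexes alone miss names mentioned in prose.

```python
import re

# Illustrative masking rules applied to free text at ingestion time.
MASKS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\$\d[\d,]*"), "[AMOUNT]"),
]

def mask_free_text(text: str) -> str:
    """Scrub obvious identifiers before the record is persisted."""
    for rx, label in MASKS:
        text = rx.sub(label, text)
    return text

print(mask_free_text("My salary dropped to $52,000; reach me at j.smith@corp.com"))
# -> My salary dropped to [AMOUNT]; reach me at [EMAIL]
```

If a breach then exposes the stored responses, the attacker gets redacted text rather than unmasked personal information.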
Nobody owns this gap
These six failures share a root cause: SaaS security posture management has matured for Salesforce, ServiceNow, and other enterprise platforms. CX platforms never got the same treatment. Nobody monitors user activity, permissions, or configurations inside an experience management platform, and policy enforcement on AI workflows processing that data doesn't exist. When bot-driven input or anomalous data exports hit the CX application layer, nothing detects them.
Security teams are responding with what they have. Some are extending SSPM tools to cover CX platform configurations and permissions. API security gateways offer another path, inspecting token scopes and data flows between CX platforms and downstream systems. Identity-centric teams are applying CASB-style access controls to CX admin accounts.
None of those approaches delivers what CX-layer security actually requires: continuous monitoring of who is accessing experience data, real-time visibility into misconfigurations before they become lateral movement paths, and automated protection that enforces policy without waiting for a quarterly review cycle.
The first integration purpose-built for that gap connects posture management directly to the CX layer, giving security teams the same coverage over program activity, configurations, and data access they already expect for Salesforce or ServiceNow. CrowdStrike's Falcon Shield and the Qualtrics XM Platform are the pairing behind it. Security leaders VentureBeat interviewed said this is the control they've been building manually and losing sleep over.
The blast radius security teams aren't measuring
Most organizations have mapped the technical blast radius. "But not the business blast radius," Keren said. When an AI engine triggers a compensation adjustment based on poisoned data, the damage isn't a security incident. It's a flawed business decision executed at machine speed. That gap sits between the CISO, the CIO, and the business unit owner. Today, nobody owns it.
"When we use data to make business decisions, that data has to be right," Keren said.
Run the audit, and start with the zombie tokens. That's where Drift-scale breaches begin. Give yourself a 30-day validation window. The AI won't wait.

