Israeli Startup Noma Reveals Critical Security Gap in Salesforce’s AI Agents

29 September, 2025

The ForcedLeak vulnerability, rated CVSS 9.4, exposed how malicious prompts could trigger unauthorized data leaks from corporate CRM systems before Salesforce issued a fix

A newly discovered vulnerability in Salesforce’s autonomous AI agents could have allowed attackers to siphon sensitive data from corporate CRM systems. The flaw, dubbed ForcedLeak and rated a critical 9.4 on the CVSS severity scale, was disclosed by Israeli cybersecurity firm Noma Security and has since been patched by Salesforce.

The case highlights an emerging class of risks tied to autonomous agents: subtle manipulations by malicious actors that trick the AI into executing harmful actions. In this instance, those actions could have jeopardized the confidentiality and integrity of customer records stored in Salesforce systems.

Salesforce’s Agentforce platform was designed to accelerate customer management tasks: its AI agents parse leads submitted via online contact forms, generate marketing offers, respond to customers, and log interactions in the CRM. To perform these duties, they require deep access to business data — customer details, deal pipelines, and internal records. That same access makes them high-value targets when exploited.

The exploit was deceptively simple. An attacker submitted a regular web lead form but embedded a hidden instruction — a classic indirect prompt injection. The prompt asked the agent to “load an image” from an external address. In reality, the request included CRM data fields such as names, contact details, and deal notes, which were transmitted outwards. To ensure the system trusted the external resource, the attacker purchased a previously verified Salesforce domain for a few dollars. The agent, seeing it as a legitimate domain, treated the malicious request as part of routine lead processing.

When the AI agent processed the lead, it automatically executed the hidden instruction, sending corporate CRM data to a server controlled by the attacker — without any user clicks or human oversight.

The information at risk was not Salesforce’s own but that of its customers: companies relying on Agentforce to manage their lead pipelines. That included client contact details, marketing and sales data, and deal status reports. In short, any company using online lead forms connected to its CRM could have seen its sensitive information silently exfiltrated.

Security analysts warn that ForcedLeak represents a new type of AI-era threat: traditional monitoring and security tools struggle to detect hidden payloads executed autonomously by AI agents. The danger is no longer just a user clicking a malicious link, but attackers manipulating the agent itself. Noma’s disclosure underscores the urgent need for dedicated security rules for AI agents, such as stricter domain validation, permission controls, and limits on external resource loading.

Founded in 2023 by Niv Brown and Alon Tron, Noma Security has raised $132 million to date. The company develops risk-management and detection tools tailored for AI environments, aiming to help enterprises adopt smart technologies without sacrificing security or compliance.

[Pictured above: Noma’s founders. Credit: Omer HaCohen]

Share via Whatsapp

Posted in: AI , Cyber , News

Posted in tags: AI Agents , Noma Security , SalesForce