Tech

Agentforce - security breach explained

Photo of Santosh Sundar Written by Santosh Sundar,   Oct 3, 2025

I’m sure we are all aware that AI is advancing at an unprecedented rate. With every breakthrough comes both a new set of opportunities and a group of risks. While the potential of AI is huge, the reality is clear: without security and compliance at its core, innovation can quickly turn into exposure.

A recent discovery of a vulnerability in Salesforce's Agentforce is a timely reminder that even the most established platforms aren't invincible. And whilst Salesforce quickly resolved the issue, it raises important questions about how we manage AI responsibly, protect sensitive data, and keep trust at the heart of every solution.

What happened with Agentforce

At the end of last week, a security incident was made public relating to Agentforce. A third party discovered a flaw called ForcedLeak that could have allowed outsiders to extract sensitive CRM data. In simple terms, it worked by sneaking harmful instructions into the Agentforce’s prompts, tricking the agent into sharing information it shouldn’t. Often referred to as indirect prompt injection, the attacker managed to inject hidden instructions into Salesforce’s Web-to-Lead forms, which the agent later executed and responded to with CRM data.

Here's how it played out:

  • An attacker submitted a Web-to-Lead form with hidden instructions disguised in the description field, which can hold up to 42,000 characters.
  • Salesforce CRM processed the form as usual, creating a lead record that now contained the malicious code.
  • When the Agentforce agent was later asked about that lead, it unknowingly followed the hidden instructions.
  • This triggered a malicious URL that was part of the hidden instructions, a process known as indirect prompt injection.
  • The URL had once been whitelisted by Salesforce but had since expired. The attacker later bought it, turning it into a backdoor.

Screenshot of a dashboard showing the vulnerability on Salesforce AgentforceImage sourced from Noma

Forced Leak was possible due to a flaw in Agentforce. To understand how the flaw was exploited, we need to know about prompt engineering. It’s a process in designing and building AI agents that can handle customer queries securely and respond in a grounded manner, without sharing any unintended information. The instructions are written into the Agents, but in this case, the hidden instructions embedded in a field proved to be the vulnerability. Salesforce has built a capable prompt defence mechanism, called guardrails, to prevent exactly this kind of misuse. Yet the flaw that was exposed shows how even trusted platforms are not immune. Two factors made this attack possible: the ability to overload a descriptive field with vast amounts of instructions, and agents being able to access URLs that were previously whitelisted but no longer actively monitored.

Even though the vulnerability was short-lived, the incident highlights why robust security must be at the forefront for any organisation adopting AI technologies.

Sound familiar?

The mode of attack probably sounds quite familiar to anyone with a background in delivering services online, and has worked on hardening systems against CSS/XSS (cross-site scripting) or other code-injection attacks. And that’s because the technique, at its heart, is a tried and tested method for exposing a system:

  1. Trick the target into trusting the information you’ve given them
  2. Wait for them to use the information to exploit their vulnerabilities

For attack monitoring systems, detecting a breach is incredibly difficult, as there’s often no direct connection between the two. And for the attack to be successful, the attacker must have some knowledge of the victim’s systems - in this case, knowledge of a trusted legacy domain name.

So how can we prevent these attacks? Well, it means that AI and Agentic systems must be treated with the same proactive defensive techniques that any other system should have, i.e., checking your inputs, not mixing data and instructions, auditing legacy configurations, etc.

We must remain ever vigilant.

Salesforce's response

What really matters in a situation like this is how quickly the incident is responded to... and as expected, Salesforce acted fast!

They rolled out an update that enforces trusted URLs for both Agentforce and Einstein AI. In practice, this means agents can only access URLs that are explicitly approved in an allowlist, thereby blocking any unverified URLs.

They also locked down expired domains that attackers might attempt to repurchase and restricted Agent outputs to only connect to trusted, approved endpoints. Together, these changes closed the loophole and reinforced Salesforce’s wider security framework.

For business leaders, there’s a key lesson here. Even with strong safeguards, new risks will always emerge. The organisations that stay protected are those that have the right governance and processes in place to respond quickly, adapt rapidly, and maintain trust in their AI systems.

Why AI security and governance matters

From our perspective, the incident highlights a broader challenge: as AI becomes increasingly integrated into business processes, the associated risks also grow. Think about the kind of information AI agents can access - customer records, personal details, intellectual property, even sensitive patient data. This goes far beyond the security challenges businesses have managed over the last two or three decades. With AI, the attack surface has expanded, and organisations need to revisit long-standing protocols to make sure they’re ready for this new reality.

Unlike traditional chatbots that follow a rigid script, AI agents are dynamic and autonomous, which makes them powerful, and as we all know, with great power comes great responsibility. That’s why governance and compliance aren’t optional extras; they’re the foundation for trust. Customers, partners, and regulators will all expect confidence that these systems are safe, accountable, and properly monitored.

At Infomentum, we believe the key lies in a secure-by-design approach to AI adoption. This means:

  • Embedding governance from the outset
  • Performing regular audits, and
  • Maintaining continuous monitoring with timely patching.

In addition, we support organisations by designing tailored AI governance frameworks, delivering training after deployment, and providing managed services to ensure the safe and effective adoption of Agentforce. Just as importantly, we guide leaders to ask the critical question: Are we ready to manage AI securely and responsibly? By balancing innovation with rigorous compliance, we enable organisations to harness the full potential of AI without compromising trust.

Conclusion

Embedding AI across an organisation isn’t just about unlocking new opportunities; it’s about doing it in a way that’s safe, sustainable, and trustworthy. Security and governance aren’t box-ticking exercises; they’re ongoing practices that determine whether AI strengthens an organisation or exposes it to new risks. With the right safeguards and oversight in place, businesses can protect sensitive data, inspire confidence, and scale innovation with clarity.

We work with organisations to achieve that balance, helping them adopt AI with confidence, combining innovation with the assurance of strong governance from day one. If you'd like to speak with one of our experts, please don't hesitate to get in touch.

We’d love to hear your opinion on this post