
Conducting DPIAs for Agentforce in UK policing

Written by Shaha Alam - AI and Agentic Solutions Lead, Feb 2, 2026

Introducing AI-driven solutions like Agentforce into UK policing is not a decision to take lightly. These powerful technologies have the potential to transform the way forces operate, from back-office efficiency through to frontline support. But with that power come new risks, particularly in how sensitive personal data is processed, stored, and interpreted. While these risks are real, the potential to deliver faster, fairer, and more effective policing outcomes makes their careful navigation worthwhile.

In an environment governed by high standards of public accountability, ethical scrutiny, and strict legal obligations under the UK GDPR, Data Protection Act 2018, and Law Enforcement Processing provisions, a Data Protection Impact Assessment (DPIA) is not just a formality - it's essential. It provides a defensible framework for making informed decisions, protecting officers and the public alike.

Before you dive in, I want to preface that this article does not seek to provide easy answers. Instead, it aims to highlight the key questions that professionals - whether in IT, governance, or operational delivery - should be asking when considering Agentforce, AI, and agentic solutions in the policing context.

An intro to Agentforce

For those who have not heard of Salesforce’s latest innovation in AI, this breakdown is for you!

Agentforce refers broadly to a class of solutions that use AI, particularly large language models (LLMs), to create digital assistants that can understand what people ask and help them take action. These agents might retrieve information, summarise content, support workflows, or assist human decision-making across both the Salesforce ecosystem and custom policing environments. A few examples include:

  • A service agent that can initiate password resets, freeing up officers from the routine Monday-morning flurry of requests.

  • A knowledge agent that can engage in a conversational dialogue with the public, helping them find information on a website more easily and reducing calls to the service desk.

  • An officer-support agent that uses established crime counting and reporting guidelines to assist with crime classification, helping officers pursue appropriate lines of inquiry and record information accurately.

While the potential benefits are clear - improved speed, reduced manual burden, better access to information - so too are the risks. In policing, the data involved is often highly sensitive, and the stakes, from evidential integrity to public trust, are significant.

The DPIA: more than a compliance exercise

Many data protection frameworks require organisations to assess the risks of high-risk data use before a system is introduced. In the UK, a Data Protection Impact Assessment, or DPIA, is the mechanism used to consider how a project might affect people’s personal data, identify potential risks, and put safeguards in place early.

Under Article 35 of the UK GDPR, a DPIA is required where processing is likely to create a high risk to individuals’ rights and freedoms, particularly when new technology is involved. For law enforcement agencies, that threshold is often met.

In a policing context, a DPIA goes beyond compliance. It acts as a practical tool for anticipating risk, reducing harm, and providing assurance to officers, leaders, and the public. It should be embedded in early design and procurement activity, rather than treated as a retrospective check.

Attention should extend beyond what a system can do to what it ought to do, and the safeguards that must sit around it.

The critical questions you should be asking

Now that you’re up to speed, below are key thematic areas that should structure any DPIA for an Agentforce-style solution in policing. These are not exhaustive, but they reflect the most critical concerns in today’s governance environment.

1. Purpose and necessity

  • What is the specific operational need this AI agent is intended to meet?
  • Is the use of Agentforce proportionate to the problem?
  • Could the same outcome be achieved through less invasive means?
  • Is this deployment aligned with a defined public task or policing priority?

Without clear necessity and proportionality, the legal basis for processing may not stand up to scrutiny. Ambiguous or vague purposes create risk and invite mission creep.

Think in terms of ROI: reduction in cost, reduction in time, and compliance with auditing requirements.

2. Nature and sensitivity of the data

  • What categories of data will be processed (e.g. personal, special category, criminal offence data)?
  • Will the agent have access to operational policing systems (e.g. Records Management Systems (RMS) such as Niche, Athena, or HCLTech PROTECT; the Police National Computer (PNC); custody records)?
  • Will any unstructured data (e.g. officer notes, body-worn video transcripts) be included?
  • How is data minimised and controlled at source?

The nature of the data dictates the level of risk. Any engagement with criminal offence data triggers enhanced obligations under Part 3 of the DPA 2018.
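
To make "data minimised and controlled at source" a little more concrete, here is a minimal sketch of an allow-list filter applied before any record reaches an agent. The field names and allow-list are my own illustrative assumptions, not taken from any real RMS schema or from Agentforce itself.

ALLOWED_FIELDS = {"incident_id", "incident_type", "reported_date", "status"}

def minimise_record(record: dict) -> dict:
    # Return only the fields the agent strictly needs; everything else
    # (names, addresses, special category and offence data) is dropped at source.
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw_record = {
    "incident_id": "INC-0001",          # illustrative identifier
    "incident_type": "burglary",
    "reported_date": "2026-01-15",
    "status": "open",
    "victim_name": "...",               # personal data the agent does not need
    "suspect_pnc_reference": "...",     # criminal offence data: excluded at source
}

print(minimise_record(raw_record))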

3. Lawful basis and compliance with Part 3 (Law Enforcement Processing)

  • What is the clear lawful basis for processing under the Law Enforcement provisions?
  • Does this processing meet the requirement of being strictly necessary for law enforcement purposes?
  • Has the solution been assessed against internal and national governance standards (e.g. Information Commissioner’s Office (ICO) guidance, National Police Chiefs’ Council (NPCC) digital strategy)?
  • Have senior information risk owners (SIROs) or data protection officers (DPOs) been involved early?

Policing data cannot be processed under general GDPR bases - Part 3 requirements are specific, and failure to meet them can invalidate the entire deployment.

4. Transparency and explainability

  • Can the system’s outputs be explained and justified to internal users and external stakeholders?
  • What mechanisms exist to allow officers to challenge or override AI-generated content?
  • If public-facing, how is the use of AI made visible to individuals?
  • Are audit logs in place for accountability and traceability?

In law enforcement, decisions can have life-changing consequences. Systems must be explainable - not only to users, but in courtrooms, oversight bodies, and to the public.
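
Audit logging is one of the few areas where a DPIA question translates almost directly into engineering. As a purely illustrative sketch (the field names are my own assumptions, not an Agentforce or Salesforce API), an audit record for each interaction might look like this:

import json
from datetime import datetime, timezone

def log_agent_interaction(path, *, user_id, prompt, retrieved_sources, response, overridden_by_human):
    # Append one JSON line per interaction: who asked, what the agent was shown,
    # what it answered, and whether a human overrode the output.
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": prompt,
        "retrieved_sources": retrieved_sources,  # supports traceability of outputs
        "response": response,
        "overridden_by_human": overridden_by_human,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_agent_interaction(
    "agent_audit.jsonl",
    user_id="officer-1234",              # hypothetical identifier
    prompt="Summarise the policy on body-worn video retention",
    retrieved_sources=["knowledge-article-42"],
    response="...",
    overridden_by_human=False,
)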

5. Human oversight and governance controls

  • Are human users involved in all decisions with legal or operational significance?
  • What monitoring processes are in place to detect and respond to errors, bias, or degradation in performance?
  • Who is responsible for oversight of the AI agent’s behaviour and decision-making boundaries?
  • How are edge cases, misuse, or unintended outputs handled?

AI systems should not operate without meaningful human oversight. Governance must be clear, consistent, and backed by accountability at every level.
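
To show what "meaningful human oversight" can look like in practice, here is a minimal, hypothetical gate: any agent suggestion in a category with legal or operational significance is held until a human has signed it off. The categories and flow are illustrative assumptions, not a product feature.

SIGNIFICANT_ACTIONS = {"charging_guidance", "risk_rating", "case_closure"}  # illustrative categories

def execute_agent_action(action_type, payload, human_approved):
    # Hold any action with legal or operational significance until a human
    # has explicitly reviewed and approved the agent's suggestion.
    if action_type in SIGNIFICANT_ACTIONS and not human_approved:
        return f"HELD: '{action_type}' requires documented human approval before execution."
    return f"EXECUTED: {action_type} with {payload}"

print(execute_agent_action("charging_guidance", {"case": "INC-0001"}, human_approved=False))
print(execute_agent_action("password_reset", {"user": "officer-1234"}, human_approved=False))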

6. Deployment context and use case risk assessment

It’s important to note that not all Agentforce deployments carry the same risk. For example:

  • A public-facing AI assistant answering general enquiries about police procedures = low to moderate risk
  • An internal summarisation agent assisting with report writing = moderate risk
  • An AI agent querying operational databases or case files to assist in triage = high risk
  • An AI tool supporting charging decisions or risk assessments = very high risk

So based on this, you can ask:

  • What is the consequence if the AI is wrong?
  • Is this a pilot, and if so, how are risks contained?
  • What data will be used for training, and is it representative?
  • Are officers or staff adequately trained in how to use and interpret outputs?

The context of use defines the safeguards required. Pilots should not be treated as risk-free. Even limited trials can cause harm if not properly governed.
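
One way to make this tiering operational is to bind each risk level to the minimum safeguards the DPIA expects before go-live. The tiers and safeguard names below are assumptions offered for discussion, not a recognised standard.

SAFEGUARDS_BY_TIER = {
    "low":       ["audit logging"],
    "moderate":  ["audit logging", "sampled human review of outputs"],
    "high":      ["audit logging", "human review of every output", "restricted data access"],
    "very_high": ["audit logging", "human decision remains final", "restricted data access",
                  "SIRO/DPO sign-off", "ongoing bias and performance monitoring"],
}

def required_safeguards(tier):
    # Look up the minimum controls a DPIA might require before go-live.
    return SAFEGUARDS_BY_TIER[tier]

print(required_safeguards("high"))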

7. Data handling, location, and retention

One of the most overlooked, but vital, areas of the DPIA is how data is stored, where it is processed, and for how long.

  • Where will the data be processed geographically?
    • Will any data be transferred outside the UK or EEA?
    • Are any third-party providers or cloud services involved? If so, do they have appropriate UK-approved transfer mechanisms in place (e.g. an International Data Transfer Agreement (IDTA) or Standard Contractual Clauses (SCCs))?
    • Are there any risks related to jurisdiction (e.g. foreign access requests or data sovereignty laws)?
  • Who has access to the data?
    • Are processors or sub-processors involved?
    • Have proper contractual controls and DPAs been signed?
  • What are the data retention and deletion policies?
    • Will the AI model retain user data (e.g. for training or optimisation)?
    • Is the data cached, stored, or logged? For how long?
    • Are deletion schedules aligned with policing retention requirements under PIRM (the 2023 Code of Practice for Policing Information and Records Management) or local retention schedules?
  • Is personal data used to train or fine-tune the model?
    • If yes, has the appropriate consent or legal basis been secured?
    • If no, how is this technically and contractually enforced?

Data sovereignty, transfer risks, and indefinite retention are all major concerns for policing. These questions are non-negotiable when evaluating external AI vendors or cloud-based deployments.
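
As a simple illustration of how retention questions become concrete controls, the sketch below purges agent interaction logs once they fall outside an agreed retention period. The 90-day figure is purely illustrative; real periods must come from PIRM or your local retention schedule.

from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=90)   # hypothetical figure, for illustration only

def purge_expired(entries):
    # Keep only log entries still inside the agreed retention period.
    cutoff = datetime.now(timezone.utc) - RETENTION_PERIOD
    return [e for e in entries if datetime.fromisoformat(e["timestamp"]) >= cutoff]

logs = [
    {"timestamp": "2025-10-01T09:00:00+00:00", "prompt": "..."},              # older entry
    {"timestamp": datetime.now(timezone.utc).isoformat(), "prompt": "..."},   # recent entry
]
print(len(purge_expired(logs)))   # the older entry is dropped once it passes the retention period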

8. Use case risk and contextual sensitivity

Different use cases carry different levels of inherent risk. Some common examples:

  • Low-risk: Agentforce answering internal FAQs or policy documents
  • Medium-risk: AI summarising internal reports or meeting minutes
  • High-risk: Triage support for crime reports, identifying case priority
  • Very high-risk: Suggesting arrest decisions, risk ratings, or charging guidance

I suggest you ask:

  • What would be the impact of an error or failure?
  • How visible are the AI’s outputs to operational users?
  • Are sufficient mitigations in place for each risk level?

Context defines risk. A deployment in the wrong setting, even if technically sound, can cause disproportionate harm.

Enabling safe and responsible innovation

Deploying Agentforce or similar AI solutions within policing has clear potential. But that potential can only be realised safely if rigorous, structured, and well-informed DPIAs are completed at the right time - early, thoroughly, and with full organisational engagement.

DPIAs should not be viewed as bureaucratic hurdles. They are central to building trust, maintaining legal compliance, and ensuring that innovation in policing serves the public interest - without compromising rights or safety.

The questions above won’t give you the answer, but they will guide you to ask the right things before harm occurs, not after.

Moving forward with confidence

If you're currently assessing a use case for Agentforce or another AI deployment in your force, consider convening a cross-functional working group early - including DPOs, legal advisors, operational leads, and technical SMEs. The cost of early scrutiny is far lower than the cost of reactive remediation.

Our team brings a rare combination of deep technical expertise in Agentforce, automation, and integration, alongside a proven track record in policing and public sector deployments. Having worked with a number of UK police forces, we understand both the promise and the challenges of Agentforce in policing. So please don’t hesitate to get in touch - we can help you frame the right questions, as well as provide the answers you need.
