Boxed.ai

Agentic-AI controls for financial services SMEs

Put your AI agents in a box — before the regulator does.

Boxed.ai sits between your AI agents and the tools they touch — CRM, email, payments, files — and enforces least-privilege permissions, human approval gates, kill-switches and tamper-evident audit logs. Built for the controls your second line and external auditors actually look for.

Sits at: the execution boundary
Built for: FS firms, 5–500 staff
Aligned with: FCA, ISO 42001, EU AI Act

The problem

Agents have moved from bad text to bad actions.

The risk surface in financial services is no longer hallucinated answers. It is unauthorised actions taken on the firm's behalf by software the firm cannot fully see.

Indirect prompt injection

Hostile instructions hidden inside emails, PDFs and CRM notes are silently followed by an agent. The NCSC has confirmed that prompt injection cannot be fully patched at the model layer.

Over-permissioned tools

A coding agent with write access to production. A sales agent that can send emails on behalf of the firm. Excessive agency turns small mistakes into reportable incidents.

Connector-driven leaks

An agent reading a client mailbox forwards data through a chained tool call. Without an execution-time control plane, you discover it from the client.

Tool poisoning and supply chain

A third-party MCP plug-in or tool description is modified upstream. The agent now does something the policy never approved.

Opaque behaviour

No tamper-evident record linking prompt → retrieved context → tool call → approver → outcome. Internal audit cannot evidence what happened, when, or why.

Five failure modes. One control plane to address them.

The control plane

Three layers of defence, mapped to the controls your auditors already understand.

Constrain

Permissions, gates and kill-switches

Capability-scoped tool access. Human approval required for the actions that matter. A single switch that halts every agent in scope, instantly.

  • Least-privilege tool wrappers
  • Step-up approvals on sensitive actions
  • Per-agent and global kill-switch
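The shape of this layer can be sketched in a few lines. Everything here is illustrative, not Boxed.ai's actual API: `ToolWrapper`, `KillSwitch` and the action names are hypothetical stand-ins for a capability-scoped wrapper, a step-up approval gate, and a global halt.

```python
import threading


class KillSwitch:
    """Global halt: once tripped, every wrapped tool call is refused."""

    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        self._tripped.set()

    @property
    def tripped(self):
        return self._tripped.is_set()


class ApprovalRequired(Exception):
    """Raised when a sensitive action lacks a named human approver."""


class ToolWrapper:
    """Capability-scoped wrapper around a raw tool function.

    allowed_actions: the least-privilege whitelist for this agent.
    sensitive_actions: the subset that requires step-up approval.
    """

    def __init__(self, tool_fn, allowed_actions, sensitive_actions, kill_switch):
        self._tool_fn = tool_fn
        self._allowed = set(allowed_actions)
        self._sensitive = set(sensitive_actions)
        self._kill = kill_switch

    def call(self, action, approver=None, **params):
        if self._kill.tripped:
            raise PermissionError("kill-switch engaged: all agent actions halted")
        if action not in self._allowed:
            raise PermissionError(f"action '{action}' outside this agent's scope")
        if action in self._sensitive and approver is None:
            raise ApprovalRequired(f"action '{action}' needs a named approver")
        return self._tool_fn(action, **params)
```

The key design point: the deny decision lives outside the model. The agent can ask for anything; only whitelisted actions execute, and sensitive ones execute only with an approver identity attached.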

Withstand

Tamper-evident logs and observability

Every prompt, retrieved context item, tool call, parameter, approver identity, outcome and cost is captured in an append-only log built for evidence.

  • Hash-chained audit log
  • End-to-end agent traces
  • Exception MI for second-line review
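"Hash-chained" has a precise meaning: each log record embeds the hash of the record before it, so editing or deleting any entry breaks every hash that follows. A minimal sketch of the idea, with hypothetical names (this is not Boxed.ai's implementation):

```python
import hashlib
import json


class AuditLog:
    """Append-only, hash-chained log. Each record carries the hash of the
    previous record, so any later tampering breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self._records = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> str:
        record = {"prev": self._last_hash, "event": event}
        payload = json.dumps(record, sort_keys=True).encode()
        record_hash = hashlib.sha256(payload).hexdigest()
        self._records.append((record_hash, record))
        self._last_hash = record_hash
        return record_hash

    def verify(self) -> bool:
        prev = self.GENESIS
        for record_hash, record in self._records:
            if record["prev"] != prev:
                return False  # chain link broken
            payload = json.dumps(record, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record_hash:
                return False  # record content altered after the fact
            prev = record_hash
        return True
```

An auditor re-running `verify` over the exported log can confirm no record was altered or removed since it was written, which is what makes the log evidence rather than just telemetry.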

Prevent

Pre-deployment risk and policy templates

Risk score new agents before they go live. Apply policy templates aligned to FCA Consumer Duty, SM&CR and ISO 42001 — without writing them from scratch.

  • Policy templates for FS firms
  • Risk scoring on connectors
  • Content sanitisation for inputs
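Connector risk scoring can be as simple as a weighted rubric over declared capabilities. The weights, capability names and banding below are purely illustrative assumptions, not Boxed.ai's actual model:

```python
# Hypothetical capability weights; a real rubric would be firm-specific.
CAPABILITY_WEIGHTS = {
    "read_pii": 3,
    "write_external": 4,  # can act outside the firm: send mail, move money
    "write_internal": 2,
    "read_internal": 1,
}


def score_connector(capabilities, third_party=False):
    """Score a connector before go-live; higher means riskier.

    Returns (score, band), where the band drives the approval path:
    a 'high' connector would need second-line sign-off before launch.
    """
    score = sum(CAPABILITY_WEIGHTS.get(c, 2) for c in capabilities)
    if third_party:
        score += 3  # supply-chain exposure, e.g. an upstream MCP plug-in
    band = "high" if score >= 7 else "medium" if score >= 4 else "low"
    return score, band
```

A read-only internal connector lands in the low band; a third-party connector that reads PII and can send externally lands in the high band and is gated accordingly.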

Why financial services SMEs

Enterprise-grade AI controls, sized for firms that don't have an enterprise budget.

Wealth managers, IFAs, brokers, fintech lenders and accountancy practices are deploying AI agents into client mail, advice workflows and back-office tools. The regulatory bar is the same as for the largest banks. The internal capacity to engineer, audit and govern those agents is not.

Boxed.ai gives smaller firms the same execution-boundary controls and audit evidence that a tier-one bank would build in-house — as a service, with policy templates already mapped to the rules you report against.

Explore by firm type

A working example

Email-triage agent at a 40-person IFA

An LLM agent reads inbound client mail, drafts replies and proposes next actions. Without a control layer the agent can send mail, attach files, and act on instructions hidden in client signatures.

CONSTRAIN Email tool exposed in draft-only mode. Send requires named approver.
WITHSTAND Every draft, prompt and retrieved attachment hash-chained into the audit log.
PREVENT Inbound content sanitised; injection patterns flagged before reaching the model.
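The PREVENT step above can be sketched as a pattern-based sanitiser run on inbound mail before it reaches the model. The patterns and function name are illustrative; a production deployment would rely on a maintained, regularly updated ruleset rather than three regexes:

```python
import re

# Illustrative injection signatures only — not an exhaustive ruleset.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"disregard your system prompt", re.I),
    re.compile(r"you are now (a|an) ", re.I),
]


def sanitise_inbound(text: str):
    """Flag and redact suspected injection phrases before the model sees them.

    Returns (clean_text, findings): findings go to the audit log and to
    second-line MI; clean_text is what the agent actually receives.
    """
    findings = []
    clean = text
    for pattern in INJECTION_PATTERNS:
        findings.extend(m.group(0) for m in pattern.finditer(text))
        clean = pattern.sub("[REDACTED: suspected injection]", clean)
    return clean, findings
```

In the IFA scenario, an instruction hidden in a client's signature block is redacted and flagged before the triage agent ever reads the message, and the finding itself is logged as evidence.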

Standards alignment

Built around the frameworks your board, regulator and auditor already use.

  • FCA Consumer Duty

    Demonstrate good outcomes and avoid foreseeable harm from AI-driven decisions.

  • SM&CR

    Evidence senior-manager oversight of AI deployments and accountable decisions.

  • PS21/3 Operational Resilience

    Identify important business services exposed to agentic AI and bound the impact.

  • ISO/IEC 42001

    Operate an AI management system with the documented controls auditors expect.

  • NIST AI RMF 1.0

    Map, measure, manage and govern AI risk with consistent artefacts.

  • EU AI Act

    Meet GPAI obligations and prepare for broad application from August 2026.

Boxed.ai supports alignment with these frameworks. It is not itself certified to them.

Show your auditor what your agents did, before they ask.

Thirty minutes. We'll walk you through a working email-triage agent inside Boxed.ai and the evidence pack it produces.