Last verified: April 2026

AI Agent Customer Support Intake Workflow: Cited Swimlane (2026)

Customer support intake is the most-deployed process for AI agents at consumer scale. The canonical pattern is a tier-1 classifier and replier handling routine queries, with a strict escalation path for ambiguous, high-stakes, or out-of-scope cases. The constraint that shapes the process is volume: tier-1 must handle the long tail at near-zero marginal cost, and the human queue must absorb only the residual.

[Diagram: customer support intake swimlane. Pools: CUSTOMER, CUSTOMER SERVICE. Lanes inside the service pool: AI AGENT (TIER 1), HUMAN (TIER 2), CRM. Flow: inbound message → classify intent → confidence gateway → high: auto-resolve and reply; low: escalate, caught in the human lane → resolve and reply. CRM lane: update conversation log.]
Customer support intake. The customer pool sends a message flow into the service pool. Inside the service pool, the agent lane carries the classifier and the replier as bpmn:serviceTask steps. The exclusive gateway routes high-confidence cases to auto-resolution; low-confidence cases throw a bpmn:signalEvent caught in the human lane. The CRM lane carries the persistence step.

Source: pattern observed in Klarna AI assistant operator note (klarna.com / klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month, 27 February 2024, accessed April 2026). Shape conformance per OMG BPMN 2.0 §10.3, §10.5, §10.6, §10.7.
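In BPMN 2.0 serialization, the shapes named in the caption look roughly like the fragment below. The ids, names, and signal reference are illustrative assumptions, not taken from any published Klarna model; element types follow the cited OMG BPMN 2.0 sections.

```xml
<!-- Illustrative fragment only; ids and names are assumptions. -->
<bpmn:process id="supportIntake">
  <bpmn:serviceTask id="classify" name="Classify intent"/>
  <bpmn:exclusiveGateway id="confidenceGate" name="confidence?"/>
  <bpmn:serviceTask id="autoResolve" name="Auto-resolve and reply"/>
  <bpmn:intermediateThrowEvent id="escalate" name="Escalate">
    <bpmn:signalEventDefinition signalRef="escalationSignal"/>
  </bpmn:intermediateThrowEvent>
  <bpmn:sequenceFlow sourceRef="classify" targetRef="confidenceGate"/>
  <bpmn:sequenceFlow sourceRef="confidenceGate" targetRef="autoResolve" name="high"/>
  <bpmn:sequenceFlow sourceRef="confidenceGate" targetRef="escalate" name="low"/>
</bpmn:process>
```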

One named case study

Klarna's AI assistant launched at scale in February 2024. The operator note published on the Klarna press site reports that the assistant handled roughly two-thirds of customer service chats in its first month, equivalent to the workload of about 700 full-time agents. The note details the deployment as covering 23 markets and 35 languages, with OpenAI as the underlying model provider under the companies' partnership.

The same note is explicit about the carve-out: the assistant is permitted to handle refunds and returns within defined limits, and is explicitly not permitted to act on certain payment-related queries without human escalation. That carve-out is the cost-of-error gate from the human-vs-agent decision rubric, in concrete form. Subsequent reporting in 2024 and 2025 (covered in industry press) added detail on the escalation envelope and the human headcount that backstops the queue.

Source: klarna.com / klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month (Klarna press release, 27 February 2024, accessed April 2026).

Where the human gates sit

Two gates appear in the diagram. The first is the bpmn:exclusiveGateway after the classifier, routing cases by model confidence. The second is the payment-action gate (not drawn explicitly in the simplified diagram above) which throws an unconditional escalation regardless of confidence. The first is a soft gate; the second is the regulatory-and-cost-of-error gate from the rubric.
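The difference between the two gates can be made concrete in a few lines. This is a hedged sketch: the `PAYMENT_INTENTS` set and the 0.85 threshold are illustrative assumptions, not values from the Klarna note.

```python
# Hard gate: payment-action intents escalate unconditionally (illustrative set).
PAYMENT_INTENTS = {"chargeback", "payment_dispute"}
# Soft gate: confidence threshold (illustrative value).
CONFIDENCE_THRESHOLD = 0.85

def route_intent(intent: str, confidence: float) -> str:
    if intent in PAYMENT_INTENTS:
        # Regulatory / cost-of-error gate: fires regardless of model confidence.
        return "escalate"
    if confidence >= CONFIDENCE_THRESHOLD:
        return "auto_resolve"
    # Soft gate: low confidence falls through to the human lane.
    return "escalate"

print(route_intent("chargeback", 0.99))     # escalate (hard gate wins)
print(route_intent("refund_status", 0.92))  # auto_resolve
```

The ordering matters: the hard gate is checked first, so a confidently classified payment action still escalates. That is what "unconditional" means in the prose above.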

Where the handoffs sit

The escalation is the canonical bpmn:signalEvent handoff documented on the handoffs page. The reply from the agent back to the customer is a bpmn:messageFlow across the customer pool boundary. There is no agent-to-agent handoff in this simplified intake; in production, intent-specific sub-agents add an internal A2A handoff before the response is composed.
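Operationally, the signal-event handoff often reduces to a publish/consume pair between lanes. A minimal sketch, assuming an in-process queue and a hypothetical payload shape (`ticket_id`, `reason` are not field names from any cited source):

```python
import queue

# The human lane's inbox; in production this would be a durable queue or topic.
human_queue: queue.Queue = queue.Queue()

def throw_escalation(ticket_id: str, reason: str) -> None:
    # Analogue of the bpmn:signalEvent throw in the agent lane.
    human_queue.put({"ticket_id": ticket_id, "reason": reason})

def human_lane_catch() -> dict:
    # Analogue of the catch event at the head of the human lane.
    return human_queue.get()

throw_escalation("T-1042", "low_confidence")
print(human_lane_catch())  # {'ticket_id': 'T-1042', 'reason': 'low_confidence'}
```

The signal semantics (fire-and-forget broadcast, no reply expected on the same flow) are what distinguish this handoff from the bpmn:messageFlow reply that crosses back to the customer pool.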

Workforce-impact note

The Klarna note frames the throughput as equivalent to the workload of 700 full-time agents. Whole-role replacement claims at this scale belong on a calculator built for the question; aijobimpactcalculator.com covers the defensible methodology (task-level, not role-level).
