PROCESS / ENGINEERING
Last verified: April 2026

AI Agent Engineering Task Flow: Cited Swimlane (2026)

Coding agents have evolved from autocomplete to whole-task execution: take an issue, produce a pull request, iterate on test failures, and submit for human review. The agent loop is non-deterministic; the process around it (issue intake, CI, code review, merge) is BPMN-deterministic. The canonical pattern wraps the agent loop as a bpmn:callActivity and keeps the wrapper process correct.

[Swimlane diagram: lanes Engineer (human), AI coding agent, Repository / CI. Flow: Issue → Brief and scope → Agent loop (read, write, run tests; bpmn:callActivity) → Open pull request → CI runs tests → green? gateway → yes: Human review → Approve and merge; no: back into the agent loop.]
Engineering task flow. Three lanes: human engineer (brief, review, merge), AI coding agent (the wrapped loop), repository / CI (tests, activity log). The agent loop is a bpmn:callActivity with a thicker border per BPMN 2.0 visual convention. CI failure routes back into the loop; CI success routes to human review.
Source: pattern observed in Cognition Devin engineering blog, cognition.ai/blog (accessed April 2026); Anthropic Claude Code documentation, docs.claude.com / claude-code (accessed April 2026); GitHub Copilot Workspace technical preview documentation (accessed April 2026). Shape conformance per OMG BPMN 2.0 §10.3, §10.4.4 (call activity), §10.6.
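
Serialised to BPMN 2.0 XML, the wrapper is small. A minimal sketch with illustrative element names and IDs (not taken from any vendor's model); the lane set and BPMNDI diagram interchange are omitted:

<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             targetNamespace="https://example.org/agent-task-flow">
  <!-- The non-deterministic agent loop lives in its own called process;
       its internals (read, write, run tests) are deliberately left unmodelled. -->
  <process id="agentLoop" isExecutable="false"/>

  <!-- The deterministic wrapper: brief, loop, PR, CI, review, merge. -->
  <process id="engineeringTaskFlow" isExecutable="false">
    <startEvent id="issueOpened" name="Issue"/>
    <userTask id="briefAndScope" name="Brief and scope"/>
    <callActivity id="runAgentLoop" name="Agent loop" calledElement="agentLoop"/>
    <serviceTask id="openPullRequest" name="Open pull request"/>
    <serviceTask id="ciRunsTests" name="CI runs tests"/>
    <exclusiveGateway id="green" name="green?"/>
    <userTask id="humanReview" name="Human review"/>
    <userTask id="approveAndMerge" name="Approve and merge"/>
    <endEvent id="merged"/>
    <sequenceFlow id="f1" sourceRef="issueOpened" targetRef="briefAndScope"/>
    <sequenceFlow id="f2" sourceRef="briefAndScope" targetRef="runAgentLoop"/>
    <sequenceFlow id="f3" sourceRef="runAgentLoop" targetRef="openPullRequest"/>
    <sequenceFlow id="f4" sourceRef="openPullRequest" targetRef="ciRunsTests"/>
    <sequenceFlow id="f5" sourceRef="ciRunsTests" targetRef="green"/>
    <sequenceFlow id="greenYes" name="yes" sourceRef="green" targetRef="humanReview"/>
    <sequenceFlow id="greenNo" name="no" sourceRef="green" targetRef="runAgentLoop"/>
    <sequenceFlow id="f6" sourceRef="humanReview" targetRef="approveAndMerge"/>
    <sequenceFlow id="f7" sourceRef="approveAndMerge" targetRef="merged"/>
  </process>
</definitions>

Leaving the called agentLoop process empty is the point of the pattern: the number of read-write-test iterations is the agent's business, while every element of the wrapper stays deterministic and reviewable.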

Named case studies

Cognition's Devin (technical preview March 2024, generally available December 2024) is the most-cited engineering coding agent. The Cognition engineering blog publishes a series of posts on the loop semantics, the planning step, the IDE-as-tool integration, and the long-running task model. Anthropic's Claude Code (released in research preview February 2025) is the second canonical example; the Claude Code documentation describes the same wrapper process (read codebase, propose change, run tests, request human review) with explicit support for the human-review-on-PR step. GitHub Copilot Workspace (technical preview 2024 to 2025) is the third example with the same shape.

All three deployments place the human review at the pull request, not inside the loop. The loop is allowed to iterate freely on test failures; the human reviews only the final candidate. The pattern matches Anthropic's “Building Effective Agents” (Schluntz 2024) workflow-vs-agent distinction: the agent loop is a service step inside a workflow, not a workflow itself.

Where the human gates sit

Two gates. The first is the brief (human framing of the task before the agent starts); the second is the PR review before merge. The PR review is the strict gate; merge into main is the irreversible action and warrants human sign-off regardless of CI green. Both Devin and Claude Code expose this gate explicitly in their documentation.
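
One way to model the strictness of the second gate in the XML sketch above: condition the yes path and make the loop-back the gateway's default flow, so the only route to Approve and merge runs through the Human review userTask. The expression below is an assumption (engine-specific), and xmlns:xsi is assumed declared on the definitions element:

<exclusiveGateway id="green" name="green?" default="greenNo"/>
<sequenceFlow id="greenYes" name="yes" sourceRef="green" targetRef="humanReview">
  <!-- Only a passing CI run reaches the human review gate -->
  <conditionExpression xsi:type="tFormalExpression">${ciResult == "success"}</conditionExpression>
</sequenceFlow>
<!-- Default flow: anything else routes back into the agent loop -->
<sequenceFlow id="greenNo" name="no" sourceRef="green" targetRef="runAgentLoop"/>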

Where the handoffs sit

Three handoffs. Human-to-agent at the brief (sequence flow into the agent lane). Agent-to-CI at PR creation (sequence flow into the system lane). CI-to-human or back-to-agent at the test result (gateway-routed sequence flow). All within one pool because the engineer, the agent, and the CI are inside the same organisation; an external CI provider would push the CI step into a separate pool with a message flow.
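
The last sentence is visible directly in the XML: with an external provider, handoffs two and three become message flows in a collaboration, with the vendor as a second participant. A sketch with illustrative IDs; ciResultReceived stands for a hypothetical message catch event added before the green? gateway in the engineering process:

<collaboration id="withExternalCI">
  <participant id="engineeringPool" name="Engineering" processRef="engineeringTaskFlow"/>
  <participant id="ciVendorPool" name="External CI provider" processRef="ciVendorProcess"/>
  <!-- Handoff 2: opening the PR notifies the vendor's pool (message flow, not sequence flow) -->
  <messageFlow id="prWebhook" sourceRef="openPullRequest" targetRef="ciVendorPool"/>
  <!-- Handoff 3: the test result returns as a message caught before the green? gateway -->
  <messageFlow id="ciStatus" sourceRef="ciVendorPool" targetRef="ciResultReceived"/>
</collaboration>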

Workforce-impact note

Coding agents have not (as of April 2026) replaced engineering roles at measurable scale; published deployments report task-time savings on well-bounded tickets and increased throughput per engineer. The engineer-as-reviewer pattern shifts senior engineering time from writing to reading. For the methodology, see aijobimpactcalculator.com.

Related pages