Agent Patterns

From augmented LLMs to autonomous agents — Anthropic's taxonomy of agentic system patterns, building blocks, and when to use each

The Augmented LLM

Every agentic system starts from the same building block: an LLM enhanced with retrieval, tools, and memory. The model actively generates its own search queries, selects appropriate tools, and determines what information to retain. This augmented LLM is the atom from which all patterns are composed.
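As a concrete illustration, the augmented LLM can be pictured as a model bundled with its augmentations. Everything below (the `AugmentedLLM` class and the lambda stand-ins for the model, retriever, and tools) is a hypothetical sketch, not an API from the source:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AugmentedLLM:
    """Illustrative bundle: a model plus retrieval, tools, and memory."""
    generate: Callable[[str], str]        # the base model (stubbed below)
    retrieve: Callable[[str], list]       # retrieval the model can invoke
    tools: dict                           # callable tools, keyed by name
    memory: list = field(default_factory=list)

    def run(self, prompt: str) -> str:
        # The model sees retrieved context alongside the prompt,
        # and decides what to retain in memory.
        context = self.retrieve(prompt)
        self.memory.append(prompt)
        return self.generate(f"{prompt} | context={context}")

# Toy stand-ins for demonstration only.
llm = AugmentedLLM(
    generate=lambda p: f"answer({p})",
    retrieve=lambda q: [f"doc about {q}"],
    tools={"search": lambda q: []},
)
result = llm.run("pricing")
```

In a real system each lambda would be a model or retrieval call; the shape of the composition is the point, not the stubs.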

[Diagram: "Agent Patterns: From Workflows to Autonomous Agents" — the Augmented LLM (model + retrieval + tools + memory), the five workflow patterns (prompt chaining, routing, parallelization, orchestrator–workers, evaluator–optimizer), and the autonomous agent loop (observe → decide → execute → evaluate). Source: Anthropic, "Building Effective Agents" (Dec 2024)]


Workflows vs. Agents

Anthropic draws a clear distinction between two categories of agentic systems:

| Dimension | Workflows | Agents |
| --- | --- | --- |
| Control flow | Predefined code paths orchestrate LLM calls | LLM dynamically directs its own process and tool usage |
| Predictability | High — same input follows same path | Variable — model decides next steps based on context |
| Best for | Well-defined tasks with consistent structure | Open-ended problems with unpredictable step counts |
| Trade-off | Less flexible, more reliable | More capable, higher cost, potential compounding errors |

The key insight: don’t reach for agents when a workflow suffices. Workflows offer predictability and consistency for well-defined tasks. Agents are the better option when flexibility and model-driven decision-making are needed at scale.


Five Workflow Patterns

1. Prompt Chaining

Decompose a task into sequential steps. Each LLM call processes the output of the previous one, with optional programmatic validation gates between steps.

Structure: LLM → gate → LLM → gate → LLM

When to use: Tasks decomposable into fixed subtasks where you trade latency for higher accuracy.

Example: Generate marketing copy → validate tone → translate to target language.
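The chain above can be sketched in a few lines; `call_llm` and `gate_tone` are hypothetical stand-ins for a real model call and a real validation gate:

```python
# Hypothetical sketch of prompt chaining with a programmatic gate
# between steps. `call_llm` just echoes its prompt for demonstration.
def call_llm(prompt: str) -> str:
    return f"[{prompt}]"

def gate_tone(copy: str) -> bool:
    # Programmatic check between steps, e.g. banned words or length limits.
    return "spam" not in copy.lower()

def chain(brief: str) -> str:
    copy = call_llm(f"Write marketing copy for: {brief}")
    if not gate_tone(copy):
        raise ValueError("tone gate failed; stop before translating")
    return call_llm(f"Translate to French: {copy}")

result = chain("a note-taking app")
```

The gate is ordinary code, not an LLM call — that is what makes the chain's failure points cheap to check.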

2. Routing

Classify the input first, then direct it to a specialized handler. Each handler can have its own prompt, tools, and even model.

Structure: Input → classifier → specialized handler A | B | C

When to use: Complex tasks with distinct categories requiring different handling.

Example: Customer service — route general inquiries, refund requests, and technical issues to different specialized prompts.
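A minimal routing sketch, with a keyword classifier standing in for what would normally be an LLM classification call (all names here are illustrative):

```python
# Hypothetical sketch: classify first, then dispatch to a specialized
# handler. Each handler could use its own prompt, tools, or even model.
def classify(message: str) -> str:
    # Stand-in classifier; in practice this is itself an LLM call.
    if "refund" in message:
        return "refund"
    if "error" in message or "crash" in message:
        return "technical"
    return "general"

HANDLERS = {
    "refund": lambda m: f"refund-flow({m})",
    "technical": lambda m: f"tech-support({m})",
    "general": lambda m: f"general-faq({m})",
}

def route(message: str) -> str:
    return HANDLERS[classify(message)](message)

result = route("I want a refund")
```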

3. Parallelization

Run multiple LLM calls simultaneously, then aggregate. Two variants:

  • Sectioning — split independent subtasks across parallel calls
  • Voting — run the same task multiple times for diverse perspectives

When to use: Speed through division, or higher confidence through multiple perspectives.

Example: Code review — one call checks for security vulnerabilities, another for performance, a third for style.
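The sectioning variant can be sketched with a thread pool; `review` is a hypothetical stand-in for an aspect-focused LLM call:

```python
# Hypothetical sketch of sectioning: independent review aspects run
# concurrently, then the results are aggregated in order.
from concurrent.futures import ThreadPoolExecutor

def review(aspect: str, code: str) -> str:
    # Stand-in for an LLM call focused on a single aspect.
    return f"{aspect}-report({code})"

def parallel_review(code: str) -> list:
    aspects = ["security", "performance", "style"]
    with ThreadPoolExecutor() as pool:
        # pool.map preserves input order in its results.
        return list(pool.map(lambda a: review(a, code), aspects))

reports = parallel_review("diff-123")
```

The voting variant is the same shape with one aspect repeated N times and a majority (or consensus) step in place of simple aggregation.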

4. Orchestrator–Workers

A central LLM decomposes the task dynamically, delegates subtasks to worker LLMs, then synthesizes results. Unlike parallelization, the subtasks are not predefined — the orchestrator determines them based on the specific input.

Structure: Input → orchestrator → [worker₁, worker₂, ...workerₙ] → orchestrator → output

When to use: Complex tasks where the required subtasks can’t be predicted in advance.

Example: Multi-file code changes where the orchestrator identifies which files need modification and delegates each to a worker.
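A sketch of the orchestrate → delegate → synthesize flow; `orchestrate`, `worker`, and `synthesize` are hypothetical stubs for what would each be LLM calls. The key difference from parallelization is that the plan is produced at runtime, not hardcoded:

```python
# Hypothetical sketch: the orchestrator decides the subtasks at runtime
# (here, which files to touch), delegates each to a worker, then merges.
def orchestrate(task: str) -> list:
    # Stand-in planning call; a real orchestrator LLM would emit this plan
    # based on the specific input.
    return [f"edit {f} for: {task}" for f in ("api.py", "models.py")]

def worker(subtask: str) -> str:
    return f"patch({subtask})"

def synthesize(results: list) -> str:
    return " + ".join(results)

plan = orchestrate("rename User.email field")
final = synthesize([worker(s) for s in plan])
```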

5. Evaluator–Optimizer

One LLM generates; another evaluates and provides feedback. The loop continues until quality criteria are met.

Structure: Generator ⇄ Evaluator (loop until pass)

When to use: Clear evaluation criteria exist, and iterative refinement yields measurable improvement.

Example: Literary translation — generator produces translation, evaluator checks for nuance and cultural accuracy, loop refines until both are satisfied.
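The loop can be sketched as below, with stubbed generator and evaluator and a hard iteration cap so it terminates even if the criteria are never met. All of the logic here is illustrative; in practice both roles are LLM calls with distinct prompts:

```python
# Hypothetical sketch of the generator/evaluator loop with a safety cap.
def generate(text, feedback=None):
    # Stub: append a "+r" marker to simulate refining on feedback.
    return text + "+r" if feedback else text

def evaluate(draft):
    # Stub criterion: accept once the draft has been refined twice.
    ok = draft.count("+r") >= 2
    return ok, "" if ok else "needs more nuance"

def refine_loop(source: str, max_iters: int = 5) -> str:
    draft = generate(source)
    for _ in range(max_iters):          # cap: never loop unboundedly
        ok, feedback = evaluate(draft)
        if ok:
            break
        draft = generate(draft, feedback)
    return draft

result = refine_loop("draft")
```

The iteration cap is worth keeping even with a real evaluator: if the criteria are unreachable, an uncapped loop burns tokens indefinitely.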


The Autonomous Agent

When none of the structured workflow patterns fit — when the problem is open-ended, the number of steps is unpredictable, and no hardcoded path will work — you need an autonomous agent.

An autonomous agent is fundamentally simple:

while not done:
    observation = observe()       # tool results, user feedback
    action = decide(observation)  # the model chooses the next step
    result = execute(action)      # act on the environment via tools
    done = evaluate(result)       # decide whether to stop

The model maintains control over what to do next and when to stop. It gains ground truth from tool results and code execution at each step, using that feedback to plan subsequent actions.

Key considerations:

  • Higher cost — open-ended loops consume more tokens than fixed workflows
  • Compounding errors — each step’s mistakes propagate to subsequent steps
  • Guardrails required — sandbox execution, permission boundaries, and human-in-the-loop checkpoints
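The loop can be fleshed out into a runnable sketch. The `decide` and `execute` stubs below are hypothetical stand-ins for the model and its tools; the step budget is a minimal example of the guardrails listed above:

```python
# Hypothetical sketch of the observe → decide → execute → evaluate loop,
# with a step budget as a minimal guardrail against runaway cost.
def decide(observation: str) -> str:
    # Stand-in for the model choosing the next tool call (or "stop").
    return "stop" if "done" in observation else "run_tests"

def execute(action: str) -> str:
    # Stand-in tool execution; real tools return ground-truth feedback.
    return "tests passed, done" if action == "run_tests" else ""

def agent(task: str, max_steps: int = 10) -> list:
    trace, observation = [], task
    for _ in range(max_steps):      # guardrail: bounded loop
        action = decide(observation)
        if action == "stop":        # the model decides when to stop
            break
        observation = execute(action)
        trace.append((action, observation))
    return trace

trace = agent("fix the failing build")
```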

Tool Design: The Agent-Computer Interface

Tools are the agent’s hands. Poor tool design is one of the most common causes of agent failure — not because the model is insufficiently intelligent, but because the interface is ambiguous.

Design principles

  1. Self-contained descriptions — each tool’s docstring should explain exactly when and how to use it, with edge cases and input format requirements
  2. Minimal overlap — if a human engineer can’t definitively say which tool to use in a given situation, neither can the model
  3. Absolute over relative — concrete identifiers (absolute file paths) outperform relative references
  4. Think before commit — provide “thinking” tokens so the model can reason before making irreversible tool calls
  5. Format follows training — keep tool input/output formats close to what appears naturally in training data

“Put yourself in the model’s shoes. Is it obvious how to use this tool, based on the description and parameters alone, or would you need to think carefully about it?”
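As an illustration of principles 1 and 3, here is a hypothetical tool definition. The schema shape loosely mirrors common tool-use APIs but is not taken from the source, and `validate_call` is an illustrative helper:

```python
# Hypothetical tool spec: a self-contained description that states when
# to use the tool, the input format, and that paths must be absolute.
read_file_tool = {
    "name": "read_file",
    "description": (
        "Read a UTF-8 text file and return its contents. Use this before "
        "editing any file. `path` must be an ABSOLUTE path such as "
        "/repo/src/app.py; relative paths are rejected. Returns an error "
        "string if the file does not exist or is binary."
    ),
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

def validate_call(tool: dict, args: dict) -> bool:
    # Enforce the contract the description promises (absolute paths only).
    return (
        all(k in args for k in tool["input_schema"]["required"])
        and args.get("path", "").startswith("/")
    )

ok = validate_call(read_file_tool, {"path": "/repo/src/app.py"})
```

Note that the description answers the quoted question on its own: when to call the tool, what the input looks like, and what happens on failure.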


When NOT to Build an Agent

The most important pattern is knowing when not to build one. Escalate complexity only when simpler approaches fall short:

  1. Start with a single, optimized prompt
  2. Add retrieval and in-context examples
  3. Add tools for external interaction
  4. Introduce workflow patterns only when single-call approaches hit their limits
  5. Deploy autonomous agents only when workflows can’t handle the flexibility required

“Success in the LLM space isn’t about building the most sophisticated system. It’s about building the right system for your needs.”

Each layer of complexity adds cost, latency, and failure modes. The right harness is the simplest one that solves the problem.


Source

Building Effective Agents — Anthropic, December 2024.
