
“An Agent without a Plan is just a stochastic parrot reacting to noise.”

TL;DR

Sequential ReAct agents waste time executing independent subtasks one after another. Dependency graphs model agent plans as DAGs where independent tasks run in parallel and synchronize at join points – turning a 30-second sequential workflow into a 10-second parallel one. The Plan-and-Solve pattern separates reasoning (LLM generates the DAG) from execution (runtime processes it via topological sort). LangGraph makes this practical with typed state objects and checkpoint persistence. Watch out for hallucinated cycles in LLM-generated plans and context overflow when merging too many parallel results. For the fundamentals of planning architectures, see Planning and Decomposition, and for a hands-on LangGraph walkthrough, see LangGraph Deep Dive.


1. Introduction

Simple agents (ReAct) operate in a loop: Thought -> Action -> Observation. This is strictly sequential. But complex tasks often contain independent subtasks that could run at the same time. Consider the task: “Research 3 companies (A, B, C) and write a comparison report.”

  • Sequential Agent: Research A (wait), Research B (wait), Research C (wait), Write. Total: 30s.
  • DAG Agent: Research A, B, C in parallel. Join results. Write. Total: 10s.

We need to treat Agent Actions not as a “Loop” but as a Dependency Graph.
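The speedup above can be demonstrated with plain `concurrent.futures` (the research call is simulated with a short sleep; the 30s/10s numbers are illustrative):

```python
import concurrent.futures
import time

def research(company: str) -> str:
    """Stand-in for a slow LLM/tool call (simulated with a sleep)."""
    time.sleep(0.1)
    return f"report on {company}"

companies = ["A", "B", "C"]

# Sequential: total time ~ sum of all call durations
start = time.perf_counter()
sequential = [research(c) for c in companies]
seq_time = time.perf_counter() - start

# Parallel: total time ~ duration of the slowest call
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    parallel = list(pool.map(research, companies))
par_time = time.perf_counter() - start

print(seq_time > par_time)  # parallel fan-out wins
```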


2. Core Concepts: The Plan-and-Solve Pattern

This pattern separates Reasoning from Execution.

  1. Planner: The LLM generates a DAG.
    • Task 1: Google “Apple stock”.
    • Task 2: Google “Microsoft stock”.
    • Task 3: Compare (Dep: 1, 2).
  2. Executor: A runtime executes the tasks in dependency order (essentially Kahn’s Algorithm).
  3. Replanner: If Task 1 fails, the Planner updates the graph.
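A Planner’s output can be as simple as a JSON task list. A sketch for the stock example above (the schema is an illustrative assumption, not a fixed standard):

```python
import json

# Hypothetical Planner output for the stock-comparison example
plan_json = """
[
  {"id": 1, "prompt": "Google 'Apple stock'", "deps": []},
  {"id": 2, "prompt": "Google 'Microsoft stock'", "deps": []},
  {"id": 3, "prompt": "Compare the two stocks", "deps": [1, 2]}
]
"""

plan = json.loads(plan_json)

# Tasks with no dependencies are immediately runnable -- in parallel
runnable = [t["id"] for t in plan if not t["deps"]]
print(runnable)  # [1, 2]
```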

3. Architecture Patterns: Graphs vs. Chains

  • Chain: Linear. A -> B -> C. (LangChain Classic).
  • DAG: Directed, No Cycles. A -> B, A -> C, then D joins B and C. (Parallel Execution).
  • Cyclic Graph: A -> B -> A. (Iterative Refinement).

LangGraph is a library designed specifically to define these state machines as graphs.

graph TD
 Start --> Planner
 Planner --> |Review| Critic
 Critic -- "Bad" --> Planner
 Critic -- "Good" --> Executor
 Executor --> End
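The diagram above can be sketched as a plain-Python state machine (this mimics the routing idea, not the LangGraph API; the node functions are illustrative stand-ins):

```python
def planner(state):
    state["plan"] = state.get("plan", "") + "draft;"
    return state

def critic(state):
    # Toy review rule: approve only after the plan has been revised once
    state["verdict"] = "Good" if state["plan"].count("draft;") >= 2 else "Bad"
    return state

def executor(state):
    state["result"] = f"executed: {state['plan']}"
    return state

def run(state):
    node = "Planner"
    while node != "End":
        if node == "Planner":
            state = planner(state)
            node = "Critic"
        elif node == "Critic":
            state = critic(state)
            # Conditional edge: "Bad" loops back, "Good" proceeds
            node = "Executor" if state["verdict"] == "Good" else "Planner"
        elif node == "Executor":
            state = executor(state)
            node = "End"
    return state

final = run({})
print(final["result"])  # executed: draft;draft;
```

The Critic -> Planner edge is the cycle that makes this a state machine rather than a DAG, which is exactly why a library like LangGraph is needed here.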

4. Implementation Approaches

4.1 Static Graph

The developer defines the graph.

  • “Always research, then summarize.”
  • Reliable, rigid.

4.2 Dynamic Graph (AI-Driven)

The LLM writes the graph.

  • Prompt: “Break this down into subtasks with dependencies.”
  • Output: JSON List of Edges.
  • Flexible, fragile.
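Because the LLM’s output is fragile, validate the edge list before building the graph. A minimal sketch (the plan schema here is assumed for illustration):

```python
import json

def parse_plan(raw: str) -> dict:
    """Parse an LLM plan of the form {"tasks": [...], "edges": [[src, dst], ...]}
    and reject edges that reference undeclared task ids."""
    plan = json.loads(raw)
    ids = {t["id"] for t in plan["tasks"]}
    for src, dst in plan["edges"]:
        if src not in ids or dst not in ids:
            raise ValueError(f"edge references unknown task: {src} -> {dst}")
    return plan

good = '{"tasks": [{"id": "a"}, {"id": "b"}], "edges": [["a", "b"]]}'
bad = '{"tasks": [{"id": "a"}], "edges": [["a", "ghost"]]}'

plan = parse_plan(good)
print(len(plan["edges"]))  # 1
```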

5. Code Examples: The Task Graph Executor

A simplified implementation of a DAG runner for agents in Python.

import concurrent.futures

class Task:
    def __init__(self, id, prompt, deps=None):
        self.id = id
        self.prompt = prompt
        self.deps = deps or []  # avoid the shared mutable-default pitfall
        self.result = None

def execute_dag(tasks, llm_function):
    """
    Executes tasks in topological order, running independent tasks in parallel.
    Uses a ThreadPool; dependency results are injected into each prompt.
    """
    by_id = {t.id: t for t in tasks}
    completed = set()
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = {}  # task id -> in-flight Future

        while len(completed) < len(tasks):
            # 1. Find runnable tasks: not done, not in flight, all deps satisfied
            runnable = [
                t for t in tasks
                if t.id not in completed
                and t.id not in futures
                and all(d in completed for d in t.deps)
            ]

            # 2. Submit to workers
            for t in runnable:
                # Inject dependency results into context
                context = "\n".join(by_id[d].result for d in t.deps)
                full_prompt = f"Context: {context}\nTask: {t.prompt}"
                futures[t.id] = executor.submit(llm_function, full_prompt)

            # 3. Wait for at least one task to finish, then harvest it
            done, _ = concurrent.futures.wait(
                futures.values(), return_when=concurrent.futures.FIRST_COMPLETED
            )
            for task_id in [tid for tid, f in futures.items() if f in done]:
                by_id[task_id].result = futures.pop(task_id).result()
                completed.add(task_id)

    return tasks

This is fundamentally Kahn’s Algorithm (DSA) applied to LLM calls.
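For reference, Kahn’s Algorithm on its own is only a few lines. A sketch over a dependency map (it also doubles as a cycle check: any node that never reaches in-degree zero implies a cycle):

```python
from collections import deque

def kahn_topo_sort(deps: dict) -> list:
    """deps maps each node to the list of nodes it depends on."""
    indegree = {n: len(d) for n, d in deps.items()}
    dependents = {n: [] for n in deps}
    for node, ds in deps.items():
        for d in ds:
            dependents[d].append(node)

    queue = deque(n for n, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in dependents[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)

    if len(order) != len(deps):
        raise ValueError("cycle detected")  # some nodes never became runnable
    return order

order = kahn_topo_sort(
    {"research_a": [], "research_b": [], "compare": ["research_a", "research_b"]}
)
print(order[-1])  # compare
```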


6. Production Considerations

6.1 State Management

In a DAG, “State” is shared. If Task A and Task B run in parallel, and both try to update memory.txt, you have a race condition. LangGraph solves this by passing a State Object (TypedDict) through the edges. The state is effectively immutable at each step, with every transition checkpointed (e.g., to Postgres).
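The idea can be sketched with a plain TypedDict and an explicit merge step. This mimics LangGraph’s reducer concept in plain Python (it is not the LangGraph API); the field names are illustrative:

```python
import operator
from typing import Annotated, TypedDict

class AgentState(TypedDict):
    # Annotated with a reducer: parallel writes are appended together
    # instead of overwriting each other (the convention LangGraph uses).
    research_notes: Annotated[list, operator.add]
    report: str

def merge_parallel_writes(base, update_a, update_b):
    """Merge two parallel branches' partial updates into one state."""
    return {
        "research_notes": base["research_notes"]
        + update_a.get("research_notes", [])
        + update_b.get("research_notes", []),
        "report": update_b.get("report") or update_a.get("report") or base["report"],
    }

state: AgentState = {"research_notes": [], "report": ""}
merged = merge_parallel_writes(
    state,
    {"research_notes": ["notes on Apple"]},
    {"research_notes": ["notes on Microsoft"]},
)
print(len(merged["research_notes"]))  # 2
```

Because each branch returns a partial update rather than mutating shared memory, the race condition disappears: merging is a deterministic reduce step.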

6.2 Human-in-the-Loop

You might want a human to verify Task 1 before Task 2 starts. The Graph Executor must support Pause/Resume. This requires serializing the graph state (including each agent’s partial results) to a DB.
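A minimal pause/resume sketch: snapshot completed task results as JSON so a fresh process can pick up where the graph stopped (the field names and in-memory storage are illustrative; production would key rows by run id in a database):

```python
import json

def checkpoint(task_results: dict) -> str:
    """Snapshot task results (None = not yet run) as a JSON string."""
    return json.dumps(task_results)

def resume(snapshot: str) -> list:
    """Load a snapshot and return the task ids that still need to run."""
    task_results = json.loads(snapshot)
    return [tid for tid, result in task_results.items() if result is None]

# Graph paused after 'research' finished, awaiting human approval
snap = checkpoint({"research": "10-K summary", "write_report": None})
pending = resume(snap)
print(pending)  # ['write_report']
```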


7. Common Pitfalls

  1. Hallucinated Cycles: The LLM plans Task A needs Task B, and Task B needs Task A. The executor deadlocks.
    • Fix: Validate the JSON dependency list for cycles using DFS before execution.
  2. Context Overflow: Merging results from 10 parallel tasks into the prompt for Task 11 blows up the context window.
    • Fix: Summarize intermediate results (Map-Reduce).
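The cycle check from pitfall 1 is a standard three-color DFS. A sketch, where `deps` maps each task to its prerequisites:

```python
def has_cycle(deps: dict) -> bool:
    """Three-color DFS: gray = on the current path, black = fully explored."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in deps}

    def visit(node) -> bool:
        color[node] = GRAY
        for d in deps.get(node, []):
            if color.get(d, WHITE) == GRAY:  # back edge -> cycle
                return True
            if color.get(d, WHITE) == WHITE and visit(d):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in deps)

print(has_cycle({"A": ["B"], "B": ["A"]}))               # True: deadlock
print(has_cycle({"A": [], "B": ["A"], "C": ["A", "B"]}))  # False: valid DAG
```

If a cycle is detected, send the plan back to the LLM for correction rather than starting execution.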

8. Best Practices

  1. Map-Reduce Pattern:
    • Map: “Generate 5 ideas.” (Run 5 LLM calls in parallel).
    • Reduce: “Select the best idea.” (Run 1 LLM call).
  2. Visual Debugging: Use tools like LangSmith to visualize the graph execution. Debugging a 50-step async agent via print logs is impossible.
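The Map-Reduce pattern from practice 1 can be sketched over a thread pool (the `llm` function here is a stand-in assumption for a real model call):

```python
import concurrent.futures

def llm(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"idea for: {prompt}"

def map_reduce(topic: str, n: int = 5) -> str:
    # Map: fan out n independent generations in parallel
    with concurrent.futures.ThreadPoolExecutor() as pool:
        ideas = list(pool.map(llm, [f"{topic} (variant {i})" for i in range(n)]))
    # Reduce: one call that sees only the short intermediate results,
    # which also keeps the final prompt inside the context window
    return llm("Pick the best idea:\n" + "\n".join(ideas))

best = map_reduce("reduce agent latency")
print(best.startswith("idea for: Pick the best idea"))  # True
```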

9. Connections to Other Topics

This connects to the classic Course Schedule problem (DSA).

  • The Agent is the student taking courses.
  • The “Prerequisites” are the information dependencies.
  • “You cannot write the Summary (Course 101) until you read the Book (Course 100).”

10. Real-World Examples

  • GPT-Researcher: A popular open-source agent.
    • Generates 5 research questions.
    • Scrapes 5 websites in parallel.
    • Aggregates into one report.
  • OpenAI Deep Research: Uses heavy iterative branching to explore topics depth-first vs breadth-first.

11. Future Directions

  • Multi-Agent Graphs:
    • Node A is “Coder Agent”.
    • Node B is “Reviewer Agent”.
    • The graph defines the “Company Org Chart”.
  • Self-Modifying Graphs: The Agent realizes the plan is bad halfway through and re-writes the remaining graph nodes (Runtime Graph Modification).

12. Key Takeaways

  1. Parallelism = Speed: Agents are slow. Parallelizing sub-tasks is the easiest performance win.
  2. Graph > Chain: Real world workflows are non-linear.
  3. State is King: The edges of the graph transport State. Managing that schema is the hard part of Agent engineering.

Next in the series: Building Domain-Specific Agents – how to trade breadth for depth in vertical AI.


FAQ

What is a dependency graph in AI agent planning?

A dependency graph models agent tasks as a Directed Acyclic Graph (DAG) where nodes are subtasks and edges represent prerequisites. Independent tasks with no dependencies between them can execute in parallel, while dependent tasks wait for their prerequisites to complete. This is fundamentally Kahn’s Algorithm applied to LLM calls.

How does LangGraph handle agent state management?

LangGraph passes a typed State Object (TypedDict) through the edges of the graph. The state is checkpoint-persisted (e.g., to Postgres), making it effectively immutable at each step. This solves race conditions when parallel tasks try to update shared state, and supports pause/resume for human-in-the-loop workflows.

What is the Plan-and-Solve pattern for AI agents?

Plan-and-Solve separates reasoning from execution. A Planner LLM generates a DAG of subtasks with dependencies. An Executor runs them in topological order using parallel workers. A Replanner updates the graph if any task fails. This gives agents structured, parallelizable plans instead of reactive step-by-step loops.

How do you prevent hallucinated cycles in agent dependency graphs?

LLMs sometimes plan circular dependencies (Task A needs B, Task B needs A), which causes executor deadlocks. Fix this by validating the JSON dependency list for cycles using DFS before execution. If a cycle is detected, send the plan back to the LLM for correction.


Originally published at: arunbaby.com/ai-agents/0049-dependency-graphs-for-agents

Want to work together?

I take on projects, advisory roles, and fractional CTO engagements in AI/ML. I also help businesses go AI-native with agentic workflows and agent orchestration.

Get in touch