LangGraph vs LangChain: Which One to Use for AI Agents
The LangChain Ecosystem Has Evolved
If you started building with LangChain in 2023, you might remember it as a sprawling library that tried to do everything. It has matured significantly since then, and one of the most important developments is LangGraph, which is now a separate package designed specifically for building agent workflows.
I have used both LangChain (the library) and LangGraph (the graph-based agent framework) in production. They solve different problems, and choosing the right one depends on what you are building.
LangChain: The Utility Library
LangChain provides building blocks for working with LLMs: prompt templates, output parsers, document loaders, text splitters, memory modules, and integrations with dozens of services. Think of it as a toolkit.
What LangChain is good at:
- Simple chains: When you need to connect a prompt to a model to a parser, LangChain's LCEL (LangChain Expression Language) is clean and concise.
- Document processing: The document loaders and text splitters are genuinely useful and save time.
- Integrations: Need to connect to a specific vector database, model provider, or tool? LangChain probably has an integration.
- RAG pipelines: Building a standard retrieval-augmented generation pipeline is straightforward with LangChain's built-in components.
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import JsonOutputParser

# Simple LangChain chain: prompt -> model -> parser
prompt = ChatPromptTemplate.from_template(
    "Classify this text: {text}\nReturn JSON with 'category' and 'confidence'."
)
model = ChatAnthropic(model="claude-sonnet-4-20250514")
parser = JsonOutputParser()

chain = prompt | model | parser
result = chain.invoke({"text": "The quarterly earnings exceeded expectations..."})
```
Where LangChain falls short:
- Complex agent workflows: When you need conditional logic, loops, parallel execution, or state management, LangChain's chain abstraction becomes limiting.
- Abstraction overhead: For experienced developers, some abstractions add complexity without adding value. If you know how to call the Claude API directly, wrapping it in LangChain sometimes just adds indirection.
- Debugging: When something goes wrong in a deeply nested chain, tracing the issue can be frustrating.
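To make the debugging point concrete, here is the classification example rewritten as plain functions, with a hypothetical `call_model` stub standing in for the real API call. Every step is an ordinary stack frame, so a breakpoint or a print statement lands exactly where you expect:

```python
import json

def build_prompt(text: str) -> str:
    # Equivalent of the prompt template step
    return f"Classify this text: {text}\nReturn JSON with 'category' and 'confidence'."

def call_model(prompt: str) -> str:
    # Hypothetical stub; in practice this would call your model provider
    return '{"category": "finance", "confidence": 0.93}'

def parse_output(raw: str) -> dict:
    # Equivalent of the JSON output parser step
    return json.loads(raw)

result = parse_output(call_model(build_prompt(
    "The quarterly earnings exceeded expectations..."
)))
```

Nothing here is wrong with the chained version; the point is that when a nested chain misbehaves, you cannot step through it this directly.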
LangGraph: The Agent Orchestrator
LangGraph models agent workflows as directed graphs. Nodes are functions (typically agent steps), edges define the flow between them, and state flows through the graph accumulating context. This makes complex workflows explicit and inspectable.
What LangGraph is good at:
- Multi-agent orchestration: When you have multiple agents that need to coordinate, LangGraph's graph structure makes the workflow clear and maintainable.
- Conditional routing: Need agent A to send work to agent B or agent C depending on the output? Conditional edges handle this cleanly.
- Loops and retries: Quality control loops where a reviewer sends work back to a generator are first-class patterns in LangGraph.
- State management: The typed state object makes it clear what data is available at each step.
- Persistence and checkpointing: Long-running workflows can be saved and resumed, which is essential for production agents.
```python
from typing import TypedDict

from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    task: str
    draft: str
    review: str
    quality_score: float

# generation_model and review_model stand in for whatever models
# you have configured; they are not defined in this snippet.

def generate(state: AgentState) -> AgentState:
    # Generate content based on the task
    draft = generation_model.invoke(state["task"])
    return {"draft": draft}

def review(state: AgentState) -> AgentState:
    # Review the draft and score it
    review_result = review_model.invoke(state["draft"])
    return {"review": review_result["feedback"],
            "quality_score": review_result["score"]}

def route_after_review(state: AgentState) -> str:
    # Loop back to the generator until the reviewer is satisfied
    return "generate" if state["quality_score"] < 0.8 else END

graph = StateGraph(AgentState)
graph.add_node("generate", generate)
graph.add_node("review", review)
graph.set_entry_point("generate")
graph.add_edge("generate", "review")
graph.add_conditional_edges("review", route_after_review)
workflow = graph.compile()
```
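What the compiled graph does at runtime is easiest to see without the framework. Here is a minimal, framework-free sketch of the same review loop, with stubbed generate and review steps (the scores are invented purely for illustration):

```python
def generate_stub(task: str, attempt: int) -> str:
    # Stand-in for the "generate" node
    return f"draft {attempt} for: {task}"

def review_stub(draft: str, attempt: int) -> dict:
    # Stand-in for the "review" node; pretend quality improves per attempt
    return {"feedback": "ok", "score": 0.5 + 0.2 * attempt}

def run_review_loop(task: str, threshold: float = 0.8, max_attempts: int = 5) -> dict:
    state = {"task": task, "draft": "", "review": "", "quality_score": 0.0}
    for attempt in range(1, max_attempts + 1):
        state["draft"] = generate_stub(task, attempt)   # "generate" node
        result = review_stub(state["draft"], attempt)   # "review" node
        state["review"] = result["feedback"]
        state["quality_score"] = result["score"]
        if state["quality_score"] >= threshold:         # conditional edge
            break
    return state

final = run_review_loop("write a summary")
```

The graph version buys you the same control flow plus things the hand-rolled loop lacks: checkpointing, streaming, and a structure you can inspect and visualise.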
Where LangGraph is overkill:
- Simple, linear tasks: If your workflow is just prompt → model → parse, a LangGraph graph adds unnecessary complexity.
- One-shot API calls: If you are just wrapping a single API call, you do not need a graph.
- Prototyping: When you are still figuring out what your agent should do, the upfront structure of LangGraph can slow you down.
My Decision Framework
Here is how I decide between them:
Use LangChain (alone) when:
- You are building a straightforward RAG pipeline
- Your workflow is linear with no branching or loops
- You want convenient integrations with external services
- The task is simple enough that a single chain handles it
Use LangGraph when:
- You have multiple agents that need to coordinate
- Your workflow has conditional branching or loops
- Quality control with retry logic is important
- You need persistence or human-in-the-loop capabilities
- The workflow is complex enough that visualising it as a graph helps you reason about it
Use neither when:
- Your needs are simple enough that direct API calls (using the Anthropic or OpenAI SDKs) are clearer
- You want maximum control and minimal abstraction
- You are building something highly custom that does not fit the patterns these frameworks expect
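For comparison, the direct route is only a few lines. This is a sketch, not a definitive client: the `build_request` helper is my own, the field names follow the public Anthropic Messages API as I understand it, and the SDK call itself needs an API key so it is shown but not executed:

```python
def build_request(text: str) -> dict:
    # Shape of a Messages API request body; verify field names
    # against the current Anthropic API docs before relying on this
    return {
        "model": "claude-sonnet-4-20250514",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": f"Classify this text: {text}"}],
    }

request = build_request("The quarterly earnings exceeded expectations...")

# With the SDK (requires ANTHROPIC_API_KEY):
# from anthropic import Anthropic
# client = Anthropic()
# response = client.messages.create(**request)
# print(response.content[0].text)
```

There is no chain, no graph, and nothing between you and the API response.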
They Work Together
An important point: LangGraph and LangChain are not mutually exclusive. LangGraph nodes can use LangChain components internally. I frequently use LangChain's prompt templates and output parsers inside LangGraph nodes, getting the best of both: structured orchestration from LangGraph and convenient utilities from LangChain.
My Recommendation
If you are building anything with more than two agents or any kind of conditional workflow, go directly to LangGraph. The upfront cost of defining a graph is small, and it pays off massively in maintainability and debuggability. For simpler tasks, LangChain or direct API calls are perfectly fine. The worst thing you can do is use a complex framework for a simple problem or a simple framework for a complex problem. Match the tool to the task.