
LangGraph


Note

Looking for the JS version? See the JS repo and the JS docs.

LangGraph is a low-level orchestration framework for building controllable agents. While LangChain provides integrations and composable components to streamline LLM application development, the LangGraph library enables agent orchestration — offering customizable architectures, long-term memory, and human-in-the-loop support to reliably handle complex tasks.

Get started

First, install LangGraph:

pip install -U langgraph

There are two ways to get started with LangGraph:

  • Use prebuilt components: Construct agentic systems quickly and reliably without the need to implement orchestration, memory, or human feedback handling from scratch.
  • Use LangGraph: Customize your architectures, use long-term memory, and implement human-in-the-loop to reliably handle complex tasks.

Once you have a LangGraph application and are ready to move into production, use LangGraph Platform to test, debug, and deploy your application.

What LangGraph provides

LangGraph provides low-level supporting infrastructure that sits underneath any workflow or agent. It does not abstract prompts or architecture, and provides three central benefits:

Persistence

LangGraph has a persistence layer, which offers a number of benefits:

  • Memory: LangGraph persists arbitrary aspects of your application's state, supporting memory of conversations and other updates within and across user interactions.
  • Human-in-the-loop: Because state is checkpointed, execution can be interrupted and resumed, allowing for decisions, validation, and corrections via human input.

Streaming

LangGraph provides support for streaming workflow or agent state to the user (or developer) over the course of execution. LangGraph supports streaming of both events (such as feedback from a tool call) and tokens from LLM calls embedded in an application.

Debugging and deployment

LangGraph provides an easy onramp for testing, debugging, and deploying applications via LangGraph Platform. This includes Studio, an IDE for visualizing, interacting with, and debugging workflows and agents, as well as numerous options for deployment.

LangGraph’s ecosystem

While LangGraph can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools for building agents. To improve your LLM application development, pair LangGraph with:

  • LangSmith — Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
  • LangGraph Platform — Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in LangGraph Studio.

Additional resources

  • Guides: Quick, actionable code snippets for topics such as streaming, adding memory & persistence, and design patterns (e.g. branching, subgraphs, etc.).
  • Reference: Detailed reference on core classes, methods, how to use the graph and checkpointing APIs, and higher-level prebuilt components.
  • Examples: Guided examples on getting started with LangGraph.
  • LangChain Academy: Learn the basics of LangGraph in our free, structured course.
  • Templates: Pre-built reference apps for common agentic workflows (e.g. ReAct agent, memory, retrieval, etc.) that can be cloned and adapted.
  • Case studies: Hear how industry leaders use LangGraph to ship powerful, production-ready AI applications.

Acknowledgements

LangGraph is inspired by Pregel and Apache Beam. The public interface draws inspiration from NetworkX. LangGraph is built by LangChain Inc, the creators of LangChain, but can be used without LangChain.