[AI] Understanding LangGraph Workflows

May 5, 2025

Wait… what is LangGraph?

Let me jump ahead a little and share a summary produced by the very project we are going to explore:

What is LangGraph? According to the Local Deep Researcher analysis… 📌
  • I provided: can you give me a summary on what it is langgraph and how it relates to AI agents?
  • And the web research query generated was: LangGraph overview and its role in AI agents 2025

Summary

LangGraph is an advanced orchestration framework built on top of LangChain, designed to create, deploy, and manage autonomous AI agents powered by large language models (LLMs).

It models AI workflows as stateful graphs, where nodes represent individual agents or task stages and edges define the flow of control, supporting complex behaviors such as branching, loops, and conditional transitions.

Unlike traditional DAG-based frameworks, LangGraph introduces cyclic computational capabilities, enabling agents to maintain and update context or memory dynamically across interactions for coherent, goal-directed behavior.

At its core, LangGraph’s stateful graph architecture allows AI agents to remember conversation history, track events, and adapt decisions based on evolving information.

This makes it well-suited for building sophisticated multi-agent systems that can perform complex, multi-step tasks autonomously. LangGraph also integrates dynamic external tools and supports both short-term session memory and long-term persistent memory, often leveraging tools like Zep for automatic fact management, enhancing agents’ contextual awareness and decision-making over time.

The framework provides fine-grained orchestration and control over agent workflows, including APIs for state management, visual debugging tools such as LangGraph Studio, and multiple deployment options for scalable, reliable AI agent development.

It facilitates human-in-the-loop interventions when necessary and exemplifies its capabilities in applications like multi-agent flight and hotel booking systems that handle conditional routing and tool operations.

Overall, LangGraph serves as a foundational cognitive architecture that balances agent control with agency, enabling the creation of intelligent, adaptable AI agents applicable across diverse domains such as business process automation, complex decision-making, and interactive AI applications like chatbots and virtual assistants.

It is recognized for driving innovation in next-generation AI applications and integrates seamlessly with cloud platforms like Amazon Bedrock for scalable multi-agent deployments.

Sources:

So, in a way, it's a framework for building customized LLM workflows.

A step beyond what I covered with the Groq YT summarizer:

And it allows us to go beyond the plain DAG workflows we get with LangChain:

About LangGraph

LangGraph has recently become one of the most widely used tools for building AI apps.

LangGraph is a Python library under the MIT license designed for programmatically building sophisticated AI applications involving multiple agents and complex, stateful conversational workflows.

It offers a programmatic and flexible way to define the interactions and state transitions between different components (agents or steps) within these workflows.

Key features and benefits include:

  • Building complex conversational workflows: Enabling the creation of multi-turn interactions and intricate logic.
  • Agent orchestration: Providing tools to manage and coordinate the actions of multiple agents.
  • Customizable architectures: Allowing developers to define unique interaction patterns between agents.
  • Long-term memory: Facilitating the persistence of information across multiple steps and agent interactions.
  • Human-in-the-loop: Integrating human intervention into the workflow for review or decision-making.
  • Resilient language agents: Aiming to create robust and reliable AI systems capable of handling complex tasks.

In essence, LangGraph provides a more structured and code-driven approach to building multi-agent systems compared to more declarative methods, offering greater control and flexibility for complex AI applications.

MIT | Build resilient language agents as graphs
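To make the "stateful graph with cycles" idea concrete, here is a minimal, illustrative sketch (not from the project, assuming langgraph >= 0.2): a single node that keeps looping back on itself via a conditional edge until the state says it is done.

from typing import TypedDict

from langgraph.graph import StateGraph, START, END

# Shared state that every node reads and updates
class State(TypedDict):
    topic: str
    loops: int
    summary: str

def research(state: State) -> dict:
    # A real node would call an LLM or a search tool here
    return {
        "loops": state["loops"] + 1,
        "summary": f"Notes on {state['topic']} (pass {state['loops'] + 1})",
    }

def route(state: State) -> str:
    # Conditional edge: loop back until we have done 3 passes, then stop
    return "research" if state["loops"] < 3 else END

builder = StateGraph(State)
builder.add_node("research", research)
builder.add_edge(START, "research")
builder.add_conditional_edges("research", route)

graph = builder.compile()
print(graph.invoke({"topic": "LangGraph", "loops": 0, "summary": ""}))

The conditional edge returning either a node name or END is exactly what a DAG-only framework cannot express, and it is the pattern the Local Deep Researcher project uses for its research loop.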

Let's have a look at a sample project built with LangGraph.

Local Deep Researcher Project

To properly understand LangGraph, I went looking for a cool project.

I have cloned the original project: https://github.com/langchain-ai/local-deep-researcher

ollama-deep-research

MIT | Fully local web research and report writing assistant

You will need to follow these steps and have Ollama ready:

git clone https://github.com/JAlcocerT/local-deep-researcher # it was called ollama-deep-researcher before

Adjust the .env file with the search engine and local LLM to be used.

By default, DuckDuckGo does not require any API key, and for the LLM you can pull a local model:

docker exec -it ollama sh
ollama pull deepseek-r1:8b
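
A minimal .env could then look roughly like this. The exact variable names come from the fields defined in configuration.py, so treat the ones below as an assumption and double-check them there:

# .env (illustrative; verify variable names against configuration.py)
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
LOCAL_LLM=deepseek-r1:8b
SEARCH_API=duckduckgo
MAX_WEB_RESEARCH_LOOPS=3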

Once ready, run the project with UV:

uvx \
  --refresh \
  --from "langgraph-cli[inmem]" \
  --with-editable . \
  --python 3.11 \
  langgraph dev

You will get a Firefox browser tab with the following diagram and a place to ask questions:

Local Deep Researcher with Ollama and DuckDuckGo

I could not resist asking it to research titanium for bicycle applications.

ℹ️
By default it uses DuckDuckGo, with no API key required. You will need an API key for SearXNG, Tavily or Perplexity.

The dependencies are specified here

The Tech behind local deep researcher | LangChain + LangGraph + Ollama 📌
  1. LangGraph
    • Declared in pyproject.toml ("langgraph>=0.2.55")
    • Used to build the research-pipeline state machine in src/ollama_deep_researcher/graph.py
    • The Dockerfile / CMD spins up the LangGraph dev server (via langgraph-cli)
  2. LangChain (and its connectors)
    • Core runtime: langchain_core
    • OpenAI connector (used for LMStudio): langchain_openai / openai
    • Ollama connector: langchain_ollama
    • Community extras: langchain-community
    • You'll see these imported in src/ollama_deep_researcher/lmstudio.py and src/ollama_deep_researcher/graph.py
  3. Pydantic
    • For the typed configuration model (env-driven settings)
    • Check out src/ollama_deep_researcher/configuration.py
  4. Search-and-scraping utilities
    • HTTP clients: httpx & requests
    • HTML → Markdown: markdownify
    • Search backends: duckduckgo-search, tavily-python, langchain-community's SearxSearchWrapper
    • All wired up in src/ollama_deep_researcher/utils.py
  5. Environment & CLI glue
    • python-dotenv for .env loading
    • uv/uvx (the "uv" package manager) + langgraph-cli[inmem] to launch the dev server (Dockerfile)

When you run:

langgraph dev # coming from the separate langgraph-cli tool

Run LangGraph API server in development mode with hot reloading

It spins up a little ASGI web‐server and opens the browser UI for you. Under the hood, the CLI uses:

  • FastAPI (built on Starlette) – to expose the HTTP (and WebSocket) endpoints – serves the static “prebuilt” React app
  • Uvicorn – as the ASGI server to actually run the FastAPI app
  • React (TypeScript + Vite) – the single-page app you see in your browser, bundled into the langgraph-prebuilt package

All of your business logic lives in the src/ollama_deep_researcher folder.
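
For orientation, the files referenced throughout this post live roughly like this:

src/ollama_deep_researcher/
├── graph.py          # StateGraph definition and node functions
├── state.py          # SummaryState, SummaryStateInput, SummaryStateOutput
├── prompts.py        # query / summarize / reflect templates
├── configuration.py  # env-driven Configuration schema
├── utils.py          # search wrappers and formatters
└── lmstudio.py       # ChatLMStudio wrapper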

See this diagram for clarity on the execution flow:

flowchart TD
    %%--------------------------------
    %% Server Startup / Graph Loading
    %%--------------------------------
    subgraph "LangGraph CLI Startup"
    CLI["langgraph dev (CLI)"]
    CFGJSON["Read langgraph.json"]
    GRAPHDEF["Load graph from:\nsrc/ollama_deep_researcher/graph.py"]
    CLI --> CFGJSON --> GRAPHDEF
    end

    subgraph "Graph Initialization"
    CONFIG["configuration.py\n(define Configuration schema)"]
    STATE["state.py\n(SummaryState, Input, Output)"]
    PROMPTS["prompts.py\n(query/summarize/reflect templates)"]
    UTILS["utils.py\n(search wrappers, formatters)"]
    LMMOD["lmstudio.py\n(ChatLMStudio wrapper)"]
    GRAPHDEF --> CONFIG --> STATE --> PROMPTS --> UTILS --> LMMOD
    end

    %%--------------------------------
    %% User Execution Flow
    %%--------------------------------
    subgraph "Research Pipeline Execution"
    Start(["Invoke Graph\nwith research_topic"]) --> GQ["generate_query\n(graph.py)"]
    GQ -->|uses| QPROM["prompts.py\nquery_writer_instructions"]
    GQ -->|uses| CFG
    GQ -->|uses| LMMOD
    GQ -->|outputs| SQ[/"search_query"/]

    SQ --> WR["web_research\n(graph.py)"]
    WR -->|uses| CFG
    WR -->|uses| UTILS
    WR -->|outputs| WRR[/"web_research_results"/] & SG[/"sources_gathered"/]

    WRR & SG --> SUM["summarize_sources\n(graph.py)"]
    SUM -->|uses| CFG
    SUM -->|uses| LMMOD
    SUM -->|uses| UTILS
    SUM -->|outputs| RS[/"running_summary"/]

    RS --> REF["reflect_on_summary\n(graph.py)"]
    REF -->|uses| CFG
    REF -->|uses| LMMOD
    REF -->|uses| RPROM["prompts.py\nreflection_instructions"]
    REF --> ROUTE{"route_research\n(graph.py)"}

    ROUTE -- Continue --> WR
    ROUTE -- Finalize --> FIN["finalize_summary\n(graph.py)"]
    FIN --> FS[/"final_summary"/]

    FS --> End(["Graph Complete\nSummaryStateOutput"])
    end

The piece that LangGraph is actually serving (and that shows up in Studio as your state machine/UI) is the graph object that langgraph.json points to.

  • src/ollama_deep_researcher/graph.py – this is where you build your StateGraph:
    • add nodes (generate_query, web_research, summarize_sources, reflect_on_summary, finalize_summary)
    • wire up edges (including the route_research function that loops or finalizes)
    • compile into the graph that LangGraph Studio runs.

To change the flow of your LLM calls (add/remove/branch nodes, change loop logic), this is the file to edit.
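
As a rough sketch of what that wiring looks like (simplified from the actual graph.py, so treat the details as approximations rather than the literal source):

from langgraph.graph import StateGraph, START, END

from ollama_deep_researcher.configuration import Configuration
from ollama_deep_researcher.state import SummaryState, SummaryStateInput, SummaryStateOutput

# The node functions (generate_query, web_research, summarize_sources,
# reflect_on_summary, finalize_summary) and the route_research router
# are defined earlier in graph.py itself.
builder = StateGraph(SummaryState, input=SummaryStateInput,
                     output=SummaryStateOutput, config_schema=Configuration)

builder.add_node("generate_query", generate_query)
builder.add_node("web_research", web_research)
builder.add_node("summarize_sources", summarize_sources)
builder.add_node("reflect_on_summary", reflect_on_summary)
builder.add_node("finalize_summary", finalize_summary)

builder.add_edge(START, "generate_query")
builder.add_edge("generate_query", "web_research")
builder.add_edge("web_research", "summarize_sources")
builder.add_edge("summarize_sources", "reflect_on_summary")
# route_research returns either "web_research" (loop again) or "finalize_summary"
builder.add_conditional_edges("reflect_on_summary", route_research,
                              ["web_research", "finalize_summary"])
builder.add_edge("finalize_summary", END)

graph = builder.compile()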

Beyond that, there are three supporting “extension points” you’ll almost certainly want to tweak when you customize your LLM‐driven pipeline:

  1. Prompts – src/ollama_deep_researcher/prompts.py
    • All of your JSON templates and instructions live here.
    • Adjust your system messages or output formats.
  2. LLM wrapper classes – src/ollama_deep_researcher/lmstudio.py (and the use of ChatOllama in graph.py)
    • Swap in new providers or change temperatures, streaming modes, etc.
  3. Configuration & utilities
    • src/ollama_deep_researcher/configuration.py controls env vars and the defaults for model name, provider, loop count, etc.
    • src/ollama_deep_researcher/utils.py holds the web-search implementations, token stripping, source formatting, etc.
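
As an illustration of point 3, the configuration model is essentially a typed settings class whose fields can be overridden by environment variables. A simplified, hypothetical sketch (the real field names and base class live in configuration.py):

import os
from pydantic import BaseModel, Field

class Configuration(BaseModel):
    # Defaults here; any field can be overridden via an upper-cased env var
    llm_provider: str = Field(default="ollama")
    local_llm: str = Field(default="deepseek-r1:8b")
    search_api: str = Field(default="duckduckgo")
    max_web_research_loops: int = Field(default=3)

    @classmethod
    def from_env(cls) -> "Configuration":
        # Collect any matching environment variables and let Pydantic coerce types
        overrides = {name: os.environ[name.upper()]
                     for name in cls.model_fields
                     if name.upper() in os.environ}
        return cls(**overrides)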

If you ever need to go deeper—for example, to completely redesign the LangGraph Studio UI—you’d have to clone and modify the langgraph-cli repo (FastAPI + React/TypeScript).

In short: graph.py holds the orchestration logic, and prompts.py defines what you ask the LLM to do.

The purpose of state.py is to define the “shape” of the in‐memory state that your LangGraph will carry around as it executes your research pipeline.

Concretely the state file provides three dataclasses:

  1. SummaryState
    • This is your running state object (see the sketch after this list). It has fields for:
      – research_topic: the user's original topic
      – search_query: the current query you asked the LLM to run
      – web_research_results: a list of the raw text results returned by each web_research node
      – sources_gathered: a list of the formatted source metadata you'll feed into summaries
      – research_loop_count: how many times you've gone through the loop so far
      – running_summary: your accumulated summary text
    • Notice that web_research_results and sources_gathered are wrapped with Annotated[…, operator.add]. That tells LangGraph that whenever a node returns a new list for one of those fields, it should do old_list + new_list (i.e. append) rather than overwrite.
  2. SummaryStateInput
    • Defines the very first inputs your graph expects when you call .invoke(...).
    • Here it just has research_topic.
  3. SummaryStateOutput
    • Defines what you get back when the graph finishes.
    • Here it just exposes running_summary.
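
Put together, state.py looks roughly like this (a simplified sketch; compare it with the actual file):

import operator
from dataclasses import dataclass, field
from typing import Annotated

@dataclass(kw_only=True)
class SummaryState:
    research_topic: str = field(default=None)    # the user's original topic
    search_query: str = field(default=None)      # current query to run
    # operator.add tells LangGraph to append new lists instead of overwriting
    web_research_results: Annotated[list, operator.add] = field(default_factory=list)
    sources_gathered: Annotated[list, operator.add] = field(default_factory=list)
    research_loop_count: int = field(default=0)  # loop iterations so far
    running_summary: str = field(default=None)   # accumulated summary

@dataclass(kw_only=True)
class SummaryStateInput:
    research_topic: str = field(default=None)

@dataclass(kw_only=True)
class SummaryStateOutput:
    running_summary: str = field(default=None)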

When you do

StateGraph(
    SummaryState,
    input=SummaryStateInput,
    output=SummaryStateOutput,
    config_schema=Configuration,
)

You’re telling LangGraph:

  • “This is the shape of my mutable state and how to merge updates.”
  • “This is the payload I start with.”
  • “This is the payload I return at the end.”
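
Concretely, invoking the compiled graph with that input/output contract looks something like this (a hedged sketch; the configurable key name is my assumption based on what Configuration defines):

from ollama_deep_researcher.graph import graph

result = graph.invoke(
    {"research_topic": "titanium for bicycle applications"},  # SummaryStateInput payload
    config={"configurable": {"max_web_research_loops": 2}},   # Configuration override (name assumed)
)
print(result["running_summary"])                               # SummaryStateOutput field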

If you want to track additional bits of data (say timing metrics, alternate summaries, etc.) or change how fields combine between nodes, this is the file you’d edit.

Tweaks to Local Deep Researcher

I don't have a crazy fast GPU, so my first thought was to use Codex to simply plug in OpenAI models as an alternative:

Other LangGraph Resources

Big Tool

LangGraph Big Tool caught my attention while writing the RAG post.