# Auto-Instrumentation
Zero-config tracing for LangChain and LangGraph applications.
## Overview
Auto-instrumentation traces all LLM calls, tool invocations, and chain executions without requiring any changes to your application code.
## LangChain
### Installation
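The exact package name isn't included in this excerpt. Per the Supported Frameworks table, LangChain support ships as the `langchain` extra, so installation follows the usual pip extras pattern (the package name below is a placeholder for your SDK):

```shell
# "your-sdk" is a placeholder -- substitute your SDK's package name
pip install "your-sdk[langchain]"
```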
### Usage
### What Gets Traced
- LLM invocations with token counts
- Prompt templates and chains (LCEL)
- Tool calls with inputs and outputs
- Streaming responses
- Errors and exceptions
### LCEL Chain Example
### Tools Example
## LangGraph
### Installation
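As with LangChain, the package name isn't included in this excerpt; per the Supported Frameworks table, LangGraph support ships as the `langgraph` extra (the package name below is a placeholder for your SDK):

```shell
# "your-sdk" is a placeholder -- substitute your SDK's package name
pip install "your-sdk[langgraph]"
```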
### Usage
### Trace Output
Each node in the graph produces spans that follow the semantic conventions. In this example, the `researcher` and `analyst` nodes are not named with agent keywords, so their LLM calls are traced directly as `ai.llm.invoke` spans.
Spans include attributes for the model name, provider, token counts (input and output), and tool names. To get `ai.agent.invoke` spans for multi-agent systems, see Multi-Agent Tracing.
## Combining with Decorators
Auto-instrumentation works alongside the `@observe` and `@endpoint` decorators.
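The SDK's decorators aren't reproduced in this excerpt. As an illustration of the pattern, here is a toy stand-in for `@observe` that opens a span around a function, so work done inside it (including auto-instrumented chain calls) nests under that span; your SDK's real decorator will differ:

```python
import functools

# Stand-in trace buffer and decorator -- illustrative only, not the SDK's API
TRACE: list[str] = []


def observe(fn):
    """Record a span around the wrapped function, the way @observe does."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        TRACE.append(f"span.start:{fn.__name__}")
        try:
            return fn(*args, **kwargs)
        finally:
            TRACE.append(f"span.end:{fn.__name__}")
    return wrapper


@observe
def summarize(text: str) -> str:
    # A real implementation would invoke an auto-instrumented chain here;
    # its spans would become children of the summarize span.
    return text[:10]


summary = summarize("auto-instrumentation nests under this span")
```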
## Supported Frameworks
| Framework | Extra | Status |
|---|---|---|
| LangChain | langchain | Supported |
| LangGraph | langgraph | Supported |