# Auto-Instrumentation
Zero-config tracing for LangChain and LangGraph applications.
## Overview
Auto-instrumentation traces all LLM calls, tool invocations, and chain executions without requiring any changes to your code.
## API at a glance
The full auto-instrumentation surface lives in `rhesis.sdk.telemetry`:
| Call | Behavior |
|---|---|
| `auto_instrument()` | Tries every supported framework and enables those whose package is importable. |
| `auto_instrument("langchain", ...)` | Enables only the named frameworks. Unknown names are logged as warnings but do not raise. |
| `disable_auto_instrument()` | Disables every framework that was previously enabled in this process. |
The function returns the list of frameworks it actually instrumented, so you can log it in your bootstrap:
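For example, a bootstrap sketch that logs the result (the logging setup is illustrative):

```python
import logging

from rhesis.sdk.telemetry import auto_instrument

logging.basicConfig(level=logging.INFO)

# auto_instrument() returns the frameworks it actually enabled,
# e.g. ["langchain", "langgraph"] when both packages are importable.
enabled = auto_instrument()
logging.getLogger(__name__).info("Auto-instrumented frameworks: %s", enabled)
```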
## LangChain
### Installation
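A typical install might look like the following; the distribution name `rhesis-sdk` is an assumption based on the `rhesis.sdk` import path, so adjust it to the actual published package name:

```shell
# Package name is an assumption; the "Extra" column in the
# Supported Frameworks table below names the langchain extra.
pip install "rhesis-sdk[langchain]"
```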
### Usage
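A minimal sketch of enabling tracing before making LangChain calls; the model name and prompt are illustrative, and `ChatOpenAI` comes from the standard `langchain-openai` package:

```python
from langchain_openai import ChatOpenAI

from rhesis.sdk.telemetry import auto_instrument

# Enable the LangChain integration; no other code changes are needed.
auto_instrument("langchain")

llm = ChatOpenAI(model="gpt-4o-mini")
# This invocation is traced automatically: an LLM span with token
# counts is emitted by the globally registered callback handler.
response = llm.invoke("Summarize auto-instrumentation in one sentence.")
print(response.content)
```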
### What Gets Traced
- LLM invocations with token counts
- Prompt templates and chains (LCEL)
- Tool calls with inputs and outputs
- Streaming responses
- Errors and exceptions
### LCEL Chain Example
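A sketch of a traced LCEL chain, assuming `auto_instrument("langchain")` has been called; the prompt and model are illustrative:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from rhesis.sdk.telemetry import auto_instrument

auto_instrument("langchain")

prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

# Each LCEL step (prompt, LLM, parser) appears as a span in the trace.
print(chain.invoke({"topic": "observability"}))
```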
### Tools Example
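A sketch of a traced tool, using the standard `@tool` decorator from `langchain-core`; the tool itself is illustrative:

```python
from langchain_core.tools import tool

from rhesis.sdk.telemetry import auto_instrument

auto_instrument("langchain")

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

# BaseTool.invoke is patched, so this emits a tool span with inputs
# and outputs even though no callbacks are passed explicitly.
result = add.invoke({"a": 2, "b": 3})
```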
Under the hood the integration registers a callback handler globally and patches `BaseTool.invoke` / `BaseTool.ainvoke`, so tool spans fire even when the framework's normal callback plumbing is bypassed by user code.
## LangGraph
### Installation
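As with LangChain, a typical install might look like this; the distribution name `rhesis-sdk` is an assumption based on the `rhesis.sdk` import path:

```shell
# Package name is an assumption; the "Extra" column in the
# Supported Frameworks table below names the langgraph extra.
pip install "rhesis-sdk[langgraph]"
```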
### Usage
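A minimal sketch of a traced graph; the state schema and node are illustrative:

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph

from rhesis.sdk.telemetry import auto_instrument

# Enabling "langgraph" also covers LangChain calls made inside nodes.
auto_instrument("langgraph")

class State(TypedDict):
    value: int

def double(state: State) -> State:
    return {"value": state["value"] * 2}

graph = StateGraph(State)
graph.add_node("double", double)
graph.add_edge(START, "double")
graph.add_edge("double", END)
app = graph.compile()

# CompiledStateGraph.invoke is patched, so this run is traced without
# passing config={"callbacks": [...]}.
print(app.invoke({"value": 21}))
```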
### How LangChain and LangGraph share a callback
LangGraph runs on top of LangChain’s callback system. To avoid emitting duplicate spans when both are present, the LangGraph integration reuses the singleton LangChain callback instead of creating its own. That means:
- `auto_instrument("langgraph")` already covers LangChain chains, LCEL pipelines, tools, and LLM calls invoked from inside graph nodes; you do not need to add `"langchain"` explicitly.
- Calling `auto_instrument("langchain", "langgraph")` is safe and idempotent: the second integration finds the callback already registered and only adds the graph-method patches on top.
- The integration also patches `CompiledStateGraph.invoke/ainvoke/stream/astream`, so every graph entry point injects the callback automatically; there is no need to thread it through `config={"callbacks": [...]}` yourself.
### Trace Output
Each node in the graph produces spans following semantic conventions. When node names such as `researcher` and `analyst` do not contain agent keywords, their LLM calls are traced as `ai.llm.invoke` spans directly.
Spans include attributes for model name, provider, token counts (input/output), and tool names. To get `ai.agent.invoke` spans for multi-agent systems, see Multi-Agent Tracing.
## Combining with Decorators
Auto-instrumentation works alongside `@observe` and `@endpoint`:
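A sketch combining both; the import path for `observe` is an assumption (see the Decorators page for the actual one), and the model call is illustrative:

```python
from langchain_openai import ChatOpenAI

from rhesis.sdk.telemetry import auto_instrument
# Assumed import path for the decorator; see the Decorators docs.
from rhesis.sdk.telemetry import observe

auto_instrument("langchain")

@observe  # wraps your own business logic in a span
def summarize(text: str) -> str:
    llm = ChatOpenAI(model="gpt-4o-mini")
    # The nested LLM call is auto-instrumented, so its span becomes a
    # child of the summarize() span created by the decorator.
    return llm.invoke(f"Summarize: {text}").content
```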
## Manual callback injection (advanced)
In nearly all cases, `auto_instrument()` is enough: the SDK patches the framework entry points and traces fire transparently. For the rare situation where you build a custom wrapper around `CompiledStateGraph` that bypasses the patched methods, you can fetch the active callback and pass it through yourself:
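A sketch of manual injection; `my_custom_wrapper` and `inputs` are hypothetical stand-ins for your own wrapper and its arguments, and the import path for `get_callback` is assumed to match `auto_instrument`:

```python
from rhesis.sdk.telemetry import auto_instrument, get_callback

auto_instrument("langgraph")

callback = get_callback()  # None if LangChain instrumentation is off
if callback is not None:
    # Pass the callback explicitly because this wrapper bypasses the
    # patched CompiledStateGraph entry points.
    result = my_custom_wrapper.run(inputs, config={"callbacks": [callback]})
```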
`get_callback()` returns `None` if LangChain instrumentation has not been enabled in this process.
## Disabling
To turn off every previously enabled framework (for example, before reconfiguring tracing in a test fixture), call `disable_auto_instrument()`:
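A sketch of a pytest fixture that enables tracing for a test and tears it down afterwards (the fixture name is illustrative):

```python
import pytest

from rhesis.sdk.telemetry import auto_instrument, disable_auto_instrument

@pytest.fixture
def traced():
    enabled = auto_instrument()
    yield enabled
    # Tear down every framework enabled above so the next test can
    # reconfigure tracing from a clean slate.
    disable_auto_instrument()
```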
## Supported Frameworks
| Framework | Mechanism | Extra | Status |
|---|---|---|---|
| LangChain | Auto-instrument (callback + tool patch) | `langchain` | Supported |
| LangGraph | Auto-instrument (callback + graph patch) | `langgraph` | Supported |
| Other Python frameworks (CrewAI, OpenAI Agents SDK, LlamaIndex, AutoGen, …) | `@observe.*` decorators | n/a | Use Decorators |
For frameworks not in the auto-instrument list, wrap the functions, tools, or agents you want to trace with `@observe.llm`, `@observe.tool`, `@observe.retrieval`, etc. Without decorators, only top-level inputs and outputs are captured.
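For example, with a framework outside the list you would wrap the relevant call sites yourself; the wrapped functions below are illustrative, and the import path for `observe` is assumed to match the Decorators page:

```python
# Assumed import path for the decorators; see the Decorators docs.
from rhesis.sdk.telemetry import observe

@observe.llm
def call_model(prompt: str) -> str:
    # Your framework's raw model call goes here; the decorator records
    # inputs, outputs, and timing as an LLM span.
    ...

@observe.tool
def search(query: str) -> list[str]:
    # Traced as a tool span with the query as input.
    ...
```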
To add a new framework to the auto-instrument list, see Contributing: SDK Integrations.
Related:
- Setup - Initial configuration
- Decorators - `@observe` and `@endpoint`
- Multi-Agent Tracing - Agent and handoff spans
- Integrations - The four integration layers at a glance
- Connector - Register functions as endpoints