Auto-Instrumentation

Zero-config tracing for LangChain and LangGraph applications.

Overview

Auto-instrumentation traces every LLM call, tool invocation, and chain execution without requiring any changes to your code.

API at a glance

The full auto-instrumentation surface lives in rhesis.sdk.telemetry:

app.py
from rhesis.sdk.telemetry import auto_instrument, disable_auto_instrument

auto_instrument()                              # auto-detect installed frameworks
auto_instrument("langchain")                   # explicit single framework
auto_instrument("langchain", "langgraph")      # explicit multiple frameworks
disable_auto_instrument()                      # turn everything off

| Call | Behavior |
| --- | --- |
| auto_instrument() | Tries every supported framework and enables the ones whose package is importable. |
| auto_instrument("langchain", ...) | Enables only the named frameworks. Unknown names are logged as warnings but do not raise. |
| disable_auto_instrument() | Disables every framework that was previously enabled in this process. |

The function returns the list of frameworks it actually instrumented, so you can log it in your bootstrap:

bootstrap.py
enabled = auto_instrument()
print(f"Tracing enabled for: {enabled}")  # e.g. ['langchain', 'langgraph']

LangChain

Installation

terminal
pip install "rhesis-sdk[langchain]>=0.6.0"

Usage

app.py
from rhesis.sdk import RhesisClient
from rhesis.sdk.telemetry import auto_instrument
from langchain_google_genai import ChatGoogleGenerativeAI

# Initialize Rhesis
client = RhesisClient(
    api_key="your-api-key",
    project_id="your-project-id",
)

# Enable auto-instrumentation
auto_instrument()

# Use LangChain normally - all calls are traced
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash-exp")
response = llm.invoke("Explain quantum computing")

What Gets Traced

  • LLM invocations with token counts
  • Prompt templates and chains (LCEL)
  • Tool calls with inputs and outputs
  • Streaming responses
  • Errors and exceptions

LCEL Chain Example

chain.py
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You explain concepts in a {style} way."),
    ("user", "Explain {topic}"),
])

chain = prompt | llm

# Automatically traced
result = chain.invoke({"topic": "Machine Learning", "style": "simple"})

Tools Example

tools.py
from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression."""
    # Note: eval is convenient for a demo but unsafe on untrusted input.
    return str(eval(expression))

# Tool calls are automatically traced
result = calculator.invoke({"expression": "2 + 2 * 3"})

Under the hood the integration registers a callback handler globally and patches BaseTool.invoke / BaseTool.ainvoke, so tool spans fire even when the framework’s normal callback plumbing is bypassed by user code.
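
The sketch below shows that patching technique in isolation. It is not the SDK's actual implementation, and start_span is a hypothetical stand-in for whatever span API the integration really uses:

patch_sketch.py
from contextlib import contextmanager
from functools import wraps

from langchain_core.tools import BaseTool

@contextmanager
def start_span(name: str):
    # Hypothetical stand-in for the SDK's internal span API.
    print(f"start span: {name}")
    try:
        yield
    finally:
        print(f"end span: {name}")

_original_invoke = BaseTool.invoke

@wraps(_original_invoke)
def _traced_invoke(self, input, config=None, **kwargs):
    # Open a tool span even when no callback handler reaches this call.
    with start_span(f"ai.tool.{self.name}"):
        return _original_invoke(self, input, config=config, **kwargs)

BaseTool.invoke = _traced_invoke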

LangGraph

Installation

terminal
pip install "rhesis-sdk[langgraph]>=0.6.0"

Usage

app.py
from rhesis.sdk import RhesisClient
from rhesis.sdk.telemetry import auto_instrument
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from typing_extensions import TypedDict, Annotated

client = RhesisClient(
    api_key="your-api-key",
    project_id="your-project-id",
)

# Enable LangGraph instrumentation
auto_instrument("langgraph")

# Shared LLM used by the graph nodes below
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash-exp")

class State(TypedDict):
    messages: Annotated[list, add_messages]

def researcher(state: State):
    response = llm.invoke(state["messages"][-1].content)
    return {"messages": [response]}

def analyst(state: State):
    response = llm.invoke(f"Analyze: {state['messages'][-1].content}")
    return {"messages": [response]}

# Build graph
workflow = StateGraph(State)
workflow.add_node("researcher", researcher)
workflow.add_node("analyst", analyst)
workflow.add_edge(START, "researcher")
workflow.add_edge("researcher", "analyst")
workflow.add_edge("analyst", END)

app = workflow.compile()

# All nodes and LLM calls are traced
result = app.invoke({"messages": ["What are the benefits of LangGraph?"]})

How LangChain and LangGraph share a callback

LangGraph runs on top of LangChain’s callback system. To avoid emitting duplicate spans when both are present, the LangGraph integration reuses the singleton LangChain callback instead of creating its own. That means:

  • auto_instrument("langgraph") already covers LangChain chains, LCEL pipelines, tools, and LLM calls invoked from inside graph nodes — you do not need to add "langchain" explicitly.
  • Calling auto_instrument("langchain", "langgraph") is safe and idempotent: the second integration finds the callback already registered and only adds the graph-method patches on top.
  • The integration also patches CompiledStateGraph.invoke / ainvoke / stream / astream, so every graph entry point injects the callback automatically — no need to thread it through config={"callbacks": [...]} yourself.
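
A short sketch of those guarantees, using only the calls documented above:

idempotent.py
from rhesis.sdk.telemetry import auto_instrument

# "langgraph" alone also covers LangChain calls made inside graph nodes.
auto_instrument("langgraph")

# Naming both is safe: the shared callback is reused, not registered twice.
auto_instrument("langchain", "langgraph")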

Trace Output

Each node in the graph produces spans following semantic conventions. In this example, researcher and analyst are not named with agent keywords, so their LLM calls are traced as ai.llm.invoke spans directly.

Spans include attributes for model name, provider, token counts (input/output), and tool names. To get ai.agent.invoke spans for multi-agent systems, see Multi-Agent Tracing.

Combining with Decorators

Auto-instrumentation works alongside @observe and @endpoint:

app.py
from rhesis.sdk import RhesisClient, endpoint
from rhesis.sdk.telemetry import auto_instrument

client = RhesisClient(...)
auto_instrument()

@endpoint()
def chat_handler(input: str) -> dict:
    # This function is traced by @endpoint; internal LangChain calls
    # are traced by auto-instrumentation.
    chain = prompt | llm  # prompt and llm as defined in the earlier examples
    return {"output": chain.invoke({"message": input})}

Manual callback injection (advanced)

In nearly all cases, auto_instrument() is enough — the SDK patches the framework entry points and traces fire transparently. For the rare situation where you build a custom wrapper around CompiledStateGraph that bypasses the patched methods, you can fetch the active callback and pass it through yourself:

custom_runner.py
from rhesis.sdk.telemetry import auto_instrument
from rhesis.sdk.telemetry.integrations.langchain import get_callback

auto_instrument("langgraph")

callback = get_callback()
config = {"callbacks": [callback]} if callback else {}
result = my_custom_invoke(graph, state, config=config)

get_callback() returns None if LangChain instrumentation has not been enabled in this process.

Disabling

To turn off every previously-enabled framework — for example before reconfiguring tracing in a test fixture — call disable_auto_instrument():

teardown.py
from rhesis.sdk.telemetry import disable_auto_instrument

disable_auto_instrument()
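
For instance, a pytest fixture can pair the two calls so each test starts from a clean slate (pytest itself is an assumption here, not something the SDK requires):

conftest.py
import pytest

from rhesis.sdk.telemetry import auto_instrument, disable_auto_instrument

@pytest.fixture
def instrumented():
    # Enable tracing for the test, then tear everything down afterwards.
    enabled = auto_instrument()
    yield enabled
    disable_auto_instrument()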

Supported Frameworks

| Framework | Mechanism | Extra | Status |
| --- | --- | --- | --- |
| LangChain | Auto-instrument (callback + tool patch) | langchain | Supported |
| LangGraph | Auto-instrument (callback + graph patch) | langgraph | Supported |
| Other Python frameworks (CrewAI, OpenAI Agents SDK, LlamaIndex, AutoGen, …) | @observe.* decorators | n/a | Use Decorators |

For frameworks not in the auto-instrument list, wrap the functions, tools, or agents you want to trace with @observe.llm, @observe.tool, @observe.retrieval, etc., as sketched below. Without decorators, only top-level inputs and outputs are captured.
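
A minimal sketch, assuming observe is importable from rhesis.sdk in the same way as endpoint in the earlier examples:

manual_tracing.py
from rhesis.sdk import observe

@observe.llm
def ask_model(prompt: str) -> str:
    # Wrap a call into any framework without an auto-instrument integration.
    ...

@observe.tool
def search_docs(query: str) -> list[str]:
    # Inputs and outputs of this call are captured as a tool span.
    ...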

To add a new framework to the auto-instrument list, see Contributing: SDK Integrations.

