Tracing
OpenTelemetry-based observability for AI applications
Built on the OpenTelemetry standard, Rhesis Tracing provides comprehensive observability for your AI applications. It captures detailed traces of LLM calls, tool invocations, and retrieval operations, with a semantic layer designed specifically for AI workloads.
Key Features
- OpenTelemetry Standard - Built on industry-standard OTLP protocol
- AI Semantic Layer - Framework-agnostic naming conventions for AI operations
- Two Operating Modes - Test mode (linked to test runs) and production mode (live monitoring)
- Auto-Instrumentation - Zero-config tracing for LangChain and LangGraph
- Convenience Decorators - Pre-configured decorators for common AI operations
See Tracing in Action
Watch this short video to see how Rhesis Tracing captures and visualizes AI operations.
Core Concepts
What is a Trace?
A trace represents the complete journey of a single request through your application. It captures everything that happens from when a user sends a message to when they receive a response.
What is a Span?
A span represents a single operation within a trace. Each function call, LLM invocation, or tool execution creates a span with:
- Name - What operation occurred (e.g., ai.llm.invoke, function.chat)
- Duration - How long it took
- Attributes - Metadata like model name, token counts, or tool parameters
- Status - Success or error
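For example, a span created directly with the OpenTelemetry Python API carries the same fields. This is only an illustration; the attribute keys below are placeholders, not part of the Rhesis semantic layer:

```python
from opentelemetry import trace

tracer = trace.get_tracer("docs-example")

# Duration is measured automatically; an escaping exception marks the span as an error.
with tracer.start_as_current_span("ai.llm.invoke") as span:
    span.set_attribute("llm.model", "gpt-4o")     # illustrative attribute key
    span.set_attribute("llm.tokens.total", 512)   # illustrative attribute key
    # ... invoke your model here ...
```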
Trace Hierarchy
Spans are organized in a parent-child hierarchy. The root span represents the entry point, with child spans for each nested operation:
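The tree below is an illustrative example; span names and timings will vary with your application:

```
function.chat (1.24s)
├── ai.retrieval.search (130ms)
├── ai.llm.invoke (950ms)
└── ai.tool.execute (85ms)
```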
In this example:
- The trace captures a complete chat interaction
- The root span (function.chat) is the entry point
- Child spans show each nested operation with timing
Traces Dashboard
View all traces from your application in the Rhesis dashboard. Each row shows the operation name, linked endpoint, duration, span count, status, and environment.

Quick Start
Endpoints Are Automatically Traced
Functions decorated with @endpoint are automatically traced. See the Connector documentation for details on endpoint registration.
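A minimal sketch of a traced endpoint is shown below; the import path and function shown here are assumptions, so check the Connector documentation for the exact API:

```python
from rhesis.sdk import endpoint  # import path is an assumption; see the Connector docs

@endpoint  # each invocation produces a trace rooted at this function
def chat(message: str) -> str:
    reply = f"You said: {message}"
    return reply
```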
How It Works
Traces are sent via HTTP (not WebSocket); spans are batched and exported every 5 seconds.
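The SDK configures this export pipeline for you. For reference, the equivalent setup with the raw OpenTelemetry Python SDK looks roughly like this (the collector URL is a placeholder):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
exporter = OTLPSpanExporter(endpoint="https://your-rhesis-host/v1/traces")  # placeholder URL
# Spans are buffered in memory and flushed over HTTP every 5 seconds.
provider.add_span_processor(BatchSpanProcessor(exporter, schedule_delay_millis=5000))
trace.set_tracer_provider(provider)
```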
Operating Modes
Test Mode
Traces originate from test runs triggered through the platform or SDK. Each trace preserves links to:
- The endpoint being tested
- The test run that initiated the trace
- The specific test case being executed
Click any trace to view its span hierarchy, timing breakdown, and linked test results:

Production Mode
Traces originate from normal application operation. They capture live behavior for monitoring and performance analysis.
Next Steps
- Getting Started - Configure tracing in your application
- Decorators - Learn about @observe and @endpoint
- Connector - Register functions as testable endpoints