Tracing Decorators

Decorators

Two decorators are available: @observe for tracing-only and @endpoint for remote testing with automatic tracing.

@endpoint

Functions decorated with @endpoint are automatically traced. Use this for functions that need remote testing capability.

app.py
from rhesis.sdk import endpoint

@endpoint()
def chat(input: str, session_id: str | None = None) -> dict:
    # Automatically traced + registered for remote testing
    return {"output": process_message(input), "session_id": session_id}

See the Connector documentation for full details on @endpoint.

Disabling Tracing

To register for remote testing without tracing:

app.py
@endpoint(observe=False)
def no_trace_endpoint(x: int) -> int:
    return x * 2

@observe

Use @observe for functions that only need tracing (no remote testing):

app.py
from rhesis.sdk import observe

@observe()
def internal_helper(data: str) -> str:
    return data.upper()
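Conceptually, a tracing decorator like @observe wraps the function and records a span (name, duration, status) around each call. The sketch below is illustrative only, not the rhesis.sdk internals; SPANS is a stand-in for a real trace exporter.

```python
# Conceptual sketch only -- NOT the rhesis.sdk implementation.
import functools
import time

SPANS = []  # stand-in for a real trace exporter

def observe_sketch(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            SPANS.append({"name": func.__name__, "status": "ok",
                          "duration_s": time.perf_counter() - start})
            return result
        except Exception:
            # errors are recorded, then re-raised unchanged
            SPANS.append({"name": func.__name__, "status": "error",
                          "duration_s": time.perf_counter() - start})
            raise
    return wrapper

@observe_sketch
def internal_helper(data: str) -> str:
    return data.upper()

print(internal_helper("hello"))  # HELLO
print(SPANS[0]["name"])          # internal_helper
```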

Convenience Decorators

Pre-configured decorators for common AI operations, organized by category.

AI Model Operations

@observe.llm()

For language model calls.

app.py
@observe.llm(provider="openai", model="gpt-4")
def generate(prompt: str) -> str:
    return openai.chat.completions.create(...)

Parameter   Required   Description
provider    Yes        Provider name (openai, anthropic, google)
model       Yes        Model name (gpt-4, claude-3-opus)

@observe.embedding()

For embedding generation.

app.py
@observe.embedding(model="text-embedding-ada-002", dimensions=1536)
def embed_texts(texts: list) -> list:
    return embedding_model.encode(texts)

Parameter    Required   Description
model        Yes        Embedding model name
dimensions   No         Vector dimensions

Tool & Retrieval

@observe.tool()

For tool/function execution.

app.py
@observe.tool(name="weather_api", tool_type="http")
def get_weather(city: str) -> dict:
    return requests.get(f"api/{city}").json()

Parameter   Required   Description
name        Yes        Tool name
tool_type   Yes        Type (http, function, database)

@observe.retrieval()

For vector search and knowledge base queries.

app.py
@observe.retrieval(backend="pinecone", top_k=5)
def search_docs(query: str) -> list:
    return vector_db.search(query, k=5)

Parameter   Required   Description
backend     Yes        Backend name (pinecone, weaviate, chroma)
top_k       No         Number of results

@observe.rerank()

For reranking search results.

app.py
@observe.rerank(model="rerank-v1", top_n=10)
def rerank_documents(query: str, docs: list) -> list:
    return reranker.rerank(query, docs, top_n=10)

Parameter   Required   Description
model       Yes        Reranker model name
top_n       No         Number of results to return

Quality & Safety

@observe.evaluation()

For response evaluation and scoring.

app.py
@observe.evaluation(metric="relevance", evaluator="gpt-4")
def evaluate_relevance(query: str, response: str) -> float:
    return evaluator.score_relevance(query, response)

Parameter   Required   Description
metric      Yes        Metric name (relevance, faithfulness)
evaluator   Yes        Evaluator model/service

@observe.guardrail()

For content safety and moderation.

app.py
@observe.guardrail(guardrail_type="content_safety", provider="openai")
def check_content_safety(text: str) -> bool:
    return safety_checker.is_safe(text)

Parameter        Required   Description
guardrail_type   Yes        Type (content_safety, pii_detection, toxicity)
provider         Yes        Provider name

Data Processing

@observe.transform()

For data transformation and preprocessing.

app.py
@observe.transform(transform_type="text", operation="clean")
def preprocess_text(text: str) -> str:
    return clean_and_normalize(text)

Parameter        Required   Description
transform_type   Yes        Type (text, image, audio)
operation        Yes        Operation (clean, normalize, tokenize)
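The convenience decorators above all follow one pattern: a generic tracing decorator plus preset span attributes for the category. The sketch below models that idea; it is a conceptual illustration, not the SDK's actual implementation (SPANS and this local observe are stand-ins).

```python
# Conceptual sketch -- not the rhesis.sdk implementation.
import functools

SPANS = []  # stand-in for a real trace exporter

def observe(span_type="function", **attributes):
    """Generic tracing decorator: record a span with the given attributes."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            SPANS.append({"name": func.__name__,
                          "type": span_type, **attributes})
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Category-specific variants are just presets over the generic decorator.
observe.llm = functools.partial(observe, span_type="llm")
observe.retrieval = functools.partial(observe, span_type="retrieval")

@observe.llm(provider="openai", model="gpt-4")
def generate(prompt: str) -> str:
    return f"echo: {prompt}"

generate("hi")
print(SPANS[0]["type"], SPANS[0]["model"])  # llm gpt-4
```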

Comparison

Feature            @observe           @endpoint
Traces execution   Yes                Yes (default)
Remote testing     No                 Yes
Use case           Internal helpers   Public APIs, endpoints

Usage Pattern

app.py
from rhesis.sdk import endpoint, observe

# Public API - remote testing + automatic tracing
@endpoint()
def chat(input: str) -> dict:
    context = build_context(input)
    response = generate_response(input, context)
    return {"output": response}

# Internal helper - tracing only
@observe()
def build_context(message: str) -> list:
    return search_docs(message)

# LLM call - convenience decorator
@observe.llm(provider="openai", model="gpt-4")
def generate_response(message: str, context: list) -> str:
    return llm.generate(message, context=context)

# Retrieval - convenience decorator
@observe.retrieval(backend="pinecone", top_k=5)
def search_docs(query: str) -> list:
    return vector_db.search(query)
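To exercise the call flow of a pipeline like the one above without the SDK or any model/vector-store clients, the decorators and backends can be replaced with local stubs. Everything below is a stand-in for testing the wiring, not real rhesis.sdk behavior.

```python
# Local stand-ins so the pipeline's call flow can be run anywhere.
CALLS = []  # records which functions ran, in order

def endpoint():                      # stub for rhesis.sdk.endpoint
    def decorator(func):
        return func
    return decorator

def observe():                       # stub for rhesis.sdk.observe
    def decorator(func):
        return func
    return decorator

@endpoint()
def chat(input: str) -> dict:
    CALLS.append("chat")
    context = build_context(input)
    return {"output": f"answer using {len(context)} docs"}

@observe()
def build_context(message: str) -> list:
    CALLS.append("build_context")
    return ["doc-1", "doc-2"]         # fake retrieval results

result = chat("hello")
print(result["output"])  # answer using 2 docs
print(CALLS)             # ['chat', 'build_context']
```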

Next: Learn about custom spans for advanced attribute configuration.