
Custom Spans

Create custom spans with specific names and attributes for advanced observability needs.

Custom Span Names

Use the span_name parameter to set a semantic span name:

app.py
from rhesis.sdk import observe

# Custom span name following ai.<domain>.<action> pattern
@observe(span_name="ai.llm.invoke")
def my_custom_llm_call(prompt: str) -> str:
    return llm.complete(prompt)
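The `ai.<domain>.<action>` pattern can be checked mechanically. A minimal sketch of such a check (the regex and helper below are illustrative, not part of the SDK):

```python
import re

# Illustrative pattern for ai.<domain>.<action> span names:
# "ai." followed by one domain segment and one action segment.
SPAN_NAME_PATTERN = re.compile(r"ai\.[a-z_]+\.[a-z_]+")

def is_valid_span_name(name: str) -> bool:
    """Return True if `name` follows the ai.<domain>.<action> convention."""
    return SPAN_NAME_PATTERN.fullmatch(name) is not None

print(is_valid_span_name("ai.llm.invoke"))    # True
print(is_valid_span_name("my_function_call")) # False
```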

Custom Attributes

Pass additional attributes directly to the decorator:

app.py
from rhesis.sdk import observe
from rhesis.sdk.telemetry import AIAttributes

@observe(
    span_name="ai.llm.invoke",
    **{
        AIAttributes.MODEL_PROVIDER: "custom-provider",
        AIAttributes.MODEL_NAME: "custom-model",
        AIAttributes.LLM_TEMPERATURE: 0.7,
    }
)
def custom_llm(prompt: str) -> str:
    return custom_model.generate(prompt)

Attribute Constants

Import attribute constants from rhesis.sdk.telemetry:

app.py
from rhesis.sdk.telemetry import AIAttributes, AIEvents

Model Attributes

Constant | Key | Description
MODEL_PROVIDER | ai.model.provider | Provider name (openai, anthropic)
MODEL_NAME | ai.model.name | Model identifier (gpt-4, claude-3)

LLM Attributes

Constant | Key | Description
LLM_TOKENS_INPUT | ai.llm.tokens.input | Input token count
LLM_TOKENS_OUTPUT | ai.llm.tokens.output | Output token count
LLM_TOKENS_TOTAL | ai.llm.tokens.total | Total token count
LLM_TEMPERATURE | ai.llm.temperature | Temperature parameter
LLM_MAX_TOKENS | ai.llm.max_tokens | Max tokens parameter

Tool Attributes

Constant | Key | Description
TOOL_NAME | ai.tool.name | Tool name
TOOL_TYPE | ai.tool.type | Type (http, function, database)

Retrieval Attributes

Constant | Key | Description
RETRIEVAL_BACKEND | ai.retrieval.backend | Backend (pinecone, weaviate)
RETRIEVAL_TOP_K | ai.retrieval.top_k | Number of results

Embedding Attributes

Constant | Key | Description
EMBEDDING_MODEL | ai.embedding.model | Model name
EMBEDDING_VECTOR_SIZE | ai.embedding.vector.size | Vector dimensions

Event Names

Constant | Value | Description
AIEvents.PROMPT | ai.prompt | Prompt sent to LLM
AIEvents.COMPLETION | ai.completion | LLM completion
AIEvents.TOOL_INPUT | ai.tool.input | Tool input data
AIEvents.TOOL_OUTPUT | ai.tool.output | Tool output data
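Events capture point-in-time payloads on a span (the prompt sent, the completion received), while attributes describe the span as a whole. A minimal stand-in illustrating the distinction (the span class here is a sketch, not the SDK's; only the event name constants come from the table above):

```python
import time

# Event name constants mirroring the table above.
class AIEvents:
    PROMPT = "ai.prompt"
    COMPLETION = "ai.completion"
    TOOL_INPUT = "ai.tool.input"
    TOOL_OUTPUT = "ai.tool.output"

class SketchSpan:
    """Illustrative span: attributes describe the span; events are timestamped records."""
    def __init__(self):
        self.attributes = {}
        self.events = []

    def add_event(self, name: str, payload: dict):
        self.events.append({"name": name, "timestamp": time.time(), "payload": payload})

span = SketchSpan()
span.attributes["ai.model.name"] = "gpt-4"
span.add_event(AIEvents.PROMPT, {"text": "What is the weather?"})
span.add_event(AIEvents.COMPLETION, {"text": "Sunny and mild."})
print([e["name"] for e in span.events])  # ['ai.prompt', 'ai.completion']
```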

Helper Functions

Use helper functions to create attribute dictionaries:

app.py
from rhesis.sdk.telemetry import create_llm_attributes, create_tool_attributes

# Create LLM attributes
attrs = create_llm_attributes(
    provider="openai",
    model_name="gpt-4",
    tokens_input=150,
    tokens_output=200,
)

# Create tool attributes
tool_attrs = create_tool_attributes(
    tool_name="weather_api",
    tool_type="http",
)
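The helpers map keyword arguments onto the `ai.*` attribute keys from the tables above. A plausible sketch of what `create_llm_attributes` does internally (the real implementation may differ):

```python
def sketch_create_llm_attributes(provider=None, model_name=None,
                                 tokens_input=None, tokens_output=None):
    """Illustrative: map keyword arguments onto the ai.* attribute keys."""
    mapping = {
        "ai.model.provider": provider,
        "ai.model.name": model_name,
        "ai.llm.tokens.input": tokens_input,
        "ai.llm.tokens.output": tokens_output,
    }
    # Drop arguments that were not supplied.
    return {key: value for key, value in mapping.items() if value is not None}

attrs = sketch_create_llm_attributes(provider="openai", model_name="gpt-4",
                                     tokens_input=150, tokens_output=200)
print(attrs["ai.llm.tokens.input"])  # 150
```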

Building Custom Decorators

Create your own convenience decorators by wrapping @observe:

decorators.py
from rhesis.sdk import observe
from rhesis.sdk.telemetry import AIAttributes
from rhesis.sdk.telemetry.schemas import AIOperationType

def my_custom_llm(provider: str, model: str, **extra):
    """Custom decorator for your specific LLM setup."""
    return observe(
        span_name=AIOperationType.LLM_INVOKE,
        **{
            AIAttributes.MODEL_PROVIDER: provider,
            AIAttributes.MODEL_NAME: model,
            "custom.attribute": "my-value",
            **extra,
        }
    )

# Usage
@my_custom_llm(provider="my-provider", model="my-model")
def generate(prompt: str) -> str:
    return my_llm.complete(prompt)
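The wrapping pattern itself is plain Python: a factory bakes fixed attributes into the dict, merges caller extras last so they win on key conflicts, and returns a decorator. A self-contained sketch of that merge behavior, with `observe` replaced by an illustrative stand-in that just records the span name and attributes on the function:

```python
import functools

def sketch_observe(span_name=None, **attributes):
    """Stand-in for @observe: records the span name and attributes on the function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        wrapper.span_name = span_name
        wrapper.span_attributes = attributes
        return wrapper
    return decorator

def my_custom_llm(provider: str, model: str, **extra):
    """Factory: fixed attributes first, caller extras last so they override."""
    return sketch_observe(
        span_name="ai.llm.invoke",
        **{
            "ai.model.provider": provider,
            "ai.model.name": model,
            "custom.attribute": "my-value",
            **extra,
        },
    )

@my_custom_llm(provider="my-provider", model="my-model",
               **{"custom.attribute": "override"})
def generate(prompt: str) -> str:
    return prompt.upper()

print(generate.span_attributes["custom.attribute"])  # override
print(generate("hello"))                             # HELLO
```

Because `**extra` is unpacked after the fixed entries, a caller-supplied key such as `custom.attribute` replaces the baked-in default rather than raising a conflict.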

Example: Full Custom Implementation

app.py
from rhesis.sdk import RhesisClient, observe
from rhesis.sdk.telemetry import AIAttributes, AIEvents
from rhesis.sdk.telemetry.schemas import AIOperationType

client = RhesisClient(
    api_key="your-api-key",
    project_id="your-project-id",
    environment="development",
)

# Custom LLM decorator with all attributes
@observe(
    span_name=AIOperationType.LLM_INVOKE,
    **{
        AIAttributes.MODEL_PROVIDER: "my-provider",
        AIAttributes.MODEL_NAME: "my-model-v2",
        AIAttributes.LLM_TEMPERATURE: 0.8,
        AIAttributes.LLM_MAX_TOKENS: 1000,
        "custom.deployment": "us-east-1",
        "custom.version": "2.0",
    }
)
def generate_with_custom_model(prompt: str) -> str:
    response = my_model.complete(prompt)
    return response.text

Next: Learn about auto-instrumentation for zero-config tracing.