
Semantic Conventions

Rhesis uses a semantic layer for consistent, framework-agnostic span naming across all AI operations.

Semantic conventions answer a specific question: what actually happened inside this span? Rather than naming spans after framework constructs like chains, pipelines, or agents, Rhesis names spans after the primitive operation they perform — an LLM call, a tool execution, a retrieval. This makes traces readable regardless of which framework or orchestration layer produced them, and lets Rhesis correctly interpret, visualize, and evaluate spans across different stacks.

The same span from LangChain, LlamaIndex, or a custom implementation looks identical in the trace viewer as long as it follows the convention.

Naming Pattern

All span names follow the pattern ai.&lt;domain&gt;.&lt;action&gt;; for operations without a distinct action, the final segment is omitted (e.g. ai.retrieval, ai.rerank).

Valid Span Names

Primitive Operations

| Span Name | Constant | Description |
| --- | --- | --- |
| ai.llm.invoke | AIOperationType.LLM_INVOKE | LLM API call |
| ai.tool.invoke | AIOperationType.TOOL_INVOKE | Tool/function execution |
| ai.retrieval | AIOperationType.RETRIEVAL | Information retrieval |
| ai.embedding.generate | AIOperationType.EMBEDDING_GENERATE | Generate embeddings |
| ai.rerank | AIOperationType.RERANK | Reranking operation |
| ai.evaluation | AIOperationType.EVALUATION | Evaluation operation |
| ai.guardrail | AIOperationType.GUARDRAIL | Safety check |
| ai.transform | AIOperationType.TRANSFORM | Data transformation |

Agent Operations

| Span Name | Constant | Description |
| --- | --- | --- |
| ai.agent.invoke | AIOperationType.AGENT_INVOKE | Agent execution |
| ai.agent.handoff | AIOperationType.AGENT_HANDOFF | Transition between agents |

Using Constants

app.py

```python
from rhesis.sdk.telemetry.schemas import AIOperationType

# Primitive operations
AIOperationType.LLM_INVOKE          # "ai.llm.invoke"
AIOperationType.TOOL_INVOKE         # "ai.tool.invoke"
AIOperationType.RETRIEVAL           # "ai.retrieval"
AIOperationType.EMBEDDING_GENERATE  # "ai.embedding.generate"
AIOperationType.RERANK              # "ai.rerank"
AIOperationType.EVALUATION          # "ai.evaluation"
AIOperationType.GUARDRAIL           # "ai.guardrail"
AIOperationType.TRANSFORM           # "ai.transform"

# Agent operations
AIOperationType.AGENT_INVOKE        # "ai.agent.invoke"
AIOperationType.AGENT_HANDOFF       # "ai.agent.handoff"
```
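Since each constant's value is the span name itself (as the comments above show), the constants can be passed wherever a span name string is expected. A minimal sketch with a stand-in class, since the exact enum type is SDK-internal:

```python
# Stand-in mirroring two of the constants above; the real class lives in
# rhesis.sdk.telemetry.schemas. Shown only to illustrate that a constant's
# value is the span name itself.
class AIOperationType:
    LLM_INVOKE = "ai.llm.invoke"
    RETRIEVAL = "ai.retrieval"

def start_span(name: str) -> str:
    # Placeholder for tracer.start_as_current_span(name); just echoes the name.
    return name

print(start_span(AIOperationType.LLM_INVOKE))  # prints "ai.llm.invoke"
```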

Forbidden Span Names

Framework composition concepts are rejected with HTTP 422:

| Invalid Name | Reason |
| --- | --- |
| ai.chain.execute | "Chain" is an orchestration pattern |
| ai.workflow.start | "Workflow" is a composition |
| ai.pipeline.process | "Pipeline" is infrastructure |

Note: ai.agent.* spans are valid — see Agent Operations above.
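The two rules above — the ai.&lt;domain&gt;.&lt;action&gt; pattern plus the rejected composition terms — can be sketched as a client-side pre-check. This is an illustrative approximation, not the server's actual validation logic:

```python
import re

# Approximate check: "ai." + domain, optionally "." + action, with the
# orchestration terms from the table above rejected outright.
FORBIDDEN_DOMAINS = {"chain", "workflow", "pipeline"}
SPAN_NAME_RE = re.compile(r"^ai\.([a-z_]+)(?:\.[a-z_]+)?$")

def is_valid_span_name(name: str) -> bool:
    match = SPAN_NAME_RE.fullmatch(name)
    return bool(match) and match.group(1) not in FORBIDDEN_DOMAINS

print(is_valid_span_name("ai.llm.invoke"))     # True
print(is_valid_span_name("ai.retrieval"))      # True  (action segment omitted)
print(is_valid_span_name("ai.chain.execute"))  # False ("chain" is forbidden)
```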

Attribute Constants

Use AIAttributes for span attributes:

app.py

```python
from rhesis.sdk.telemetry.attributes import AIAttributes

span.set_attribute(AIAttributes.MODEL_PROVIDER, "openai")
span.set_attribute(AIAttributes.MODEL_NAME, "gpt-4")
span.set_attribute(AIAttributes.LLM_TOKENS_INPUT, 150)
span.set_attribute(AIAttributes.LLM_TOKENS_OUTPUT, 200)
```

Model Attributes

| Constant | Key | Description |
| --- | --- | --- |
| MODEL_PROVIDER | ai.model.provider | Provider (openai, anthropic) |
| MODEL_NAME | ai.model.name | Model identifier (gpt-4) |

LLM Attributes

| Constant | Key | Description |
| --- | --- | --- |
| LLM_TOKENS_INPUT | ai.llm.tokens.input | Input token count |
| LLM_TOKENS_OUTPUT | ai.llm.tokens.output | Output token count |
| LLM_TOKENS_TOTAL | ai.llm.tokens.total | Total token count |
| LLM_TEMPERATURE | ai.llm.temperature | Temperature parameter |
| LLM_MAX_TOKENS | ai.llm.max_tokens | Max tokens parameter |

Tool Attributes

| Constant | Key | Description |
| --- | --- | --- |
| TOOL_NAME | ai.tool.name | Name of the tool |
| TOOL_TYPE | ai.tool.type | Type (http, function, database) |

Retrieval Attributes

| Constant | Key | Description |
| --- | --- | --- |
| RETRIEVAL_BACKEND | ai.retrieval.backend | Backend (pinecone, weaviate) |
| RETRIEVAL_TOP_K | ai.retrieval.top_k | Number of results |

Embedding Attributes

| Constant | Key | Description |
| --- | --- | --- |
| EMBEDDING_MODEL | ai.embedding.model | Model name |
| EMBEDDING_VECTOR_SIZE | ai.embedding.vector.size | Vector dimensions |

Agent Attributes

| Constant | Key | Description |
| --- | --- | --- |
| AGENT_NAME | ai.agent.name | Name of the agent |
| AGENT_HANDOFF_FROM | ai.agent.handoff.from | Agent initiating the handoff |
| AGENT_HANDOFF_TO | ai.agent.handoff.to | Agent receiving the handoff |
| AGENT_INPUT_CONTENT | ai.agent.input | Agent input |
| AGENT_OUTPUT_CONTENT | ai.agent.output | Agent output |
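A handoff span would typically carry the from/to attributes together. A sketch using the raw keys from the table and a stand-in span object (real code would call set_attribute on an OpenTelemetry span inside an ai.agent.handoff span, using the AIAttributes constants; the agent names here are hypothetical):

```python
# Minimal stand-in span for illustration; it only records attributes.
class SketchSpan:
    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

span = SketchSpan()
# Raw keys copied from the table; AGENT_HANDOFF_FROM/_TO resolve to these strings.
span.set_attribute("ai.agent.handoff.from", "orchestrator")
span.set_attribute("ai.agent.handoff.to", "research_agent")
```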

Operation Type Values

| Constant | Value | Description |
| --- | --- | --- |
| OPERATION_LLM_INVOKE | llm.invoke | LLM operation |
| OPERATION_TOOL_INVOKE | tool.invoke | Tool operation |
| OPERATION_RETRIEVAL | retrieval | Retrieval operation |
| OPERATION_EMBEDDING_CREATE | embedding.create | Embedding operation |
| OPERATION_RERANK | rerank | Rerank operation |
| OPERATION_EVALUATION | evaluation | Evaluation operation |
| OPERATION_GUARDRAIL | guardrail | Guardrail operation |
| OPERATION_TRANSFORM | transform | Transform operation |
| OPERATION_AGENT_INVOKE | agent.invoke | Agent invocation |
| OPERATION_AGENT_HANDOFF | agent.handoff | Agent handoff |

Events

Use AIEvents for span events:

app.py

```python
from rhesis.sdk.telemetry.attributes import AIEvents, AIAttributes

with tracer.start_as_current_span("ai.llm.invoke") as span:
    # Prompt event
    span.add_event(
        name=AIEvents.PROMPT,
        attributes={
            AIAttributes.PROMPT_ROLE: "user",
            AIAttributes.PROMPT_CONTENT: prompt_text,
        }
    )

    response = llm.invoke(prompt_text)

    # Completion event
    span.add_event(
        name=AIEvents.COMPLETION,
        attributes={
            AIAttributes.COMPLETION_CONTENT: response.text,
        }
    )
```

Event Names

| Constant | Value | Description |
| --- | --- | --- |
| AIEvents.PROMPT | ai.prompt | Prompt sent to LLM |
| AIEvents.COMPLETION | ai.completion | LLM completion |
| AIEvents.TOOL_INPUT | ai.tool.input | Tool input |
| AIEvents.TOOL_OUTPUT | ai.tool.output | Tool output |
| AIEvents.RETRIEVAL_QUERY | ai.retrieval.query | Retrieval query |
| AIEvents.RETRIEVAL_RESULTS | ai.retrieval.results | Retrieval results |
| AIEvents.AGENT_INPUT | ai.agent.input | Agent input |
| AIEvents.AGENT_OUTPUT | ai.agent.output | Agent output |

Trace Hierarchy

Spans nest inside each other to represent how operations compose at runtime. The examples below show two common shapes — a RAG pipeline and a multi-agent system.

Single-Agent (RAG)

A retrieval-augmented generation flow: the user query is first embedded, the embedding is used to retrieve relevant context, and then a single LLM call synthesizes the final response.
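The nesting described above can be sketched with a minimal stand-in tracer (the real code would use an OpenTelemetry tracer; the root span and its children are illustrative):

```python
from contextlib import contextmanager

class SketchTracer:
    """Minimal stand-in that records (depth, span name) pairs in start order."""
    def __init__(self):
        self._depth = 0
        self.tree = []

    @contextmanager
    def start_as_current_span(self, name):
        self.tree.append((self._depth, name))
        self._depth += 1
        try:
            yield
        finally:
            self._depth -= 1

tracer = SketchTracer()

with tracer.start_as_current_span("ai.agent.invoke"):  # root span, chosen for illustration
    with tracer.start_as_current_span("ai.embedding.generate"):
        pass  # embed the user query
    with tracer.start_as_current_span("ai.retrieval"):
        pass  # vector search using the query embedding
    with tracer.start_as_current_span("ai.llm.invoke"):
        pass  # synthesize the final answer from retrieved context

for depth, name in tracer.tree:
    print("  " * depth + name)
```

This prints the single-agent RAG shape: one root with three sibling children (ai.embedding.generate, ai.retrieval, ai.llm.invoke) in execution order.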

Multi-Agent

An orchestrator agent plans and delegates. When it needs a specialist, it hands off via ai.agent.handoff — the specialist then runs its own set of operations independently.
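One plausible span tree for this flow, written out as (depth, span name) pairs; the specialist's particular operations are illustrative, not prescribed:

```python
# Sketch of a multi-agent trace shape: the handoff span sits between the
# orchestrator's work and the specialist's own subtree.
trace = [
    (0, "ai.agent.invoke"),   # orchestrator plans and delegates
    (1, "ai.llm.invoke"),     #   orchestrator's planning call
    (1, "ai.agent.handoff"),  #   transition to the specialist
    (1, "ai.agent.invoke"),   #   specialist runs independently
    (2, "ai.retrieval"),      #     specialist fetches context
    (2, "ai.llm.invoke"),     #     specialist produces its answer
]
for depth, name in trace:
    print("  " * depth + name)
```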


Next: Learn about auto-instrumentation for zero-config tracing.