
Connector

Development Tools

The SDK integration layer that connects your LLM application to Rhesis via the @endpoint decorator, enabling automatic test execution and tracing.

Also known as: SDK connector, @endpoint decorator

Overview

The Rhesis Connector is the bridge between your existing LLM application and the Rhesis testing platform. By adding a single decorator to your application's entry point, you connect it to Rhesis in under a minute—no API clients to configure, no request/response mapping to write by hand.

The @endpoint Decorator

The decorator is the primary way to register your LLM application with Rhesis:

python
from openai import OpenAI

from rhesis.sdk.decorators import endpoint

# Standard OpenAI Python client; reads credentials from OPENAI_API_KEY.
openai_client = OpenAI()

@endpoint
def my_chatbot(input: str) -> str:
      response = openai_client.chat.completions.create(
          model="gpt-4",
          messages=[{"role": "user", "content": input}]
      )
      return response.choices[0].message.content

Once decorated, Rhesis can:

  • Send test inputs directly to your function
  • Capture the output for evaluation
  • Record a full trace of the execution
  • Evaluate the output against your metrics

Dependency Injection with bind

If your function requires dependencies (database connections, config objects, etc.), use the bind parameter to inject them without polluting the function signature:

python
@endpoint(bind={"db": my_database_connection})
def my_chatbot(input: str, db=None) -> str:
      context = db.fetch_context(input)
      return generate_response(input, context)
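Conceptually, bind pre-fills keyword arguments so that neither the caller nor the test harness ever has to supply them. The following is a minimal sketch of that injection pattern as a toy decorator; it is not the actual Rhesis SDK implementation, just an illustration of the mechanism:

```python
import functools

def endpoint_with_bind(bind=None):
    """Toy illustration of the injection pattern behind @endpoint(bind=...).

    NOT the Rhesis SDK implementation -- just a sketch of how a decorator
    can pre-fill keyword arguments so callers never need to supply them.
    """
    bound = dict(bind or {})

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Injected values act as defaults; explicit kwargs still win.
            merged = {**bound, **kwargs}
            return func(*args, **merged)
        return wrapper
    return decorator

class FakeDB:
    """Stand-in dependency for the sketch."""
    def fetch_context(self, query):
        return f"context for {query!r}"

@endpoint_with_bind(bind={"db": FakeDB()})
def my_chatbot(input: str, db=None) -> str:
    return db.fetch_context(input)
```

Because injected values are merged as defaults, an explicit keyword argument passed at call time still takes precedence in this sketch.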

The @observe Decorator

For tracing internal functions, use @observe:

python
from rhesis.sdk.decorators import observe

@observe
def retrieve_documents(query: str) -> list[str]:
      # This function's I/O will appear as a child span
      return vector_db.search(query)
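Under the hood, a tracing decorator of this kind wraps the function and records its input, output, and timing as a span. The sketch below is a toy stand-in for illustration, not the Rhesis implementation (which ships spans to the platform rather than a local list):

```python
import functools
import time

SPANS = []  # collected spans; a real tracer would ship these to a backend

def observe_sketch(func):
    """Toy stand-in for @observe: records each call's input, output,
    and duration as a span. Illustrative only."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        SPANS.append({
            "name": func.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@observe_sketch
def retrieve_documents(query: str) -> list[str]:
    # Stand-in for a vector-database lookup.
    return [f"doc about {query}"]
```

Calling retrieve_documents("cats") returns the result as usual while appending a span with the function name, I/O, and latency to SPANS.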

Bidirectional Flow

The connector is bidirectional: Rhesis sends test inputs to your function and receives outputs back for evaluation. For multi-turn tests, the connector also handles conversation state, passing the conversation ID between turns automatically.
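In practice this means your function needs somewhere to receive the conversation identifier. A hedged sketch of a multi-turn endpoint follows; the parameter name conversation_id and the in-memory history are assumptions for illustration, so check the Rhesis documentation for the exact field the connector passes:

```python
# Per-conversation turn history, keyed by conversation ID.
# A real application would use the LLM provider's message history instead.
HISTORY: dict[str, list[str]] = {}

def my_chatbot(input: str, conversation_id: str) -> str:
    # `conversation_id` is an assumed parameter name for this sketch.
    turns = HISTORY.setdefault(conversation_id, [])
    turns.append(input)
    # A real implementation would send the accumulated turns to the LLM.
    return f"turn {len(turns)}: reply to {input!r}"
```

Each test turn arrives with the same conversation ID, so the function can accumulate context across turns.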

Auto-Mapping

The connector includes intelligent auto-mapping that automatically detects common input and output field names, reducing the configuration needed for most LLM application patterns.

Best Practices

  • Test your endpoint connection before running a full test set to catch configuration issues early
  • Use @observe on expensive sub-functions to trace their latency and token usage
  • Keep the decorated function's signature simple and inject dependencies via the bind parameter
  • Verify that the connector correctly handles your application's conversation ID field for multi-turn tests
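For the first practice above, the cheapest smoke test is to call your function directly before registering a full test set. This assumes the decorated function remains directly callable (typical for Python decorators, but verify against the SDK docs); the stand-in function here is hypothetical:

```python
def my_chatbot(input: str) -> str:
    # Stand-in for your decorated endpoint function.
    return f"echo: {input}"

# One manual call catches wiring issues before any test set runs.
reply = my_chatbot("Hello, are you up?")
assert isinstance(reply, str) and reply, "endpoint returned empty output"
```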
