Models

Models are core components of the SDK that enable you to create and manage test sets for Gen AI applications. Models also play a crucial role in the evaluation process, serving as LLM judges to assess and validate outputs.

Using the get_model function

The easiest way to use models is through the get_model function. When called without any arguments, this function returns the default Rhesis model.

default_model.py
from rhesis.sdk.models import get_model

model = get_model()

To use a different provider, you can pass the provider name as an argument. This will use the default model for that provider.

provider_model.py
from rhesis.sdk.models import get_model

# Use default Gemini model
model = get_model("gemini")

Supported providers:

  • gemini - Google’s AI models with multimodal support
  • openai - OpenAI models including GPT-4
  • huggingface - Open-source models from Hugging Face
  • ollama - Local model execution using Ollama
  • rhesis - Models served by Rhesis
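
Any of these provider names can be passed to get_model to obtain that provider's default model. A minimal sketch, assuming any required credentials or local setup (API keys, a running Ollama server) are already configured:

provider_defaults.py
from rhesis.sdk.models import get_model

# Each call returns the default model for the given provider.
# Assumes required credentials or local services are already set up.
openai_model = get_model("openai")
huggingface_model = get_model("huggingface")
ollama_model = get_model("ollama")
rhesis_model = get_model("rhesis")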

To use a specific model, provide its name in the format provider/model_name:

specific_model.py
from rhesis.sdk.models import get_model

model = get_model("gemini/gemini-2.0-flash")

The above code is equivalent to:

specific_model_alt.py
from rhesis.sdk.models import get_model

model = get_model(provider="gemini", model_name="gemini-2.0-flash")

Direct import

Alternatively, you can import a model class directly. Passing a model name selects that specific model; omitting it falls back to the provider's default model.

direct_import.py
from rhesis.sdk.models import GeminiLLM, OllamaLLM

# Use default Ollama model
ollama_model = OllamaLLM()

# Use specific Gemini model
gemini_model = GeminiLLM("gemini-2.0-flash")

Generate content with models

All models share a consistent interface. The primary method is generate, which accepts:

  • prompt: The text prompt for generation
  • schema: (optional) A Pydantic model defining the structure of the generated output

Generate text using prompt only:

generate_text.py
from rhesis.sdk.models import get_model

# Use default Rhesis model
model = get_model()
output = model.generate(prompt="What is the capital of France?")
# Output: "The capital of France is Paris."

Generate structured output using schemas:

generate_structured.py
from pydantic import BaseModel
from rhesis.sdk.models import get_model

class City(BaseModel):
    name: str
    population: int

class CityResponse(BaseModel):
    biggest_cities: list[City]

# Use default Rhesis model
model = get_model()
output = model.generate(
    prompt="The list of 5 biggest cities in Germany?",
    schema=CityResponse
)
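
Continuing from generate_structured.py, a hypothetical way to consume the result. The exact return type when a schema is supplied (a CityResponse instance or a plain dict matching it) may vary, so this sketch normalizes both cases:

generate_structured_usage.py
# Hypothetical continuation of generate_structured.py above.
# Normalize the result to a CityResponse instance, whether `generate`
# returned a model instance or a dict matching the schema.
result = output if isinstance(output, CityResponse) else CityResponse(**output)

for city in result.biggest_cities:
    print(f"{city.name}: {city.population}")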

Using models with synthesizers and metrics

Models become most useful when combined with synthesizers and metrics:

models_with_tools.py
from rhesis.sdk.models import get_model
from rhesis.sdk.synthesizers import PromptSynthesizer
from rhesis.sdk.metrics import RhesisPromptMetricNumeric

# With synthesizers
model = get_model("gemini")
synthesizer = PromptSynthesizer(
    prompt="Generate tests for the car selling chatbot",
    model=model,
)

# With metrics
metric = RhesisPromptMetricNumeric(
    name="answer_quality_evaluator",
    evaluation_prompt="Evaluate the answer for accuracy, completeness, clarity, and relevance.",
    model="gemini",
)
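
Continuing from models_with_tools.py, a hypothetical next step is generating a test set with the configured synthesizer. This assumes PromptSynthesizer exposes a generate method with a num_tests parameter; check the synthesizer documentation for the exact signature.

run_synthesizer.py
# Hypothetical continuation of models_with_tools.py above.
# Assumes PromptSynthesizer.generate(num_tests=...) is available.
test_set = synthesizer.generate(num_tests=5)
print(test_set)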