Models
Models are a core component of the SDK: they power the creation and management of test sets for Gen AI applications, and they serve as LLM judges during evaluation to assess and validate outputs.
Using the get_model function
The easiest way to use models is through the get_model function. When called without any arguments, this function returns the default Rhesis model.
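For example, a minimal sketch (the import path `rhesis.sdk.models` is an assumption; adjust it to match your installation):

```python
from rhesis.sdk.models import get_model  # assumed import path

# Called without arguments, get_model returns the default Rhesis model
model = get_model()
```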
To use a different provider, you can pass the provider name as an argument. This will use the default model for that provider.
Supported providers:
- `gemini` - Google’s AI models with multimodal support
- `openai` - OpenAI models including GPT-4
- `huggingface` - Open-source models from Hugging Face
- `ollama` - Local model execution using Ollama
- `rhesis` - Models served by Rhesis
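For example, using the same assumed import path:

```python
from rhesis.sdk.models import get_model  # assumed import path

# Passing only a provider name uses that provider's default model
gemini_model = get_model("gemini")
ollama_model = get_model("ollama")
```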
To use a specific model, provide its name in the format `provider/model_name`:
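A sketch (the model name below is illustrative):

```python
from rhesis.sdk.models import get_model  # assumed import path

# "provider/model_name" selects a specific model from that provider
model = get_model("gemini/gemini-2.0-flash")
```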
The above code is equivalent to:
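A sketch of the separated form (the keyword parameter names here are an assumption):

```python
from rhesis.sdk.models import get_model  # assumed import path

# Provider and model name passed separately (parameter names are illustrative)
model = get_model(provider="gemini", model_name="gemini-2.0-flash")
```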
Direct import
Alternatively, you can access models by importing the model class directly. When you provide a model name as an argument, that specific model will be used. If no model name is provided, the default model for that provider will be used.
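A sketch of the direct-import pattern (the class name `GeminiLLM` and its module path are illustrative; check the SDK reference for the exact names):

```python
from rhesis.sdk.models import GeminiLLM  # illustrative class name and path

# With a model name, that specific model is used
model = GeminiLLM("gemini-2.0-flash")

# Without a model name, the provider's default model is used
default_model = GeminiLLM()
```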
Generate content with models
All models share a consistent interface. The primary method is `generate`, which accepts:
- `prompt`: The text prompt for generation
- `schema`: (optional) A Pydantic schema defining the structure of the generated text
Generate text using prompt only:
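For example, with a model obtained via `get_model` (import path assumed as above):

```python
from rhesis.sdk.models import get_model  # assumed import path

model = get_model()

# With only a prompt, generate returns plain text
response = model.generate(prompt="Write a one-sentence welcome message for new users.")
print(response)
```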
Generate structured output using schemas:
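A sketch with a Pydantic schema (whether the result comes back as a parsed object or a dict may vary by provider):

```python
from pydantic import BaseModel

from rhesis.sdk.models import get_model  # assumed import path


class Greeting(BaseModel):
    """Structure the generated output should follow."""
    message: str
    language: str


model = get_model()

# The schema constrains the structure of the generated output
result = model.generate(
    prompt="Write a short greeting in Spanish.",
    schema=Greeting,
)
print(result)
```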
Using models with synthesizers and metrics
Models become most useful when combined with synthesizers and metrics:
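A sketch of both uses; `PromptSynthesizer` follows the SDK's synthesizer documentation, while the `model=` parameter and the commented-out metric are assumptions standing in for whichever components you use:

```python
from rhesis.sdk.models import get_model                # assumed import path
from rhesis.sdk.synthesizers import PromptSynthesizer  # assumed import path

model = get_model("gemini")

# Use the model to drive test set generation
# (the model= parameter name is an assumption)
synthesizer = PromptSynthesizer(
    prompt="An insurance chatbot that answers questions about policy coverage.",
    model=model,
)
test_set = synthesizer.generate(num_tests=5)

# The same model can serve as the LLM judge inside a metric; the class and
# method names below are hypothetical placeholders.
# metric = SomeLLMJudgeMetric(model=model)
# score = metric.evaluate(input=..., output=...)
```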