Model Entity
The Model entity represents an LLM configuration stored in the Rhesis platform. Models contain the provider, model name, and API key needed to connect to LLM services. Once saved, models can be set as defaults for test generation or evaluation, and can be converted to ready-to-use LLM instances.
Note: The Model entity is different from the Models module. The entity stores configurations in the platform, while the module provides LLM client implementations for making API calls.
Creating a Model
Create a model configuration using a provider name (the SDK automatically resolves it to the correct provider type):
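A minimal sketch of what this might look like. The import path, constructor arguments, and `push()` method name are assumptions based on the fields listed under Model Fields below, not confirmed SDK API:

```python
from rhesis.sdk.entities import Model  # import path is an assumption

# Create a model configuration; the provider name is resolved to a
# provider type when the entity is saved.
model = Model(
    name="Production GPT-4",
    provider="openai",          # any provider name from the table below
    model_name="gpt-4",
    key="sk-...",               # your provider API key
)

model.push()                    # saves the configuration to the platform; sets model.id
print(model.id)
```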
Supported Providers
| Provider | Description |
|---|---|
| openai | OpenAI models (GPT-4, GPT-3.5, etc.) |
| anthropic | Anthropic Claude models |
| gemini | Google Gemini models |
| mistral | Mistral AI models |
| cohere | Cohere models |
| groq | Groq inference |
| vertex_ai | Google Vertex AI |
| together_ai | Together AI models |
| replicate | Replicate hosted models |
| perplexity | Perplexity AI |
| ollama | Local Ollama models |
| vllm | Self-hosted vLLM |
List all available providers:
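A sketch assuming a hypothetical `Model.list_providers()` helper; the actual discovery mechanism may differ:

```python
from rhesis.sdk.entities import Model  # import path is an assumption

# Print every provider name the SDK can resolve (hypothetical helper).
for provider in Model.list_providers():
    print(provider)   # e.g. "openai", "anthropic", "gemini", ...
```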
Fetching Models
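Retrieve model configurations that are already stored on the platform. The sketch below assumes hypothetical `Model.all()` and `Model.from_id()` accessors:

```python
from rhesis.sdk.entities import Model  # import path is an assumption

# List every model configuration saved in the platform (hypothetical accessor).
for model in Model.all():
    print(model.name, model.provider, model.model_name)

# Fetch a single configuration by its identifier (hypothetical accessor).
model = Model.from_id("model-uuid")
```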
Setting Default Models
Set a model as the default for test generation or evaluation tasks. This updates your user settings.
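A sketch of the idea, assuming hypothetical method names for the two default roles:

```python
# Mark this configuration as the default for test generation
# (method names are assumptions).
model.set_as_default_generation_model()

# Or make it the default model for metric evaluation.
model.set_as_default_evaluation_model()
```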
Converting to LLM Instance
Convert a Model entity to a ready-to-use LLM instance for making API calls:
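A minimal sketch, assuming a hypothetical `to_llm()` method that returns a client from the Models module and a hypothetical `generate()` call on that client:

```python
from rhesis.sdk.entities import Model  # import path is an assumption

# Fetch a stored configuration and turn it into a callable LLM client.
model = Model.from_id("model-uuid")
llm = model.to_llm()

response = llm.generate("Summarize the quarterly report in two sentences.")
print(response)
```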
This is useful when you want to store model configurations centrally and retrieve them for use in different scripts or applications.
Saving LLM Configurations
You can also save an existing LLM instance to the platform:
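A sketch under the assumption that the Models module exposes some factory for LLM clients and that the entity offers a `Model.from_llm()` constructor; both names are hypothetical:

```python
from rhesis.sdk.models import get_llm   # hypothetical factory in the Models module
from rhesis.sdk.entities import Model   # import path is an assumption

llm = get_llm(provider="anthropic", model_name="claude-3-opus-20240229")

# Wrap the live client in a Model entity and persist it (hypothetical constructor).
model = Model.from_llm(llm, name="Claude Opus for evaluation")
model.push()
```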
Complete Workflow Example
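The end-to-end sketch below strings the previous steps together. It reuses the same assumed names (`push()`, `set_as_default_evaluation_model()`, `from_id()`, `to_llm()`, `generate()`), so treat it as an illustration of the workflow rather than confirmed SDK API:

```python
from rhesis.sdk.entities import Model  # import path is an assumption

# 1. Create and push a model configuration.
model = Model(
    name="Evaluation GPT-4",
    provider="openai",
    model_name="gpt-4",
    key="sk-...",
)
model.push()

# 2. Make it the default for evaluation tasks.
model.set_as_default_evaluation_model()

# 3. Later, in a different script: fetch the stored configuration and use it.
stored = Model.from_id(model.id)
llm = stored.to_llm()
print(llm.generate("Return OK if you can read this."))
```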
Model Fields
| Field | Type | Description |
|---|---|---|
| id | str | Unique identifier (set after push) |
| name | str | Human-readable name |
| description | str | Optional description (auto-generated if not provided) |
| provider | str | Provider name (e.g., "openai", "anthropic") |
| model_name | str | Model identifier (e.g., "gpt-4", "claude-3-opus-20240229") |
| key | str | API key for the provider |
| provider_type_id | str | Auto-resolved from the provider name |
| status_id | str | Optional status reference |
Next Steps
- Learn about LLM providers for direct model usage
- Use models with Synthesizers for test generation
- Configure Metrics with model-based evaluation