Models
What are Models? Models are connections to AI service providers that you configure for your evaluation and testing workflows.
Models in Rhesis are used for two primary purposes:
- Test Generation: Automatically generate test cases based on your requirements (see Tests)
- Evaluation (LLM as Judge): Evaluate AI responses using metrics powered by LLMs (see Metrics)
You can configure default models for each purpose in the model settings, or select specific models when creating individual metrics or test cases.

Default Model
Rhesis provides a default managed model that requires no configuration:
- Rhesis Default: Default Rhesis-hosted model
- No API key required
- Pre-configured for both generation and evaluation
- Ready to use immediately
Supported Providers
Rhesis supports connections to major AI providers including:
- Anthropic - Claude provider
- Azure AI Studio - Azure-hosted model endpoints (azure_ai)
- Azure OpenAI - Azure OpenAI deployments (azure)
- Cohere - Command R provider
- Google - Gemini provider
- Groq - LPU-based model hosting
- LiteLLM Proxy - OpenAI-compatible proxy gateway
- Meta - Llama provider
- Mistral - Mistral provider
- OpenAI - OpenAI provider
- Perplexity - Labs model API
- Polyphemus - Adversarial testing model (restricted access workflow)
- Replicate - Model hosting provider
- Together AI - Multi-model provider
Connecting a Model Provider
- Click “Add Model” on the Models page
- Select a provider (e.g., Google Gemini, OpenAI, Anthropic)
- Fill in the required fields:
- Connection Name: Unique identifier for this connection
- Model Name: Specific model to use (e.g., “gemini-1.5-pro”, “gpt-4-turbo”)
- API Key: Your API key from the provider’s dashboard
- (Optional) Configure additional settings:
- Custom Headers: Add HTTP headers required for your API calls
- Default for Test Generation: Use this model when generating test cases
- Default for Evaluation: Use this model for metrics evaluation
- Click “Test Connection” to verify your configuration
- Click “Save” to add the model
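The fields collected in the steps above can also be assembled programmatically. The sketch below is illustrative only: the function name and payload keys mirror the form labels but are assumptions, not the documented Rhesis API schema.

```python
# Hypothetical sketch: collecting the connection settings described above
# into one payload. Key names are assumptions, not Rhesis's actual schema.

def build_model_connection(name, provider, model_name, api_key,
                           custom_headers=None,
                           default_for_generation=False,
                           default_for_evaluation=False):
    """Bundle the connection form fields into a single dictionary."""
    return {
        "name": name,                      # Connection Name (unique identifier)
        "provider": provider,              # e.g. "openai", "anthropic", "google"
        "model_name": model_name,          # e.g. "gemini-1.5-pro", "gpt-4-turbo"
        "api_key": api_key,                # from the provider's dashboard
        "custom_headers": custom_headers or {},
        "default_for_test_generation": default_for_generation,
        "default_for_evaluation": default_for_evaluation,
    }

payload = build_model_connection(
    "my-openai-connection", "openai", "gpt-4-turbo", "sk-...",
    default_for_evaluation=True,
)
```

As in the dialog, the optional settings default to off unless explicitly enabled.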
Provider-Specific Connection Fields
Some providers require endpoint configuration in addition to model name:
| Provider | Provider value | API Endpoint required | API key required | Notes |
|---|---|---|---|---|
| LiteLLM Proxy | litellm_proxy | Yes | Optional | Pre-filled with http://0.0.0.0:4000 by default |
| Azure AI Studio | azure_ai | Yes | Yes | Use your Azure AI inference endpoint URL |
| Azure OpenAI | azure | Yes | Yes | Use your Azure OpenAI resource endpoint URL |
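For the endpoint-based providers, a saved connection might look like the following fragment. The URL and key are placeholders, and the field names are illustrative assumptions rather than the exact stored schema:

```json
{
  "name": "azure-openai-prod",
  "provider": "azure",
  "model_name": "gpt-4o",
  "api_endpoint": "https://<your-resource>.openai.azure.com",
  "api_key": "<azure-api-key>"
}
```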
If a model connection fails, use Test Connection in the connection dialog before saving.
Polyphemus Access Control (v0.6.6+)
Polyphemus uses an explicit request-and-review workflow. If access is not granted yet, the model card shows Access Required and a Request Access action.
Request Workflow
- Open Models and click Request Access on the Polyphemus card.
- Submit request details in the access modal.
- The request is recorded and a review notification is sent.
- After approval, your account is marked as verified and the model becomes available.
Request Fields
| Field | Rules |
|---|---|
| justification | Required, 10-2000 characters |
| expected_monthly_requests | Required, integer from 0 to 10,000 |
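The rules above can be expressed as a small validation check. This is a sketch of the stated constraints, not Rhesis's actual validation code:

```python
# Sketch of the request-field rules above; not the actual Rhesis validator.

def validate_access_request(justification: str, expected_monthly_requests: int) -> list:
    """Return a list of rule violations (an empty list means the request is valid)."""
    errors = []
    if not (10 <= len(justification) <= 2000):
        errors.append("justification must be 10-2000 characters")
    if not isinstance(expected_monthly_requests, int) or \
            not (0 <= expected_monthly_requests <= 10_000):
        errors.append("expected_monthly_requests must be an integer from 0 to 10,000")
    return errors
```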
Request state is stored in user settings under polyphemus_access (for example requested_at and revoked_at), together with the user's verification status.
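Only the polyphemus_access key and the requested_at and revoked_at timestamps come from this page; the surrounding shape and example values below are assumptions for illustration:

```json
{
  "polyphemus_access": {
    "requested_at": "2025-01-15T10:00:00Z",
    "revoked_at": null
  }
}
```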