Models

What are Models? Models are connections to AI service providers that you set up for your evaluation and testing workflows.

Models in Rhesis are used for two primary purposes:

  • Test Generation: Automatically generate test cases based on your requirements (see Tests)
  • Evaluation (LLM as Judge): Evaluate AI responses using metrics powered by LLMs (see Metrics)

You can configure default models for each purpose in the model settings, or select specific models when creating individual metrics or test cases.

Models Overview

Default Model

Rhesis provides a default managed model that requires no configuration:

  • Rhesis Default: Default Rhesis-hosted model
    • No API key required
    • Pre-configured for both generation and evaluation
    • Ready to use immediately

Supported Providers

Rhesis supports connections to major AI providers including:

  • Anthropic - Claude provider
  • Azure AI Studio - Azure-hosted model endpoints (azure_ai)
  • Azure OpenAI - Azure OpenAI deployments (azure)
  • Cohere - Command R provider
  • Google - Gemini provider
  • Groq - LPU-based model hosting
  • LiteLLM Proxy - OpenAI-compatible proxy gateway
  • Meta - Llama provider
  • Mistral - Mistral provider
  • OpenAI - OpenAI provider
  • Perplexity - Labs model API
  • Polyphemus - Adversarial testing model (restricted access workflow)
  • Replicate - Model hosting provider
  • Together AI - Multi-model provider

Connecting a Model Provider

  1. Click “Add Model” on the Models page
  2. Select a provider (e.g., Google Gemini, OpenAI, Anthropic)
  3. Fill in the required fields:
    • Connection Name: Unique identifier for this connection
    • Model Name: Specific model to use (e.g., “gemini-1.5-pro”, “gpt-4-turbo”)
    • API Key: Your API key from the provider’s dashboard
  4. (Optional) Configure additional settings:
    • Custom Headers: Add HTTP headers required for your API calls
    • Default for Test Generation: Use this model when generating test cases
    • Default for Evaluation: Use this model for metrics evaluation
  5. Click “Test Connection” to verify your configuration
  6. Click “Save” to add the model

Provider-Specific Connection Fields

Some providers require endpoint configuration in addition to model name:

| Provider | Provider value | API endpoint required | API key required | Notes |
| --- | --- | --- | --- | --- |
| LiteLLM Proxy | `litellm_proxy` | Yes | Optional | Pre-filled with `http://0.0.0.0:4000` by default |
| Azure AI Studio | `azure_ai` | Yes | Yes | Use your Azure AI inference endpoint URL |
| Azure OpenAI | `azure` | Yes | Yes | Use your Azure OpenAI resource endpoint URL |
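The table above can be restated as a small lookup. This is illustrative helper code built from the documented table, not Rhesis internals:

```python
# Provider-specific requirements from the table above, keyed by provider value.
# Illustrative only; built from the documentation, not from Rhesis source code.
PROVIDER_REQUIREMENTS = {
    "litellm_proxy": {"endpoint_required": True, "api_key_required": False,
                      "default_endpoint": "http://0.0.0.0:4000"},
    "azure_ai":      {"endpoint_required": True, "api_key_required": True},
    "azure":         {"endpoint_required": True, "api_key_required": True},
}

def needs_endpoint(provider_value: str) -> bool:
    """True if the provider requires an API endpoint in addition to a model name."""
    return PROVIDER_REQUIREMENTS.get(provider_value, {}).get("endpoint_required", False)

print(needs_endpoint("azure"))    # True
print(needs_endpoint("openai"))   # False: no extra endpoint configuration needed
```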

If a model connection fails, use Test Connection in the connection dialog before saving.

Polyphemus Access Control (v0.6.6+)

Polyphemus uses an explicit request-and-review workflow. If access is not granted yet, the model card shows Access Required and a Request Access action.

Request Workflow

  1. Open Models and click Request Access on the Polyphemus card.
  2. Submit request details in the access modal.
  3. The request is recorded and a review notification is sent.
  4. After approval, your account is marked as verified and the model becomes available.

Request Fields

| Field | Rules |
| --- | --- |
| `justification` | Required, 10-2000 characters |
| `expected_monthly_requests` | Required, integer from 0 to 10,000 |
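The validation rules above can be sketched as a simple check. This restates the documented rules for illustration; it is not the actual Rhesis validation code:

```python
def validate_access_request(justification: str, expected_monthly_requests: int) -> list[str]:
    """Check the Polyphemus access-request fields against the documented rules.

    Returns a list of error messages; an empty list means the request is valid.
    Illustrative restatement of the rules above, not Rhesis source code.
    """
    errors = []
    if not 10 <= len(justification) <= 2000:
        errors.append("justification must be 10-2000 characters")
    if not isinstance(expected_monthly_requests, int) or \
            not 0 <= expected_monthly_requests <= 10_000:
        errors.append("expected_monthly_requests must be an integer from 0 to 10,000")
    return errors

print(validate_access_request("Need adversarial testing for a chatbot release.", 500))  # -> []
print(validate_access_request("short", 20_000))  # both fields fail
```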

Request state is stored in user settings under polyphemus_access (for example requested_at and revoked_at) together with user verification status.
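A minimal sketch of how that stored state might look and be read. The documentation names `polyphemus_access`, `requested_at`, and `revoked_at`; the exact value formats and the helper below are assumptions for illustration:

```python
# Assumed shape of the polyphemus_access entry in user settings.
# requested_at / revoked_at are named in the docs; timestamp format is an assumption.
user_settings = {
    "polyphemus_access": {
        "requested_at": "2025-01-15T09:30:00Z",  # when the access request was submitted
        "revoked_at": None,                       # set if access was later revoked
    },
}

def has_open_request(settings: dict) -> bool:
    """True if an access request was submitted and has not been revoked."""
    access = settings.get("polyphemus_access", {})
    return access.get("requested_at") is not None and access.get("revoked_at") is None

print(has_open_request(user_settings))  # True
```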


Next Steps

  • Create Metrics using your connected models
  • Generate Tests with AI-powered test generation