
Polyphemus


The Rhesis-hosted LLM service that provides access to open-source models with built-in access control, rate limiting, and benchmarking.

Overview

Polyphemus is the Rhesis-hosted LLM service, giving your team access to powerful open-source language models without requiring external API keys or managing your own inference infrastructure. It is tightly integrated with the Rhesis platform and SDK.

Key Features

Hosted Model Access: Polyphemus runs on Vertex AI with vLLM for efficient inference, providing fast and scalable model access across your organization.

Access Control: Access to Polyphemus is governed by a request/grant workflow. Users can request access and administrators can approve or revoke it through the platform UI.

Delegation Tokens: Rather than sharing primary credentials, Polyphemus uses service-level delegation tokens that grant scoped, revocable access for the SDK and backend services.
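A minimal sketch of how scoped, revocable delegation tokens can work in general: a signed payload carrying the subject, its scopes, and an expiry, checked against a revocation list on every call. The signing key, scope names, and wire format here are illustrative assumptions, not the Rhesis implementation.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"service-signing-key"  # hypothetical key held only by the issuing service

def issue_token(subject: str, scopes: list[str], ttl_s: int = 3600) -> str:
    """Mint a signed token carrying a subject, its scopes, and an expiry."""
    payload = json.dumps(
        {"sub": subject, "scopes": scopes, "exp": time.time() + ttl_s}
    ).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

REVOKED: set[str] = set()  # revocation list consulted on every validation

def validate(token: str, required_scope: str) -> bool:
    """Check signature, revocation status, scope, and expiry."""
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    if token in REVOKED:
        return False
    claims = json.loads(payload)
    return required_scope in claims["scopes"] and claims["exp"] > time.time()
```

Because the token carries only the scopes it was minted with, revoking it (or letting it expire) cuts off a single service integration without touching the primary credentials.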

Rate Limiting: Built-in rate limiting prevents abuse and ensures fair usage across your team. Rate limiting applies after authentication, so unauthenticated requests are rejected before counting against limits.
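The authenticate-then-count ordering can be sketched with a standard token-bucket limiter. The bucket parameters and response strings below are illustrative assumptions; the point is that an unauthenticated request is rejected before it ever touches the bucket.

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills at `rate` tokens/second up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def handle_request(authenticated: bool, bucket: TokenBucket) -> str:
    # Unauthenticated requests are rejected first, so they never
    # consume a token or count against the caller's limit.
    if not authenticated:
        return "401 Unauthorized"
    if not bucket.allow():
        return "429 Too Many Requests"
    return "200 OK"
```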

Security Test Sets: Polyphemus includes OWASP-based security test sets for systematic LLM vulnerability testing.

Requesting Access

  1. Navigate to the Models section in the platform
  2. Find the Polyphemus model card
  3. Click "Request Access" to submit your request
  4. An administrator will review and grant or deny access
  5. Once granted, you can use Polyphemus as a model provider in test generation, evaluation, and Penelope execution
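The request/grant/revoke workflow above amounts to a small state machine. The states and action names below are a hypothetical model of the flow, not the platform's internal schema:

```python
from enum import Enum

class AccessState(Enum):
    NONE = "none"
    REQUESTED = "requested"
    GRANTED = "granted"
    DENIED = "denied"
    REVOKED = "revoked"

# Legal transitions: request -> review (grant/deny) -> possible revocation;
# denied or revoked users may request again.
TRANSITIONS = {
    (AccessState.NONE, "request"): AccessState.REQUESTED,
    (AccessState.REQUESTED, "grant"): AccessState.GRANTED,
    (AccessState.REQUESTED, "deny"): AccessState.DENIED,
    (AccessState.GRANTED, "revoke"): AccessState.REVOKED,
    (AccessState.DENIED, "request"): AccessState.REQUESTED,
    (AccessState.REVOKED, "request"): AccessState.REQUESTED,
}

def apply_action(state: AccessState, action: str) -> AccessState:
    """Advance the access workflow, rejecting out-of-order actions."""
    nxt = TRANSITIONS.get((state, action))
    if nxt is None:
        raise ValueError(f"cannot {action!r} from state {state.value!r}")
    return nxt
```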

Using Polyphemus in the SDK

After access is granted, configure Polyphemus as your model provider:

```python
from rhesis.sdk import RhesisClient

client = RhesisClient()
# Polyphemus is available as a named provider once access is granted
```

Benchmarking

Polyphemus was designed from the ground up for LLM benchmarking, offering modular benchmark suites that can be run against hosted or external models for performance comparison.
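A hosted-vs-external comparison of this kind reduces to running the same prompt suite against interchangeable model callables and collecting metrics. The runner below is a generic sketch under that assumption; `model_fn` is a hypothetical stand-in for any model client, not the Rhesis benchmark API.

```python
import time
from statistics import mean
from typing import Callable

def run_suite(model_fn: Callable[[str], str], prompts: list[str]) -> dict:
    """Run each prompt through `model_fn`, recording outputs and latency."""
    latencies: list[float] = []
    outputs: list[str] = []
    for prompt in prompts:
        t0 = time.perf_counter()
        outputs.append(model_fn(prompt))
        latencies.append(time.perf_counter() - t0)
    return {
        "n": len(prompts),
        "mean_latency_s": mean(latencies),
        "outputs": outputs,
    }
```

Because the runner only depends on a callable, the same suite can be pointed at a hosted model or any external provider for a like-for-like comparison.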

Best Practices

  • Request access before configuring test runs that depend on Polyphemus as the model provider
  • Use delegation tokens for service-to-service calls rather than sharing user credentials
  • Monitor rate limit usage during large test runs to avoid throttling at critical moments
  • Use Polyphemus's OWASP-based security test sets as a starting point for LLM vulnerability testing

Related Terms