
Fallback Behavior


How an AI system responds when it cannot understand, answer, or fulfill a user's request, providing a default or alternative response.

Also known as: default response, cannot help response

Overview

Fallback behavior determines what your AI does when it encounters situations it can't handle: unclear inputs, questions it can't answer, or requests it can't fulfill. Good fallback behavior maintains user trust and provides value even when the ideal response isn't possible.

Types of Fallback Scenarios

Unclear input scenarios occur when users provide ambiguous or vague questions, submit malformed or nonsensical requests, or send mixed or garbled text that the system can't parse. Rather than guessing what users mean, effective fallback behavior asks for clarification.

Knowledge gaps emerge when users ask questions outside your training data, request information on topics beyond your configured scope, need real-time information the system can't access, or query highly specialized or niche topics. Your fallback should acknowledge these limitations honestly rather than attempting to fabricate answers.

Capability limits arise when users request actions your system can't perform, ask for features that don't exist, attempt operations the system isn't designed for, or request out-of-scope functionality. Effective fallback explains what the system can and cannot do, helping users understand boundaries.

Testing Fallback Behavior

Fallback quality metrics evaluate how well your system handles failures. Measure whether responses acknowledge limitations appropriately, provide helpful alternatives, maintain appropriate tone, and avoid hallucinating information. Track what percentage of fallback responses leave users with actionable next steps rather than dead ends.
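One way to track the "actionable next steps" metric is a simple heuristic scorer over logged fallback responses. The marker phrases and function names below are illustrative assumptions, not part of any specific evaluation framework:

```python
# Sketch: scoring fallback responses for actionable next steps.
# The marker phrases are a rough heuristic, chosen for illustration.

ACTION_MARKERS = ("try", "you can", "instead", "alternatively", "for example")

def has_next_step(response: str) -> bool:
    """Heuristic: does the fallback point the user somewhere useful?"""
    text = response.lower()
    return any(marker in text for marker in ACTION_MARKERS)

def actionable_rate(fallback_responses: list[str]) -> float:
    """Fraction of fallback responses that offer an actionable next step."""
    if not fallback_responses:
        return 0.0
    hits = sum(has_next_step(r) for r in fallback_responses)
    return hits / len(fallback_responses)

responses = [
    "I can't help with that.",                                   # dead end
    "I don't have live prices, but you can check your broker.",  # actionable
]
print(actionable_rate(responses))  # 0.5
```

In practice you would likely replace the keyword heuristic with human labels or a model-based judge, but the tracked quantity stays the same: the share of fallbacks that leave users with a path forward.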

Generating fallback tests involves creating scenarios with intentionally difficult inputs that push your system's boundaries. Design test cases with ambiguous questions, out-of-scope requests, nonsensical input, and edge cases. Categorize these tests by failure type so you can systematically verify that each category receives appropriate fallback responses.

Good Fallback Patterns

The clarification request pattern works well when input is unclear but potentially valid. Rather than refusing or guessing, the system asks specific questions to understand what the user needs. For example, if a request is ambiguous, the system might offer two or three interpretations and ask which the user intended.

The honest limitation plus alternative pattern admits when the system can't fulfill a request while offering related help. If asked about real-time stock prices, the system might explain it doesn't have access to live data but can discuss historical trends or explain where to find current prices.

The partial information pattern provides whatever help is possible while acknowledging gaps. If a user asks about five products but information is only available for three, the system provides details on those three and explains why information on the other two isn't available.
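The three patterns above can be sketched as small response builders. The function names and message wording are illustrative assumptions; a real system would generate these responses in its own voice:

```python
# Sketch: the three good fallback patterns as response builders.
# Wording and signatures are illustrative assumptions.

def clarification_request(interpretations: list[str]) -> str:
    """Offer the plausible readings of an ambiguous request."""
    options = "; ".join(f"({i}) {text}" for i, text in enumerate(interpretations, 1))
    return f"I want to make sure I understand. Did you mean: {options}?"

def limitation_with_alternative(limitation: str, alternative: str) -> str:
    """Admit the gap, then offer related help."""
    return f"I can't {limitation}, but I can {alternative}."

def partial_information(found: dict[str, str], missing: list[str]) -> str:
    """Share what is available and name what isn't."""
    details = "; ".join(f"{name}: {info}" for name, info in found.items())
    gaps = ", ".join(missing)
    return f"Here's what I have: {details}. I don't have information on: {gaps}."

print(limitation_with_alternative(
    "access live stock prices",
    "discuss historical trends or point you to a quote service",
))
```

Each builder encodes the same principle: never end the turn without either a question, an alternative, or the portion of the answer that is available.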

Poor Fallback Patterns

Unhelpful decline is an anti-pattern where the system simply says "I can't help" without explanation or alternatives. This leaves users frustrated with no path forward. Even when declining, provide context about why and suggest what the user might try instead.

Hallucinated responses represent a serious anti-pattern where the system makes up information rather than admitting limitations. This is worse than no response because it provides false information that users might rely on, potentially causing real harm.

Excessive apologies waste the user's time and don't provide value. Repeatedly saying "I'm sorry" without offering alternatives or explanations frustrates users. Acknowledge limitations briefly and focus on what you can do to help.

Testing with Penelope

Penelope helps test fallback behavior through goal-oriented conversations that naturally encounter failures. As Penelope pursues goals, it will probe edge cases, ask challenging questions, and attempt actions that might exceed your system's capabilities. This reveals whether your fallback behavior maintains conversational flow or creates dead ends.

Fallback Decision Tree

Establishing clear decision flows for different failure scenarios ensures consistent, appropriate responses. If input is unclear, request clarification. If the question is outside scope but related, provide partial help and explain limitations. If the request is completely out of scope, explain boundaries and suggest alternatives. If technical errors occur, communicate the issue and provide fallback options. Having explicit logic for each scenario prevents inconsistent fallback behavior.
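The decision flow above can be written as explicit branching logic. The classification flags are assumed to come from upstream input analysis; the strategy names are illustrative:

```python
# Sketch: the fallback decision tree as a single function.
# Input flags and strategy names are illustrative assumptions.

def choose_fallback(is_clear: bool, in_scope: bool, related: bool,
                    technical_error: bool = False) -> str:
    """Map a failure scenario to a fallback strategy."""
    if technical_error:
        return "communicate_issue_and_offer_fallback"
    if not is_clear:
        return "request_clarification"
    if in_scope:
        return "answer_normally"
    if related:
        return "partial_help_with_limitations"
    return "explain_boundaries_and_suggest_alternatives"

print(choose_fallback(is_clear=False, in_scope=False, related=False))
# request_clarification
```

Making the branches explicit like this is what prevents inconsistency: every failure mode maps to exactly one strategy, and new scenarios force a deliberate choice rather than an accidental default.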

Best Practices

For response quality, be honest by admitting limitations clearly without hedging or making excuses. Explain why you can't help so users understand the limitation and can formulate better requests. Offer alternatives by providing related help or next steps even when you can't fulfill the original request. Stay helpful by maintaining a positive, professional tone even when declining. Avoid hallucination by never making up information—it's always better to admit you don't know.

For comprehensive test coverage, evaluate how your system handles ambiguous or vague requests to verify it asks for clarification appropriately. Test questions at your knowledge boundaries to ensure the system acknowledges what it doesn't know. Verify that capability limits are enforced correctly with clear explanations of what the system can't do. Include unusual or unexpected scenarios that might not fit standard categories. Conduct stress tests with nonsensical or malformed inputs to ensure the system fails gracefully rather than producing errors or nonsensical responses.

For monitoring and improvement, track how often fallbacks are triggered to understand the rate at which your system encounters situations it can't handle. Identify patterns in what causes most fallbacks, revealing opportunities to expand capabilities. Monitor user satisfaction by examining whether fallbacks help users move forward or leave them frustrated. Use fallback patterns to identify improvement opportunities, focusing capability expansion on the most common limitation scenarios.
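A basic monitoring report along these lines computes the overall fallback rate and ranks causes by frequency. The log record fields here are illustrative assumptions about what your system logs:

```python
# Sketch: fallback monitoring from conversation logs.
# The "fallback" and "cause" record fields are assumed log structure.
from collections import Counter

def fallback_report(log: list[dict]) -> tuple[float, list[tuple[str, int]]]:
    """Return overall fallback rate and causes ranked by frequency."""
    fallbacks = [record for record in log if record.get("fallback")]
    rate = len(fallbacks) / len(log) if log else 0.0
    causes = Counter(record["cause"] for record in fallbacks).most_common()
    return rate, causes

log = [
    {"fallback": False},
    {"fallback": True, "cause": "knowledge_gap"},
    {"fallback": True, "cause": "knowledge_gap"},
    {"fallback": True, "cause": "ambiguous_input"},
]
rate, causes = fallback_report(log)
print(rate)    # 0.75
print(causes)  # [('knowledge_gap', 2), ('ambiguous_input', 1)]
```

The ranked cause list is the prioritization signal: the categories at the top are where capability expansion or prompt changes will reduce the most fallbacks.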
