# Test Run Status
This guide explains how test run statuses are determined and what they represent in the Rhesis backend.
## Overview
Test run statuses reflect execution completion, not test assertion results. A test run can be “Completed” even if all tests failed their assertions, as long as they executed successfully.
## Status Types

### COMPLETED

**Definition:** All tests executed successfully (regardless of pass/fail results)

**When assigned:**
- `execution_errors == 0` (all tests ran without errors)
- At least one test was present in the run

**Email status:**
- `"success"` if all tests passed
- `"failed"` if any tests failed

**Example scenarios:** see the Examples section below.
### PARTIAL

**Definition:** Incomplete execution - some tests executed, others couldn't

**When assigned:**
- `0 < execution_errors < total_tests`
- A mix of successfully executed tests and execution errors

**Email status:** `"partial"`

**Example scenarios:** see the Examples section below.
### FAILED

**Definition:** Run couldn't complete - either no tests ran or all tests had execution errors

**When assigned:**
- `total_tests == 0` (no tests in the run), OR
- `execution_errors == total_tests` (all tests errored)

**Email status:** `"failed"`

**Example scenarios:** see the Examples section below.
### PROGRESS

**Definition:** Test run is currently executing

**When assigned:**
- Set when test execution starts
- Replaced by COMPLETED, PARTIAL, or FAILED when execution finishes
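The four statuses might be modeled as a simple enum. The sketch below is illustrative only; the actual `RunStatus` definition lives in `tasks/enums.py` and its member values may differ:

```python
from enum import Enum

class RunStatus(Enum):
    """Illustrative sketch; the real enum is defined in tasks/enums.py."""
    PROGRESS = "progress"    # execution in flight
    COMPLETED = "completed"  # all tests executed without errors
    PARTIAL = "partial"      # some tests executed, some errored
    FAILED = "failed"        # no tests, or every test errored
```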
## Status Determination Logic

The status is determined in the chord callback (`collect_results`) after all tests complete.

**Implementation:** `apps/backend/src/rhesis/backend/tasks/execution/result_processor.py`
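The rules above can be condensed into a single function. This is a hedged sketch, not the actual `collect_results` implementation, and the name `determine_run_status` is hypothetical:

```python
def determine_run_status(total_tests: int, execution_errors: int) -> str:
    """Sketch of the status rules described in this guide."""
    if total_tests == 0 or execution_errors == total_tests:
        return "FAILED"     # nothing to run, or every test errored
    if execution_errors == 0:
        return "COMPLETED"  # every test executed; pass/fail is irrelevant here
    return "PARTIAL"        # some tests executed, some errored
```

For example, `determine_run_status(10, 0)` yields `"COMPLETED"` even if all ten tests failed their assertions, because the run status only tracks execution.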
## Key Distinctions

### Test Run Status vs Test Result Status
| Aspect | Test Run Status | Test Result Status |
|---|---|---|
| Scope | Entire test run | Individual test |
| Purpose | Did tests execute? | Did test pass assertions? |
| Values | COMPLETED, PARTIAL, FAILED, PROGRESS | Pass, Fail |
| Considers | Execution errors | Metric success |
### Execution Errors

**What are execution errors?**
- Tests that couldn't execute due to technical issues
- Missing `test_metrics` or empty metrics
- Infrastructure/connection failures
- Code errors during test execution

**What are NOT execution errors?**
- Tests that executed but failed assertions
- Metrics that returned `is_successful: false`
- Expected test failures
## Examples

### Scenario 1: Successful Test Run

### Scenario 2: Tests Failed Assertions

### Scenario 3: Incomplete Execution

### Scenario 4: Complete Failure

### Scenario 5: Empty Test Run
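Applying the determination rules to these five scenarios, with illustrative (made-up) counts:

```python
def run_status(total_tests: int, execution_errors: int) -> str:
    # Restates the determination rules from earlier in this guide.
    if total_tests == 0 or execution_errors == total_tests:
        return "FAILED"
    return "COMPLETED" if execution_errors == 0 else "PARTIAL"

# (total_tests, execution_errors) -- counts are made up for illustration.
scenarios = {
    "1: Successful Test Run":     (10, 0),   # all executed, all assertions passed
    "2: Tests Failed Assertions": (10, 0),   # all executed; failed assertions are not errors
    "3: Incomplete Execution":    (10, 3),   # three tests could not execute
    "4: Complete Failure":        (10, 10),  # every test hit an execution error
    "5: Empty Test Run":          (0, 0),    # no tests in the run
}
for name, (total, errors) in scenarios.items():
    print(f"Scenario {name} -> {run_status(total, errors)}")
```

Scenarios 1 and 2 both come out COMPLETED: the run status only tracks execution, so assertion failures surface in the email status and statistics instead.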
## Statistics Calculation

Test statistics are calculated by analyzing `test_metrics`:

- **Source of truth:** `test_metrics.metrics[].is_successful`
- **NOT used:** the `status` field in test results (historically unreliable)
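A sketch of that calculation, assuming each test result is a dict whose `test_metrics` holds a `metrics` list (the function name and result shape are assumptions; only `is_successful` comes from this guide):

```python
def summarize_results(results: list[dict]) -> dict:
    """Count passed/failed/errored tests from test_metrics.

    A test with missing or empty metrics counts as an execution error;
    a test passes only if every metric reports is_successful=True.
    """
    passed = failed = errors = 0
    for result in results:
        metrics = (result.get("test_metrics") or {}).get("metrics") or []
        if not metrics:
            errors += 1                                    # no/empty metrics
        elif all(m.get("is_successful") for m in metrics):
            passed += 1                                    # ALL metrics passed
        else:
            failed += 1                                    # ANY metric failed
    return {"passed": passed, "failed": failed, "execution_errors": errors}
```

Note that the `status` field on each result is deliberately ignored, matching the "source of truth" rule above.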
## Email Notifications
Email notifications show test counts based on the same logic:
- Total Tests: Count of all test results
- Tests Passed: Tests where ALL metrics passed
- Tests Failed: Tests where ANY metric failed
- Execution Errors: Tests with no/empty metrics (shown separately if > 0)
**Template:** `apps/backend/src/rhesis/backend/notifications/email/templates/test_execution_summary.html.jinja2`
## Related Documentation
- Test Result Status - How individual test statuses are determined
- Test Result Stats - Statistics APIs
- Background Tasks - Test execution flow
- Email Notifications - Email system
## Implementation Files

| File | Purpose |
|---|---|
| `tasks/execution/result_processor.py` | Status determination logic |
| `tasks/enums.py` | `RunStatus` enum definition |
| `app/constants.py` | Status mapping constants |
| `notifications/email/templates/test_execution_summary.html.jinja2` | Email template |