
Test Run Status

This guide explains how test run statuses are determined and what they represent in the Rhesis backend.

Overview

Test run statuses reflect execution completion, not test assertion results. A test run can be “Completed” even if all tests failed their assertions, as long as they executed successfully.

Status Types

COMPLETED

Definition: All tests executed successfully (regardless of pass/fail results)

When assigned:

  • execution_errors == 0 (all tests ran without errors)
  • At least one test was present in the run

Email status:

  • "success" if all tests passed
  • "failed" if any tests failed

Example scenarios:

completed-scenarios.txt
10 tests, 10 passed, 0 failed, 0 errors → COMPLETED (email: success)
10 tests, 7 passed, 3 failed, 0 errors → COMPLETED (email: failed)
10 tests, 0 passed, 10 failed, 0 errors → COMPLETED (email: failed)

PARTIAL

Definition: Incomplete execution: some tests executed, others could not

When assigned:

  • 0 < execution_errors < total_tests
  • Mix of successfully executed tests and execution errors

Email status: "partial"

Example scenarios:

partial-scenarios.txt
10 tests, 5 passed, 3 failed, 2 errors → PARTIAL
10 tests, 8 passed, 0 failed, 2 errors → PARTIAL

FAILED

Definition: The run could not complete: either there were no tests, or every test had an execution error

When assigned:

  • total_tests == 0 (no tests in the run), OR
  • execution_errors == total_tests (all tests errored)

Email status: "failed"

Example scenarios:

failed-scenarios.txt
0 tests, 0 passed, 0 failed, 0 errors → FAILED (no tests)
10 tests, 0 passed, 0 failed, 10 errors → FAILED (all errored)
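The email status across all three final run statuses can be sketched as one function over the run-level counts (a sketch; the function name and signature are assumptions, the returned strings follow the quoted statuses in this guide):

```python
def email_status(total_tests: int, tests_failed: int, execution_errors: int) -> str:
    """Derive the notification email status from run-level counts."""
    if total_tests == 0 or execution_errors == total_tests:
        return "failed"                                       # FAILED run
    if execution_errors > 0:
        return "partial"                                      # PARTIAL run
    return "success" if tests_failed == 0 else "failed"       # COMPLETED run

print(email_status(10, 0, 0))   # success
print(email_status(10, 3, 0))   # failed
print(email_status(10, 3, 2))   # partial
print(email_status(0, 0, 0))    # failed
```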

PROGRESS

Definition: Test run is currently executing

When assigned:

  • Set when test execution starts
  • Replaced by COMPLETED/PARTIAL/FAILED when execution finishes
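The lifecycle above maps naturally onto an enum. A sketch of what the RunStatus enum in tasks/enums.py might look like (the member names and string values here are assumptions based on the statuses described in this guide):

```python
from enum import Enum

class RunStatus(str, Enum):
    PROGRESS = "Progress"    # set when execution starts
    COMPLETED = "Completed"  # all tests executed (pass/fail irrelevant)
    PARTIAL = "Partial"      # some tests executed, some had execution errors
    FAILED = "Failed"        # no tests, or every test had an execution error

print(RunStatus.COMPLETED.value)  # Completed
```

Subclassing `str` lets the values compare and serialize as plain strings, which is convenient when statuses are stored in the database or rendered in templates.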

Status Determination Logic

The status is determined in the chord callback (collect_results) after all tests complete:

status-determination-logic.py
# 1. No tests at all → FAILED
if total_tests == 0:
    status = "Failed"

# 2. All tests executed → COMPLETED
elif execution_errors == 0:
    status = "Completed"

# 3. All tests errored → FAILED
elif execution_errors == total_tests:
    status = "Failed"

# 4. Mixed results → PARTIAL
else:
    status = "Partial"

Implementation: apps/backend/src/rhesis/backend/tasks/execution/result_processor.py
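The four branches above can be collected into one small pure function (a sketch; the real implementation lives in result_processor.py, and the exact function name and signature are assumptions):

```python
def determine_run_status(total_tests: int, execution_errors: int) -> str:
    """Map run-level counts to a test run status."""
    if total_tests == 0:
        return "Failed"        # nothing to execute
    if execution_errors == 0:
        return "Completed"     # every test ran; pass/fail is irrelevant here
    if execution_errors == total_tests:
        return "Failed"        # every test hit an execution error
    return "Partial"           # some tests ran, some errored

# The scenarios from this guide:
print(determine_run_status(10, 0))   # Completed
print(determine_run_status(10, 2))   # Partial
print(determine_run_status(10, 10))  # Failed
print(determine_run_status(0, 0))    # Failed
```

Note that assertion failures never appear in this function: only `total_tests` and `execution_errors` decide the run status.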


Key Distinctions

Test Run Status vs Test Result Status

Aspect     | Test Run Status                       | Test Result Status
-----------|---------------------------------------|-------------------------------
Scope      | Entire test run                       | Individual test
Purpose    | Did tests execute?                    | Did the test pass assertions?
Values     | COMPLETED, PARTIAL, FAILED, PROGRESS  | Pass, Fail
Considers  | Execution errors                      | Metric success

Execution Errors

What are execution errors?

  • Tests that couldn’t execute due to technical issues
  • Missing test_metrics or empty metrics
  • Infrastructure/connection failures
  • Code errors during test execution

What are NOT execution errors?

  • Tests that executed but failed assertions
  • Metrics that returned is_successful: false
  • Expected test failures
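The distinction can be sketched as a small classifier over a single result's metrics (field names follow the examples in this guide; the helper name is hypothetical):

```python
def classify_result(test_metrics: dict) -> str:
    """'error' if the test never produced metrics, else pass/fail on assertions."""
    metrics = (test_metrics or {}).get('metrics', {})
    if not metrics:
        return 'error'   # missing/empty metrics → execution error
    if all(m.get('is_successful', False) for m in metrics.values()):
        return 'passed'
    return 'failed'      # executed, but a metric failed — NOT an execution error

print(classify_result({}))                                                   # error
print(classify_result({'metrics': {'accuracy': {'is_successful': False}}}))  # failed
print(classify_result({'metrics': {'accuracy': {'is_successful': True}}}))   # passed
```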

Examples

Scenario 1: Successful Test Run

scenario-successful.txt
Total: 10 tests
Passed: 10
Failed: 0
Execution Errors: 0

Run Status: COMPLETED
Email Status: success
Reason: All tests executed and passed

Scenario 2: Tests Failed Assertions

scenario-failed-assertions.txt
Total: 10 tests
Passed: 7
Failed: 3
Execution Errors: 0

Run Status: COMPLETED
Email Status: failed
Reason: All tests executed (even though some failed)

Scenario 3: Incomplete Execution

scenario-incomplete.txt
Total: 10 tests
Passed: 5
Failed: 3
Execution Errors: 2

Run Status: PARTIAL
Email Status: partial
Reason: 2 tests couldn't execute

Scenario 4: Complete Failure

scenario-complete-failure.txt
Total: 10 tests
Passed: 0
Failed: 0
Execution Errors: 10

Run Status: FAILED
Email Status: failed
Reason: No tests could execute

Scenario 5: Empty Test Run

scenario-empty-run.txt
Total: 0 tests
Passed: 0
Failed: 0
Execution Errors: 0

Run Status: FAILED
Email Status: failed
Reason: No tests in the run

Statistics Calculation

Test statistics are calculated by analyzing test_metrics:

statistics-calculation.py
def get_test_statistics(test_run, db):
    tests_passed = 0
    tests_failed = 0

    # test_results: the run's individual results, loaded via `db`
    # (results with missing/empty metrics are counted as execution errors instead)
    for result in test_results:
        metrics = result.test_metrics.get('metrics', {})

        # Check if ALL metrics passed
        all_metrics_passed = all(
            metric.get('is_successful', False)
            for metric in metrics.values()
        )

        if all_metrics_passed:
            tests_passed += 1
        else:
            tests_failed += 1

Source of Truth: test_metrics.metrics[].is_successful

NOT Used: The status field in test results (historically unreliable)
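As a minimal, self-contained illustration of this counting rule (the field shapes follow the examples in this guide; `SimpleNamespace` stands in for the ORM result model, and the guard for empty metrics reflects that such results are treated as execution errors upstream):

```python
from types import SimpleNamespace

def count_pass_fail(test_results):
    """Classify each result by whether ALL of its metrics passed."""
    passed = failed = 0
    for result in test_results:
        metrics = result.test_metrics.get('metrics', {})
        # Empty metrics are execution errors upstream; guard so they
        # do not count as passed (all() over an empty dict is True).
        if metrics and all(m.get('is_successful', False) for m in metrics.values()):
            passed += 1
        else:
            failed += 1
    return passed, failed

results = [
    SimpleNamespace(test_metrics={'metrics': {'accuracy': {'is_successful': True},
                                              'toxicity': {'is_successful': True}}}),
    SimpleNamespace(test_metrics={'metrics': {'accuracy': {'is_successful': True},
                                              'toxicity': {'is_successful': False}}}),
]

print(count_pass_fail(results))  # (1, 1)
```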


Email Notifications

Email notifications show test counts based on the same logic:

  • Total Tests: Count of all test results
  • Tests Passed: Tests where ALL metrics passed
  • Tests Failed: Tests where ANY metric failed
  • Execution Errors: Tests with no/empty metrics (shown separately if > 0)

Template: apps/backend/src/rhesis/backend/notifications/email/templates/test_execution_summary.html.jinja2



Implementation Files

File                                                              | Purpose
------------------------------------------------------------------|---------------------------
tasks/execution/result_processor.py                               | Status determination logic
tasks/enums.py                                                    | RunStatus enum definition
app/constants.py                                                  | Status mapping constants
notifications/email/templates/test_execution_summary.html.jinja2  | Email template