Running Tests

A test run records the execution of test cases against a specific firmware build.

Creating a Test Run

  1. Go to your project → Test Runs → New Run
  2. Configure the run:
    • Name — e.g. Sprint 12 Regression
    • Firmware Version — the build being tested, e.g. v2.3.1-rc1
    • Hardware Variant — if applicable
    • Assigned To — the engineer executing the run
  3. Select which test cases to include (all, by category, or manual selection)
  4. Click Create Run
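
The steps above can presumably also be scripted against the API. The sketch below only assembles a plausible request body; the /api/test-runs/ endpoint and every field name in it are assumptions — check the API Reference for the actual schema.

```python
import json

# Hypothetical endpoint and field names -- not confirmed by the docs;
# see the API Reference for the real run-creation schema.
API_BASE = "https://testmetrix.de/api"

def build_run_payload(name, firmware_version, assigned_to, test_case_ids,
                      hardware_variant=None):
    """Assemble a JSON body for creating a test run."""
    payload = {
        "name": name,
        "firmware_version": firmware_version,
        "assigned_to": assigned_to,
        "test_case_ids": test_case_ids,
    }
    if hardware_variant is not None:  # only included if applicable
        payload["hardware_variant"] = hardware_variant
    return payload

payload = build_run_payload(
    name="Sprint 12 Regression",
    firmware_version="v2.3.1-rc1",
    assigned_to="a.mueller",          # hypothetical engineer username
    test_case_ids=[7, 8, 9],
)
body = json.dumps(payload)
# To send (endpoint is an assumption):
#   requests.post(f"{API_BASE}/test-runs/", data=body,
#                 headers={"Authorization": "Token YOUR_API_TOKEN",
#                          "Content-Type": "application/json"})
print(body)
```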

Recording Results

For each test case in the run, set the status:

  Status     Meaning
  Pass       Test case executed successfully, expected result observed
  Fail       Expected result not observed; defect likely
  Blocked    Could not execute — missing hardware, environment issue, etc.
  Not Run    Skipped intentionally for this run
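
When results come from an automated framework, its outcome names have to be mapped onto these four statuses. A minimal sketch, assuming pytest-style outcome strings — the mapping choices are assumptions to adjust for your framework:

```python
# Map pytest-style outcome strings onto the four TESTMETRIX statuses.
# The specific choices (e.g. skipped -> not_run) are assumptions.
STATUS_MAP = {
    "passed": "pass",
    "failed": "fail",
    "error": "fail",       # treat setup/teardown errors as failures
    "skipped": "not_run",
}

def to_testmetrix_status(outcome, blocked=False):
    """Translate a framework outcome; 'blocked' is a separate flag
    because most frameworks have no native equivalent for it."""
    if blocked:
        return "blocked"
    return STATUS_MAP.get(outcome, "not_run")

print(to_testmetrix_status("passed"))            # pass
print(to_testmetrix_status("skipped"))           # not_run
print(to_testmetrix_status("failed", blocked=True))  # blocked
```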

For failures, you should:

  1. Add an observation — what actually happened
  2. Optionally attach a screenshot or log file
  3. Create or link a defect
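
Reported through the API, a failure would carry the observation in the request body. The run_id, test_case_id, status, and observation fields match the upload example on this page; the defect_id field for linking a defect is an assumption — check the API Reference for how defects are actually linked.

```python
import json

# A 'fail' result body. "defect_id" is an assumed field name for
# linking an existing defect; the other fields match the documented
# upload example.
failure = {
    "run_id": 42,
    "test_case_id": 7,
    "status": "fail",
    "observation": "No CAN frame received within 100 ms timeout",
    "defect_id": 311,  # hypothetical existing defect to link
}
print(json.dumps(failure))
```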

Automated Result Upload

If your test framework can make HTTP requests, you can upload results programmatically as tests execute:

# Upload a test result
curl -X POST https://testmetrix.de/api/test-results/ \
  -H "Authorization: Token YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "run_id": 42,
    "test_case_id": 7,
    "status": "pass",
    "duration_ms": 234,
    "observation": "CAN frame received at t=48ms"
  }'

See the API Reference for the full schema.
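
The same upload can be scripted with Python's standard library. This sketch mirrors the curl example's fields and keeps request construction separate from sending, so it can sit inside a test teardown or reporting hook:

```python
import json
import urllib.request

API_BASE = "https://testmetrix.de/api"  # base URL from the curl example

def build_result_request(token, run_id, test_case_id, status,
                         duration_ms=None, observation=None):
    """Build the POST request for one test result, using the same
    fields as the curl example. Sending is left to the caller."""
    body = {"run_id": run_id, "test_case_id": test_case_id,
            "status": status}
    if duration_ms is not None:
        body["duration_ms"] = duration_ms
    if observation is not None:
        body["observation"] = observation
    return urllib.request.Request(
        f"{API_BASE}/test-results/",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# In your framework's reporting hook:
# urllib.request.urlopen(build_result_request(
#     "YOUR_API_TOKEN", 42, 7, "pass",
#     duration_ms=234, observation="CAN frame received at t=48ms"))
```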

Closing a Run

When all test cases have been executed (or intentionally skipped), close the run:

  1. Go to the run → Actions → Close Run
  2. TESTMETRIX calculates the final pass rate and updates the coverage report

Closed runs are read-only and contribute to the project's historical trend charts.
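
To make the pass-rate calculation concrete, here is one plausible definition: passes over executed cases, with blocked and not-run cases excluded from the denominator. Whether TESTMETRIX counts blocked or skipped cases is an assumption — its actual formula may differ.

```python
from collections import Counter

def pass_rate(statuses):
    """One plausible pass-rate definition: pass / (pass + fail).
    Blocked and not-run cases are excluded from the denominator;
    this exclusion is an assumption about TESTMETRIX's formula."""
    counts = Counter(statuses)
    executed = counts["pass"] + counts["fail"]
    return counts["pass"] / executed if executed else 0.0

run = ["pass", "pass", "fail", "blocked", "not_run"]
print(f"{pass_rate(run):.1%}")  # 2 passes out of 3 executed
```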

Reopening a Run

If additional test execution is needed, closed runs can be reopened from Actions → Reopen Run.