# Running Tests
A test run records the execution of test cases against a specific firmware build.
## Creating a Test Run
- Go to your project → Test Runs → New Run
- Configure the run:
  - Name — e.g. Sprint 12 Regression
  - Firmware Version — the build being tested, e.g. v2.3.1-rc1
  - Hardware Variant — if applicable
  - Assigned To — the engineer executing the run
- Select which test cases to include (all, by category, or manual selection)
- Click Create Run
## Recording Results
For each test case in the run, set the status:
| Status | Meaning |
|---|---|
| Pass | Test case executed successfully, expected result observed |
| Fail | Expected result not observed; defect likely |
| Blocked | Could not execute — missing hardware, environment issue, etc. |
| Not Run | Skipped intentionally for this run |
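If results come from an automated framework, the statuses above have to be derived from the framework's own outcome names. A minimal sketch, assuming pytest-style outcome strings on the input side; of the wire values, only `"pass"` appears in the upload example below, so `"fail"`, `"blocked"`, and `"not_run"` are assumptions about the API's exact spelling:

```python
# Map common framework outcomes to TESTMETRIX result statuses.
# Input names ("passed", "failed", ...) follow pytest's convention;
# treating a collection/setup error as "blocked" is a workflow
# assumption, not a documented rule.
STATUS_MAP = {
    "passed": "pass",       # expected result observed
    "failed": "fail",       # expected result not observed; defect likely
    "error": "blocked",     # environment or setup problem
    "skipped": "not_run",   # intentionally not executed in this run
}

def to_testmetrix_status(outcome: str) -> str:
    """Translate a framework outcome into a TESTMETRIX status string."""
    # Unknown outcomes default to "blocked" so they surface for review.
    return STATUS_MAP.get(outcome, "blocked")

print(to_testmetrix_status("passed"))   # -> pass
print(to_testmetrix_status("skipped"))  # -> not_run
```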
For failures, you should:
- Add an observation — what actually happened
- Optionally attach a screenshot or log file
- Create or link a defect
## Automated Result Upload
If your test framework can make HTTP requests, you can upload results programmatically as tests execute:
```bash
# Upload a test result
curl -X POST https://testmetrix.de/api/test-results/ \
  -H "Authorization: Token YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "run_id": 42,
        "test_case_id": 7,
        "status": "pass",
        "duration_ms": 234,
        "observation": "CAN frame received at t=48ms"
      }'
```
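The same call from inside a test harness might be sketched as follows. Endpoint, header, and field names are taken from the curl example; the helper name and the decision to separate request construction from sending are illustrative choices, not part of the API:

```python
import json
import urllib.request

API_URL = "https://testmetrix.de/api/test-results/"  # endpoint from the curl example
API_TOKEN = "YOUR_API_TOKEN"

def build_result_request(run_id: int, test_case_id: int, status: str,
                         duration_ms: int, observation: str = "") -> urllib.request.Request:
    """Build the POST request for one test result.

    Field names mirror the curl example; retries and error handling
    are left to the caller.
    """
    payload = {
        "run_id": run_id,
        "test_case_id": test_case_id,
        "status": status,
        "duration_ms": duration_ms,
        "observation": observation,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Token {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_result_request(42, 7, "pass", 234, "CAN frame received at t=48ms")
# To actually send it: urllib.request.urlopen(req)
print(req.get_method(), req.full_url)
```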
See the API Reference for the full schema.
## Closing a Run
When all test cases have been executed (or intentionally skipped), close the run:
- Go to the run → Actions → Close Run
- TESTMETRIX calculates the final pass rate and updates the coverage report
Closed runs are read-only and contribute to the project's historical trend charts.
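For intuition, one common definition of pass rate is passes over executed results. Whether TESTMETRIX uses exactly this formula, and whether it excludes Blocked and Not Run from the denominator, is an assumption here:

```python
from collections import Counter

def pass_rate(statuses):
    """Pass rate as passes over executed results (pass + fail).

    Counting only "pass" and "fail" as executed is an assumption about
    TESTMETRIX's formula, shown only to illustrate the idea.
    """
    counts = Counter(statuses)
    executed = counts["pass"] + counts["fail"]
    return counts["pass"] / executed if executed else 0.0

# 2 passes out of 3 executed results; blocked and not_run are excluded.
print(round(pass_rate(["pass", "pass", "fail", "blocked", "not_run"]), 2))  # -> 0.67
```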
## Reopening a Run
If additional test execution is needed, closed runs can be reopened from Actions → Reopen Run.