Test Results & Traceability¶
Overview¶
This section demonstrates how OSQAr auto-imports test results into the documentation, establishing a complete compliance artifact chain from safety goals through implementation code to automated test reporting.
The traceability flow is:
ISO 26262 Safety Goal
↓
Safety Requirement (REQ_SAFETY_*)
↓
Functional Requirement (REQ_FUNC_*, ARCH_*)
↓
Implementation Code (src/* with requirement IDs)
↓
Unit Tests (tests/* with TEST_* IDs)
↓
JUnit XML Test Results
↓
Sphinx Auto-Import & HTML Report
↓
Compliance Artifact Package
Test Suite Execution¶
The test suite can be run locally to generate compliance artifacts:
# Pick one language example and run its end-to-end script.
# This generates JUnit XML + builds HTML docs with imported test results.
cd examples/<language>_hello_world
./build-and-test.sh
# The generated docs live under the example directory.
open _build/html/index.html
Test Configuration File¶
The test results are automatically imported via Sphinx configuration:
# conf.py configuration
extensions = [
    'sphinx_needs',                 # Requirements traceability
    'sphinxcontrib.test_reports',   # Auto-import JUnit XML
    'sphinxcontrib.plantuml',       # Diagrams
]
# Point to the JUnit XML file
test_reports = ['test_results.xml']
This configuration tells Sphinx to parse test_results.xml and create a searchable, linked test report within the documentation.
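The imported file follows the common JUnit XML shape: a <testsuite> element containing one <testcase> per test. A minimal sketch using only the standard library shows the structure the importer consumes (the suite and test names below are illustrative placeholders, not the real suite):

```python
import xml.etree.ElementTree as ET

# A minimal JUnit XML document of the shape the importer parses.
# Suite and test names here are illustrative placeholders.
junit_xml = """<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="tsim_c" tests="2" failures="0" errors="0" skipped="0" time="0.0">
  <testcase classname="tsim_c" name="test_conversion_full_range" time="0.001"/>
  <testcase classname="tsim_c" name="test_filter_noise_rejection" time="0.002"/>
</testsuite>"""

suite = ET.fromstring(junit_xml)
# Each <testcase> becomes one row of the rendered report table.
rows = [(tc.get("classname"), tc.get("name")) for tc in suite.iter("testcase")]
print(f"{suite.get('name')}: {suite.get('tests')} tests, "
      f"{suite.get('failures')} failures")
# -> tsim_c: 2 tests, 0 failures
```

A <testcase> that failed would additionally carry a nested <failure> element, which the importer renders in the "reason" column.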
Test Requirements Mapping¶
This section describes how each test requirement maps to implementation code and safety/functional requirements. All TEST_* needs are defined in the Verification & Test Plan document; this section provides the execution results and detailed traceability analysis.
Key Points:
TEST_CONVERSION_001: Sensor Driver Tests
TEST_FILTER_001: Filter Noise Rejection
TEST_THRESHOLD_001: Threshold Detection
TEST_HYSTERESIS_001: Hysteresis Deadband
TEST_END_TO_END_001: End-to-End Latency
TEST_FAIL_SAFE_001: Fail-Safe on Persistent Errors
See Verification & Test Plan for detailed test case specifications and acceptance criteria.
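For the Python example, the mapping from TEST_* IDs to requirements can be carried directly in test docstrings, which keeps the traceability machine-readable. A minimal sketch of the pattern (the function, requirement ID, and conversion formula here are illustrative, not the project's actual driver code):

```python
def adc_to_temp_x10(raw: int) -> int:
    """Convert a 12-bit ADC reading to temperature in 0.1 degC units.

    Implements: REQ_FUNC_CONVERSION (illustrative requirement ID).
    """
    # Illustrative linear mapping: 0..4095 counts -> 0.0..125.0 degC.
    return raw * 1250 // 4095


def test_conversion_full_range():
    """TEST_CONVERSION_001: full-range ADC conversion.

    Verifies: REQ_FUNC_CONVERSION (illustrative requirement ID).
    """
    assert adc_to_temp_x10(0) == 0        # lower bound: 0.0 degC
    assert adc_to_temp_x10(4095) == 1250  # upper bound: 125.0 degC
```

Because the IDs live in docstrings, both the requirement and the test case can be matched against the sphinx-needs definitions during the docs build.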
Traceability Matrix¶
The following matrix demonstrates the complete traceability chain from requirements through code to tests. In the rendered HTML, each ID is a clickable hyperlink to its requirement definition:
| Requirement ID | Requirement Description | Test Case(s) | Code Implementation |
|---|---|---|---|
| | Prevent thermal damage to equipment | TEST_THRESHOLD_001, TEST_END_TO_END_001 | |
| | Detect overheat within 100ms | | |
| | Report safe state recovery reliably | | |
| | Convert 12-bit ADC to 0.1°C units | | |
| | Filter sensor noise (≥80% reduction) | | |
| | Detect 100°C threshold | | |
| | Apply 5°C hysteresis deadband | | |
| | Sensor driver component (100Hz sampling) | | 5 class methods in … |
| | Filter component (5-sample MA) | | 3 class methods in … |
| | State machine component (hysteresis) | TEST_THRESHOLD_001, TEST_HYSTERESIS_001 | 4 class methods in … |
| | Hysteresis state machine (100°C/95°C thresholds) | | |
| | Fail-safe error handling (10-error threshold) | | Error counter in … |
Automated Test Reporting¶
The JUnit XML output from your test runner (pytest for the Python example, or a native runner for C/C++/Rust) is processed by Sphinx and rendered directly in this chapter.
Imported JUnit results¶
tsim_c¶
Tests: 4, Failures: 0, Errors: 0, Skips: 0
Time: 0.0
| class | name | status | reason |
|---|---|---|---|
| tsim_c | test_conversion_full_range | passed | |
| tsim_c | test_filter_noise_rejection | passed | |
| tsim_c | test_threshold_and_hysteresis | passed | |
| tsim_c | test_shared_magic_constant | passed | |
Notes:
The build workflow generates test_results.xml.
If tests are not executed, OSQAr generates a small placeholder file so docs builds stay robust.
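A placeholder can be as small as a valid, empty test suite. A sketch of what such a fallback might look like (the exact contents OSQAr writes may differ):

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def write_placeholder_junit(path):
    """Write an empty-but-valid JUnit suite so the Sphinx import step
    has something to parse when no tests were executed.
    (Illustrative; the real OSQAr placeholder may differ.)"""
    suite = ET.Element("testsuite", name="placeholder", tests="0",
                       failures="0", errors="0", skipped="0", time="0.0")
    ET.ElementTree(suite).write(path, encoding="utf-8", xml_declaration=True)

# Usage: write the fallback where the docs build expects it.
out_path = os.path.join(tempfile.gettempdir(), "test_results.xml")
write_placeholder_junit(out_path)
root = ET.parse(out_path).getroot()
```

The docs build then proceeds normally and simply renders a suite with zero tests.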
Code Coverage¶
OSQAr supports embedding code coverage evidence alongside the JUnit test report.
If coverage tooling is available for the selected language example, the build workflow generates a coverage_report.txt and embeds it here.
If coverage tooling is not available, a placeholder report is generated so documentation builds remain robust.
(INFO) Reading coverage data...
(INFO) Writing coverage report...
------------------------------------------------------------------------------
GCC Code Coverage Report
Directory: .
------------------------------------------------------------------------------
File Lines Exec Cover Missing
------------------------------------------------------------------------------
src/tsim.c 45 44 97% 18
------------------------------------------------------------------------------
TOTAL 45 44 97%
------------------------------------------------------------------------------
lines: 97.8% (44 out of 45)
functions: 100.0% (5 out of 5)
branches: 69.2% (18 out of 26)
Complexity report¶
The build workflow generates a cyclomatic complexity report (complexity_report.txt) and embeds it here.
================================================
NLOC CCN token PARAM length location
------------------------------------------------
11 4 78 1 18 tsim_adc_to_temp_x10@15-32@src/tsim.c
9 3 58 1 9 tsim_filter_init@34-42@src/tsim.c
19 4 155 3 24 tsim_filter_update@44-67@src/tsim.c
6 2 39 3 6 tsim_sm_init@69-74@src/tsim.c
13 5 69 2 15 tsim_sm_evaluate@76-90@src/tsim.c
5 1 53 2 5 set_fail@20-24@tests/test_tsim.c
19 4 201 0 23 test_conversion_full_range@26-48@tests/test_tsim.c
25 7 186 0 33 test_filter_noise_rejection@50-82@tests/test_tsim.c
26 6 156 0 34 test_threshold_and_hysteresis@84-117@tests/test_tsim.c
11 2 69 0 14 test_shared_magic_constant@119-132@tests/test_tsim.c
27 7 188 3 31 write_junit@134-164@tests/test_tsim.c
19 4 141 2 23 main@166-188@tests/test_tsim.c
3 file analyzed.
==============================================================
NLOC Avg.NLOC AvgCCN Avg.token function_cnt file
--------------------------------------------------------------
60 11.6 3.6 79.8 5 src/tsim.c
25 0.0 0.0 0.0 0 include/tsim.h
142 18.9 4.4 142.0 7 tests/test_tsim.c
===============================================================================================================
No thresholds exceeded (cyclomatic_complexity > 10 or length > 1000 or nloc > 1000000 or parameter_count > 100)
==========================================================================================
Total nloc Avg.NLOC AvgCCN Avg.token Fun Cnt Warning cnt Fun Rt nloc Rt
------------------------------------------------------------------------------------------
227 15.8 4.1 116.1 12 0 0.00 0.00
Building Compliance Artifacts¶
The complete compliance artifact package is generated via:
# 1. Run tests, emit JUnit XML, build docs
cd examples/<language>_hello_world
./build-and-test.sh
# 2. Output contains:
# - Linked requirements and tests
# - Architecture diagrams with PlantUML
# - Test results integrated into HTML
# - Searchable traceability matrix
# - Compliance documentation suitable for assessment/audit
Compliance Artifact Checklist¶
Use this checklist to verify complete traceability:
[✓] Requirements documented with needs IDs (REQ_*, ARCH_*)
[✓] Architecture diagrams with PlantUML (SVG format)
[✓] Implementation code with requirement docstrings
[✓] Test suite with TEST_* IDs mapped to requirements
[✓] JUnit XML test results generated
[✓] Sphinx imports test results into documentation
[✓] HTML documentation includes traceability matrix
[✓] All requirements have ≥1 test case
[✓] All test cases linked to ≥1 requirement
[✓] No orphaned requirements (untested, unimplemented)
[✓] No orphaned tests (unlinked to requirements)
[✓] Build succeeds without errors
[✓] HTML documentation is searchable and indexed
Next Steps¶
Configure CI/CD: Add GitHub Actions to auto-run tests and rebuild documentation on commits
Add Domain Examples: Create medical_device, automotive, robotics subdirectories with domain-specific requirements
Extend Test Coverage: Add performance benchmarks, fault injection tests, environmental stress tests
Implement Requirements Gateway: Create automated checks that fail builds if requirements lack tests
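The requirements-gateway idea can be prototyped in a few lines: collect the requirement-to-test links and fail the build when a requirement has none. A minimal sketch over an assumed in-memory mapping (a real gateway would load this from sphinx-needs' needs.json export; the requirement IDs below are hypothetical):

```python
# Hypothetical extract of requirement -> linked test cases; a real
# gateway would build this from sphinx-needs' needs.json export.
requirement_tests = {
    "REQ_SAFETY_THERMAL": ["TEST_THRESHOLD_001", "TEST_END_TO_END_001"],
    "REQ_FUNC_FILTER": ["TEST_FILTER_001"],
    "REQ_FUNC_HYSTERESIS": [],  # orphaned: no test linked
}

def untested_requirements(mapping):
    """Return requirement IDs that have no linked TEST_* case."""
    return sorted(req for req, tests in mapping.items() if not tests)

orphans = untested_requirements(requirement_tests)
if orphans:
    # In CI this would be `raise SystemExit(1)` to fail the build.
    print("FAIL: requirements without tests:", ", ".join(orphans))
```

The inverse check (tests not linked to any requirement) follows the same pattern over the test-to-requirement direction of the mapping.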