Objective
The aim of the testing and acceptance phase is to verify that each component, each sub-system, and the fully integrated system have been built according to the requirements defined in earlier stages and are ready for deployment.
Verification testing
Description
This activity encompasses structured bottom-up testing, beginning with individual modules (eg sensors, software logic), progressing through sub-system integration (eg communications, control units), and culminating in full-system integration testing (eg roadside and in-vehicle coordination). Testing is staged and controlled, often conducted in lab environments, on test tracks, or under mock-up conditions. The goal is to ensure that:
- Each requirement is correctly implemented.
- Interfaces between components work as intended.
- System behaviour is consistent and safe under expected conditions.
Key activities
- Unit testing. Verify individual hardware/software modules meet their technical and performance specifications (a minimal test sketch follows this list).
- Sub-system and interface testing. Assess integration of major sub-systems (eg sensors + controller; user interface + alert system). Address any data flow, logic, and timing issues.
- System integration testing. Conduct end-to-end functional testing in a controlled setting. Simulate various operational scenarios including degraded modes, power failures, or false positives.
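To make the unit-testing step concrete, the sketch below shows how a hypothetical alert-decision module might be tested against both a functional specification and a timing budget. The module name (trigger_alert), the 30-second alert horizon, and the 500 ms processing budget are all illustrative assumptions, not values drawn from this guidance.

```python
import time

# Hypothetical module under test: decides whether to raise a crossing alert
# from a detected train's distance (m) and speed (m/s). Names and thresholds
# are illustrative assumptions, not values from this guidance.
def trigger_alert(distance_m: float, speed_ms: float) -> bool:
    if speed_ms <= 0:
        return False
    time_to_crossing = distance_m / speed_ms
    return time_to_crossing <= 30.0  # alert when arrival is within 30 s

def test_alert_raised_for_approaching_train():
    # Functional check: a train 300 m away at 20 m/s arrives in 15 s -> alert.
    assert trigger_alert(distance_m=300, speed_ms=20) is True

def test_no_alert_for_distant_train():
    # A train 3 km away at 20 m/s arrives in 150 s -> no alert yet.
    assert trigger_alert(distance_m=3000, speed_ms=20) is False

def test_decision_meets_timing_budget():
    # Performance check against an assumed 500 ms processing budget.
    start = time.perf_counter()
    trigger_alert(distance_m=300, speed_ms=20)
    assert (time.perf_counter() - start) < 0.5
```

The same pattern scales up: sub-system and integration tests replace the stub with real components and replay recorded or simulated scenarios, including degraded modes.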
Human Factors contributions to verification
Ensure that Human Factors requirements are traceably verified, and that usability and safety expectations are demonstrably met through appropriate test methods.
- Trace Human Factors requirements. Confirm each Human Factors requirement has a corresponding test case or observation method (a minimal traceability check is sketched after this list).
- Participate in scenario-based testing. Attend component, sub-system, and end-to-end tests where human use is relevant, and check that the system elicits responses in line with the specified performance metrics for the relevant requirements.
- Identify emerging Human Factors issues. Flag inconsistencies or confusion introduced by technical integration (eg mismatched alerts, ambiguous status displays). The Human Factors Risk Assessment Prompts may provide a useful reference.
- Support interface evaluations. Confirm user interface behaviour aligns with expectations and supports intuitive operator responses.
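As an illustration of the traceability point above, the check can be mechanised once requirements and test cases carry identifiers. The sketch below assumes a simple in-memory mapping with invented IDs; in practice this data would come from a requirements-management tool.

```python
# Minimal traceability check: every Human Factors requirement must map to at
# least one test case or observation method. IDs, wording, and structures are
# illustrative assumptions, not a prescribed scheme.
hf_requirements = {
    "HF-001": "Alert is recognisable within 2 s in daylight and darkness",
    "HF-002": "Status display unambiguously shows degraded mode",
    "HF-003": "Operator can silence a false alarm in one action",
}

test_cases = [
    {"id": "TC-101", "verifies": ["HF-001"], "method": "test-track observation"},
    {"id": "TC-102", "verifies": ["HF-003"], "method": "usability trial"},
]

covered = {req for tc in test_cases for req in tc["verifies"]}
untraced = sorted(set(hf_requirements) - covered)

if untraced:
    # Each untraced requirement needs a test case before verification closes.
    for req_id in untraced:
        print(f"UNTRACED: {req_id} - {hf_requirements[req_id]}")
else:
    print("All Human Factors requirements have at least one test case.")
```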
User acceptance testing
Description
User acceptance testing is the final step in the pre-deployment testing lifecycle. It confirms that the system performs as intended and is acceptable for initial rollout. This phase is typically conducted with full functionality enabled but may use restricted or staged conditions (eg trials at limited level crossing sites, dry runs). User acceptance testing does not replace full validation (which occurs post-deployment) but provides confidence to proceed to operational deployment.
Key activities
- Define acceptance criteria. Confirm with stakeholders what “acceptable” looks like (eg alert recognisability, false alarm thresholds, safe fallback behaviour); a sketch expressing such criteria as measurable thresholds follows this list.
- Identify and run scenarios. Engage representative users to interact with the system under a set of scenarios representing various conditions (ie normal, degraded and emergency states). Capture usability, understanding, and safety concerns.
- Document readiness and approvals. Record findings, gain stakeholder approvals, and document any conditional deployment requirements or additional controls (eg further training, fallback procedures).
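To illustrate how agreed acceptance criteria can be made testable, the sketch below expresses them as explicit thresholds and checks invented trial results against them. All criterion names, thresholds, and figures are assumptions for the example; actual values must come from stakeholder agreement.

```python
# Hypothetical acceptance criteria agreed with stakeholders, expressed as
# measurable thresholds. All names and values are illustrative only.
criteria = {
    "alert_recognition_rate": {"threshold": 0.95, "higher_is_better": True},
    "false_alarms_per_week":  {"threshold": 2.0,  "higher_is_better": False},
    "safe_fallback_success":  {"threshold": 1.0,  "higher_is_better": True},
}

# Example results from a staged trial (invented figures).
trial_results = {
    "alert_recognition_rate": 0.97,
    "false_alarms_per_week": 3.5,
    "safe_fallback_success": 1.0,
}

for name, rule in criteria.items():
    value = trial_results[name]
    ok = value >= rule["threshold"] if rule["higher_is_better"] else value <= rule["threshold"]
    print(f"{name}: {value} -> {'PASS' if ok else 'FAIL (record as deployment condition)'}")
```

A failed criterion does not necessarily block rollout; it may instead be recorded as a conditional deployment requirement with an associated control, as described above.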
Human Factors contributions to user acceptance testing
- Develop Human Factors-related test criteria. Include measures such as comprehension, trust, and response time (for further information see the Human Factors Guidance for Evaluating Innovative Level Crossing Technologies); a sketch showing how a response-time criterion can be checked appears after this list.
- Observe and evaluate user behaviour against scenarios. Observe representative users interacting with the system across the scenarios identified above (ie normal, degraded and emergency states). Where possible, scenarios should incorporate emotional and cognitive states, for example how fatigue, stress, distraction, or time pressure affect user performance and decision-making; these factors should inform both test evaluation and subsequent design refinements. Assess behaviour, usability, and error likelihood in response to alerts and warnings, and conduct interviews or surveys to gather qualitative insights (eg perceived clarity, comfort, confidence).
- Flag deployment risks. Identify any issues that may compromise safe deployment if not addressed. Consider the need for additional risk controls or change management processes.
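As a sketch of how a Human Factors test criterion such as response time can be evaluated, the example below summarises per-participant measurements and compares the 95th percentile with an assumed 2-second limit. Both the data and the criterion are invented for illustration.

```python
from statistics import mean, quantiles

# Response times (seconds) from an illustrative user acceptance trial;
# the 2.0 s criterion and the data are assumptions for the sketch.
response_times = [1.2, 1.5, 0.9, 2.4, 1.1, 1.8, 1.3, 3.1, 1.0, 1.6]
CRITERION_S = 2.0

p95 = quantiles(response_times, n=20)[18]  # 95th percentile (19th of 20 cut points)
print(f"mean = {mean(response_times):.2f} s, 95th percentile = {p95:.2f} s")

# Judge the population, not just the average: a criterion on the 95th
# percentile catches slow outliers that a mean would hide.
print("PASS" if p95 <= CRITERION_S else "FAIL: investigate slow responses")
```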