Phase 4: In-service

Level Crossing Human Factors Integration Toolkit
Objective

The aim of the In-service phase is to install and configure the verified technology at operational sites, validate that the deployed system meets stakeholder needs, define and execute the ongoing activities needed to keep the technology reliable and safe, monitor and analyse system performance, and support planned or reactive changes so that systems remain effective, safe, and aligned with stakeholder and regulatory expectations over time.

Initial deployment

Description

Deployment transitions the system from lab/test environments to operational settings. This involves configuring hardware and software at each crossing, preparing infrastructure, and coordinating between multiple stakeholders (eg road and rail authorities, local governments). Complex deployments may require post-installation acceptance testing at each crossing to verify functionality and safety compliance. A Deployment Plan may be necessary to coordinate activities and responsibilities, particularly when multiple agencies such as rail operators, road authorities, and local governments are involved.

Key activities
  1. Develop deployment strategy. Define which components or system capabilities will be deployed, at which crossings, and the timing of each phase. Consider funding allocation, operational impact, and safety priorities across the deployment schedule.
  2. Write deployment plan (optional). Develop a detailed deployment plan if multiple crossings, agencies, or system configurations are involved. Key considerations include:
    • Facilities readiness such as power, communications, and lighting at crossing sites
    • Training needs for operational staff and maintenance teams
    • Coordination with stakeholders (eg rail operators, emergency services)
    • Analysis of alternatives and agreement on sequencing priorities
  3. Perform deployment activities. Prepare and mobilise resources, verify infrastructure readiness, and execute deployment. Each phase should end with verification and safety acceptance tests before operational handover.

Human Factors contributions to initial deployment

Human Factors specialists support initial deployment by ensuring that deployed systems are operationally usable and safe in real-world settings, with supportive environments, processes and any relevant training in place.

  1. Support deployment planning. Ensure Human Factors considerations are incorporated into aspects such as signage, task design, and training.
  2. Operational readiness input. Contribute to assessments of site readiness, usability, and end-user preparedness.
  3. Human Factors acceptance support. Confirm Human Factors requirements have been met before operational use, and assist with final inspections.
  4. Stakeholder coordination. Support alignment between road and rail stakeholders on Human Factors-related concerns, including contingency operations and user communication.

System validation

Description

While earlier verification processes focus on whether the system is built according to requirements, the System validation phase assesses whether the “right” system was built. Validation focuses on outcomes in real-world contexts, such as driver interpretation of alerts and system performance during routine or degraded operations. Validation planning begins early in the project but is executed post-deployment, often at trial sites.

Validation compares system performance against stakeholder needs and safety goals, using behavioural, technical, and perceptual data. Incorporating context-specific conditions (eg night driving, rural settings) is essential for a comprehensive evaluation.

Key activities

The validation process includes three main activities:

  1. Develop validation strategy. Define outcome-based metrics (eg reduced violations, alert comprehension). Determine need for simulation, trials, or phased rollouts.
  2. Plan validation. Engage stakeholders, identify representative sites, and prepare protocols for behaviour observation and data collection.
  3. Validate system in operations. Assess performance and user response in situ. Compare findings to baseline data and project objectives.
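
Where baseline data exist, the comparison in activity 3 can be made quantitative. The sketch below assumes hypothetical observation counts from a trial site and uses a simple two-proportion z-test to check whether an improvement in driver compliance is larger than chance; it is illustrative only, and a validation plan may specify other analyses.

```python
# Illustrative pre/post comparison of driver compliance at a trial crossing.
# Counts are hypothetical; the two-proportion z-test is one simple option.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(compliant_pre: int, total_pre: int,
                     compliant_post: int, total_post: int) -> tuple[float, float]:
    """Return (z, one-sided p) for H1: compliance improved post-deployment."""
    p_pre = compliant_pre / total_pre
    p_post = compliant_post / total_post
    pooled = (compliant_pre + compliant_post) / (total_pre + total_post)
    se = sqrt(pooled * (1 - pooled) * (1 / total_pre + 1 / total_post))
    z = (p_post - p_pre) / se
    return z, 1 - NormalDist().cdf(z)

# Hypothetical observations: 412 of 480 road users compliant at baseline,
# 462 of 495 compliant after deployment.
z, p = two_proportion_z(412, 480, 462, 495)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")  # small p suggests a real improvement
```
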
Human Factors contributions to validation

Human Factors ensures that the system as built meets the actual needs of users and stakeholders in the real-world context, rather than just fulfilling technical requirements. Human Factors input is vital to the Validation phase and supports testing of whether the technology is usable and understandable, and whether it supports safe and effective human interaction. See the Human Factors Guidance for Evaluating Innovative Level Crossing Technologies for further information that may support this activity.

  1. Validation planning. Translate user needs (identified via Human Factors methods such as task analysis, incident analysis, or stakeholder workshops) into validation criteria. Recommend data collection methods tailored to human performance, such as behavioural observations, interviews, and cognitive walkthroughs. Provide input into experimental design (eg pre/post comparisons, observational studies, driver interviews), and identify context-specific scenarios (eg night-time driving, unusual signage placement) that need targeted assessment.
  2. Define Human Factors-related metrics. Performance metrics may be objective, such as driver compliance rates (eg reduced non-compliances), comprehension of warning symbols/messages, and reaction times. Metrics may also be subjective, such as user satisfaction and perceived trust in the system (a worked sketch follows this list).
  3. Conduct or observe field validation. Lead or support usability testing and simulated field trials involving the collection of real-world data on driver interaction with the system during trial operations. Assess residual risks and unintended consequences from a user/system interaction perspective (consider use of Human Factors Risk Assessment Prompts).
  4. Post-implementation Human Factors review. As part of the broader post-implementation review, Human Factors specialists may evaluate:
    • Whether identified Human Factors controls were implemented and effective
    • Any unintended consequences or new Human Factors issues arising after deployment
    • Whether the system supports intended behaviours under operational stressors
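
As a concrete illustration of the metrics in item 2, the sketch below computes comprehension, reaction time, and trust figures from hypothetical trial records; the record layout and the 1-5 trust scale are assumptions, not prescribed measures.

```python
# Illustrative Human Factors metric summary from hypothetical trial records.
# Each record: (correctly interpreted warning?, reaction time in s, trust 1-5).
from statistics import mean

trial_records = [
    (True, 1.9, 4), (True, 2.4, 5), (False, 3.8, 2), (True, 2.1, 4),
]

comprehension_rate = mean(1 if ok else 0 for ok, _, _ in trial_records)
mean_reaction_time = mean(rt for _, rt, _ in trial_records)
mean_trust = mean(trust for _, _, trust in trial_records)

print(f"Comprehension: {comprehension_rate:.0%}, "
      f"reaction time: {mean_reaction_time:.1f} s, trust: {mean_trust:.1f}/5")
```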

Operations and maintenance

Description

Operations and maintenance covers both proactive (eg scheduled inspections, firmware updates) and reactive (eg fault repair, bug fixes) work. For low-cost systems, special attention is required to ensure that operations and maintenance activities are feasible, sustainable, and human-centred.

Operations and maintenance includes asset configuration, roles and responsibilities, escalation protocols, and support infrastructure. As systems evolve, documentation, training, and maintenance procedures must remain up-to-date.

Key activities
  1. Plan operations and maintenance activities. Develop detailed plans (building on those defined during Operational Concept Development), including role definition, costing models, procedural documentation, training, and regulatory compliance plans.
  2. Collect operations and maintenance information. Track events such as downtime, user-reported issues, environmental wear, and software bugs. Use data to inform ongoing risk management (a minimal tracking sketch follows this list).
  3. Perform operations and maintenance. Perform maintenance as scheduled or as required. Keep records current and integrate changes via configuration and version control.
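
The sketch below illustrates one way the event tracking in activity 2 might be recorded and summarised; the MaintenanceEvent record, its fields, and the availability calculation are assumptions for illustration, not part of any specific asset-management system.

```python
# Illustrative O&M event log for a crossing, with a simple availability figure
# derived from recorded downtime. Record fields and values are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MaintenanceEvent:
    crossing_id: str
    start: datetime
    end: datetime
    reactive: bool        # True = fault repair, False = scheduled work
    description: str = ""

def availability(events: list[MaintenanceEvent],
                 period_start: datetime, period_end: datetime) -> float:
    """Fraction of the reporting period the system was in service."""
    downtime = sum((e.end - e.start for e in events), timedelta())
    return 1 - downtime / (period_end - period_start)

log = [
    MaintenanceEvent("LX-042", datetime(2024, 3, 4, 9), datetime(2024, 3, 4, 13),
                     reactive=True, description="sensor fault repair"),
    MaintenanceEvent("LX-042", datetime(2024, 3, 18, 7), datetime(2024, 3, 18, 9),
                     reactive=False, description="scheduled inspection"),
]
print(f"March availability: "
      f"{availability(log, datetime(2024, 3, 1), datetime(2024, 4, 1)):.2%}")
```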

Human Factors contributions to operations and maintenance

Human Factors can help support sustainable, human-centred operations and maintenance practices that reduce error, improve response times, and ensure system reliability under real-world conditions.

  1. Designing for maintainability. As noted in previous phases, Human Factors should influence design for ease of inspection, calibration, and repair activities (eg ensuring sensor mounts are accessible). Inspection and maintenance tasks should be designed to minimise injury risk, ensure accessibility for workers, and support effective communication and handovers between shifts and teams. Further, consideration should be given to the design of the user interface for monitoring and maintenance equipment (eg service software, diagnostics dashboards).
  2. Develop training and procedures. Support the design of awareness-raising materials for primary users that provide clear, concise and actionable messages, in addition to more detailed training for secondary users (eg workers interfacing with the system) to ensure they hold the appropriate knowledge, skills and experience to conduct safety-critical tasks. Procedures should be designed to be usable, taking into account the sequential nature of tasks, and should appropriately balance guidance with flexibility to allow for positive adaptations in how work is achieved.
  3. Contribute to organisational Human Factors considerations. Support activities such as workload assessment across teams tasked with monitoring/responding to issues (eg train controllers) or managing the maintenance of assets (eg road or rail maintenance teams). Consider staffing models that account for cognitive and physical demands.

Performance monitoring

Description

Performance monitoring tracks how systems and users operate in real-world contexts, capturing quantitative and qualitative indicators. These include system reliability, driver behaviour, safety performance, and emerging operational risks. Monitoring integrates both reactive data (eg incident reports, complaints) and proactive data (eg periodic audits, user feedback). Monitoring outcomes should feed into reviews, design refinements, or the consideration of additional risk controls such as the provision of information or training.

Key activities
  1. Define metrics and monitoring strategy. Align performance indicators with project goals and validation outcomes. Include technical metrics (eg false alert rates) and Human Factors metrics (eg usability ratings, comprehension errors).
  2. Collect and analyse data. Use automated systems, field audits, and feedback channels to gather and assess performance data. Identify trends, patterns, or deterioration in safety performance (a simple detection sketch follows this list).
  3. Respond to findings. Use findings to develop corrective actions such as system refinements, policy adjustments, or planning for wider upgrades. Engage stakeholders by providing regular performance summaries.
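
To illustrate the trend detection in activity 2, the sketch below flags months in which a hypothetical false-alert count exceeds a simple control limit derived from a baseline rate; the 3-sigma Poisson limit is an assumed method, and a monitoring strategy may define different thresholds or techniques.

```python
# Illustrative deterioration check on monthly false-alert counts. The
# baseline rate, counts, and 3-sigma Poisson control limit are hypothetical.
from math import sqrt

baseline_rate = 4.0  # mean false alerts per month, established at validation
control_limit = baseline_rate + 3 * sqrt(baseline_rate)  # ~Poisson 3-sigma

monthly_false_alerts = {"Jan": 3, "Feb": 5, "Mar": 4, "Apr": 6, "May": 12}
for month, count in monthly_false_alerts.items():
    if count > control_limit:
        print(f"{month}: {count} false alerts exceeds the control limit "
              f"({control_limit:.1f}) - investigate and consider corrective action")
```
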
Human Factors contributions to performance monitoring

Human Factors can help to identify appropriate performance monitoring strategies and metrics, and to develop responses to adverse trends or issues identified. The following activities may be used to support performance monitoring. See the Human Factors Guidance for Evaluating Innovative Level Crossing Technologies for further information that may support this phase.

  1. Define Human Factors-related metrics. Identify and validate measures associated with driver behaviour/responses, interface understanding, workload, and error likelihood.
  2. Trend analysis. Use a Human Factors lens to interpret behavioural data and identify potential drift in practices (ie risky behaviours emerging) or areas of increased risk.
  3. Support investigations. Provide Human Factors expertise to identify user behaviours and contributory factors involved in incidents. Incident investigation and analysis methods such as AcciMap or STAMP-CAST may be useful to support the identification of contributory factors within a just culture environment.
  4. Support development of corrective actions. Advise on interface refinements, needs for additional information or training, or wider policy shifts to address Human Factors-related trends.
  5. Governance integration. Ensure Human Factors monitoring is part of reporting structures, review boards, and risk oversight processes.

Upgrades and evolution

Description

Upgrades may be triggered by evolving needs, new safety standards, hardware obsolescence, identification of adverse safety trends, or user feedback/complaints. Upgrades may be the result of phased rollouts or be triggered by unexpected incidents. Documentation quality varies, so reverse engineering may be required before applying forward engineering practices.

Key activities
  1. Analyse changes and triggers. Distinguish between proactive enhancements (eg new features) and reactive fixes (eg compatibility issues). Conduct impact assessments.
  2. Reverse engineering. Document existing interfaces and behaviour if records are missing. Understand user tasks and potential failure points.
  3. Forward engineering. Follow the V-model system lifecycle: update requirements, revise designs, and verify and validate changes.

Human Factors contributions to upgrades and evolution

Human Factors specialists can contribute to upgrade and evolution activities by ensuring that changes to the system (either planned or reactive) do not introduce new Human Factors-related risks or safety issues, and that they maintain alignment with user needs and the operational context.

  1. Change impact assessment. Evaluate how changes (eg sensor model replacement, updates to the HMI) affect human performance, particularly for system operators, maintenance personnel, and primary users (eg vehicle drivers). Use Human Factors techniques like cognitive walkthroughs, scenario-based analysis, or human error identification methods (eg SHERPA, HAZOP) to consider potential risks.
  2. Reverse engineering support. Where legacy system documentation is missing, Human Factors specialists can help reconstruct user interaction patterns and task flows to identify critical user functions at risk during change, and to develop training and documentation to support the new configuration.
  3. Design and validate upgrades. Human Factors should be integrated into the revised system lifecycle. For example, Human Factors requirements and user scenarios should be updated; interfaces re-designed to reflect upgraded functionality; usability testing conducted on new or modified features, with refinements made as required; and changes validated post-deployment to ensure that performance and usability are not degraded.
  4. Lessons learned and continuous improvement. Human Factors-related lessons from changes/upgrades should be documented to inform future technology deployments and system design standards and processes. Lessons learned can be shared to support the refinement of HFI guidance, especially for low-cost or rapidly developed technologies.

Next, Phase 5: Decommissioning