Automated Event Classification During Scenario Identification in Real-World Drives
In recent years, the development of autonomous driving systems (ADS) has increasingly relied on simulation-based testing rather than conventional drive-based testing. Realistic 3D environments, accurate physics modeling, and sophisticated simulators enable developers to challenge their perception and control systems in a virtual setting. One of the many challenges of this approach is the selection and creation of relevant test scenarios that allow thorough validation and verification of the targeted ADS function.
Overcoming challenges in autonomous vehicle testing
The range of errors that can occur in the real world is enormous, from unpredictable environmental conditions to the dynamics of the traffic surrounding the autonomous vehicle. Furthermore, the high complexity of real-world traffic situations results in virtually infinite possibilities. How can we ensure that autonomous vehicles operate reliably and safely while coping with a large variety of road layouts, multiple dynamic traffic participants, adverse weather conditions, and unexpected situations?
There are two complementary approaches to creating test scenarios that replicate reality: data-driven and knowledge-based methods. The data-driven approach relies on observing and measuring situations as they occur, on real roads or in virtual drives. By driving on actual roads, one encounters many valid traffic scenarios that are relevant for system testing. However, this approach does not guarantee a balanced scenario set, because many edge cases are, by definition, rare. This is where the knowledge-based approach comes in: by generating scenarios parametrically, it can explore the large parameter space of a single scenario in a short time and probe the boundaries of the ADS.
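The parametric idea behind the knowledge-based approach can be sketched as enumerating combinations of scenario parameters. The parameter names and value grids below are purely illustrative; a real scenario-generation tool would sample far larger, possibly continuous, parameter spaces.

```python
import itertools

# Hypothetical parameter grid for a single logical cut-in scenario.
parameter_space = {
    "ego_speed_kph": [80, 100, 120],
    "cut_in_gap_m": [10, 20, 30],
    "lateral_speed_mps": [0.5, 1.0],
}

def generate_variants(space):
    """Yield one concrete scenario per combination of parameter values."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

variants = list(generate_variants(parameter_space))
# 3 speeds x 3 gaps x 2 lateral speeds = 18 concrete variants
```

Even this toy grid turns one logical scenario into 18 concrete test cases, which illustrates how quickly the knowledge-based method fills the gaps left by rare real-world observations.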
Real-world testing remains a relevant source for scenarios
Front-loading testing in the development process is necessary because the number of drives it takes to fine-tune and validate autonomous systems is unattainable in the real world. However, field drives remain a relevant component of the overall process: they ground scenarios in reality, and they allow testing the systems in the real world once those systems reach a certain level of maturity. In either case, a real-world recording must be translated accurately into a replicable scenario in a simulator. This is achieved through a three-step process:
- Scenario Vectorization: The sensor recordings (camera, LiDAR, GPS, radar) are translated into a digital representation of the scene: road boundaries and lane markers are detected (or matched to an existing imported map), vehicles and pedestrians are detected, and their trajectories are recorded.
- Scenario Identification: Next, the scenes are analyzed, and interesting traffic situations are selected that cover the spectrum of tests for the ADS feature to be validated.
- Scenario Extraction: Finally, the static and dynamic components of selected scenes are exported into OpenDRIVE and OpenSCENARIO standard formats so that they can be imported into a wide range of simulation software solutions.
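The three steps above can be sketched as a minimal pipeline. The data model and function names here are hypothetical stand-ins, and the extraction step returns plain dictionaries rather than actual OpenDRIVE/OpenSCENARIO documents, which are rich XML formats in practice.

```python
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    actor_id: str
    samples: list  # (time_s, x_m, y_m) tuples in a map-fixed frame

@dataclass
class Scene:
    scene_id: str
    road_network: dict  # lanes, boundaries, markings (simplified)
    trajectories: list  # one Trajectory per detected actor
    tags: list = field(default_factory=list)

def vectorize(raw_log: dict) -> Scene:
    """Step 1: turn raw sensor data into a digital scene (placeholder logic)."""
    return Scene(
        scene_id=raw_log["id"],
        road_network=raw_log.get("map", {}),
        trajectories=[Trajectory(a, s) for a, s in raw_log.get("tracks", {}).items()],
    )

def identify(scene: Scene, predicate) -> bool:
    """Step 2: tag the scene as relevant if the selection predicate matches."""
    if predicate(scene):
        scene.tags.append("relevant")
        return True
    return False

def extract(scene: Scene) -> dict:
    """Step 3: split static and dynamic content, as OpenDRIVE/OpenSCENARIO would."""
    return {
        "xodr": scene.road_network,                       # static road description
        "xosc": [t.actor_id for t in scene.trajectories], # dynamic actors
    }
```

A drive recording would flow through `vectorize`, a selection predicate in `identify`, and finally `extract`, mirroring the vectorization, identification, and extraction stages described above.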
Event-based scenario identification
To maximize the value of drive recordings, finding relevant scenarios is a critical step. Some situations can be identified by telemetry cues, AD failure-mode markers, or even manual identification, but these cues often do not capture the context of the situation and require manual follow-up to classify the scenarios.
When a field drive is accurately converted into a vectorized representation, it is possible to perform a detailed analysis of the movement of the ego vehicle and the other traffic participants. Scenarios generated based on proximity alone can highlight potentially hazardous situations, but further categorization that describes the type of event is required to retrieve them later in the testing phase. This requires detailed tagging of the scenarios through the interpretation of the real-world situation.
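A common proximity-based criterion of the kind alluded to here is time-to-collision (TTC) between the ego vehicle and a lead vehicle. The threshold below is an illustrative assumption, not a value from the source.

```python
def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float):
    """Return the TTC in seconds, or None when the gap is opening."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None  # ego is not gaining on the lead vehicle
    return gap_m / closing_speed

def is_hazardous(gap_m, ego_speed_mps, lead_speed_mps, ttc_threshold_s=3.0):
    """Flag a frame as potentially hazardous when TTC drops below a threshold."""
    ttc = time_to_collision(gap_m, ego_speed_mps, lead_speed_mps)
    return ttc is not None and ttc < ttc_threshold_s
```

Such a metric flags *that* something is happening, but not *what*: a frame with low TTC could stem from a cut-in, a hard-braking lead vehicle, or the ego closing in, which is exactly why the event-level tagging described next is needed.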
By analyzing the behavior of traffic participants in the context of the road, NavInfo can detect and assign specific maneuver and event types to the ego vehicle and other traffic participants. With event-based scenario identification, it is possible to automatically tag cut-ins, cut-outs, lead vehicle deceleration, lane changes, and other traffic situations. This creates a higher-order scenario tagging solution that goes beyond digitization and helps classify scenario types directly in the drive data processing pipeline.
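As a rough illustration of how maneuver types can be derived from vectorized data, the sketch below classifies lane transitions of another vehicle relative to the ego vehicle using per-timestep lane assignments. This is a simplified stand-in, not NavInfo's actual classifier, and real systems would also consider gaps, speeds, and road context.

```python
def classify_lane_events(ego_lane_seq, actor_lane_seq, actor_ahead_of_ego):
    """Tag lane-transition events from per-timestep lane IDs (simplified sketch).

    cut-in:      actor enters the ego lane while ahead of the ego vehicle
    cut-out:     actor leaves the ego lane
    lane-change: any other lane transition
    """
    events = []
    for t in range(1, len(actor_lane_seq)):
        prev_lane, curr_lane = actor_lane_seq[t - 1], actor_lane_seq[t]
        if prev_lane == curr_lane:
            continue  # no transition at this timestep
        if curr_lane == ego_lane_seq[t] and actor_ahead_of_ego[t]:
            events.append((t, "cut-in"))
        elif prev_lane == ego_lane_seq[t - 1]:
            events.append((t, "cut-out"))
        else:
            events.append((t, "lane-change"))
    return events
```

Even this minimal rule turns raw trajectories into the kind of searchable event labels the pipeline attaches to scenarios.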
Scenario identification and extraction from field drives remain highly relevant in the ADS validation and verification process. With AI-enabled automation, it is possible to digitize entire drives effectively and accurately and, through event-based scenario identification, select relevant traffic situations for replication in a simulator. By interpreting the behavior of actors and the context, it is also possible to create more meaningful scenario labels and save the time and cost of manually classifying extracted scenarios into specific types.
The resulting scenarios can be easily sorted and selected by known scenario types. When combined with additional tags, such as type of actor, distances, speeds, weather conditions, and road geometry characteristics, relevant scenarios can be cataloged and retrieved more easily.
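The tag-based retrieval described above can be pictured as filtering a scenario catalog on metadata. The catalog entries and field names below are invented for illustration only.

```python
# Hypothetical scenario catalog produced by the tagging pipeline.
catalog = [
    {"id": "S001", "event": "cut-in", "actor": "car",
     "weather": "rain", "ego_speed_kph": 95},
    {"id": "S002", "event": "lead-deceleration", "actor": "truck",
     "weather": "clear", "ego_speed_kph": 110},
    {"id": "S003", "event": "cut-in", "actor": "motorcycle",
     "weather": "clear", "ego_speed_kph": 80},
]

def select(catalog, **criteria):
    """Return scenarios whose metadata matches every keyword filter."""
    return [s for s in catalog
            if all(s.get(key) == value for key, value in criteria.items())]
```

For example, `select(catalog, event="cut-in", weather="clear")` narrows thousands of recorded situations down to the exact scenario type a test campaign needs.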