Safe Learning-Enabled Systems for Autonomous Vehicles

Funding Source: NSF SLES Medium

Budget: $1,499,949

Time: 10/2024 - 09/2028

PI: Dr. Xi Peng (Machine Learning)

Co-PI: Dr. Weisong Shi (Autonomous Vehicle)

Co-PI: Dr. Chengmo Yang (Hardware)

The proposed OSLA (Orchestrated Safe Learning for Autonomous driving) system.

Abstract: Machine Learning (ML) has transformed autonomous driving by enabling vehicles to perceive their environment with high precision, make real-time decisions, and operate without human intervention. However, safety risks may stem from the model, such as inappropriate extrapolation in novel scenarios; from the hardware, which suffers from runtime faults and errors; or from the system, where the real-time operating system (RTOS) may fail to deliver decisions in time. Developing a safe learning-enabled system for autonomous vehicles (AVs) therefore requires orchestrating the model, hardware, and system together. This project pursues cross-layer optimizations for end-to-end safety by developing rational ML models with valid rationales, integrating hardware reliability into ML design to tolerate runtime faults, and designing an RTOS scheduler that ensures time predictability while accounting for model and hardware reliability. Implementing these advancements on real autonomous driving platforms will enhance AV safety, promote efficient transportation, and advance education and workforce development in AI and autonomous driving with a commitment to diversity and inclusion in STEM fields.

Publications:

Open-Sourced Data:

  • Prediction Rationale Dataset for ImageNet: We construct a new rationale dataset that covers all 1,000 categories in ImageNet. For each category, we generate an ontology tree with a maximum height of two. Combining attributes and sub-attributes, this dataset contains over 4,000 unique rationales. [https://github.com/deep-real/DCP/tree/main/Rationale%20Dataset]
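To make the dataset's structure concrete, the following is a minimal sketch of what one ontology entry and a rationale count might look like. The schema, category name, and attribute strings here are purely illustrative assumptions, not the dataset's actual format; consult the repository above for the real data layout.

```python
# Hypothetical ontology entry: a category maps to attributes, and each
# attribute maps to a list of sub-attributes (tree height <= 2).
ontology = {
    "zebra": {  # illustrative ImageNet category
        "coat": ["black-and-white stripes", "short fur"],
        "body": ["horse-like build", "tufted tail"],
    }
}

def count_rationales(tree):
    """Count unique rationales in an ontology: each attribute and each
    sub-attribute is treated as one rationale."""
    total = 0
    for attributes in tree.values():
        for sub_attrs in attributes.values():
            total += 1 + len(sub_attrs)  # the attribute plus its children
    return total

print(count_rationales(ontology))  # -> 6 for this toy entry
```

Aggregated over all 1,000 categories, counting attributes and sub-attributes in this way is how the dataset reaches its 4,000+ unique rationales.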

Open-Sourced Software:

  • Distributionally Robust Explanations (DRE): A framework for enhancing ML model robustness against out-of-distribution data. Source code and pretrained models at [https://github.com/deep-real/DRE].
  • Rationale-informed Optimization: A toolbox for training ML models to ensure dual-correct predictions. Source code and pretrained weights at [https://github.com/deep-real/DCP].
  • Ordinal Ranking of Concept Activation (ORCA): A lightweight, interpretable failure detection toolkit based on concept activation rankings. Codebase at [https://github.com/Nyquixt/ORCA].