Welcome to DeepREAL

The Deep Robust & Explainable AI Lab at the University of Delaware consists of computer scientists whose mission is to develop safe, reliable, and explainable AI models, upon which cross-disciplinary research can advance synergistically, particularly for high-stakes use in science, medicine, and autonomous systems.

We work in the areas of Machine Learning, Computer Vision, and Safe Learning Systems.

ROBUST
MACHINE LEARNING

Tackle the out-of-distribution (OoD) challenge posed by dynamic, long-tail, or previously unseen data. Our work also optimizes for modern HPC platforms to manage scaling laws.

EXPLAINABLE
MACHINE LEARNING

Safeguard AI predictions with valid rationales for safety and reliability. Our work provides reasons for AI/ML decisions in a form domain experts can understand, potentially leading to scientific knowledge discovery.

SAFE AI
APPLICATIONS

Develop safe learning systems for critical domains where safety and reliability cannot be compromised. Our group has particular expertise in safe AI for science, medicine, and autonomous systems.