Description

The XPro project tackles an important problem in engineering and robotics: the probabilistic explainability of deep models in safety-critical robotic applications such as autonomous robot perception, self-driving vehicles, and decision making. The project focuses on explainability (which is distinct from interpretability) by developing post-hoc techniques for existing pre-trained deep models applied to robot perception. As case studies, XPro will explore two application domains: robotic perception and autonomous robotic vehicles. The XPro team, bringing together young and senior researchers as well as PhD students, will collaborate on developing new probabilistic post-hoc calibration towards explainable models (WP1) and new uncertainty-quantification evaluation metrics (WP2), and on validation in real-world application domains, i.e., relevant case studies (WP3); dissemination activities and short-term missions will be carried out as well (WP4).
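To make the notion of post-hoc calibration concrete, the sketch below shows temperature scaling, a standard post-hoc calibration method in which a pre-trained classifier's logits are divided by a learned temperature before the softmax, softening overconfident predictions without retraining the model. This is a minimal, illustrative example with hypothetical logit values; it does not represent the specific framework XPro will develop.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax.

    temperature = 1.0 reproduces the model's raw probabilities;
    temperature > 1.0 softens (calibrates) overconfident outputs.
    """
    scaled = [z / temperature for z in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a pre-trained classifier head
logits = [4.0, 1.0, 0.5]
uncalibrated = softmax(logits)                   # sharply peaked, possibly overconfident
calibrated = softmax(logits, temperature=2.0)    # softened confidence after calibration
```

In practice the temperature would be fitted on a held-out validation set (e.g., by minimizing the negative log-likelihood), so that the reported confidences better match empirical accuracy.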

Overarching Goal

The overarching goal of the collaborative XPro project is to bring together young and senior researchers, alongside Ph.D. students, to further advance the emerging field of ‘X-ML’ (Explainable Machine Learning) by developing new post-hoc calibration techniques for safety-critical robotic applications, namely sensory perception for robots and autonomous systems. We propose to achieve this goal by fulfilling the following three sub-goals, each addressed within a separate work package (WP).

  • [Objective 1]: Developing new explainability frameworks for existing pre-trained deep architectures applied to safety-critical classification problems in mobile robotics (WP1).
  • [Objective 2]: Evaluating the proposed frameworks, and their components, by means of existing state-of-the-art and novel quantification metrics that account for partial information and misclassification (cf. WP2).
  • [Objective 3]: Exploring and validating the developed frameworks in relevant application domains, namely perception systems for robots and self-driving vehicles (object detection), thus linked with WP3.

These objectives are embedded within the umbrella of publication and dissemination activities that are part of WP4. The objectives are ambitious yet encouraging, particularly because we will address challenging areas (explainability of deep AI/ML applied to real-world problems) with direct and indirect impacts on the decision-making mechanisms of safety systems; moreover, they reflect the inter- and multidisciplinary scientific nature of the project and stem from the Team's determination to progress beyond the state of the art. The following expected results reflect XPro's key scientific outcomes:

  • Creation of a probabilistic explainable framework able to cope with realistic (noisy/uncertain) data;
  • Development of new evaluation performance metrics tailored to explainable deep models from a probabilistic perspective;
  • Deployment and validation of the developed framework in real-world cases (robotics and self-driving perception), subject to a safety-in-the-loop concept;
  • Establishment and consolidation of ‘X-ML’ (explainable machine learning) as a key research area in the beneficiary institutions (ISR-UC and FU-Berlin), strengthening the scientific “ecosystem” that will support and leverage new directions, e.g., international/EU projects.

To accomplish XPro's technical and scientific objectives, the steps and activities to be carried out throughout the project have been divided into four Research-Technical WPs (WP1-4), complemented by a work package related to project coordination/management (WP5), impact creation, and dissemination of the results. A detailed technical description of the work plan, the links among the WPs throughout the project, and the expected technical-scientific results per WP are given in a dedicated section (“Work Plan”).