POSTGRADUATE RESEARCHER IN ARTIFICIAL INTELLIGENCE
martin.aleksandrov@fu-berlin.de
Short description: The XPro project tackles an important problem in engineering and robotics: the probabilistic explainability of deep models in robot-critical applications such as autonomous robotic perception, self-driving robotic vehicles, and decision making. We will focus on explainability (which is not the same as interpretability) by developing post-hoc techniques for existing pre-trained deep models used in robot perception. In terms of case studies, XPro will explore two application domains: robotic perception and autonomous robotic vehicles. The XPro team will collaborate with early-career and senior researchers, as well as PhD students, to develop new probabilistic, post-hoc calibration methods for explainable models and new uncertainty-quantification evaluation metrics, and to validate them in real-world application domains (i.e., relevant case studies); dissemination activities and short-term missions will be carried out as well.
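To make the notions of post-hoc calibration and uncertainty-quantification metrics concrete, the following minimal sketch (illustrative only, not XPro code) applies temperature scaling to the logits of a hypothetical pre-trained classifier and measures calibration with the expected calibration error (ECE); all names and data are invented.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Scale logits by a temperature before normalising into probabilities.
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=10):
    # ECE: weighted average of |accuracy - confidence| over confidence bins.
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            acc = (pred[mask] == labels[mask]).mean()
            ece += mask.mean() * abs(acc - conf[mask].mean())
    return ece

def fit_temperature(logits, labels, candidates=np.linspace(0.5, 5.0, 46)):
    # Post-hoc calibration: choose the temperature that minimises the
    # negative log-likelihood on held-out validation data.
    def nll(t):
        p = softmax(logits, t)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return min(candidates, key=nll)

# Invented "validation" logits for a 3-class perception model (deliberately overconfident).
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = rng.normal(size=(500, 3)) + 4.0 * np.eye(3)[labels]

t = fit_temperature(logits, labels)
print("fitted temperature:", float(t))
print("ECE before calibration:", expected_calibration_error(softmax(logits), labels))
print("ECE after calibration: ", expected_calibration_error(softmax(logits, t), labels))
```

Temperature scaling is one standard post-hoc method among many; the project's own techniques may differ, but the sketch shows how calibration can be added to a pre-trained model without retraining it.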
Short description: In 2017, the German Ethics Commission on Automated and Connected Driving postulated principles for the use of related technologies for social good in the future. Technological trust is one vital feature that follows from these principles: it requires that emerging applications of Vehicle Routing Problems (VRPs) allow the vehicles to be used in a fair and efficient manner. Examples of such applications include shared autonomous vehicles, shared subsidized vehicles, and data-driven logistics. However, the questions of how we can model such applications, how we can share the vehicles in a fair and efficient manner, and how we can solve the associated technical challenges remain largely unexplored from both theoretical and empirical perspectives. The project will tackle these questions by using ideas from Computational Social Choice and Vehicle Routing Problems, thus laying down the foundations of a new research field: Social Vehicle Routing Problems.
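As a toy illustration of the fairness-versus-efficiency tension behind sharing vehicles (not project code, all numbers invented), the sketch below assigns riders to shared vehicles on a small cost matrix and compares a utilitarian assignment (minimum total cost) with an egalitarian one (minimum cost for the worst-off rider).

```python
from itertools import permutations

# Toy instance: cost[r][v] is the travel cost of rider r when served by vehicle v.
cost = [
    [1, 4, 9],
    [9, 1, 4],
    [4, 9, 8],
]
riders = range(len(cost))

def all_assignments():
    # Each permutation gives rider r the vehicle perm[r] (one distinct vehicle per rider).
    for perm in permutations(range(len(cost[0]))):
        yield perm, [cost[r][perm[r]] for r in riders]

# Utilitarian (efficiency): minimise the total cost over all riders.
util_perm, util_costs = min(all_assignments(), key=lambda a: sum(a[1]))
# Egalitarian (fairness): minimise the cost of the worst-off rider.
egal_perm, egal_costs = min(all_assignments(), key=lambda a: max(a[1]))

print("utilitarian:", util_perm, util_costs, "total =", sum(util_costs))
print("egalitarian:", egal_perm, egal_costs, "total =", sum(egal_costs))
```

On this instance the efficient assignment leaves one rider with a cost of 8, while the fair assignment caps every rider's cost at 4 at a slightly higher total, illustrating the kind of trade-off that arises when vehicles are shared.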
Short description: The AMPLify project will lay the foundations of a new field, computational behavioural game theory, which brings a computational perspective, computational implementation, and behavioural insights to game theory. These foundations will be laid by tackling a pressing problem facing society today: the efficient and fair allocation of resources and costs. Allocation research has previously considered simple, abstract models such as cake cutting. We propose to develop richer models that capture important new features, such as asynchronicity, which occur in many markets being developed in our highly connected and online world. The mechanisms currently used to allocate resources and costs are limited to these simple, abstract models and do not take into account how people behave in practice. We will therefore design new mechanisms for these richer allocation problems that exploit insights gained from behavioural game theory, such as loss aversion. We will also tackle the complexity of these rich models and mechanisms with computational tools. Finally, we will use computation to increase both the efficiency and fairness of allocations. As a result, we will be able to do more with fewer resources and greater fairness. Our initial case studies in resource and cost allocation demonstrate that we can improve efficiency greatly, offering one company alone savings of up to 10% (worth tens of millions of dollars every year). We predict a greater impact with the more sophisticated mechanisms to be developed during this project.
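To ground the "simple, abstract models such as cake cutting" mentioned above, here is a minimal sketch (illustrative only, not project code) of the classic divide-and-choose protocol on a discretised cake for two agents whose valuations are invented for the example.

```python
import numpy as np

# Discretised cake: each agent's value density over 1000 equal slices.
# Both densities are invented for illustration and each sums to 1.
rng = np.random.default_rng(1)
n = 1000
density_a = rng.random(n)
density_a /= density_a.sum()
density_b = rng.random(n)
density_b /= density_b.sum()

# Divide-and-choose: agent A cuts where the two pieces are (nearly) equal
# in her own valuation; agent B then takes the piece she values more.
cut = int(np.searchsorted(np.cumsum(density_a), 0.5))
left_a, right_a = density_a[:cut].sum(), density_a[cut:].sum()
left_b, right_b = density_b[:cut].sum(), density_b[cut:].sum()

b_piece = "left" if left_b >= right_b else "right"
a_piece = "right" if b_piece == "left" else "left"
a_value = left_a if a_piece == "left" else right_a
b_value = left_b if b_piece == "left" else right_b

# Each agent receives (about) at least half of the cake by her own measure,
# so the allocation is proportional and, with two agents, envy-free.
print(f"A gets the {a_piece} piece, worth {a_value:.3f} to A")
print(f"B gets the {b_piece} piece, worth {b_value:.3f} to B")
```

The richer models the project proposes go beyond such two-agent, synchronous protocols, for example by handling asynchronous arrivals and behavioural effects such as loss aversion.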