nxtAIM - NXT GEN AI METHODS

Project duration
January 01, 2024 - December 31, 2026
Project partners
- Continental Automotive Technologies GmbH
- Aptiv Services Deutschland GmbH
- AVL Deutschland GmbH
- Capgemini Engineering S.A.S. & Co. KG
- DENSO Automotive Deutschland GmbH
- Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)
- dSPACE GmbH
- Forschungszentrum Jülich GmbH
- FZI Forschungszentrum Informatik
- HELLA GmbH & Co. KGaA
- HELLA AGLAIA MOBILE VISION GmbH
- Hochschule für angewandte Wissenschaften München
- IPG Automotive GmbH
- Ludwig-Maximilians-Universität München
- Mercedes-Benz AG
- Technische Universität Berlin
- Universität Freiburg
- Universität Tübingen
- Valeo Schalter und Sensoren GmbH
- ZF Friedrichshafen AG
Funding
The project is funded by the German Federal Ministry for Economic Affairs and Climate Action under grant no. 19A23014I.
Project Description
Automated driving functions are still severely limited in their scope of application. The reasons lie in today's system architectures and in the discriminative machine learning methods they employ. Building on generative methods, NXT GEN AI METHODS introduces bidirectional information flow as a new paradigm into the chain of effects and enables major improvements for the development of autonomous driving functions:
- Better scalability through a virtually inexhaustible reservoir of data for offline testing, validation, and training, as well as for online error detection.
- Better transferability through the ability to deconstruct and recombine semantic information, and through expansion of the operational design domain (ODD) via targeted scenario and sensor data generation.
- Better traceability through online verification, plausibility checks on the individual processing steps of the chain of effects during operation, and a better understanding of the latent space.
As a key result for industrial use, foundation models for driving data will be created.
To achieve these goals, the work plan comprises six sub-projects (TP): TP1 "Generative Models for Sensors" generates sensor data from the environment model on the basis of individual images, while TP2 "Generative Autoregressive Models for Image Sequences" models the dynamic evolution of the environment in sensor space. TP3 "Generative Models for Abstract Scenarios and Planning" combines the environment model with prediction and planning, and uses generative models to feed this information back into the chain of effects. TP4 "Automotive Foundation Models and Latent Space" structures the environment model as a general latent space composed of interpretable, learned components. How the new approaches can be integrated into a system that runs in the vehicle forms the core of TP5 "Automotive Scalability". Finally, TP6 "Plausibility" evaluates the systems created in TP1, TP2, and TP3 with regard to their degree of realism.
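The autoregressive principle behind TP2 can be sketched in a few lines: each generated frame is appended to the history and conditions the next prediction, so a short seed sequence can be rolled out into an arbitrarily long one. The sketch below is purely illustrative and is not part of the project; the linear frame extrapolator stands in for what would, in practice, be a learned generative network over sensor data.

```python
import numpy as np

def predict_next_frame(history: np.ndarray) -> np.ndarray:
    """Placeholder next-frame model: linearly extrapolates the last two
    frames. A real model would be a trained generative network."""
    if len(history) < 2:
        return history[-1]
    return np.clip(2 * history[-1] - history[-2], 0.0, 1.0)

def rollout(seed_frames: np.ndarray, n_steps: int) -> np.ndarray:
    """Autoregressive generation: each predicted frame is appended to
    the history and conditions the following prediction."""
    frames = list(seed_frames)
    for _ in range(n_steps):
        frames.append(predict_next_frame(np.stack(frames)))
    return np.stack(frames)

# Two 4x4 grayscale seed frames with a brightening trend
seed = np.stack([np.full((4, 4), 0.2), np.full((4, 4), 0.3)])
video = rollout(seed, n_steps=3)
print(video.shape)  # (5, 4, 4): 2 seed frames + 3 generated frames
```

The same rollout loop applies unchanged whatever model replaces the extrapolator, which is what makes autoregressive generation attractive as an "inexhaustible reservoir" of synthetic sequences.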