11.11.2024 12:56
Research projects, collaborations
The BMBF is funding two new joint projects with LMU participation. One teaches AI models causal relationships, the other refines the tactile abilities of robots.
Currently available machine learning models are typically based on correlations, not causality. In other words, they produce their results from statistical probabilities without recognizing genuine cause-and-effect relationships. This can lead to errors and ultimately to poor performance. ChatGPT, for example, occasionally produces answers that sound plausible but are nonsensical or factually incorrect. The potential of artificial intelligence is also limited in medical applications, because the programs cannot establish causal relationships. A model that links cause and effect would enable more targeted treatment decisions. The same applies to areas of application in science, business and the public sector.
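The gap between correlation and causation can be illustrated with a minimal simulation (a hedged sketch, not part of the CausalNet project): two variables driven by a hidden common cause appear strongly correlated, yet the association vanishes once the confounder is accounted for.

```python
import random
import statistics

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]       # hidden common cause (confounder)
x = [zi + random.gauss(0, 0.5) for zi in z]      # X depends only on Z
y = [zi + random.gauss(0, 0.5) for zi in z]      # Y depends only on Z

# X and Y are strongly correlated although neither causes the other.
r_marginal = pearson(x, y)

# Removing the confounder's contribution (residuals with respect to Z)
# makes the spurious association disappear.
rx = [xi - zi for xi, zi in zip(x, z)]
ry = [yi - zi for yi, zi in zip(y, z)]
r_adjusted = pearson(rx, ry)

print(f"corr(X, Y)     = {r_marginal:.2f}")  # high (around 0.8)
print(f"corr(X, Y | Z) = {r_adjusted:.2f}")  # close to zero
```

A purely correlational model trained on X and Y would wrongly conclude that one predicts, and perhaps influences, the other; a causal model must represent the role of Z explicitly.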
The new CausalNet joint project, funded by the Federal Ministry of Education and Research (BMBF) with almost two million euros, has set itself the goal of launching a new generation of machine learning within three years. “We want to develop novel methods for integrating causality into machine learning models,” says Professor Stefan Feuerriegel. He is head of the Institute of Artificial Intelligence (AI) in Management at LMU and spokesperson for CausalNet. To integrate the principle of cause and effect into future AI models, Feuerriegel is collaborating with experts from Helmholtz AI, the Technical University of Munich (TUM), the Karlsruhe Institute of Technology and Economic AI GmbH.
The team aims to overcome the unique challenges of causal machine learning in high-dimensional environments using tools from representation learning, the theory of statistical efficiency and specific machine learning paradigms. “In addition, we will establish the effectiveness and robustness of our methods through theoretical results,” says AI expert Feuerriegel. This is important to ensure the reliability of the proposed methods. “We will then bring causal machine learning into real-world applications and demonstrate the concrete benefits for business, the public sector and scientific discoveries.”
CausalNet also aims to promote practical use and further development by making the software, tools and results it develops publicly accessible according to the open source principle. “In the next three years, we will take machine learning to a new level and make AI applications more flexible, efficient and robust,” says Feuerriegel.
GeniusRobot: Robots that see and grip better through AI
Reliably gripping and manipulating arbitrary objects is one of the central challenges in robotics, from applications in production to use in medicine. In this context, control methods that dynamically adjust the grip are still largely unexplored. “These require a targeted prediction of the effects of an interaction between the robot and its environment, which is to be achieved in this project using generative AI,” explains Professor Gitta Kutyniok, holder of the Chair of Mathematical Foundations of Artificial Intelligence at LMU. Only in this way can a robot adapt flexibly, resiliently and efficiently to changes in the environment, in the object to be grasped or in the activity itself. “Robots can, for example, react immediately if an object threatens to slip out of their hand,” says Professor Björn Ommer from the Chair of AI for Computer Vision and Digital Humanities/the Arts.
Such control requires not only visual and tactile sensors that can detect contact and shear forces, but also corresponding multimodal models from artificial intelligence (AI) that can integrate and interpret sensory information from multiple complementary sources. This is exactly where the GeniusRobot research project comes in, in which the working groups of Gitta Kutyniok and Björn Ommer at LMU are involved. The project partner institutions also include the Technical University of Nuremberg and the Technical University of Dresden.
“Our goal is to develop new, interpretable AI models with which methods from the field of generative AI can be used to generate tactile information from image data in robotics,” explains AI expert Kutyniok. To plan gripping movements, tactile sensor data should be predicted from camera data. “Conversely, these predictions are converted back into camera images using another generative model, so that the changes to objects caused by the robot’s movements and manipulations can be directly visualized,” adds AI researcher Ommer. This also makes it possible to manipulate hidden objects that can only be partially captured by the camera.
A key development focus is on the interpretability of the models, which is essential for the use of generative AI in safety-critical environments. In the future, the results also open up new application scenarios in automated production and human-machine interactions and provide new scientific findings in the area of safe and multimodal AI.
Scientific contacts:
Prof. Stefan Feuerriegel
Institute of Artificial Intelligence (AI) in Management
Ludwig Maximilian University of Munich
Email: [email protected]
Prof. Gitta Kutyniok
Chair of Mathematical Foundations
of Artificial Intelligence
Ludwig Maximilian University of Munich
Tel.: +49 (0)89 2180-4401
Email: [email protected]
Prof. Björn Ommer
Chair of AI for Computer Vision
and Digital Humanities/the Arts
Ludwig Maximilian University of Munich
Tel.: +49 (0)89 2180-73431
Email: [email protected]