Prof. Dr. Vasileios Belagiannis

Professorship for Machine Learning in Signal Processing

Address

Cauerstraße 7-9, 91058 Erlangen

Vasileios Belagiannis is Professor at the Faculty of Engineering of the Friedrich-Alexander-Universität Erlangen-Nürnberg. He holds an engineering degree from the Democritus University of Thrace, Engineering School of Xanthi (Greece, 2009), and an M.Sc. in Computational Science and Engineering from TU München (Germany, 2011). He completed his doctoral studies at TU München (2015) and then continued as a post-doctoral research assistant at the University of Oxford (Visual Geometry Group). Before joining Friedrich-Alexander-Universität Erlangen-Nürnberg, he worked in industry at OSRAM and then held positions at Ulm University and Otto von Guericke University Magdeburg.

An updated list of publications can be found on Google Scholar.

I am always looking for highly motivated students to undertake a PhD.

Prof. Belagiannis heads the Machine Learning & Perception (MLP) Group. The MLP Group focuses on machine learning, particularly deep learning, in both basic and applied research. Current research includes generative models, anomaly detection, uncertainty estimation, distribution misalignment detection, few-shot learning, learning from noisy labels, and hardware-aware machine learning such as model compression and neural architecture search. Applications include autonomous driving, computer vision and medical image analysis, as well as signal processing and robotics.

Amir El-Ghoussani, M.Sc.

Room: 02.026

Julian Wiederer, M.Sc.

Michele De Vita, M.Sc.

Room: 02.026

Marc Hölle, M.Sc.

Room: 02.026

Rohan Asthana, M.Sc.

Room: 02.026

Azhar Hussian, M.Sc.

Juan Pablo Villa Serna, M.Sc.

Werner Spiegl, M.Sc.

2017

  • Recurrent Human Pose Estimation
    12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017 (Washington, DC, May 30 - June 3, 2017)
    In: Proceedings - 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017 - 1st International Workshop on Adaptive Shot Learning for Gesture Understanding and Production, ASL4GUP 2017, Biometrics in the Wild, Bwild 2017, Heterogeneous Face Recognition, HFR 2017, Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation, DCER and HPE 2017 and 3rd Facial Expression Recognition and Analysis Challenge, FERA 2017
    DOI: 10.1109/FG.2017.64
  • Preface DLMIA 2017
    In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - 3rd International Workshop, DLMIA 2017 and 7th International Workshop, ML-CDS 2017 Held in Conjunction with MICCAI 2017, Proceedings, Springer Verlag, (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol.10553 LNCS)

2009

  • The vision system of the ACROBOTER project
    2nd International Conference on Intelligent Robotics and Applications, ICIRA 2009 (Singapore, December 16-18, 2009)
    In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    DOI: 10.1007/978-3-642-10817-4_94

  • Subcontracting within the framework of the ÖGP NXT GEN AI METHODS – Generative methods for perception, prediction, and planning (NXTAIM)


    (Third Party Funds Single)
    Project leader:
    Term: 1. January 2026 - 31. December 2026
    Acronym: ÖGP NXT GEN AI METHODS
    Funding source: Industry
    URL: https://nxtaim.de/en/home/

    The aim of this project is to develop self-playing, multi-agent simulators based on GPUDrive. In this context, we will develop trajectory planning strategies with a focus on efficient training and optimisation. These strategies will then be tested in automated driving scenarios. 

  • Bavarian Advanced Resolution Radar


    (Third Party Funds Group – Sub project)
    Overall project: Bavarian Advanced Resolution Radar
    Project leader:
    Term: 1. February 2025 - 31. January 2028
    Acronym: BAVAR-RADAR
    Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi) (since 2018)
  • SUSTAINET-inNOvAte: Sustainable Technologies for Advanced Resilient and Energy-Efficient Networks - Frictionless, secure, and resilient networks for the dynamic digital world


    (Third Party Funds Group – Sub project)
    Overall project: Sustainable Technologies for Advanced Resilient and Energy-Efficient Networks - Frictionless, secure, and resilient communication networks for the dynamic digital world
    Project leader:
    Term: 1. January 2025 - 31. December 2027
    Acronym: SUSTAINET-inNOvAte
    Funding source: BMFTR / collaborative project (Verbundprojekt)
  • Subcontracting within the framework of the ÖGP NXT-AIM Generative Modeling


    (Third Party Funds Single)
    Project leader:
    Term: 1. January 2024 - 31. December 2026
    Acronym: Unterbeauftragung ÖGP NXT-AIM
    Funding source: Industry
    URL: https://nxtaim.de/en/home/

    The project deals with two aspects of generative modeling. First, generative models, especially foundation models, have contributed significantly in the image, text, and audio domains, but they have not yet been well researched for sequential and unstructured data, such as automotive data. Second, the latent-space representation in generative models is not interpretable, even though it is linked to the predicted or generated output. This project aims to explore generative models for trajectory planning by integrating robustness measures.


  • Always-on Deep Neural Networks


    (Third Party Funds Single)
    Project leader:
    Term: 1. March 2023 - 28. February 2026
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • Transfer of deep neural networks from simulation to the real world


    (Third Party Funds Single)
    Project leader:
    Term: 1. December 2022 - 30. November 2025
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    Computer vision contributes to creating visual priors, either as self-contained tasks or as input to another system. In the context of autonomous navigation, the system can be a mobile agent that relies not only on raw sensory inputs but also on computer vision algorithms for understanding the environment. Recent studies on embodied agents show that an agent acts more accurately when visual priors such as semantic segmentation and depth estimation are provided next to the raw input data. Producing these visual priors, however, comes at the cost of data collection and annotation: the latest approaches build on deep neural networks trained with supervision, so a large pool of data and annotations has to be created before training the model.

    To address this limitation, simulation is an alternative source of data and annotations. For deep neural networks, it can serve as a replacement for the real world, with large amounts of synthetic data created according to the task at hand. Although data simulation has clear advantages over real-world datasets, it also has a clear limitation: training a deep neural network on synthetic data does not yield good performance on real-world data.

    In this research project, we will investigate how to close the performance gap when transferring deep neural network models from simulation to real-world applications. Our testbed for measuring performance will be semantic image segmentation and depth estimation from a single image. We will propose algorithms that teach a deep neural network to quickly adapt to new environments, a concept widely known as meta-learning. In this project, meta-learning will be explored for learning a model in simulation and then transferring it to the real world. Meta-learning has not previously been seen as a way to tackle model transfer, but its formulation suits the problem well.
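The meta-learning idea described above can be illustrated with a minimal first-order MAML sketch: an outer loop learns an initialization from which one inner gradient step adapts well to a new task. This is a toy stand-in, not the project's actual method; the 1-D linear-regression task family, learning rates, and function names are all hypothetical.

```python
# Illustrative sketch: first-order MAML (one inner gradient step) on toy
# 1-D linear-regression "tasks" standing in for new environments.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Draw a task y = a*x + b with task-specific slope and intercept."""
    a, b = rng.uniform(-2, 2, size=2)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x + b

def grad(w, x, y):
    """Gradient of the mean squared error of the model w[0]*x + w[1]."""
    err = w[0] * x + w[1] - y
    return np.array([2 * np.mean(err * x), 2 * np.mean(err)])

def fomaml(meta_steps=2000, inner_lr=0.1, outer_lr=0.01):
    w = np.zeros(2)  # meta-initialization shared across tasks
    for _ in range(meta_steps):
        x, y = sample_task()
        w_task = w - inner_lr * grad(w, x, y)   # inner adaptation on the task
        w = w - outer_lr * grad(w_task, x, y)   # first-order outer update
    return w

meta_w = fomaml()
# Adapting to a new, unseen task then takes a single inner gradient step.
x_new, y_new = sample_task()
adapted = meta_w - 0.1 * grad(meta_w, x_new, y_new)
```

The full MAML formulation differentiates through the inner update (second-order gradients); the first-order variant above drops that term for simplicity, which is a common approximation.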

Lectures

  • Machine Learning in Signal Processing
    • Further information is available on StudOn and Campo.
  • Introduction to Deep Learning
    • Further information is available on StudOn and Campo.
  • Advanced Topics in Deep Learning
    • Further information is available on StudOn and Campo.
  • Perception in Robotics
    • Further information is available on StudOn and Campo.

Guide to scientific work

  • Guide to scientific work
    • Further information is available on StudOn and Campo.

Seminars

  • Seminar on Selected Topics in Machine Learning
    • Further information is available on StudOn and Campo.
  • Seminar on Selected Topics in Multimedia Communications and Signal Processing
    • Further information is available on StudOn and Campo.
  • Seminar on Selected Topics in Machine Learning
    • Further information is available on StudOn and Campo.
  • Seminar on Bachelor's and Master's Theses
    • Further information is available on StudOn and Campo.

Lab Courses

  • Machine Learning in Signal Processing
    • Further information is available on StudOn and Campo.