24.10.2025 – LoTHR
Language: English
Variational Autoencoders (VAEs) were first introduced as early concept learners in the vision domain. Since then, they have become a staple tool in generative modeling, representation learning, and unsupervised learning more broadly. Their use as analogues of human cognition is a first step towards understanding more complex cognitive models, leading up to models of human brain function and behavior. As part of a series of talks on cognitive science and deep learning at the RealRaum in Graz, this presentation will focus on the role of VAEs in cognitive science research.
Topics:
- Supervised vs. unsupervised learning
- Deep Learning basics: classifiers and backpropagation
- Autoencoders: architecture, training, embedding, and generative modeling
- Variational Autoencoders: statistical latent space and the reparametrization trick
- Training VAEs: loss functions, optimization, and the KL divergence (a short sketch of both ideas follows this list)
- Concept learning: VAEs in cognitive science
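For anyone who wants a concrete picture of the VAE topics before the talk, here is a minimal sketch in PyTorch. The layer sizes and the 784-dimensional (MNIST-style) input are illustrative assumptions, not material from the talk; the sketch only shows the two mechanisms named above, the reparametrization trick and the ELBO loss with its KL divergence term.

```python
# Minimal VAE sketch (assumed hyperparameters, for illustration only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparametrize(self, mu, logvar):
        # Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so the sampling step stays differentiable w.r.t. mu and logvar.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparametrize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I)),
    # which has a closed form for diagonal Gaussians.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

In a full training loop one would feed batches of flattened images through the model and minimize vae_loss with a standard optimizer; afterwards the decoder alone acts as a generative model by sampling z from N(0, I).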
Alberto Barradas, a local member of the RealRaum, is working on his PhD in Cognitive Science at TU Graz, on the topics of attention, mindfulness, and consciousness. He is a data analyst and computer scientist from León, Mexico, and has been living in Graz for the last 5 years. He is always happy to share his questions about the mind and listen to everyone's experience of being human.