Learning with Deep Probabilistic Generative Models

Talk
Adji Bousso Dieng
Columbia University
Time: 09.30.2019, 11:00 to 12:00
Location: IRB 4105

Deep probabilistic generative models are flexible models of complex, high-dimensional data. They have found numerous applications, e.g., in computer vision, natural language processing, chemistry, biology, and physics, and they are also important components of model-based reinforcement learning algorithms. This widespread use of deep generative models underscores the need to understand how they work and where they fall short. In this talk I will discuss two main learning frameworks for deep generative models, the variational autoencoder (VAE) and the generative adversarial network (GAN). I will highlight their shortcomings and introduce two of my works that address them. More specifically, I will discuss my works on reweighted expectation maximization and entropy-regularized adversarial learning as alternatives to the VAE and GAN approaches, respectively.
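As background for the talk: the VAE is trained by maximizing the evidence lower bound (ELBO), the expected reconstruction log-likelihood minus a KL term that pulls the approximate posterior toward the prior. Below is a minimal NumPy sketch of the ELBO for a Gaussian VAE with a standard-normal prior; the function names and the single-sample approximation of the expectation are illustrative, not from the talk itself.

```python
import numpy as np

def kl_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), per sample.
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def gaussian_log_likelihood(x, x_mean, x_logvar):
    # log N(x; x_mean, diag(exp(x_logvar))), per sample.
    return -0.5 * np.sum(
        np.log(2.0 * np.pi) + x_logvar + (x - x_mean) ** 2 / np.exp(x_logvar),
        axis=-1,
    )

def elbo(x, z_mu, z_logvar, x_mean, x_logvar):
    # ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)); here the expectation
    # is approximated with a single decoder output (x_mean, x_logvar)
    # produced from one sampled z.
    return gaussian_log_likelihood(x, x_mean, x_logvar) - kl_standard_normal(z_mu, z_logvar)
```

When the encoder outputs the prior exactly (mu = 0, logvar = 0), the KL term vanishes and the ELBO reduces to the reconstruction log-likelihood alone.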