Seminars and Events

Artificial Intelligence Seminar

Geometric and Spectral Biases in Generative Adversarial Networks

Event Details

Generative Adversarial Networks (GANs) have become one of the most successful and popular generative models in recent years, with a wide variety of applications, such as image and audio manipulation and synthesis, style transfer, and semi-supervised learning, to name a few. The main advantage of GANs over their classical counterparts stems from the use of Deep Neural Networks (DNNs) in both the sampling process (generator) and the energy evaluation process (discriminator), which can exploit the ongoing revolution in the availability of data and computation power to effectively discover complex patterns. Yet with this exceptional power comes an exceptional limitation: the black-box behavior associated with DNNs, which not only places the profound promise of GANs under a shadow of mistrust, but also greatly slows efforts to improve the efficiency of these models. As such, studying the limitations and biases of GANs is critical for advancing their performance and usability in practice. The primary focus of this talk is to present two such limitations: a geometric limitation in generating disconnected manifolds, and a spectral limitation in learning distributions carried by high-frequency signals. I will discuss the causes and consequences of these limitations both empirically and theoretically, and propose solutions for overcoming them.
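For readers unfamiliar with the generator/discriminator setup mentioned in the abstract, below is a minimal illustrative sketch of adversarial training in PyTorch; it is not taken from the talk, and the network sizes, learning rates, and Gaussian stand-in data are assumptions chosen only to keep the example self-contained.

    # Minimal GAN sketch (illustrative only): the generator maps noise to
    # samples, the discriminator scores samples as real or generated.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 2  # hypothetical sizes for a toy 2-D dataset

    generator = nn.Sequential(
        nn.Linear(latent_dim, 64), nn.ReLU(),
        nn.Linear(64, data_dim),
    )
    discriminator = nn.Sequential(
        nn.Linear(data_dim, 64), nn.ReLU(),
        nn.Linear(64, 1),  # real/fake logit
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(real_batch):
        """One adversarial update: discriminator first, then generator."""
        batch = real_batch.size(0)
        noise = torch.randn(batch, latent_dim)
        fake = generator(noise)

        # Discriminator: label real samples 1 and generated samples 0.
        d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) \
               + bce(discriminator(fake.detach()), torch.zeros(batch, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator: try to make the discriminator label fakes as real.
        g_loss = bce(discriminator(fake), torch.ones(batch, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
        return d_loss.item(), g_loss.item()

    # Toy usage: Gaussian samples stand in for a real dataset.
    for _ in range(100):
        train_step(torch.randn(128, data_dim))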

Special seminar hosted by Wael Abd-Almageed; point of contact: Maura Covaci

Speaker Bio

Mahyar Khayatkhoei is a research scientist at LivePerson Inc., working on end-to-end multi-domain dialogue systems. His research goal is to develop data-efficient generative models that can accurately represent real-world phenomena, with applications ranging from computer animation and simulation to representation learning and uncertainty modeling. He is particularly interested in understanding the limitations of Deep Neural Networks (DNNs) in the context of generative models, and in formalizing how different choices in the architecture and optimization of DNNs induce different inductive biases. He received his M.Sc. and Ph.D. in computer science from Rutgers University, working with Dr. Ahmed Elgammal, and his B.Sc. in electrical engineering and control systems from the University of Tehran, working with Dr. Majid Nili Ahmadabadi.