I will describe a conjecture on learning theory. Several different architectures that perform well have emerged in addition to CNNs, such as transformers, perceivers, and MLP-Mixers. Is there a common motif behind all of them and their good performance? A natural conjecture is that these architectures are well suited for the approximation, learning, and optimization of input-output mappings that can be represented by "sparse compositional" functions. In particular, I will discuss "sparse" target functions that are compositional with a function graph whose nodes each have dimensionality at most k, with k << d, where d is the dimensionality of the function domain.
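The notion of a sparse compositional target function can be illustrated with a minimal sketch (a hypothetical example, not taken from the talk): d = 8 input variables composed through a binary tree of constituent functions, each of which depends on only k = 2 arguments.

```python
def h(a, b):
    # A generic 2-dimensional constituent function (k = 2);
    # the particular form is an arbitrary illustration.
    return (a * b + a + b) / 3.0

def sparse_compositional(x):
    """Evaluate a depth-3 binary-tree composition over d = 8 inputs.

    Every node of the function graph has dimensionality 2, so the
    target depends on d = 8 variables overall, yet no constituent
    function ever sees more than k = 2 of them at once.
    """
    assert len(x) == 8
    # Layer 1: four 2-dimensional nodes.
    l1 = [h(x[0], x[1]), h(x[2], x[3]), h(x[4], x[5]), h(x[6], x[7])]
    # Layer 2: two 2-dimensional nodes.
    l2 = [h(l1[0], l1[1]), h(l1[2], l1[3])]
    # Root: one 2-dimensional node.
    return h(l2[0], l2[1])

print(sparse_compositional([0.5] * 8))
```

The conjecture concerns functions with this structure: although the domain is d-dimensional, each constituent function is low-dimensional, which is what makes approximation and learning tractable.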
Tomaso A. Poggio is a physicist whose research has always been between brains and computers. It is now focused on the mathematics of deep learning and on the computational neuroscience of the visual cortex. He is the Eugene McDermott Professor in the Dept. of Brain & Cognitive Sciences at MIT and the director of the NSF Center for Brains, Minds and Machines at MIT. Among other awards, he received the 2014 Swartz Prize for Theoretical and Computational Neuroscience and the 2017 IEEE Azriel Rosenfeld Lifetime Achievement Award. A former Corporate Fellow of Thinking Machines Corporation and a former director of PHZ Capital Partners, Inc. and of Mobileye, he was involved in starting, or investing in, several other high-tech companies including Arris Pharmaceutical, nFX, Imagen, Digital Persona, DeepMind, and Orcam.
Host: Mohammad Rostami, POC: Peter Zamar
YOU ONLY NEED TO REGISTER ONCE TO ATTEND THE ENTIRE SERIES – We will send you email announcements with details about the upcoming speakers.
Register in advance for this webinar: https://usc.zoom.us/webinar/register/WN__0VhakI6Q6i3JsasdmNWcA.
After registering, you will receive an email confirmation containing information about joining the Zoom webinar.
The recording of this Interview Seminar talk will be posted on our USC/ISI YouTube page within 1-2 business days: https://www.youtube.com/user/USCISI.