I'm a Computer Science PhD student at USC's VIMAL, interested in how we understand observations from multiple modalities (e.g., images, audio signals, and written text), and how we extract and build representations of the semantics that are invariant across those multimodal observations.

Before starting my PhD, I studied Mathematics and EECS (Electrical Engineering and Computer Science) at MIT for my Bachelor's and Master's degrees. Along the way, I interned at Keecker, a French robotics startup, and at academic research labs in MIT's CSAIL, Media Lab, and McGovern Institute, as well as at INRIA. After my Master's, I worked at Apple as a co-op for 9 months.

My research interests lie at the intersection of representation learning and information theory, inspired by the way our perceptual system integrates multimodal sensory inputs by identifying invariant semantics. I am interested in understanding how semantic information flows while we process observations from multiple modalities, using tools from deep learning and thermodynamic approaches to information flow.