USC at NeurIPS 2022

by Caitlin Dawson

Published on November 18th, 2022 | Last updated on November 28th, 2022

The Conference on Neural Information Processing Systems (NeurIPS) is one of the most competitive international venues for machine learning research and the largest gathering of researchers in the field. At this year’s event (Nov. 28 – Dec. 9), USC researchers present some of the leading work in machine learning, from making AI more interpretable to designing safer robotic systems and faster, more detailed MRI scans.

Opening up the black box

Sparse Interaction Additive Networks via Feature Interaction Detection and Sparse Selection 

James Enouen (University of Southern California); Yan Liu (University of Southern California) 

What is your paper about? The paper focuses on making AI systems more interpretable and trustworthy. Currently, we feed these algorithms a lot of data and hope that they learn the right thing. Unfortunately, they can pick up the wrong things (biases, spurious correlations, etc.) and we might have no way of knowing. This work is one step toward making these AI systems’ insights more interpretable and digestible, allowing a human to check in on the algorithm.

Who could benefit from this research? People who want to understand “the algorithm” or the “black box”; doctors who want to understand the AI systems making predictions about their patients; credit auditors; and anyone who is required to explain their decisions to stakeholders.

Putting robots to the test 

Deep Surrogate Assisted Generation of Environments 

Varun Bhatt (University of Southern California); Bryon Tjanaka (University of Southern California); Matthew Christopher Fontaine (University of Southern California); Stefanos Nikolaidis (University of Southern California) 

What is your paper about? In July 2022, a chess-playing robot grabbed and broke the finger of a seven-year-old boy after mistaking the finger for a chess piece. The robot apparently assumed its human opponent would always wait between moves and had not been tested in situations where the human moved quickly. The accident could have been avoided if such a scenario had been found during testing. Accidents like these are why we wanted to create a method for efficiently finding corner cases involving robots and other intelligent agents. In this paper, we use machine learning to predict how a robot will behave in a given scenario, greatly speeding up the search for failure scenarios. This allows us to relatively quickly find very complex scenarios that break complex agents.

Who could benefit from this research? Manufacturers of robots, self-driving cars, and other intelligent agents, who can better test these systems before deploying them in the real world. Better testing leads to safer robots, which also benefits end users.
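To illustrate the general idea, the sketch below pairs a cheap learned surrogate with an expensive simulator to search for failure-prone environments. Everything in it (the environment encoding, the run_simulation stand-in, the MLP surrogate) is a hypothetical placeholder, not the authors’ implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # surrogate model (assumption)

rng = np.random.default_rng(0)

def run_simulation(env_params):
    """Hypothetical expensive simulator: returns a failure score for the agent
    in an environment described by env_params (higher = closer to failure)."""
    return float(np.sin(env_params).sum() + 0.1 * rng.standard_normal())

# 1) Seed the surrogate with a few real simulations.
X = rng.uniform(-3, 3, size=(50, 4))          # candidate environment encodings
y = np.array([run_simulation(x) for x in X])  # ground-truth failure scores
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

# 2) Use the cheap surrogate to screen many candidate environments,
#    then verify only the most promising ones in the real simulator.
for _ in range(5):
    candidates = rng.uniform(-3, 3, size=(2000, 4))
    predicted = surrogate.predict(candidates)
    worst = candidates[np.argsort(predicted)[-5:]]        # predicted failure cases
    new_scores = np.array([run_simulation(x) for x in worst])
    X, y = np.vstack([X, worst]), np.concatenate([y, new_scores])
    surrogate.fit(X, y)                                   # refine the surrogate

print("Most failure-prone environment found:", X[np.argmax(y)], "score:", y.max())
```

The saving comes from the surrogate screening thousands of candidate environments per round, while the expensive simulator is only run on the handful predicted to be worst.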

Understanding the impact of fake news and rumors

Counterfactual Neural Temporal Point Process for Estimating Causal Influence of Misinformation on Social Media 

Yizhou Zhang (University of Southern California); Defu Cao (University of Southern California); Yan Liu (University of Southern California) 

What is your paper about? The research helps us better understand how fake news and rumors change people’s opinions. We apply our model to a real-world dataset of social media posts and engagements about COVID-19 vaccines. The experimental results indicate that our model identifies a causal effect of misinformation that harms people’s sentiment toward vaccines.

Who could benefit from this research? Journalists can use our models to understand how fake news and rumors change people’s opinions and then find better ways to clarify the misleading content. 
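The counterfactual flavor of the approach can be illustrated with a toy point-process simulation in which each misinformation exposure temporarily raises the rate of negative posts; comparing the factual run with a counterfactual run that removes the exposures gives an estimate of their influence. This is a hand-rolled toy with made-up parameters (base_rate, boost, decay), not the paper’s neural temporal point process.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_negative_posts(misinfo_times, horizon=100.0, dt=0.1,
                            base_rate=0.2, boost=0.5, decay=0.3):
    """Discrete-time approximation of a point process: the rate of negative
    posts is a baseline plus an exponentially decaying boost after each
    misinformation exposure. Returns the total number of negative posts."""
    t = np.arange(0.0, horizon, dt)
    rate = np.full_like(t, base_rate)
    for m in misinfo_times:                      # each exposure raises the rate
        rate += boost * np.exp(-decay * np.clip(t - m, 0, None)) * (t >= m)
    return rng.poisson(rate * dt).sum()

misinfo_times = [10.0, 35.0, 60.0]               # observed exposure times (toy data)

# Factual world (with misinformation) vs. counterfactual world (exposures removed);
# the difference estimates the causal influence of the misinformation.
factual = np.mean([simulate_negative_posts(misinfo_times) for _ in range(500)])
counterfactual = np.mean([simulate_negative_posts([]) for _ in range(500)])
print(f"estimated causal effect: {factual - counterfactual:.1f} extra negative posts")
```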

Accelerating MRI scans

HUMUS-Net: Hybrid Unrolled Multi-scale Network Architecture for Accelerated MRI Reconstruction 

Zalan Fabian (University of Southern California); Berk Tinaz (University of Southern California); Mahdi Soltanolkotabi (University of Southern California) 

What is this paper about? In this work, the team proposes a deep learning algorithm that can reconstruct very high-quality MRI images from accelerated scans. MRI is one of the most popular and powerful medical imaging modalities. However, scans can take significantly longer than other diagnostic methods such as CT scans. Existing methods to accelerate MRI scans take fewer measurements of the body, which leads to degraded image quality. Modern data-driven AI techniques have been successfully deployed to reconstruct MR images from accelerated measurements, but their performance has plateaued in recent years. The team’s novel method combines the efficiency of traditional convolutional neural networks with the power of transformer-based architectures recently proposed for vision applications, establishing a new state-of-the-art in accelerated MRI reconstruction.

Who could benefit from this research? This method can aid radiologists and other medical doctors in two ways. First, it enables the reconstruction of very fine details on medical images that are potentially missed by other techniques, greatly improving the diagnostic value of such images. Second, as this method can recover high-quality images from accelerated measurements, the duration of MR scans can be greatly reduced. This may lead to more efficient utilization of the scanners and can bring down their high cost. Overall, the goal is to make MRI more reliable and efficient for everyone.
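For readers unfamiliar with the setting, the sketch below shows what “accelerated” means in practice: the scanner samples only a fraction of k-space (the Fourier domain an MRI machine measures), and a naive zero-filled inverse FFT of that undersampled data produces an artifact-ridden image. Learned reconstruction methods such as HUMUS-Net start from data like this and recover the missing detail; the toy image and sampling mask here are illustrative, and the code does not implement the team’s architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "anatomy": a synthetic 128x128 image standing in for a ground-truth MRI slice.
n = 128
yy, xx = np.mgrid[:n, :n]
image = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 800.0)          # soft blob
image += 0.5 * ((np.abs(xx - 64) < 20) & (np.abs(yy - 64) < 5))     # bright bar

# MRI scanners measure k-space, i.e. the 2-D Fourier transform of the image.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Accelerated scan: keep roughly a quarter of the k-space columns
# (fully sample the centre, randomly subsample the rest).
mask = np.zeros(n, dtype=bool)
mask[n // 2 - 8 : n // 2 + 8] = True
mask |= rng.random(n) < 0.18
undersampled = kspace * mask[None, :]

# Zero-filled reconstruction: inverse FFT of the undersampled measurements.
# Learned networks take inputs like this and remove the aliasing artifacts.
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
error = np.linalg.norm(zero_filled - image) / np.linalg.norm(image)
print(f"sampled fraction: {mask.mean():.2f}, zero-filled relative error: {error:.3f}")
```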

Training autonomous cars in changing environments

Near-Optimal Goal-Oriented Reinforcement Learning in Non-Stationary Environments 

Liyu Chen (University of Southern California); Haipeng Luo (University of Southern California) 

What is your paper about? This paper is about how an agent can learn to behave optimally in a changing environment. Recently, Waymo launched an autonomous ride service to the Phoenix airport. Training an autonomous car can be framed as a goal-oriented reinforcement learning problem, which falls into the framework studied in this paper. Moreover, the paper studies non-stationary environments, which are well suited to capturing changing traffic conditions. 
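A minimal sketch of the goal-oriented setting in toy form: a tabular agent learns the cheapest way to reach a goal state in a one-dimensional corridor whose “slip” probability drifts over time, loosely standing in for changing traffic. The environment and the plain Q-learning update below are illustrative assumptions, not the algorithm analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N, GOAL = 8, 7                        # 1-D corridor of states; state 7 is the goal
MOVES = np.array([-1, +1])            # action 0 = move left, action 1 = move right

def step(state, action, t):
    """Non-stationary dynamics: the slip probability drifts over time,
    loosely mimicking changing traffic. Every step costs 1 until the goal."""
    slip = 0.1 + 0.3 * (np.sin(t / 500.0) + 1.0) / 2.0
    if rng.random() < slip:
        action = 1 - action                       # the action occasionally backfires
    next_state = int(np.clip(state + MOVES[action], 0, N - 1))
    return next_state, 1.0

# Tabular Q-learning on costs (goal-oriented: minimise total cost to reach the goal).
Q = np.zeros((N, 2))
t = 0
for episode in range(2000):
    state = 0
    while state != GOAL:
        action = int(np.argmin(Q[state])) if rng.random() > 0.1 else rng.integers(2)
        next_state, cost = step(state, action, t)
        target = cost + (0.0 if next_state == GOAL else Q[next_state].min())
        Q[state, action] += 0.1 * (target - Q[state, action])
        state, t = next_state, t + 1

print("Learned cost-to-go from each state:", np.round(Q.min(axis=1), 1))
```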

Learning about the world through language and vision

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks 

Tejas Srinivasan (University of Southern California); Ting-Yun Chang (University of Southern California); Leticia Leonor Pinto Alva (University of Southern California); Georgios Chochlakis (University of Southern California, ISI); Mohammad Rostami (University of Southern California); Jesse Thomason (University of Southern California)

What is your paper about? We establish a benchmark to study how models that consider both language and vision can learn tasks in sequence, such as answering open-ended questions about pictures versus answering yes/no questions about pairs of pictures. The benchmark also enables studying what happens when the language or the vision modality disappears, for example on language-only tasks like classifying whether movie reviews are positive or negative, or vision-only tasks like identifying the salient object in an image. 
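The kind of measurement such a benchmark supports can be sketched in a few lines: train a single model on tasks in sequence with no replay, and re-test earlier tasks after each new one to quantify forgetting. The synthetic binary tasks and linear classifier below are stand-ins for the benchmark’s vision-and-language tasks and multimodal models.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(4)

def make_task(shift):
    """Toy stand-in for a benchmark task: binary classification with a
    task-specific input distribution and decision threshold."""
    X = rng.normal(loc=shift, scale=1.0, size=(500, 20))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

tasks = [make_task(s) for s in (0.0, 2.0, 4.0)]      # a sequence of three tasks
model = SGDClassifier(random_state=0)

# Learn the tasks one after another (no replay), then re-test earlier tasks
# to measure how much the model forgets.
for i, (X, y) in enumerate(tasks):
    model.partial_fit(X, y, classes=[0, 1])
    accs = [model.score(Xp, yp) for Xp, yp in tasks[: i + 1]]
    print(f"after task {i}: accuracy on tasks seen so far = {np.round(accs, 2)}")
```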

All papers

Navigating Memory Construction by Global Pseudo-Task Simulation for Continual Learning 

Yejia Liu (University of California, Riverside); Wang Zhu (University of Southern California); Shaolei Ren (University of California, Riverside)

Near-Optimal No-Regret Learning Dynamics for General Convex Games 

Gabriele Farina (School of Computer Science, Carnegie Mellon University); Ioannis Anagnostides (Carnegie Mellon University); Haipeng Luo (University of Southern California); Chung-Wei Lee (University of Southern California); Christian Kroer (Columbia University); Tuomas Sandholm (Carnegie Mellon University) 

Near-Optimal Regret for Adversarial MDP with Delayed Bandit Feedback 

Tiancheng Jin (University of Southern California); Tal Lancewicki (Tel Aviv University); Haipeng Luo (University of Southern California); Yishay Mansour (School of Computer Science, Tel Aviv University); Aviv Rosenberg (Amazon) 

Uncoupled Learning Dynamics with O(log T) Swap Regret in Multiplayer Games 

Ioannis Anagnostides (Carnegie Mellon University); Gabriele Farina (School of Computer Science, Carnegie Mellon University); Christian Kroer (Columbia University); Chung-Wei Lee (University of Southern California); Haipeng Luo (University of Southern California); Tuomas Sandholm (Carnegie Mellon University) 

HUMUS-Net: Hybrid Unrolled Multi-scale Network Architecture for Accelerated MRI Reconstruction 

Zalan Fabian (University of Southern California); Berk Tinaz (University of Southern California); Mahdi Soltanolkotabi (University of Southern California) 

Outlier-Robust Sparse Estimation via Non-Convex Optimization 

Yu Cheng (Brown University); Ilias Diakonikolas (University of Wisconsin, Madison); Rong Ge (Duke University); Shivam Gupta (University of Texas, Austin); Daniel Kane (University of California-San Diego); Mahdi Soltanolkotabi (University of Southern California) 

Self-Aware Personalized Federated Learning 

Huili Chen (University of California, San Diego); Jie Ding (University of Minnesota, Minneapolis); Eric William Tramel (Amazon); Shuang Wu (Amazon); Anit Kumar Sahu (Amazon Alexa AI); Salman Avestimehr (University of Southern California); Tao Zhang 

NS3: Neuro-symbolic Semantic Code Search 

Shushan Arakelyan (University of Southern California); Anna Hakhverdyan (National Polytechnic University of Armenia); Miltiadis Allamanis (Google); Luis Antonio Garcia (USC ISI); Christophe Hauser (USC/ISI); Xiang Ren (University of Southern California) 

Training Uncertainty-Aware Classifiers with Conformalized Deep Learning 

Bat-Sheva Einbinder (Technion – Israel Institute of Technology); Yaniv Romano (Technion – Israel Institute of Technology); Matteo Sesia (University of Southern California); Yanfei Zhou (University of Southern California) 

Conformal Frequency Estimation with Sketched Data 

Matteo Sesia (University of Southern California); Stefano Favaro (University of Torino) 

Why do We Need Large Batchsizes in Contrastive Learning? A Gradient-Bias Perspective 

Changyou Chen (State University of New York, Buffalo); Jianyi Zhang (Duke University); Yi Xu (Amazon); Liqun Chen (Duke University); Jiali Duan (University of Southern California); Yiran Chen (Duke University); Son Dinh Tran (University of Maryland, College Park); Belinda Zeng (Amazon); Trishul Chilimbi (Department of Computer Science, University of Wisconsin – Madison) 

Off-Policy Evaluation with Policy-Dependent Optimization Response 

Wenshuo Guo (University of California Berkeley); Michael Jordan (University of California, Berkeley); Angela Zhou (University of Southern California) 

ALMA: Hierarchical Learning for Composite Multi-Agent Tasks 

Shariq Iqbal (DeepMind); Robby Costales (University of Southern California); Fei Sha (University of Southern California) 

Where2comm: Communication-Efficient Collaborative Perception via Spatial Confidence Maps 

Yue Hu (Shanghai Jiao Tong University); Shaoheng Fang (Shanghai Jiao Tong University); Zixing Lei (Shanghai Jiaotong University); Yiqi Zhong (University of Southern California); Siheng Chen (Shanghai Jiao Tong University) 

Empirical Gateaux Derivatives for Causal Inference 

Michael Jordan (University of California, Berkeley); Yixin Wang (University of California Berkeley); Angela Zhou (University of Southern California) 

Monocular Dynamic View Synthesis: A Reality Check 

Hang Gao (University of California Berkeley); Ruilong Li (University of Southern California); Shubham Tulsiani (Carnegie Mellon University); Bryan Russell (Adobe Research); Angjoo Kanazawa (University of California, Berkeley) 

Unsupervised Cross-Task Generalization via Retrieval Augmentation

Bill Yuchen Lin (University of Southern California); Kangmin Tan (University of Southern California); Chris Scott Miller (Dartmouth College); Beiwen Tian (Tsinghua University, Tsinghua University); Xiang Ren (University of Southern California)

Follow-the-Perturbed-Leader for Adversarial Markov Decision Processes with Bandit Feedback

Yan Dai (Institute for Interdisciplinary Information Sciences, Tsinghua University); Haipeng Luo (University of Southern California); Liyu Chen (University of Southern California)
