Speaker: Jundong Li, University of Virginia
REMINDER: THIS TALK WILL NOT BE RECORDED/LIVE PRESENTATION ONLY
Meeting hosts will only admit guests they know to the Zoom meeting. You are therefore highly encouraged to sign in to Zoom with your USC account.
If you are an outside visitor, please inform us beforehand at aiseminar DASH poc AT isi DOT edu so that we are aware of your attendance and can let you in.
Join Zoom Meeting
Meeting ID: 704 285 0182
Graph machine learning (GML) models, such as graph neural networks, have proven highly effective at modeling graph-structured data and have achieved remarkable predictive performance in various high-stakes applications, including credit risk scoring, crime prediction, and medical diagnosis. However, concerns have been raised about the trustworthiness of GML models in decision-making scenarios where fairness, transparency, and accountability are lacking.
To address these concerns, I will present our recent work on empowering GML for trustworthy decision making, focusing on three key aspects: fairness, explanation, and causality. First, I will discuss how to improve the fairness of GML from a data-debiasing perspective. In particular, I will show how to measure data biases across different modalities of graph data and how to mitigate those biases in a model-agnostic manner that can benefit different GML models. Second, I will show that explanation, as an effective debugging tool, not only helps us understand how decisions are made but can also serve to diagnose how biases and discrimination are introduced into GML. Toward this goal, I will present a post-hoc structural explanation framework for understanding the unfairness issues of GML. Third, I will argue for the emerging need to introduce causality into trustworthy decision making on graphs, as traditional GML can rely heavily on spurious correlations when making decisions. To bridge this gap, I will present a GML-based causal inference framework that aims to unleash the power of graph information for causal effect estimation. Finally, I will share my thoughts on future plans, including other fundamental research problems in GML and how GML can generate a broader societal impact.
Host: Zhuoyu Shi, POC: Maura Covaci
Jundong Li is an Assistant Professor at the University of Virginia, with appointments in the Department of Electrical and Computer Engineering, the Department of Computer Science, and the School of Data Science. He received his Ph.D. degree in Computer Science from Arizona State University in 2019. His research interests lie broadly in data mining and machine learning, with a particular focus on graph mining, causal inference, and trustworthy AI, and their applications in cybersecurity, healthcare, biology, and social science. He has published over 140 papers in high-impact venues such as KDD, WWW, NeurIPS, IJCAI, AAAI, SIGIR, WSDM, ACL, EMNLP, NAACL, CIKM, ICDM, SDM, ECML-PKDD, CSUR, TPAMI, TKDE, TKDD, and TIST, accumulating over 9,000 citations. He has won several prestigious awards, including the KDD Best Research Paper Award (2022), the NSF CAREER Award (2022), the PAKDD Early Career Research Award (2023), JP Morgan Chase Faculty Research Awards (2021 & 2022), a Cisco Faculty Research Award (2021), and AAAI New Faculty Highlights (2021).