The NSF ACCESS Regional AI Workshop – SoCal Edition invites researchers, educators, and students from across Southern California who are using, or are curious about using, AI and advanced computing in their work. Whether you’re part of the ACCESS program, exploring NAIRR resources, or simply interested in practical AI tools and workflows, this free, one-day, in-person event is for you.

The NSF ACCESS Regional AI Workshop was held on January 28th, 2026 in Los Angeles, CA. Please click the button below to view photos from the event.

View Photo Gallery

This workshop, led by ACCESS Support, will include presentations on the use of AI for research and education and provide an overview of the NAIRR Pilot, connecting practitioners in the Southern California region who use the NAIRR Pilot ecosystem. It will explore how to make the most of NAIRR allocations, highlight practical tools and workflows, and share strategies for advancing research across disciplines with AI. Participants will gain insights into best practices, hear success stories from the community, and connect with peers to exchange ideas and foster collaboration.

The NAIRR Pilot is NSF’s flagship program for providing researchers with access to commercial and academic cyberinfrastructure resources, whether they are conducting research in AI or applying AI to their science or education.

This workshop offers a unique opportunity to strengthen your AI skills, broaden your network, and become part of the growing regional AI community. The workshop will provide an opportunity to present lightning talks or posters.

This is an application to attend. Space is limited to 100 participants. If applications exceed capacity, attendees will be selected based on their application responses.

Applications are now closed

How it Started

In April 2025, NAIRR held “AI Unlocked: Empowering Higher Education through Research and Discovery” in Denver, Colorado, with about 350 attendees. Based on the success of that workshop, it was decided to hold smaller, regionally focused NAIRR workshops limited to about 100 attendees. The first was hosted by RMACC (see agenda here) in Colorado in August 2025. A second workshop was hosted in Kentucky in early October 2025. USC/ISI is organizing the Southern California regional workshop in January 2026.

Agenda

Time Topic
8:00 - 9:00 am Check in and breakfast
9:00 - 9:10 am
Welcome - Ewa Deelman, University of Southern California
9:10 - 10:40 am AI on Campus
9:10 - 9:40 am
How Generative AI Is Reshaping Learning, Agency, and Equity in Higher Education Worldwide, Stephen J. Aguilar, University of Southern California
View Presentation View Recording

Abstract

This talk draws on international, large-scale research to examine how students and educators in higher education are using generative AI as a tool for learning, help-seeking, and decision-making. I distinguish between instrumental uses of AI that support agency and understanding and executive uses that risk displacing human judgment, and I show how institutional context and policy shape these patterns across countries. The talk concludes with implications for designing AI-enabled higher education that strengthens, rather than substitutes for, human intelligence.

Presenter

Stephen J. Aguilar

Stephen J. Aguilar, University of Southern California

Dr. Stephen J. Aguilar is an Associate Professor of Education at the USC Rossier School of Education and co-leads USC’s Center for Generative AI and Society. His research investigates how educational technologies influence teaching, learning, and motivation.

His work has been funded by the National Science Foundation, the American Educational Research Association (AERA), the National Institutes of Health, and the U.S. Army Research Office. Dr. Aguilar has been a guest on NPR’s AirTalk, and has been interviewed by the Los Angeles Times, The New York Times, USA Today, The Atlantic, Bloomberg, and The Washington Post on the topic of generative AI’s effects on education.

9:40 - 10:10 am
Artificial Intelligence’s Transformative Research Methods and Techniques in the Digital Humanities, Danielle Mihram, University of Southern California
View Presentation View Recording

Abstract

The term “artificial intelligence” was coined in 1956 by John McCarthy, a Dartmouth College professor, at the Dartmouth Summer Research Project on Artificial Intelligence (June 18-August 17, 1956). The earliest computational methods in the Digital Humanities (DH) focused primarily on text analysis, using tools for concordances, lexical statistics, and stylometry. These methods and techniques were pioneered by Roberto Busa’s project, Index Thomisticus (a concordance to 179 texts centering on Thomas Aquinas), begun in the 1940s. Projects of the 1960s and 1970s introduced additional key methods and techniques, such as early forms of text encoding and markup for creating scholarly editions and the analysis of language evolution through word usage and grammatical patterns. The extensive integration of Artificial Intelligence (AI) into DH began in the late 1990s and early 2000s as computational power increased. By 2020, AI had become a central part of the field, thanks to advances in techniques such as Natural Language Processing (NLP), machine learning, and image recognition, which allow for the analysis of large datasets that would be impractical to study manually.

These advancements mark a pivotal shift in how we study human culture and history, reshaping the traditional ways in which we conduct research, analyze information, and share insights. AI enables researchers to analyze large amounts of data and uncover patterns and insights at speeds previously unattainable, allowing for more dynamic ways to discover and present historical and cultural content to a potentially broader audience. In this presentation we shall look at key techniques and methods currently used in AI-focused research in the Digital Humanities and examine illustrative case studies.

Presenter

Danielle Mihram

Danielle Mihram, University of Southern California

Danielle Mihram is a University Librarian (rank equivalent to Full Professor) at the University of Southern California (USC) Libraries, where she has been a faculty member since 1989. Prior to USC, she was a member of the faculty at several academic institutions, including the University of Sydney (Australia), Swarthmore College, Haverford College, the University of Pennsylvania, and New York University. She holds a B.A. Honors from the University of Sydney, a Ph.D. from the University of Pennsylvania, and a Master of Library Science (MLS) from Rutgers University.

Since her arrival at USC Libraries, she has held several high-level administrative positions. In 1996, in view of her many years of teaching and mentoring experience as well as her knowledge of information science, she was appointed the first full-time Director of USC’s Center for Excellence in Teaching (CET) in the Provost’s Office, a position she held until 2007. She remains a member of CET as one of its Distinguished Faculty Fellows.

Danielle's research interests are multidisciplinary and have led to over a hundred publications and presentations. Her current research focuses on the contributions of the digital humanities to the advancement of human knowledge and the transformative effects of artificial intelligence in research and scholarship. She has been awarded several USC grants, as well as two USC Libraries’ Research Funds, the latter resulting in her leading two Digital Humanities projects: USC Digital Voltaire (2017) and USC Illuminated Medieval Manuscripts (work in progress). She is the recipient of several awards: the Outstanding Scholarly Achievement Award (2003) and the Innovation Award on Teaching and Research (2005), both from the International Institute for Advanced Studies in Systems Research and Cybernetics (Baden-Baden, Germany); the USC Mellon Award for Excellence in Mentoring (2005); and the USC Academic Senate’s Distinguished Faculty Service Award (2008).

10:10 - 10:40 am
AI for All - Nabeel Alzahrani, California State University, San Bernardino
View Presentation View Recording

Abstract

AI for All is an introduction to Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), Generative AI (genAI), and Large Language Models (LLMs). The session highlights real-world applications and ethical considerations, empowering both STEM and non-STEM audiences to engage thoughtfully with AI technologies. Participants will gain a foundational understanding of key AI concepts, explore how AI is transforming fields such as education and healthcare, and discuss critical issues of fairness, transparency, and bias.

Presenter

Nabeel Alzahrani

Dr. Nabeel Alzahrani, California State University, San Bernardino (CSUSB)

Dr. Alzahrani is an adjunct professor of Computer Science and Engineering at California State University, San Bernardino (CSUSB), specializing in artificial intelligence (AI), high-performance computing (HPC), and cybersecurity. He earned his Ph.D. in Computer Science from the University of California, Riverside. Dr. Alzahrani also serves as a consultant in the Identity, Security, and Enterprise Technology Department at CSUSB. He is the co-founder of the Artificial Intelligence, Quantum Computing, Fusion Energy, and Semiconductors (AQFS) Research and Training Lab at CSUSB. In addition, he is a published author of books and research papers and has delivered numerous presentations in his field.

10:40 - 11:00 am Break
11:00 - 12:30 pm AI Resources
11:00 - 11:30 am
Introduction to NAIRR and ACCESS - Empowering Research and Education with Advanced Computing Resources, Shelley Knuth, University of Colorado Boulder
View Presentation

Abstract

This talk will go over the resources available to the research community as part of the National Artificial Intelligence Research Resource (NAIRR) Pilot and the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) projects.

Presenter

Shelley Knuth

Shelley Knuth, University of Colorado Boulder

Shelley is the Assistant Vice Chancellor for Research Computing at the University of Colorado Boulder. She oversees advanced computing and data services that support researchers nationwide, including supercomputing, large-scale data storage, secure enclaves, and high-speed networking. She also serves as Executive Director of the Center for Research Data and Digital Scholarship (CRDDS) and chairs the Rocky Mountain Advanced Computing Consortium (RMACC), fostering collaboration across the region.

Shelley is the lead principal investigator for the NSF-funded ACCESS Support project and contributes to several other NSF initiatives. Additionally, she helps guide national strategy as co-lead of the User Experience Working Group for the National Artificial Intelligence Research Resource (NAIRR) pilot.

She earned her PhD in Atmospheric and Oceanic Sciences from CU Boulder in 2014.

11:30 - 12:00 pm
Getting Access to NAIRR Pilot Resources, Maytal Dahan, University of Texas at Austin
View Presentation View Recording

Abstract

This talk guides participants through the process of accessing resources from the National AI Research Resource (NAIRR Pilot), emphasizing preparation, selection, and proposal submission. Key topics include:

  1. Preparation for Submitting a Proposal:
    • Defining the project scope and running test simulations using a sandbox to identify resource needs.
    • Evaluating computational requirements (e.g., CPU/GPU, memory) and necessary applications based on preliminary tests.
  2. Matching Resources:
    Exploring computational resources and determining the best match for specific project needs.
  3. Submitting an Allocation Request:
    Step-by-step demo with guided, hands-on practice.
  4. Support and Guidance:
    Leveraging office hours, ticket systems (NAIRR Pilot or Resource Provider), and consultations for personalized assistance.
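A concrete part of step 1 is translating a timed sandbox run into the compute request itself. The sketch below shows one back-of-the-envelope way to do that; the function name, formula, and safety factor are illustrative assumptions, not part of the NAIRR Pilot submission process.

```python
# Hypothetical back-of-the-envelope GPU-hour estimate for an allocation
# request, scaling up a timed sandbox run. Numbers and the safety factor
# are illustrative only.

def estimate_gpu_hours(sandbox_minutes_per_epoch: float,
                       epochs: int,
                       num_experiments: int,
                       safety_factor: float = 1.5) -> float:
    """Scale one timed sandbox epoch up to a full allocation request.

    sandbox_minutes_per_epoch: wall-clock minutes for one epoch on one GPU,
        measured during the sandbox/test phase.
    safety_factor: headroom for failed runs and hyperparameter sweeps.
    """
    hours_per_run = sandbox_minutes_per_epoch * epochs / 60.0
    return hours_per_run * num_experiments * safety_factor

# e.g., a 12-minute epoch, 50 epochs, 20 planned experiments:
print(estimate_gpu_hours(12, 50, 20))  # 300.0
```

Running the estimate with and without the safety factor is a quick way to sanity-check whether a request is dominated by the science or by the headroom.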

Presenter

Maytal Dahan

Maytal Dahan, University of Texas at Austin

Maytal Dahan is the Director of Advanced Computing Interfaces (ACI) at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. She leads efforts to design and deploy cyberinfrastructure platforms and science gateways that broaden access to computing and data for a wide range of research communities. With over two decades of experience in software engineering and research computing, Maytal has been a key contributor to projects such as Tapis, SGX3, and XSEDE.

12:00 - 12:30 pm
AI Infrastructure for All - Frank Würthwein, San Diego Supercomputer Center
View Presentation

Abstract

The National Research Platform (NRP) provides national-scale AI infrastructure for education and research that enables researchers and their institutions to own their own AI infrastructure without having to operate it. It provides AI infrastructure management across more than 100 data centers today. The user interfaces NRP offers include Jupyter Notebooks, LLM chat and API access, the native Kubernetes API, the National Data Platform UI/UX, and HTCondor via NRP integration with the OSPool managed by PATh. Dozens of colleges nationwide use the platform to bring digital assets into the classroom, including data, compute, and AI tools.

We will give an overview of what the NRP provides to students, educators, researchers, and institutions, including a “walk-through” of the training materials and other support mechanisms for getting started.

Presenter

Frank Würthwein

Frank Würthwein, San Diego Supercomputer Center

Frank Würthwein is the Director of the San Diego Supercomputer Center. He holds faculty appointments at UC San Diego in the Physics Department and the Halıcıoğlu Data Science Institute. After receiving his Ph.D. from Cornell in 1995, he held appointments at Caltech, MIT, and Fermi National Accelerator Laboratory before joining the UC San Diego faculty in 2003. His research focuses on globally distributed compute and data systems (e.g., OSG, NRP, OSDF), experimental particle physics, and distributed high-throughput computing. As an experimentalist, he is interested in instrumentation and data analysis. In the last couple of decades, this has meant developing, deploying, and operating worldwide distributed computing systems that support the processing and analysis of large data volumes. In 2010, "large" data volumes were measured in petabytes. By 2030, they are expected to grow to exabytes.

12:30 - 1:30 pm Lunch
1:30 - 2:30 pm Lightning Talks: What Can You Do With AI? (10 min talks, 5 min Q/A)
1:30 - 1:45 pm
Too Smart to be Human: Can AI Agents Replace Us in Behavioral Experiments? - John Garcia, California Lutheran University
View Presentation View Recording

Abstract

Can AI replace human subjects? Researchers are increasingly using models such as GPT-4 as surrogates for humans because they are cheaper and faster; however, do they behave like us? To find out, I built 96 AI "retail investors" and unleashed them in a stock market simulation, exposing them to viral "meme stock" buzz while holding financial fundamentals constant. The results were striking: When human retail investors see viral hype, they buy (+30–50%); my AI retail investor agents did the opposite, decreasing buying by 45%. While humans famously hold on to losing investments for too long, my agents sold losers three times faster than they sold winners. They acted exactly like financial textbooks say we should, and exactly unlike real people do. I call this "Hyper-Rationality." AI models are trained on vast amounts of advice: "avoid bubbles," "cut your losses." They prioritize logical training over character instruction; even when explicitly programmed to experience "FOMO," they calculated the transaction costs and rationally refrained from trading. The implication: AI can simulate how we should behave, but it lacks the emotional software to replicate how we actually behave.

1:45 - 2:00 pm
Deepfakes, Data, and Democracy: Artificial Intelligence in Political Life - Michael Ault, California State University, Bakersfield
View Presentation View Recording

Abstract

I explore how artificial intelligence is transforming politics and political communication, from government regulation and global power struggles to the future of democracy itself. I examine historical efforts to regulate disruptive technologies alongside contemporary debates over AI policy in the U.S., Europe, and China. Through case studies of recent elections, I also investigate how AI tools (i.e., from data analytics to deepfakes) are reshaping campaigns, media narratives, and voter trust. Ethical challenges such as surveillance, bias, and accountability are also analyzed alongside questions of global competition and control. Overall, I seek to critically assess who governs in the age of algorithms and what that means for justice, democracy, and political power.

2:00 - 2:15 pm
AI Agents - Prakashan Korambath, University of California, Los Angeles
View Presentation View Recording

Abstract

AI agents represent a significant evolution beyond traditional chatbots and simple question-answering systems. Rather than merely delivering static information, they are dynamic entities that can reason, act, and collaborate to solve complex problems and automate tasks, often with minimal or no human intervention. This shift is powered by their ability to leverage external tools, in the form of APIs, that provide access to dynamic, real-world information. By bridging knowledge gaps and generating new insights, AI agents are poised to fundamentally change how we interact with technology and automate workflows across every industry. In addition, tools developed by different model providers can interoperate through the Model Context Protocol (MCP), a client-server architecture that extends agentic AI to real-time, real-world data.
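The reason-act-observe tool loop at the heart of this abstract can be illustrated with a minimal, framework-free sketch. Everything here is hypothetical: the planner is a stub standing in for an LLM, and `get_weather` stands in for an external API or MCP tool.

```python
# Minimal illustration of the agent tool-use loop: plan which tools to
# call, call them, collect observations. No real LLM or MCP server is
# involved; the planner and tool are stand-in stubs.

def get_weather(city: str) -> str:
    """A stand-in for an external API the agent can call."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def plan(task: str) -> list[tuple[str, dict]]:
    """Stub planner: a real agent would ask an LLM which tools to call."""
    if "weather" in task:
        return [("get_weather", {"city": "Los Angeles"})]
    return []

def run_agent(task: str) -> list[str]:
    # Reason -> act -> observe: dispatch each planned call and
    # gather the tool results as observations.
    observations = []
    for tool_name, args in plan(task):
        observations.append(TOOLS[tool_name](**args))
    return observations

print(run_agent("What is the weather today?"))  # ['Sunny in Los Angeles']
```

In a real agent, the planner's output would feed back into the model so it can decide whether further tool calls are needed; MCP standardizes how such tools are described and invoked across providers.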

2:15 - 2:30 pm
AI-Driven Framework for Personalized Insulin Dosing and Safer Diabetes Management - Yash Kishorbhai Pansheriya, California State University, Northridge (CSUN)
View Presentation View Recording

Abstract

Type 1 Diabetes (T1D) management requires frequent monitoring and decision-making, but traditional insulin dosing formulas are static and often fail to adapt to real-world factors such as delayed meals, residual insulin effects, or changes in physical activity. This project presents a machine-learning–based decision support framework for short-term glucose prediction and insulin dosing explanation. The system integrates continuous glucose monitoring data with estimates of insulin-on-board, carbohydrate intake, and activity to predict near-term glucose levels and classify glycemic risk zones. Using these predictions, clinically grounded rule-based logic provides context-aware dosing guidance while enforcing safety constraints such as hypoglycemia prevention and insulin-stacking avoidance.

To improve interpretability, a Retrieval-Augmented Generation (RAG)–based large language model is incorporated as an interactive interface that explains system decisions using medical guidelines, without modifying dosing outputs. The talk discusses the modeling pipeline, feature engineering, and the role of explainable AI in diabetes decision support.
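The rule-based safety layer the abstract describes can be sketched as a few lines of vetoing logic. The thresholds and formula below are illustrative placeholders only (not the presented system, and not medical guidance): a predicted low blocks the dose, and insulin-on-board is subtracted to avoid stacking.

```python
# Toy sketch of a rule-based dosing safety layer: classify predicted
# glucose into a risk zone, veto the dose when a low is predicted, and
# subtract insulin-on-board to avoid stacking. Thresholds (mg/dL) are
# illustrative placeholders, not medical guidance.

def risk_zone(predicted_glucose: float) -> str:
    if predicted_glucose < 70:
        return "hypoglycemia"
    if predicted_glucose <= 180:
        return "in-range"
    return "hyperglycemia"

def safe_bolus(suggested_units: float, predicted_glucose: float,
               insulin_on_board: float) -> float:
    """No dose if a low is predicted; otherwise reduce the suggestion
    by the insulin still active from earlier boluses."""
    if risk_zone(predicted_glucose) == "hypoglycemia":
        return 0.0
    return max(0.0, suggested_units - insulin_on_board)

print(safe_bolus(4.0, 220, 1.5))  # 2.5  (dose reduced by active insulin)
print(safe_bolus(4.0, 65, 0.0))   # 0.0  (predicted low vetoes the dose)
```

The point of the pattern is that the ML predictor only ever feeds this deterministic layer, so the safety constraints hold no matter what the model outputs.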

2:30 - 4:15 pm AI Ready Data
2:30 - 3:00 pm
Sage Grande: An AI Testbed for Edge Computing - Pete Beckman, Northwestern University
View Presentation View Recording

Abstract

Sage Grande is national-scale cyberinfrastructure designed for AI-driven edge computing. With more than 100 nodes deployed across diverse environments—from Chicago’s urban streets to national parks—Sage enables students and scientists to develop and deploy AI applications in the field. By integrating sensors such as cameras, microphones, and LiDAR with AI-driven computation, researchers can build novel systems for tasks like wildfire detection, agricultural monitoring, bioacoustic analysis, and understanding urban dynamics.

Presenter

Pete Beckman

Pete Beckman, Northwestern University

Pete Beckman is a recognized global expert in high-end computing systems. During the past 25 years, he has designed and built software and architectures for large-scale parallel and distributed computing systems. Pete helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory, Los Alamos National Laboratory, and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. Furthermore, he acted as vice president of Turbolinux’s worldwide engineering efforts.

Pete joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high performance computing centers for the National Science Foundation.

He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and co-Director of the Northwestern Argonne Institute of Science and Engineering. He is also a co-founder of the International Exascale Software Project (IESP).

3:00 - 3:30 pm
From Promise to Practice: Reimagining Resilient Agriculture Through AI - Nirav Merchant, University of Arizona
View Presentation View Recording

Abstract

The agricultural sector is rapidly integrating field automation and sensors. Simultaneously, large-scale citizen science and global observatories are generating vast, harmonized datasets, including images of insects and weeds. Along with advances in AI methods, these extensive data sources present immense opportunities for applying precision agriculture and making data-driven decisions for growers and plant breeders. This approach is essential for cultivating resilient crops in the face of increasingly severe and rapidly changing climatic conditions. However, widespread adoption of the necessary tools and techniques is currently hindered by several barriers, ranging from the lack of open AI models to the usability of AI-powered automation. The National AI Institute for Resilient Agriculture (AIIRA) is actively working to overcome some of these challenges.
This talk will detail a recent effort focused on developing an open multimodal foundation model. This model is specifically designed to identify insects, weeds, and integrate agricultural best practices, thereby assisting growers and breeders in effectively managing and mitigating harm from pests and invasive species. Furthermore, I will discuss strategies for utilizing powerful computational resources, available through NSF ACCESS, NAIRR, and mature software tools from NSF-funded projects, in combination with institutional and commercial infrastructure. This integrated approach is critical for achieving a level of performance and scale that has traditionally been out of reach for most academic research teams.

Presenter

Nirav Merchant

Nirav Merchant, University of Arizona

Nirav Merchant serves as the Director of the Data Science Institute. For the past three decades at the University of Arizona, his research has been focused on the development of scalable computational platforms (cyberinfrastructure) in support of open science projects. His work is primarily directed towards reducing the socio-technical barriers in adoption of emerging computational and information sciences advances by domain sciences.

His interests encompass large-scale data management platforms, data delivery technologies, cloud native methodologies, secure data analysis enclaves, and the use of managed sensors and wearables for health interventions. He is passionate about developing learning material for informed adoption and utilization of Machine Learning (ML) and Artificial Intelligence (AI) based analysis methods into course work and for workforce development.

He serves as the principal investigator for NSF CyVerse, a national-scale cyberinfrastructure, and co-principal investigator for NSF Jetstream, the first user-friendly, scalable cloud environment for NSF XSEDE/ACCESS. He leads the cyberinfrastructure team for the NSF- and USDA-funded National Artificial Intelligence Institute for Resilient Agriculture (AIIRA).

3:30 - 3:45 pm Break
3:45 - 4:15 pm
Generative Artificial Intelligence and Deep Learning Using NAIRR Reveal Brain Aging Trajectories Before Alzheimer's Disease - Andrei Irimia, PhD, University of Southern California
View Presentation View Recording

Abstract

Understanding why individuals age differently at the level of the brain is a central question in neuroscience and medicine. Our research leverages large-scale neuroimaging datasets and artificial intelligence to quantify the pace and pattern of brain aging from structural MRI. Using deep learning models trained on thousands of MRI scans, we estimate “brain age” as a personalized biomarker of neural health. These measures reveal that accelerated brain aging predicts a higher risk of progression from normal cognition to impairment, whereas slower brain aging confers resilience. Regional brain aging patterns, identified through interpretable AI, further distinguish those at risk for Alzheimer’s disease and related dementias. We also integrate multimodal data to examine how chronic conditions—such as cardiovascular disease, metabolic disorders, and traumatic brain injury—as well as women’s health factors like menopause and reproductive history, shape the trajectory of brain aging. This work illustrates how AI-driven neuroimaging analytics can inform individualized risk stratification, preventive strategies, and ultimately precision aging research.
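The “brain age” biomarker described above reduces to the gap between a model’s predicted brain age and chronological age. The sketch below is a toy illustration with hypothetical ages and a placeholder threshold, not the deep learning models from the talk.

```python
# Toy illustration of the "brain age gap" biomarker: the difference
# between model-predicted brain age and chronological age. The ages and
# the zero threshold are hypothetical placeholders.

def brain_age_gap(predicted_age: float, chronological_age: float) -> float:
    return predicted_age - chronological_age

def aging_label(gap: float, threshold: float = 0.0) -> str:
    # Positive gap: the brain looks older than expected (accelerated
    # aging); negative gap: resilience.
    return "accelerated" if gap > threshold else "resilient"

gap = brain_age_gap(74.2, 70.0)
print(round(gap, 1), aging_label(gap))  # 4.2 accelerated
```

In practice the predicted age would come from a deep learning model trained on thousands of MRI scans, and the gap would feed into risk stratification.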

Presenter

Andrei Irimia

Andrei Irimia, PhD, University of Southern California

Andrei Irimia, PhD is an associate professor in the Leonard Davis School of Gerontology at the University of Southern California, with courtesy appointments in biomedical engineering and quantitative biology. His research focuses on brain aging, traumatic brain injury, and Alzheimer’s disease, using advanced neuroimaging and quantitative methods to understand individual variability in aging trajectories and dementia risk. Dr. Irimia leads several NIH-funded studies examining how chronic disease variables and women's health factors influence brain aging and neurodegeneration. His work bridges population neuroscience and clinical neurology, with the goal of improving early detection and stratification of patients at risk for cognitive decline.

4:15 - 5:00 pm
Focus Demo: Pegasus Workflow Management System - Karan Vahi and Mats Rynge, University of Southern California
View Presentation View Recording

Abstract

Pegasus WMS (Workflow Management System) streamlines the execution of complex AI and machine learning workloads by automating the end-to-end pipeline from data ingestion to model evaluation. Through ACCESS Pegasus, researchers can utilize a hosted workflow environment that simplifies the orchestration of jobs across distributed national cyberinfrastructure. This platform allows users to leverage pre-configured Jupyter Notebook examples and the Pegasus Python API to design reproducible AI workflows.

To optimize the use of specialized hardware, Pegasus utilizes glideins (pilot jobs) to provide a unified overlay over GPU resources. This abstraction layer allows the workflow manager to treat diverse, distributed compute nodes as a single, coherent pool of resources. By deploying these pilot jobs, Pegasus can dynamically provision and manage high-performance GPU environments, enabling AI workloads to scale across multiple clusters while maintaining consistent performance and reducing the overhead typically associated with manual resource allocation.
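The workflow-as-DAG idea behind this orchestration can be sketched with Python’s standard-library `graphlib`: jobs declare which jobs’ outputs they consume, and the workflow manager derives a valid execution order. This is a generic illustration with hypothetical job names, not the actual Pegasus Python API.

```python
# Generic illustration of the workflow-as-DAG idea behind a workflow
# manager: jobs declare data dependencies and a valid execution order is
# derived. Job names are hypothetical; this is not the Pegasus API.
from graphlib import TopologicalSorter

# Each job maps to the set of jobs whose outputs it consumes.
dag = {
    "ingest":     set(),
    "preprocess": {"ingest"},
    "train":      {"preprocess"},
    "evaluate":   {"train"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # ['ingest', 'preprocess', 'train', 'evaluate']
```

A real workflow manager adds the parts this sketch omits: staging data between sites, retrying failed jobs, and, as described above, mapping jobs onto glidein-provisioned GPU resources.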

5:00 - 5:05 pm Closing Remarks - Ewa Deelman, University of Southern California
5:05 - 7:00 pm Social Mixer / Poster Session

Posters

A Multi-Task Benchmark for Detection, Segmentation, and Tracking in Aerial Videos - Sedat Ozer, California State Polytechnic University, Pomona

Drone (aerial) vision remains a critical research topic across multiple application domains. Existing public aerial datasets predominantly provide bounding box annotations and largely lack dense, instance-level segmentation and long-term object tracking labels. While object detection in aerial imagery is increasingly well studied, joint segmentation and tracking in aerial videos remain underexplored due to the scarcity of richly annotated datasets. We present a novel UAV-based video benchmark that enables joint object detection, instance segmentation, and multi-object tracking in densely populated aerial scenes.

ACOSUS - An AI-driven Counseling System for Transfer Students - Sherrene Bogle, Cal Poly Humboldt

Underrepresented transfer students face multifaceted academic, personal, and environmental challenges that are often insufficiently addressed by traditional university counseling systems. This project develops ACOSUS, an AI-driven student counseling system designed to complement existing advising by providing personalized readiness assessments, success predictions, and actionable recommendations.

ACOSUS extends current counseling approaches in two key ways: (1) by integrating personal, behavioral, and environmental factors with academic performance data, and (2) by delivering individualized assessments of students’ preparedness for academic upskilling and job market success. The system employs natural language processing, deep neural networks, and other AI techniques, while incorporating bias-aware data curation methods to mitigate racial and gender biases common in AI-driven systems.

The project consists of three thrusts: Thrust 1 conducts cognitive studies using surveys and interviews to identify factors influencing transfer decisions; Thrust 2 analyzes social media influences on transfer and career choices; and Thrust 3 integrates these findings to design, implement, and evaluate ACOSUS.

Advancing NLP for Non-Latin Scripts and Languages - Adrianna Tan, Future Ethics

Are We Leaving Non-Latin Scripts and Languages Behind?

The vast majority of NLP research and Large Language Models (LLMs) focus on high-resource languages, predominantly those using the Latin script (e.g., English, French). This creates a critical gap, leading to performance disparities, systemic biases, and the exclusion of billions of speakers from the benefits of advanced AI. The disproportionate focus on Latin scripts means biases and harms are often not adequately measured or addressed for non-Latin-script users. Future work must be linguistically informed and must strategically address these resource and structural gaps so that AI can serve billions of people more safely. In my poster presentation, I am actively seeking bilingual collaborators who are proficient in English and another language/script (especially non-Latin scripts such as Arabic, Hindi, Korean, or Japanese) to work on practical tools and research in this area.

AI-Driven Framework for Personalized Insulin Dosing and Safer Diabetes Management - Yash Kishorbhai Pansheriya, California State University, Northridge

Type 1 Diabetes (T1D) management requires continuous monitoring and frequent decision-making under uncertainty, yet conventional insulin dosing formulas are static and often fail to account for dynamic factors such as meal-timing variability, residual insulin activity, and physical activity. This work presents a machine-learning–based decision support framework for personalized short-term glucose prediction and insulin dosing explanation. The system integrates continuous glucose monitoring data with estimates of insulin-on-board, carbohydrate intake, and activity levels to forecast near-term glucose trajectories and classify glycemic risk zones. Based on these predictions, the framework applies clinically grounded rule-based logic to generate context-aware insulin dosing guidance, while explicitly enforcing safety constraints related to insulin-on-board and hypoglycemia risk.

AI-Driven Molecular Structure Determination from Ultrafast X-ray Scattering - Roya Moghaddasi Fereidani, University of California San Diego

Understanding molecular structure and dynamics in real time is one of the grand challenges of modern physical chemistry. My research integrates artificial intelligence with quantum molecular simulations to reconstruct molecular structures directly from ultrafast x-ray scattering patterns. While forward simulations of x-ray scattering from known geometries are well established, solving the inverse problem—inferring atomic configurations from measured patterns—remains highly challenging. To address this, I am developing supervised machine-learning models, including convolutional and graph neural networks, trained on first-principles simulations to learn the mapping between scattering patterns and molecular geometries. Once trained, these models can rapidly predict transient molecular structures and distinguish between competing reaction pathways, providing an efficient alternative to traditional ab initio molecular dynamics. This AI-driven framework aims to accelerate the creation of molecular “movies” at femtosecond timescales, opening new possibilities for understanding and controlling photochemical reactions.

Autonomous Self-Healing Memory Systems for Energy-Efficient and Reliable Computing - Marjan Asadinia, California State University, Northridge

Emerging non-volatile memory technologies such as Phase-Change Memory (PCM) offer high density and scalability, but they face critical challenges related to high write energy, long write latency, and limited endurance caused by frequent bit transitions and write-disturbance errors. These limitations motivate the development of self-healing memory systems that can autonomously adapt to workload behavior and mitigate reliability degradation over time. This work presents a machine learning–driven self-healing memory framework that combines adaptive write optimization with proactive error prediction. By analyzing data patterns and write characteristics, the system intelligently reduces unnecessary bit transitions during write operations, leading to lower energy consumption and improved memory lifetime. In parallel, learning-based error prediction models are used to identify error-prone memory regions before failures occur, enabling early intervention through selective rewriting, remapping, or correction. The proposed approach allows the memory system to continuously monitor its state and dynamically adjust its behavior in response to evolving error patterns and workload demands. Experimental evaluation using full-system, cycle-accurate simulation demonstrates notable reductions in write energy and error rates with minimal performance overhead. These results illustrate how integrating machine learning into memory management enables resilient, efficient, and autonomous self-healing behavior for future memory systems.

Data-Driven Mobility Laws for Σ3{112} Incoherent Twin Boundaries in Ni–Cr Alloys - Yixi Shen, UC Santa Barbara

Nanotwinned (NT) materials can lose their NT architecture during heating, often through detwinning driven by migration of Σ3{112} incoherent twin boundaries (ITBs). We use molecular dynamics (MD) to quantify ITB migration in Ni–Cr alloys, resolving how composition, interatomic potential, and local Cr distributions jointly control ITB energy and kinetics. The simulations reveal strong configuration-to-configuration variability—most pronounced in Ni70Cr30—and a temperature-dependent transition from planar to stepwise migration in Ni70Cr30 and Ni80Cr20 that is absent in pure Ni and Ni90Cr10. While Arrhenius trends capture parts of the behavior, the data exhibit regime changes and velocity plateaus that motivate a data-driven mobility law. Building on these MD results, we train a machine-learning surrogate that predicts log10(vITB) from temperature, composition, and local chemical descriptors, achieving ~0.30 RMSE in out-of-fold testing across distinct chemical configurations. This ML-accelerated mobility model enables rapid screening of ITB kinetics and provides efficient inputs for multiscale phase-field simulations aimed at predicting NT thermal stability in chemically complex Ni-based alloys.
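
A mobility-law surrogate of this kind can be sketched in miniature: a regressor trained to predict the log10 of boundary velocity from temperature and composition. The synthetic Arrhenius-style data below is illustrative only — it is not the paper's MD dataset, and it omits the local chemical descriptors the authors use.

```python
# Sketch of a data-driven mobility law: predict log10(v_ITB) from
# temperature and Cr fraction. Training data is synthetic Arrhenius-like
# noise, standing in for the MD results described above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
temp = rng.uniform(600.0, 1400.0, 500)           # temperature, K
x_cr = rng.choice([0.0, 0.1, 0.2, 0.3], 500)     # Cr atomic fraction
k_b = 8.617e-5                                   # Boltzmann constant, eV/K
e_a = 0.4 + 0.8 * x_cr                           # assumed barrier, eV
log_v = 2.0 - e_a / (np.log(10) * k_b * temp) + rng.normal(0.0, 0.1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.column_stack([temp, x_cr]), log_v)
pred = model.predict([[1000.0, 0.3]])            # query the mobility law
```

A fitted surrogate like this can be evaluated millions of times per second, which is what makes it usable as an input to phase-field simulations.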

Deep Learning for Gene-Environment Interaction Analysis of Complex Traits - Jessica George, University of Southern California

Many complex traits and diseases arise from the interactions between genetic factors and environmental exposures, commonly referred to as gene-environment (G×E) interactions. Accurately modeling these effects is important for predicting individual risk and understanding sources of trait variability, but it remains challenging due to nonlinear effects and high-dimensional feature spaces. Traditional regression-based approaches typically require interactions to be specified in advance, limiting their ability to capture complex relationships. We present a deep learning (DL) approach for predictive modeling of G×E effects that explicitly learns nonlinear 2-way and higher-order interactions directly from data, including genotype dominance effects (i.e., non-additive genetic contributions). The proposed model is a feed-forward, fully-connected neural network that takes genetic and environmental features as inputs and predicts a single outcome, such as a quantitative trait, disease status, or survival phenotype. We benchmark this approach against widely used statistical and machine learning methods, including linear and penalized (LASSO, elastic-net) regression, random forest, gradient boosting (LightGBM), and a tabular prior-data fitted network (TabPFN, an alternative DL approach based on a pre-trained foundation model). Using a controlled simulation study with 100 replicated datasets of 10,000 individuals, all models were fit using main effects only, with genetic variables coded additively and no interaction terms provided. Linear regression was additionally fit under a “gold standard” specification that included main effects, G×E interactions, and appropriate dominance modeling, serving as a reference upper bound on achievable performance. Prediction accuracy was evaluated using R2 across increasing levels of interaction complexity.
Under the main-effects-only specification, regression-based models achieved limited predictive performance (R2 ranging from <0.01 to 0.20), particularly as interaction complexity increased. In contrast, DL and boosting models achieved substantially higher R2 values in moderate-to-high complexity settings (DL: R2 ≈ 0.21-0.28; boosting: R2 ≈ 0.20-0.27), reflecting their ability to learn nonlinear and interaction-driven signal. TabPFN achieved the highest predictive performance across all complexity levels (R2 ≈ 0.16-0.30), consistently outperforming both regression-based and alternative machine learning approaches. As expected, the gold standard linear regression model yielded the highest overall R2, providing an upper bound on attainable performance. These results demonstrate the advantages of modern machine learning approaches for prediction in settings dominated by complex relationships. Ongoing work extends these methods to real-world genomic datasets to assess scalability, robustness, and practical impact.
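
As a rough illustration of the modeling idea (not the authors' architecture or simulation design), a fully connected network given only main-effect inputs can still pick up a G×E interaction. Everything below, including the simulated trait, is a hypothetical stand-in.

```python
# Sketch: a feed-forward net that takes genotype and environment features
# together can learn an interaction that a main-effects linear model
# cannot. Data and architecture here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2000
g = rng.integers(0, 3, size=(n, 10)).astype(float)  # genotypes coded 0/1/2
e = rng.normal(size=(n, 2))                         # environmental exposures
# Simulated trait: one main effect plus one G×E interaction term.
y = 0.5 * g[:, 0] + 0.8 * g[:, 1] * e[:, 0] + rng.normal(0.0, 0.5, n)

x = np.hstack([g, e])                               # joint feature matrix
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500,
                   random_state=0).fit(x, y)
r2 = net.score(x, y)    # in-sample R^2; the interaction is learnable
```

Because the hidden layers can form products of inputs implicitly, no interaction term has to be specified in advance — the key contrast with the main-effects regression baselines.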

Fairness and generalizability of a machine learning-based model for PAD risk detection across California health systems - Karoline Kallis, University of California, San Diego

Background
Peripheral artery disease (PAD) is a major cause of cardiovascular events but remains underdiagnosed. Machine learning models leveraging electronic health record (EHR) data offer promise for early PAD detection; however, efforts to develop models that are generalizable and fair across diverse populations are limited.

Objectives
We aimed to develop and evaluate an optimized machine learning model for PAD risk detection across five University of California (UC) health systems, assessing performance and fairness across different subgroups.

Methods
We used the UC Health Data Warehouse to identify 31,936 patients with PAD and 31,936 matched controls. Cases were defined by age ≥40, at least two PAD-related codes ≥30 days apart, ≥2 encounters between 2014 and 2024, and complete demographic data. To capture disease heterogeneity, we compared PAD populations across five UC health systems and applied unsupervised k-means clustering to define phenotypic subgroups. A light gradient boosting machine classifier was trained on 43,555 features including demographics, comorbidities, medications, laboratory values, healthcare utilization, and diagnosis, procedure, and medication codes. Model performance was evaluated overall and across demographic subgroups using area under the receiver operating characteristic curve (AUROC) and precision recall (PR) curves, F1 score, true positive rate (TPR), and false positive rate (FPR). Fairness was assessed using demographic parity and equalized odds ratios.

Results
The model achieved consistent performance across institutions (AUROC 0.77-0.79; AUC-PR 0.77-0.80) with well-calibrated classifications. Performance was stable across genders, with modest variation by race and age. Discrimination was lowest in patients >80 years (AUROC 0.72) and highest in those aged <50 years (AUROC 0.82). The PAD cohort was heterogeneous in demographics, comorbidity burden, and healthcare utilization, with the strongest variation in performance observed by healthcare utilization and comorbidity burden.

Conclusions
PAD detection from EHR data is feasible and generalizable across diverse health systems. Unsupervised phenotypic clustering identified systematic sources of heterogeneity, providing a framework for population health efforts.
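
The two group-fairness measures named in the Methods — demographic parity and equalized odds ratios — can be computed directly from binary predictions. The sketch below uses toy arrays and assumes every group contains both outcome classes with nonzero prediction rates.

```python
# Group-fairness ratios from binary predictions: min/max across groups,
# so 1.0 means perfectly equal treatment. Toy data, illustrative only.
import numpy as np

def demographic_parity_ratio(y_pred, group):
    """Min/max ratio of positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

def equalized_odds_ratio(y_true, y_pred, group):
    """Worst of the min/max ratios of per-group TPR and FPR."""
    tprs, fprs = [], []
    for g in np.unique(group):
        yt, yp = y_true[group == g], y_pred[group == g]
        tprs.append(yp[yt == 1].mean())   # true positive rate
        fprs.append(yp[yt == 0].mean())   # false positive rate
    return min(min(tprs) / max(tprs), min(fprs) / max(fprs))

group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 1, 0])
dpr = demographic_parity_ratio(y_pred, group)      # 0.5/0.75 ≈ 0.67
eor = equalized_odds_ratio(y_true, y_pred, group)  # min(0.5, 1.0) = 0.5
```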

Hierarchical Semantic Memory Transformer (H2MT) - Maryam Haghifam, University of California, Los Angeles

Transformer-based large language models (LLMs) are widely used in language processing, yet most restrict the context window when handling long inputs. Furthermore, many existing solutions are inefficient and overlook the structure inherent to documents. As a result, long-context models often treat text as a flat token stream, which obscures hierarchy and wastes computation by processing relevant and irrelevant context alike. We present the Hierarchical Semantic Memory Transformer (H2MT), a semantic hierarchy-aware approach that attaches to a backbone model. H2MT represents a document as a tree and performs level-conditioned routing and aggregation. It first propagates memory embeddings (summary vectors produced by the backbone) upward, so that child-node memory embeddings are injected into their ancestors to preserve relative context. Finally, the model applies cross-level attention to retrieve related information. H2MT improves quality at similar model size while reducing long-range attention compute and memory, and it uses less memory and fewer parameters than flat long-context processing. The approach is most helpful for data with a semantic hierarchy that can be modeled as a tree.

Mechanistic Insights into CO₂ Hydrogenation to Methanol over Inverse ZrO₂/Cu Catalysts - Zihan Yang, University of California, Los Angeles

Inverse ZrO₂/Cu shows extraordinary catalytic performance in converting CO₂ to methanol, yet uncertainties remain in the reaction mechanism. While conventional Cu/ZrO₂ systems often exhibit a rate-determining step at formate hydrogenation, evidence for inverse ZrO₂/Cu catalysts has been conflicting. In this work, we employ density functional theory (DFT) calculations to investigate the CO₂ hydrogenation reaction across an ensemble of inverse ZrO₂/Cu configurations under reaction conditions. Detailed reaction-pathway analysis reveals that all of the studied inverse structures display a rate-determining step after methoxy formation, typically in hydrogenation to methanol or subsequent water formation, rather than at formate hydrogenation. Structural sensitivity is pronounced: only 19% of the catalyst ensemble is catalytically active across the full pathway, with reactivity favored by partially reduced Zr clusters and reactive sites near the metallic Cu surface that enhance hydrogen dissociation. The simulated reaction mechanism aligns qualitatively and quantitatively with experimental trends, supporting the view that the inverse configuration mitigates formate stabilization and shifts the kinetic bottleneck to later steps in the mechanism, after formation of the methoxy intermediate. These findings clarify the mechanistic origins of activity in inverse ZrO₂/Cu catalysts and highlight the importance of structural ensembles in governing CO₂ hydrogenation performance.

RadVision - Radiology Visual Question Answering Model - Harsh Toshniwal, University of Southern California

Radiology plays a vital role in modern healthcare, generating a vast number of medical images that still rely on expert interpretation. Developing automated systems that can understand both medical images and clinical questions can help speed up diagnosis and ensure consistency in clinical decision-making. We developed a model that can analyze X-ray images and answer clinical questions in natural language. The model handles both closed-ended (yes/no) and open-ended (descriptive) questions by combining visual and textual features, enabling reasoning about the content of an image in context. These fused features are used to predict answers across different question types. The model was trained and evaluated on the VQA-RAD dataset, demonstrating its ability to provide consistent, context-aware support for clinical decision-making.
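
Schematically, the fusion step amounts to combining image and question feature vectors before predicting an answer class. The sketch below is an illustrative stand-in, not the RadVision architecture: the feature extractors are replaced by random vectors and the answer labels are synthetic.

```python
# Toy concatenation-fusion VQA sketch: visual and textual feature
# vectors are fused and mapped to answer classes. A real system would
# use a learned image encoder and a learned question encoder instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
img_feat = rng.normal(size=(300, 64))    # stand-in image embeddings
txt_feat = rng.normal(size=(300, 32))    # stand-in question embeddings
fused = np.hstack([img_feat, txt_feat])  # simple concatenation fusion
# Synthetic yes/no answers depending on one image and one text feature,
# so neither modality alone fully determines the answer.
labels = (img_feat[:, 0] + txt_feat[:, 0] > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(fused, labels)
acc = clf.score(fused, labels)           # fusion makes the answer learnable
```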

SPHERE: A Global Testbed for Reproducible AI, Cybersecurity, and Privacy Research - David Balenson, USC Information Sciences Institute

The Security and Privacy Heterogeneous Environment for Reproducible Experimentation (SPHERE) is an NSF Mid-Scale Research Infrastructure project led by USC Information Sciences Institute, Northeastern University, and the University of Utah. SPHERE provides a rich, representative research environment with user-configurable hardware, software, and network resources across six specialized enclaves, including general-purpose, machine learning, IoT, cyber-physical systems (CPS), embedded compute, and programmable networking. SPHERE supports reproducible security and privacy research and is exploring AI-driven experiment design, orchestration, and analysis. The project is also exploring integration with national infrastructure programs such as NSF ACCESS and NAIRR to broaden accessibility and collaboration. By combining secure, scalable infrastructure with community engagement, SPHERE aims to advance trustworthy, transparent, and reproducible research at the intersection of AI, cybersecurity, and privacy.

Surrogate Models for Earthquake Dynamic Rupture and Subduction Zone Temperature - Gabrielle Hobson, University of California, San Diego

Physics-based simulations of earthquake dynamic rupture and subduction zone evolution are computationally challenging, limiting our ability to explore high-dimensional parameter spaces. We build highly efficient surrogate models for subduction zone temperature and earthquake dynamic rupture by leveraging data-driven, non-intrusive reduced-order models (ROMs) based on the interpolated Proper Orthogonal Decomposition (iPOD). The ROM efficiency enables sensitivity analysis and uncertainty quantification (UQ) techniques that require many model evaluations. I will show examples using surrogate models to quantify important quantities such as the temperature-inferred potential extent of megathrust earthquakes in subduction zones and the vertical surface displacement during dynamic earthquake rupture given different fault geometries. Leveraging ROM techniques to perform sensitivity analysis and UQ is an exciting frontier in computational geophysics that will improve our understanding of earthquake physics and seismic hazard analysis.
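
At the core of a POD-based reduced-order model is a truncated SVD of a snapshot matrix. The sketch below uses a synthetic rank-3 snapshot set in place of real simulation fields, and retains modes covering 99.9% of the energy; the dimensions and threshold are illustrative choices, not values from the presented work.

```python
# POD sketch: extract a reduced basis from solution snapshots via SVD,
# then represent each snapshot by a few modal coefficients. Snapshots
# here are synthetic, built from three underlying spatial modes.
import numpy as np

rng = np.random.default_rng(2)
modes = rng.normal(size=(200, 3))                 # three spatial modes
weights = np.array([3.0, 2.0, 1.0])               # mode amplitudes
snapshots = modes @ (weights[:, None] * rng.normal(size=(3, 5)))

u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)       # modes for 99.9% energy
basis = u[:, :r]                                  # reduced POD basis
coeffs = basis.T @ snapshots                      # reduced coordinates
recon = basis @ coeffs                            # near-exact reconstruction
```

In an iPOD-type surrogate, the reduced coefficients would then be interpolated across input parameters (e.g., fault geometry or thermal parameters) so that new cases never require a full simulation.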

The Year of AI: Raising Campus Awareness Through Art, Exhibits, and Community Engagement - Essraa Nawa, Chapman University

This poster highlights the Leatherby Libraries’ leadership in advancing AI literacy through creative, inclusive, and interdisciplinary approaches. As part of Chapman University’s “Year of AI,” the library launched initiatives such as Beyond the Lens and AI: The Next Chapter, blending art, ethics, and education to inspire campus-wide engagement. Through collaboration with IS&T, Town & Gown, and academic departments, the library positioned itself as a hub for ethical dialogue and innovation. The poster shares replicable models for how libraries can foster AI awareness through community partnerships, exhibitions, and experiential learning.

Too Smart to be Human: Can AI Agents Replace Us in Behavioral Experiments? - John Garcia, California Lutheran University

Can AI replace human subjects? Researchers are increasingly using models such as GPT-4 as surrogates for humans because they are cheaper and faster; however, do they behave like us? To find out, I built 96 AI "retail investors" and unleashed them in a stock market simulation, exposing them to viral "meme stock" buzz while holding financial fundamentals constant. The results were striking: When human retail investors see viral hype, they buy (+30–50%); my AI retail investor agents did the opposite, decreasing buying by 45%. While humans famously hold on to losing investments for too long, my agents sold losers three times faster than they sold winners. They acted exactly like financial textbooks say we should, and exactly unlike real people do. I call this "Hyper-Rationality." AI models are trained on vast amounts of advice: "avoid bubbles," "cut your losses." They prioritize logical training over character instruction; even when explicitly programmed to experience "FOMO," they calculated the transaction costs and rationally refrained from trading. The implication: AI can simulate how we should behave, but it lacks the emotional software to replicate how we actually behave.

Acknowledgements

This workshop is funded by the ACCESS program through National Science Foundation Grant 2138286.