The NSF ACCESS Regional AI Workshop – SoCal Edition invites researchers, educators, and students from across Southern California who are using, or curious about using, AI and advanced computing in their work. Whether you’re part of the ACCESS program, exploring NAIRR resources, or simply interested in practical AI tools and workflows, this free one-day, in-person event is for you.
Expression of Interest Email: October 20, 2025
Applications to Attend Close: November 7, 2025
Acceptance Notifications Sent: December 5, 2025
This ACCESS-Support led workshop will include presentations on the use of AI for research and education and provide an overview of NAIRR-Pilot, connecting practitioners in the Southern California region using the NAIRR-Pilot ecosystem. It will explore how to make the most of NAIRR allocations, highlight practical tools and workflows, and share strategies for advancing research across disciplines with AI. Participants will gain insights into best practices, hear about success stories from the community, and connect with peers to exchange ideas and foster collaboration.
The NAIRR-Pilot program is NSF’s flagship program for providing researchers with access to commercial and academic cyberinfrastructure (CI) resources, whether they are conducting research in AI or applying AI to their science or education.
This workshop offers a unique opportunity to strengthen your AI skills, broaden your network, and become part of the growing regional AI community. The workshop will provide an opportunity to present lightning talks or posters.
This is an application to attend, and space is limited to 100 participants. If applications exceed the available space, attendees will be selected based on their application responses.
Applications are now closed
Lodging Information
SOLD OUT - USC Hotel: Link to book
Distance from USC Ginsburg Hall: 0.6 miles
Address: 3540 S Figueroa Street, Los Angeles, CA 90007 Google Map Directions
Hotel Figueroa: Link to book
Distance from USC Ginsburg Hall: 3 miles
Address: 939 S Figueroa St, Los Angeles, CA 90015 Google Map Directions
Courtyard by Marriott LA Live: Link to book
Distance from USC Ginsburg Hall: 3.3 miles
Address: 901 W Olympic Blvd, Los Angeles, CA 90015 Google Map Directions
How it Started
In April 2025, NAIRR held “AI Unlocked: Empowering Higher Education through Research and Discovery” in Denver, Colorado, with about 350 attendees. Based on the success of that workshop, it was decided to hold smaller, regionally focused NAIRR workshops limited to about 100 attendees. The first was hosted by RMACC (see agenda here) in Colorado in August 2025, and a second was held in Kentucky in early October 2025. USC/ISI is organizing the Southern California regional workshop in January 2026.
Agenda
(Will be updated as more speakers are confirmed)
Time
Topic
8:00 - 9:00 am
Check in and breakfast
9:00 - 9:10 am
Welcome - Ewa Deelman, USC
9:10 - 10:40 am
AI on Campus
9:10 - 9:40 am
TBD - Stephen Aguilar, USC
9:40 - 10:10 am
Artificial Intelligence’s Transformative Research Methods and Techniques in the Digital Humanities, Danielle Mihram, University of Southern California
Abstract
The term “artificial intelligence” was coined in 1956 by John McCarthy, a Dartmouth College professor, at the Dartmouth Summer Research Project on Artificial Intelligence (June 18 - August 17, 1956). The earliest computational methods in the Digital Humanities (DH) focused primarily on text analysis using tools for concordances, lexical statistics, and stylometry. These methods and techniques were pioneered by Roberto Busa’s project Index Thomisticus (a concordance to 179 texts centering on Thomas Aquinas), begun in the 1940s. Subsequent projects from the 1960s and 1970s introduced additional key methods and techniques, such as early forms of text encoding and markup for creating scholarly editions and the analysis of language evolution through word usage and grammatical patterns. The extensive integration of Artificial Intelligence (AI) into DH began in the late 1990s and early 2000s as computational power increased. That integration grew significantly, becoming a central part of DH by 2020, driven by advances in AI techniques such as Natural Language Processing (NLP), machine learning, and image recognition, which allow for the analysis of large datasets that would be impractical to study manually.
These advancements mark a pivotal moment for research, signaling a transformation in how we study human culture and history and reshaping the traditional ways in which we conduct research, analyze information, and share insights. AI enables researchers to analyze large amounts of data and uncover patterns and insights at speeds previously unattainable, allowing for more dynamic ways to discover and present historical and cultural content to a potentially broader audience. In this presentation we shall look at key techniques and methods currently used in AI-focused research in the Digital Humanities and examine illustrative case studies.
Presenter
Danielle Mihram, University of Southern California
Danielle Mihram is a University Librarian (rank equivalent to Full Professor) at the University of Southern California [USC] Libraries where she has been a faculty member since 1989. Prior to USC, she was a member of the faculty of several academic institutions, including the University of Sydney (Australia), Swarthmore College, Haverford College, the University of Pennsylvania, and New York University. She holds a B.A. Honors from the University of Sydney; a Ph.D. from the University of Pennsylvania; and a Master of Library Science (MLS) from Rutgers University. Since her arrival at USC Libraries, she has held several high-level administrative positions. In 1996 she was appointed as the first full-time Director of USC’s Center for Excellence in Teaching [CET] (Provost Office; from 1996 to 2007) in view of her many years of teaching and mentoring experience, as well as her knowledge of information science. She remains a member of CET as one of its Distinguished Faculty Fellows. Danielle's research interests are multidisciplinary, and they have led to over a hundred publications and presentations. Her current research interests focus on the contributions of the digital humanities to the advancement of human knowledge and the transformative effects of artificial intelligence in research and scholarship. She was awarded several USC grants, as well as two USC Libraries’ Research Funds, the latter resulting in her leading two Digital Humanities Projects: USC Digital Voltaire (2017) and USC Illuminated Medieval Manuscripts (work in progress). She is the recipient of several awards: The Outstanding Scholarly Achievement Award (2003) and the Innovation Award on Teaching and Research (2005), both from the International Institute for Advanced Studies in Systems Research and Cybernetics (Baden-Baden, Germany); the USC Mellon Award for Excellence in Mentoring (2005); and the USC Academic Senate’s Distinguished Faculty Service Award (2008).
10:10 - 10:40 am
AI for All - Nabeel Alzahrani, California State University, San Bernardino
Abstract
AI for All is an introduction to Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), Generative AI (genAI), and Large Language Models (LLMs). The session highlights real-world applications and ethical considerations, empowering both STEM and non-STEM audiences to engage thoughtfully with AI technologies. Participants will gain a foundational understanding of key AI concepts, explore how AI is transforming fields such as education and healthcare, and discuss critical issues of fairness, transparency, and bias.
Presenter
Dr. Nabeel Alzahrani, California State University, San Bernardino (CSUSB)
Dr. Alzahrani is an adjunct professor of Computer Science and Engineering at California State University, San Bernardino (CSUSB), specializing in artificial intelligence (AI), high-performance computing (HPC), and cybersecurity. He earned his Ph.D. in Computer Science from the University of California, Riverside. Dr. Alzahrani also serves as a consultant in the Identity, Security, and Enterprise Technology Department at CSUSB. He is the co-founder of the Artificial Intelligence, Quantum Computing, Fusion Energy, and Semiconductors (AQFS) Research and Training Lab at CSUSB. In addition, he is a published author of books and research papers and has delivered numerous presentations in his field.
10:40 - 11:00 am
Break
11:00 - 12:30 pm
AI Resources
11:00 - 11:30 am
Introduction to NAIRR and ACCESS - Empowering Research and Education with Advanced Computing Resources, Shelley Knuth, University of Colorado Boulder
Abstract
This talk will go over the resources available to the research community as part of the National Artificial Intelligence Research Resource (NAIRR) Pilot and the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) projects.
Presenter
Shelley Knuth, University of Colorado Boulder
Shelley is the Assistant Vice Chancellor for Research Computing at the University of Colorado Boulder. She oversees advanced computing and data services that support researchers nationwide, including supercomputing, large-scale data storage, secure enclaves, and high-speed networking. She also serves as Executive Director of the Center for Research Data and Digital Scholarship (CRDDS) and chairs the Rocky Mountain Advanced Computing Consortium (RMACC), fostering collaboration across the region.
Shelley is the lead principal investigator for the NSF-funded ACCESS Support project and contributes to several other NSF initiatives. Additionally, she helps guide national strategy as co-lead of the User Experience Working Group for the National Artificial Intelligence Research Resource (NAIRR) pilot.
She earned her PhD in Atmospheric and Oceanic Sciences from CU Boulder in 2014.
11:30 - 12:00 pm
Getting Access to NAIRR Pilot Resources, Maytal Dahan, University of Texas at Austin
Abstract
This talk guides participants through the process of accessing resources from the National AI Research Resource (NAIRR Pilot), emphasizing preparation, selection, and proposal submission. Key topics include:
Preparation for Submitting a Proposal:
Defining the project scope and running test simulations using a sandbox to identify resource needs.
Evaluating computational requirements (e.g., CPU/GPU, memory) and necessary applications based on preliminary tests.
Matching Resources:
Exploring computational resources and determining the best match for specific project needs.
Submitting an Allocation Request:
Step-by-step demo with guided, hands-on practice.
Support and Guidance:
Leveraging office hours, ticket systems (NAIRR Pilot or Resource Provider), and consultations for personalized assistance.
Presenter
Maytal Dahan, University of Texas at Austin
Maytal Dahan is the Director of Advanced Computing Interfaces (ACI) at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. She leads efforts to design and deploy cyberinfrastructure platforms and science gateways that broaden access to computing and data for a wide range of research communities. With over two decades of experience in software engineering and research computing, Maytal has been a key contributor to projects such as Tapis, SGX3, and XSEDE.
12:00 - 12:30 pm
AI Infrastructure for All - Frank Würthwein, San Diego Supercomputer Center
Abstract
The National Research Platform (NRP) provides a national-scale AI infrastructure for education and research that enables researchers and their institutions to own their own AI infrastructure without having to operate it. It provides AI infrastructure management across more than 100 data centers today. The user interfaces NRP offers include Jupyter Notebooks, LLM chat and API access, the native Kubernetes API, the National Data Platform UI/UX, and HTCondor via NRP integration with the OSPool managed by PATh. Dozens of colleges nationwide use the platform to bring digital assets into the classroom, including data, compute, and AI tools.
We will give an overview of what the NRP provides to students, educators, researchers, and institutions, including a walk-through of the training materials and other support mechanisms for getting started.
Presenter
Frank Würthwein, San Diego Supercomputer Center
Frank Würthwein is the Director of the San Diego Supercomputer Center. He holds faculty appointments at UC San Diego in the Physics Department and the Halıcıoğlu Data Science Institute. After receiving his Ph.D. from Cornell in 1995, he held appointments at Caltech, MIT and Fermi National Laboratory, before joining the UC San Diego faculty in 2003. His research focuses on globally distributed compute and data systems (e.g., OSG, NRP, OSDF), experimental particle physics and distributed high-throughput computing. As an experimentalist, he is interested in instrumentation and data analysis. In the last couple decades, this meant developing, deploying and operating worldwide distributed computing systems that support processing and analysis of large data volumes. In 2010, "large" data volumes were measured in Petabytes. By 2030, they are expected to grow to Exabytes.
12:30 - 1:30 pm
Lunch
1:30 - 2:30 pm
Lightning Talks: What Can You Do With AI? (10 min talks, 5 min Q/A)
LT1: 1:30 - 1:45 pm
Can AI Agents Replace Human Subjects? Testing Behavioral Theories with LLM Trading Experiments - John Garcia
Abstract
Behavioral economics has an endogeneity problem. Investors pay attention to stocks because of news. Consumers notice products because of quality signals. Voters engage with candidates because of policy positions. When the behaviors we want to study are inseparable from the information driving them, how can we isolate pure behavioral effects?
I propose a solution: LLM agents as experimental subjects in behavioral research.
In a proof-of-concept study, I created 72 AI "investors" with distinct behavioral profiles (day traders, contrarians, and passive indexers) and subjected them to randomized attention shocks across 200 trading periods while holding all information perfectly constant. This control, impossible with human subjects in natural settings, reveals that attention alone drives 6-7 percentage point increases in both buying and selling activity, costing these agents 68 basis points through excess trading.
Three methodological insights for researchers:
When to use AI agents: For preliminary mechanism identification when endogeneity prevents human experimentation.
How to validate rigorously: Systematic benchmarking against human behavior across multiple dimensions. My validation shows 73-89% alignment on financial tasks but also reveals critical failures (e.g., reversed disposition effect) that highlight clear boundaries.
The transparency imperative: Position findings as hypothesis generation requiring human validation, not as substitutes for real behavioral data.
Why this matters beyond finance: This methodology applies anywhere behavioral confounds prevent causal inference: advertising effects, media influence, negotiation dynamics, and educational interventions. I'll share practical best practices, common pitfalls I've encountered, and emerging infrastructure, making this approach accessible to researchers without massive computational resources.
The question isn't whether AI agents can replace humans; they can't. It's whether they can help us ask better questions before we invest in costly human studies. I believe they can.
LT2: 1:45 - 2:00 pm
Deepfakes, Data, and Democracy: Artificial Intelligence in Political Life - Michael Ault
Abstract
I explore how artificial intelligence is transforming politics and political communication, from government regulation and global power struggles to the future of democracy itself. I examine historical efforts to regulate disruptive technologies alongside contemporary debates over AI policy in the U.S., Europe, and China. Through case studies of recent elections, I also investigate how AI tools (from data analytics to deepfakes) are reshaping campaigns, media narratives, and voter trust. Ethical challenges such as surveillance, bias, and accountability are also analyzed alongside questions of global competition and control. Overall, I seek to critically assess who governs in the age of algorithms and what that means for justice, democracy, and political power.
LT3: 2:00 - 2:15 pm
AI Agents - Prakashan Korambath
Abstract
AI agents represent a significant evolution beyond traditional chatbots and simple question-answering systems. They aren't merely delivering static information; they are dynamic entities that can reason, act, and collaborate to solve complex problems and automate tasks, often with minimal or no human intervention. This shift is powered by their ability to leverage external tools in the form of APIs that provide access to dynamic, real-world information. By bridging knowledge gaps and generating new insights, AI agents are poised to fundamentally change how we interact with technology and automate workflows across every industry. In addition, tools developed by different model providers can interoperate using the Model Context Protocol (MCP), a client-server architecture that enhances the use of agentic AI concepts with real-time, real-world data.
LT4: 2:15 - 2:30 pm
AI-Driven Framework for Personalized Insulin Dosing and Safer Diabetes Management - Yash Kishorbhai Pansheriya
Abstract
Type 1 Diabetes (T1D) management demands constant monitoring and real-time decision making, yet traditional insulin dosing formulas remain static and poorly suited to unpredictable conditions such as skipped meals or variable physical activity. This research introduces a machine-learning-based framework for personalized insulin recommendations that adapts dynamically to patient-specific data.
The framework integrates continuous glucose monitoring, insulin on board, carbohydrate intake, and physical activity to predict short-term glucose levels and identify glycemic risk zones. Based on these predictions, the system generates adaptive insulin or nutrition recommendations derived from clinical principles but tailored to each user’s condition.
To improve transparency and accessibility, a Retrieval-Augmented Generation (RAG)–based large language model is integrated as an interactive chatbot interface, translating model insights into patient-specific explanations.
The talk will discuss the design, modeling workflow, and the integration of explainable AI and conversational systems to enhance reliability, interpretability, and real-world usability in diabetes management.
2:30 - 4:00 pm
AI Ready Data
2:30 - 3:00 pm
Sage Grande: An AI Testbed for Edge Computing - Pete Beckman, Northwestern University
Abstract
Sage Grande is a national-scale cyberinfrastructure designed for AI-driven edge computing. With more than 100 nodes deployed across diverse environments, from Chicago’s urban streets to national parks, Sage enables students and scientists to develop and deploy AI applications in the field. By integrating sensors such as cameras, microphones, and LiDAR with AI-driven computation, researchers can build novel systems for tasks like wildfire detection, agricultural monitoring, bioacoustic analysis, and understanding urban dynamics.
Presenter
Pete Beckman, Northwestern University
Pete Beckman is a recognized global expert in high-end computing systems. During the past 25 years, he has designed and built software and architectures for large-scale parallel and distributed computing systems. Pete helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory, Los Alamos National Laboratory, and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. He also served as vice president of Turbolinux’s worldwide engineering efforts.
Pete joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high performance computing centers for the National Science Foundation.
He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and co-Director of the Northwestern Argonne Institute of Science and Engineering. He is also a co-founder of the International Exascale Software Project (IESP).
3:00 - 3:30 pm
From Promise to Practice: Reimagining Resilient Agriculture Through AI - Nirav Merchant, University of Arizona
Abstract
This talk will cover how a NAIRR allocation was used to build the foundation model for InsectNet, how the model is being made accessible to the agricultural community, and lessons learned along the way.
Presenter
Nirav Merchant, University of Arizona
Nirav Merchant serves as the Director of the Data Science Institute. For the past three decades at the University of Arizona, his research has been focused on the development of scalable computational platforms (cyberinfrastructure) in support of open science projects. His work is primarily directed towards reducing the socio-technical barriers in adoption of emerging computational and information sciences advances by domain sciences.
His interests encompass large-scale data management platforms, data delivery technologies, cloud native methodologies, secure data analysis enclaves, and the use of managed sensors and wearables for health interventions. He is passionate about developing learning material for informed adoption and utilization of Machine Learning (ML) and Artificial Intelligence (AI) based analysis methods into course work and for workforce development.
He serves as the principal investigator for NSF CyVerse, a national-scale cyberinfrastructure, and co-principal investigator for NSF Jetstream, the first user-friendly, scalable cloud environment for NSF XSEDE/ACCESS. He leads the cyberinfrastructure team for the NSF- and USDA-funded National Artificial Intelligence Institute for Resilient Agriculture (AIIRA).
3:30 - 4:00 pm
Generative Artificial Intelligence and Deep Learning Using NAIRR Reveal Brain Aging Trajectories Before Alzheimer's Disease - Andrei Irimia, PhD, University of Southern California
Abstract
Understanding why individuals age differently at the level of the brain is a central question in neuroscience and medicine. Our research leverages large-scale neuroimaging datasets and artificial intelligence to quantify the pace and pattern of brain aging from structural MRI. Using deep learning models trained on thousands of MRI scans, we estimate “brain age” as a personalized biomarker of neural health. These measures reveal that accelerated brain aging predicts a higher risk of progression from normal cognition to impairment, whereas slower brain aging confers resilience. Regional brain aging patterns, identified through interpretable AI, further distinguish those at risk for Alzheimer’s disease and related dementias. We also integrate multimodal data to examine how chronic conditions—such as cardiovascular disease, metabolic disorders, and traumatic brain injury—as well as women’s health factors like menopause and reproductive history, shape the trajectory of brain aging. This work illustrates how AI-driven neuroimaging analytics can inform individualized risk stratification, preventive strategies, and ultimately precision aging research.
Presenter
Andrei Irimia, PhD, University of Southern California
Andrei Irimia, PhD is an associate professor in the Leonard Davis School of Gerontology at the University of Southern California, with courtesy appointments in biomedical engineering and quantitative biology. His research focuses on brain aging, traumatic brain injury, and Alzheimer’s disease, using advanced neuroimaging and quantitative methods to understand individual variability in aging trajectories and dementia risk. Dr. Irimia leads several NIH-funded studies examining how chronic disease variables and women's health factors influence brain aging and neurodegeneration. His work bridges population neuroscience and clinical neurology, with the goal of improving early detection and stratification of patients at risk for cognitive decline.