Keston Research Award
Effective Interventions of Misinformation in Online Social Networks
DANIEL BENJAMIN AND FRED MORSTATTER
Misinformation has led to many recent harms, including mistrust of institutions, disregard for public health guidelines, decreased vaccination rates, a divided public, and civil unrest. The current fractured media landscape allows individuals to choose confirming over credible information. Misinformation can be debiased by identifying gaps in people's mental representations of the world (mental models) and by prompting people to be vigilant when assessing information (Lewandowsky et al., 2012). We strive to develop interventions that mitigate the spread of misinformation by visualizing the social networks surrounding hot-button issues.
Our interventions show how personal media consumption reaches only a limited slice of the media landscape. Social sampling theory holds that our misperceptions of others are explained by the sample of people we encounter (Galesic, Olsson, & Rieskamp, 2012), and we are more likely to link to similar people online (Kossinets & Watts, 2009). People are unaware of their own biases even when they can see them in others (Pronin, Lin, & Ross, 2002). Our intervention addresses this gap by making these biases explicit. We will pair social network analysis with traditional behavioral experiments. Our online experiments will test how individuals perceive their own networks and how they respond to various network visualizations.
Keston Research Award
SARFire: Rapid Wildfire Detection through Synthetic Aperture Radar
ANDREW RITTENBACH AND JP WALTERS
Over the last five years, the costs of unchecked wildfires have increased dramatically. In California alone, millions of acres have burned and damage costs have grown by an order of magnitude, to billions of dollars per year. Without radically improved detection methods, these costs are expected to keep rising. Ground-based detection solutions using cameras or other sensors have proven only minimally effective, in part because of their limited field of view. An alternative is remote sensing, where measurements are taken by satellite. However, today's satellite-based approaches have several limitations: 1) their imaging modalities are limited to kilometer-scale resolution and are vulnerable to near-total sensor blackout from wildfire smoke, 2) all data must be downlinked to Earth for image processing, which adds hours between data collection and fire detection, and 3) the revisit time of today's satellites is on the order of days, which limits their ability to perform early detection and warning. The SARFire project seeks to address these limitations using a novel deep learning-based onboard Synthetic Aperture Radar (SAR) imaging technique and satellite constellations to provide near-constant overhead fire surveillance. To achieve this goal, we will develop a deep learning-based model that performs both SAR image formation and wildfire detection. Furthermore, to demonstrate that our approach is suitable for onboard SAR processing, we will port it to an embedded platform representative of the compute resources available on state-of-the-art SAR imaging satellites.
We believe that when real-time SAR imagery is used in conjunction with data from other remote sensing satellites, we will be able to rapidly detect, localize, and monitor wildfires at meter-scale resolution. That would improve the imaging resolution used for early wildfire detection by nearly 1000 times over what is currently used, while also substantially reducing detection time, greatly increasing the chance of early detection and thus limiting the damage wildfires cause.
ISI Research Award
FairPRS: Fairly Predicting Genetic Risks for Personalized Medicine
JOSE-LUIS AMBITE, GREG VER STEEG, KEITH BURGHART, KRISTINA LERMAN AND CHRIS GIGNOUX (UNIVERSITY OF COLORADO)
Personalized medicine seeks to improve disease prevention, diagnosis, and treatment by tailoring medical care to the individual. Uncovering the genetic basis of diseases and traits promises a better understanding of biological mechanisms and the design of drugs and interventions. A Polygenic Risk Score (PRS) combines the effects of many genetic variants into a score that indicates the risk of a disease for a given individual. However, genetic effects vary with ancestry, and a PRS developed for one population often performs poorly in another. Our goal is to develop novel methods for predicting genetic risks that generalize across populations and thus can be broadly and fairly applied in personalized medicine.
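As an illustrative sketch (not the project's actual method), a PRS is commonly computed as a weighted sum of a person's risk-allele counts, with weights taken from genome-wide association study effect sizes. The variant IDs, weights, and genotypes below are entirely made up.

```python
def polygenic_risk_score(genotype, effect_sizes):
    """Compute a simple additive PRS.

    genotype: variant id -> allele dosage (0, 1, or 2 copies of the risk allele).
    effect_sizes: variant id -> estimated effect size (e.g., log odds ratio).
    Variants missing from the genotype contribute zero.
    """
    return sum(effect_sizes[v] * genotype.get(v, 0) for v in effect_sizes)

# Hypothetical example values, for illustration only.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_risk_score(genotype, effect_sizes), 2))  # 0.19
```

Because the effect sizes are estimated in a particular study population, applying them to a genotype from a different ancestry group is exactly where the transferability problem described above arises.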
ISI Research Award
Identifying Populations Susceptible to Anti-Science
KEITH BURGHARDT AND GORAN MURIC
Anti-science attitudes, including anti-vaccine attitudes, are present within a large and recently active minority. Vaccine hesitancy is partly responsible for a significant resurgence of measles and is a major reason COVID-19 continues to spread, especially within the United States. The rapid spread of conspiracy theories and polarization online is one driver of these attitudes. In this proposal, we aim to understand who these anti-vaccine users on Twitter are, what language they use, and whom they are likely to influence.
We are building a model that identifies anti-vaccine sentiment and provides an anti-vaccine score for each queried account. This score corresponds to the likelihood that a user will express anti-vaccine attitudes in the future, even if they have not expressed such attitudes before. We use tweets and explore various features, such as who users interact with, to determine user vulnerability. The model will be published in a public code repository for use by researchers and policymakers. Our work will enable a rapid response to the recent uptick in anti-science sentiment by identifying users vulnerable to such messages. This tool, combined with properly targeted messaging and campaigns, has the potential to significantly enhance pro-vaccine efforts.
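In sketch form, a user-level score like the one described could be a probabilistic classifier over interaction and language features. The feature names and weights below are invented for illustration; the project's actual model and features may differ.

```python
import math

# Hypothetical feature weights for a logistic-regression-style scorer.
# These numbers are illustrative, not learned from real data.
WEIGHTS = {
    "frac_retweets_of_antivax_accounts": 2.5,  # interaction-based feature
    "frac_tweets_with_conspiracy_terms": 1.8,  # language-based feature
    "follows_known_antivax_accounts": 1.2,
}
BIAS = -3.0

def anti_vaccine_score(features):
    """Map user features to a score in (0, 1) via the logistic function."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

low_risk = anti_vaccine_score({"frac_retweets_of_antivax_accounts": 0.0})
high_risk = anti_vaccine_score({"frac_retweets_of_antivax_accounts": 0.8,
                                "frac_tweets_with_conspiracy_terms": 0.5,
                                "follows_known_antivax_accounts": 1.0})
print(low_risk < high_risk)  # True
```

A real model would learn such weights from labeled accounts; the point of the sketch is only that the score is a calibrated probability, so it can rank users by vulnerability rather than make a hard yes/no call.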
ISI Research Award
AI2AI: Discovering and Assessing Vulnerability to AI-Generated Twin Identities
MOHAMED HUSSEIN AND WAEL ABD-ALMAGEED
Modern artificial intelligence methods can create pictures that match the quality of natural images, including photo-realistic faces of people who do not exist in real life. But what if some of these AI-generated faces are virtual "identical twins" of real individuals? Such fake twins can, intentionally or unintentionally, cause harm: they can be used in ways their real counterparts never consented to. Meanwhile, although modern AI models have matched or exceeded human performance on multiple tasks, a face recognition model can be fooled into identifying an image of person A as person B if specific, maliciously crafted, imperceptible perturbations are applied to an image of person A. This phenomenon underscores another type of AI-generated twin, the adversarial twin, which is easier to generate than a fake twin and harmful by construction. Much existing research is dedicated to generating more natural-looking fake or adversarial twins. To our knowledge, however, there has been little or no prior focus on discovering and assessing the vulnerability of individuals to the threats they pose. The objective of AI Investigating AI (AI2AI) is to shed light on and ignite research efforts to close this gap. AI2AI's goal is to help communities and law enforcement agencies discover and assess the vulnerability of individuals to fake and adversarial twins.
ISI Research Award
Bio-PICS: Bio-optical Point of care Intelligent COVID-19 Sensor
AJEY JACOB, AKHILESH JAISWAL AND NEHA NANDA (KECK SCHOOL OF MEDICINE)
COVID-19 has changed human lifestyles over the last two years. It has already claimed millions of lives and caused trillions of dollars in losses worldwide [United Nations report, 2020]. Yet, despite unprecedented vaccination efforts, variants of the virus are still spreading. Early detection therefore remains the best approach to reducing the spread of the virus. Currently available detection methods are time-consuming, expensive, and require expert intervention. Thus, there is an unmet and urgent need, motivated by both health and economic concerns, to develop rapid, cheap, easy-to-use, point-of-care (POC) COVID-19 testing.
To this end, we are developing a novel selectively sensing bio-photonic microfluidic optical ring resonator-based integrated chip architecture with an on-chip spectrometer consisting of coupled ring resonator filters and integrated photodetector arrays. This intensity-based sensing scheme will provide a spectral accuracy of better than five picometers, exceeding reported state-of-the-art intensity detection schemes. The integrated device thus facilitates quantitative detection of ultra-low viral loads (picograms per milliliter) without expensive external spectral measurements. In addition, the integrated chip architecture reduces fabrication-induced performance variation and thermal sensitivity. Moreover, the CMOS compatibility of the components used in the sensing circuit enables high-volume manufacturing at lower cost, which promises commercial success.
ISI Research Award
3D Facial Muscle Screening Tool For Early Diagnosis Of Parkinson Disease
HENGAMEH MIRZAALIAN AND WAEL ABD-ALMAGEED
Parkinson disease (PD) is one of the most common neurodegenerative movement disorders. It has been reported that approximately 1.2 million people in the United States will be affected by PD by the year 2030. PD causes slight shaking or tremor in the fingers, slow handwriting, trouble walking, and loss of facial expressiveness. Early diagnosis of PD can be challenging when movement alterations are subtle, yet accurate and early diagnosis is crucial so that patients can receive proper treatment and advice. It has been shown that early PD detection may delay or even halt the spread of the neurodegenerative process to other central nervous system regions. To the best of our knowledge, existing diagnostic tools that screen the facial expressions of PD patients rely on a limited number of 2D facial landmarks (up to 64). Since more information can be derived from 3D images, our goal in the proposed effort is to develop a facial expression screening tool over a fine-grained 3D mesh of the face. We compute a 3D mesh for each frame of the captured video. The series of reconstructed 3D meshes is then analyzed and quantified to evaluate the spasticity and rigidity of facial muscles in PD patients.
ISI Research Award
Learning Fair AI Models Across Distinct Domains
MOHAMMAD ROSTAMI AND ARAM GALSTYAN
As societies become increasingly reliant on AI for automated decision-making across a wide range of applications, concerns about bias and fairness in AI are growing in parallel. Fairness in AI is not merely an ethical issue, because bias also undermines efficiency and productivity in the labor market: it has been demonstrated that “at least a quarter of the growth in the U.S. GDP between 1960 and 2010 is the result of greater gender and racial balance in the workplace”. A common approach to studying fairness is to investigate whether model decisions are related to sensitive attributes, such as gender or race. Since this is a relatively new research area, current work focuses on debiasing AI models for a single domain. However, an initially fair model trained for one domain may be used in many other domains at execution time. Even if we can train a fair model in a source domain, there is no guarantee that it will generalize fairly to target domains, or when the input distribution drifts during testing. We address this challenge within a domain adaptation formulation. Our goal is to adapt a pretrained fair model so that it generalizes well, and fairly, in a target domain using solely unlabeled target-domain data. Instead of retraining from scratch with a new unbiased dataset, we aim to reuse the knowledge gained during the original debiasing to preserve the model's fairness in the new domain. We will test the effectiveness of the algorithm we are developing on real-world benchmark datasets.
Keston Research Award
Fighting Misinformation: An Internet System for Detecting Fake Face Videos
WAEL ABDALMAGEED AND IACOPO MASI
The current spike in hyper-realistic faces artificially generated using deepfakes calls for media forensics solutions that are tailored to video streams and work reliably with a low false alarm rate at the video level. We present a web service offering a new way to assess whether a face video has been manipulated. The system employs an AI-based engine for deepfake detection, following our current research direction on video-based face manipulation detection [A, B]. This research direction paves the way toward scalable, person-agnostic deepfake detection in the wild.
The Deepfake Detection Web Service allows the user to upload a short video. The video is processed in the background by our deepfake detection engine. The user is then notified and can review the detection output superimposed over the original video. The service keeps a history of previously processed videos so that they can be easily inspected if need be. It also offers a user management system that allows each user to privately inspect their own videos.
Technical video presentation: https://www.youtube.com/watch?v=X3N8QjV15d8&feature=youtu.be
Quick Demo: https://www.youtube.com/watch?v=RspKj9DtM9U
[A] Ekraam Sabir, Jiaxin Cheng, Ayush Jaiswal, Wael AbdAlmageed, Iacopo Masi, Prem Natarajan, "Recurrent Convolutional Strategies for Face Manipulation Detection in Videos", CVPR 2019 Workshop on Media Forensics
[B] Iacopo Masi, Aditya Killekar, Royston Marian Mascarenhas, Shenoy Pratik Gurudatt, Wael AbdAlmageed, "Two-branch Recurrent Network for Isolating Deepfakes in Videos", ECCV 2020
ISI Research Award
Automating Programmability of Hybrid Digital-Analog Hardware for Stochastic Cell Simulation in Biological Systems
ANDREW RITTENBACH, PRIYATAM CHILIKI, DEV SHENOY
Biological system modeling has become increasingly important over the past decade because it enables biologists both to quantitatively explain observations made in the laboratory and to predict how biological processes respond to given inputs. An example model consists of a set of biochemical reactions that feed into and interact with each other over time. One of the ‘holy grails’ of biological system modeling is a fully functional model of the entire human cell. Such a model could enable the development of personalized medicine for various treatments, using an individual’s DNA as input to the cell model. However, one current bottleneck is the lack of a platform capable of simultaneously modeling all of the individual processes within a cell. Today, biological systems are modeled in software. Although this is acceptable for small-scale models of individual biological pathways, it is not viable for large-scale gene-protein networks. To this end, the ISI team investigated the viability of an alternative approach: a Cytomorphic computing platform pioneered by Prof. Rahul Sarpeshkar, which consists of programmable hybrid analog/digital circuitry designed specifically to model biochemical reactions. One challenge introduced by this approach, however, is determining how to configure the analog circuit parameters, such as current source amperage, to accurately model the reactions. In this project, the ISI team developed a reinforcement learning-based approach to automatically configure a Simulink-based model of a Cytomorphic circuit. Results showed that, after configuration, biochemical reactions simulated with the Cytomorphic circuit model closely matched simulation data generated by COPASI, a standard software package for simulating biochemical processes.
ISI Research Award
Translators for Asylum Seekers at the Border
We are building domain-focused universal language translation tools to enable asylum applicants at the southern border of the US to communicate with immigration lawyers, in order to prepare for the credible fear interviews that can mean the difference between life and death. Preparation with a lawyer increases the chances of a successful asylum application from 13% to 74%. Unfortunately, many applicants speak languages, such as Luganda, Mixtec, Mam, or Kanjobal, that are not available on commercial services like Google Translate, due to the extremely low data resources available for training models. Additionally, commercial translation models are not well suited to translating credible fear narratives. We will use our expertise in low-resource translation to boost data resources with backtranslation, novel sentence generation, and related-language transfer, and will leverage the USC Shoah Foundation's collection of genocide survivor testimony to adapt our models to this chilling domain.
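Backtranslation, one of the data-boosting techniques mentioned above, can be sketched as follows (this is a generic illustration, not the project's pipeline): in-domain English text is machine-translated into the low-resource language with an existing, possibly weak, model, producing synthetic parallel pairs whose English side is clean. The translator below is a stand-in stub, not a real model.

```python
def backtranslate_corpus(english_sentences, english_to_lowres):
    """Create synthetic (low-resource, English) training pairs.

    The synthetic source side is noisy machine output, but the English
    target side is clean human text, which is what the eventual
    low-resource -> English model must learn to produce.
    """
    pairs = []
    for en in english_sentences:
        synthetic_src = english_to_lowres(en)  # machine-translated, noisy
        pairs.append((synthetic_src, en))      # train on (noisy src, clean tgt)
    return pairs

# Stub "model" for illustration only; a real system would call a trained NMT model.
fake_model = lambda s: "<lowres> " + s.lower()
pairs = backtranslate_corpus(["I fear persecution."], fake_model)
print(pairs[0])
```

The value of the technique is that monolingual English text in the target domain (here, credible fear narratives) is far easier to obtain than genuine parallel data in languages like Mam or Kanjobal.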
ISI Research Award
Advancing No-Resource Languages
JOEL MATHEW AND ULF HERMJAKOB
Among the world's languages, about 100 are rich in written resources (e.g., English and Hindi), with large corpora of digitized text, translations from and to other languages, and dictionaries. Some 1000 additional languages are considered low-resource (e.g., Uyghur and Odia), whereas the remaining 6000 languages (e.g., Gaddi and Reli) have no, or hardly any, written resources. In this project, ISI colleagues Dr. Ulf Hermjakob and Joel Mathew will build a library of computational linguistic tools to support the creation of dictionaries, translations, and a substantial initial text corpus for no-resource languages. Such resources are critical for developing literacy, translating existing texts such as the Bible, encouraging the creation of original content, and documenting and preserving languages. From a computer science perspective, the Bible, with its currently 698 full translations, is a massively parallel corpus that will greatly facilitate useful new tools for no-resource languages, such as automatically (1) identifying likely spelling variations, (2) identifying multi-word expressions, (3) identifying ambiguous words and clustering instances of those words in the source language, (4) identifying names to be transliterated, and (5) morphologically processing inflectionally related words, with automatic translation of such related words.
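To illustrate one of the tasks above, item (1), spelling-variation candidates in a verse-aligned corpus can be found by flagging frequent word pairs that are orthographically very similar across aligned verses. This is only a sketch under that assumption, not the project's actual tool, and the example words are invented.

```python
import difflib
from collections import Counter

def spelling_variant_candidates(verses_a, verses_b, min_ratio=0.8):
    """Find likely spelling variants between two verse-aligned texts.

    verses_a, verses_b: lists of tokenized verses, aligned by index.
    Returns a Counter of (word_a, word_b) pairs whose string similarity
    meets min_ratio, counted over all aligned verses.
    """
    candidates = Counter()
    for tokens_a, tokens_b in zip(verses_a, verses_b):
        for wa in set(tokens_a):
            for wb in set(tokens_b):
                if wa != wb and difflib.SequenceMatcher(None, wa, wb).ratio() >= min_ratio:
                    candidates[(wa, wb)] += 1
    return candidates

# Invented toy data: two editions spelling a name differently.
va = [["yerusalem", "city"], ["yerusalem", "king"]]
vb = [["jerusalem", "city"], ["jerusalem", "ruler"]]
print(spelling_variant_candidates(va, vb).most_common(1))
# [(('yerusalem', 'jerusalem'), 2)]
```

With hundreds of full Bible translations aligned at the verse level, even simple similarity statistics like this accumulate strong evidence for variant spellings, names to transliterate, and related word forms.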
An Artificial Intelligence-Based Mobile Screening Tool: Fetal Programming in Congenital Adrenal Hyperplasia
WAEL ABD-ALMAGEED AND MIMI KIM
Artificial intelligence methods will be used to investigate facial morphology in children with fetal programming due to prenatal hormone exposure. Studies will target children with congenital adrenal hyperplasia (CAH) as a natural human model of excess prenatal testosterone. Classical CAH is caused by a 21-hydroxylase deficiency, affecting 1 in 15,000, with fetal hyperandrogenism due to overproduction of adrenal androgens from week 7 of fetal life. This prenatal hormone exposure represents a significant change to the intrauterine environment during early human development that can adversely program the CAH fetus for postnatal disease. A prototype mobile imaging platform will be designed and built to enable the collection of large-scale facial images in children’s clinics, without relying on expensive 3D imaging systems. Further, an artificial intelligence-based 2D-to-3D facial processing pipeline will acquire images of the faces of healthy controls and CAH youth and compare facial dysmorphism scores between CAH patients and unaffected controls.
Satbotics Control: How to Merge Biologically Inspired Spacecraft Together
This project will support multiple graduate students in developing a new computational architecture that enables independent satellites or spacecraft to physically and virtually “aggregate” on orbit. This completely new methodology represents a shift from monolithic to cellular design in how future space systems are created. The computational architecture is intended to allow the seamless merging of sensors, actuators, and payloads as “resources” that can then be shared autonomously with all other “cells,” enabling greater overall performance and capability on orbit than a single large platform can provide. The basics of this new architecture will be demonstrated on an internal 3-DOF air-bearing testbed, using independent floatbots that simulate independent spacecraft.
A Betavoltaic-Powered Transmitter for Continuous Glucose Monitors
The Glutex project aims to develop a long-lived, low-maintenance continuous glucose monitor (CGM) for diabetes patients. A CGM is a wearable device that reports blood glucose levels to the patient. Existing CGMs require the patient to recharge batteries every few days and replace the device semiannually. Glutex eliminates this maintenance by replacing the battery with a betavoltaic energy harvester that lasts up to a decade. Glutex pioneers a circuit that accumulates the small trickle of energy from the harvester and releases it in bursts to power the sensor. A successful prototype opens the path to applying betavoltaic power sources in wearable and implantable medical devices.
FLEX SYNapses for Smart Wearable Electronics and Skin-Attachable Biosensing Devices
IVAN SANCHEZ ESQUEDA
Synaptic transistors on flexible and stretchable substrates, attached to skin sensors, can enable the implementation of artificial neural networks and learning algorithms for in situ processing and classification of biological signals collected from wearable devices. They can also mimic the functions of sensory nerves, forming bioelectronic reflex arcs that actuate electromechanical devices. This technology has applications in electrophysiology and medical diagnosis, fitness and activity tracking, prosthetics, robotics, and more.
Discovery and Dismantling of Human Trafficking Networks
MAYANK KEJRIWAL AND PEDRO SZEKELY
Human trafficking is a form of modern-day slavery with a significant footprint—even here in the United States. Computational tools and methods, including network analysis and machine learning, can help in data-driven mapping of networks of illicit sex providers, many of whom might be victims of trafficking that is attributable to illicit advertisements posted over the Internet. Researchers are currently working to discover and dismantle such networks, especially for possible underage victims. This effort involves a collaboration with both law enforcement and independent consultations with domain experts in the social sciences.
Understanding Internet Outages
In past years our research has led to sophisticated tools for detecting Internet outages: transient failures of the Internet caused by natural or man-made events. Our goal for this Keston effort was to present information about these outages in a natural, approachable way that is meaningful to first responders and the general public.
The result of our work was the creation of a new website at https://outage.ant.isi.edu/ that supports viewing the Internet outage data that we collect. Our website makes exploring outage data more accessible to researchers and the public by interpreting terabytes of collected Internet outage data, and making the interpreted information visible on a world map.
Our website supports browsing more than two years of outage data, organized by geography and time. The map is a Google Maps-style world map with circles at even intervals (every 0.5 to 2 degrees of latitude and longitude, depending on the zoom level). Circle sizes show how many /24 network blocks are out at that location; circle colors show the percentage of outages, from blue (only a few percent) to red (approaching 100%). The raw data underlying this website is available on request at https://ant.isi.edu/datasets/outage/index.html.
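The circle encoding described above can be sketched as a small styling function (the scaling constants and color ramp here are illustrative, not the site's exact parameters): circle area grows with the number of /24 blocks out, and color interpolates linearly from blue at low outage percentages to red near 100%.

```python
def circle_style(blocks_out, pct_out, max_blocks=1000):
    """Return (radius_px, css_color) for one map grid cell.

    blocks_out: number of /24 network blocks out at this location.
    pct_out: percentage of blocks out, 0-100.
    max_blocks: normalization constant (assumed value for this sketch).
    """
    # Area proportional to count, so radius scales with the square root.
    radius = 4 + 20 * (blocks_out / max_blocks) ** 0.5
    # Linear blue -> red interpolation on the outage percentage.
    t = max(0.0, min(1.0, pct_out / 100.0))
    red, blue = int(255 * t), int(255 * (1 - t))
    return radius, f"rgb({red},0,{blue})"

print(circle_style(0, 0))       # smallest circle, pure blue
print(circle_style(1000, 100))  # largest circle, pure red
```

Encoding count in area and severity in color lets a viewer distinguish a large region with a few scattered outages from a small region that is almost entirely offline, which is the distinction first responders care about.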
Our Internet outages website was developed by a collaborative team of ISI and USC researchers. In addition to the Keston grant, this research has also been funded by government agencies such as the Department of Homeland Security.
The PipeFish Project
The goal of this research is to develop and test an inexpensive autonomous robot that uses sensors to collect data in underground water pipes—thereby enabling us to assess conditions and detect problems. The project is collaborating with the Los Angeles Department of Water and Power (LADWP).
Far beneath the streets and sidewalks of Los Angeles lies a rarely seen subterranean world: a labyrinthine network of underground pipes extending 7,200 miles across the city, carrying drinking water to more than four million people every day. Like many cities in the U.S., Los Angeles is facing a looming crisis over its aging water infrastructure, and fixing it will be a monumental and expensive task.
At least two-thirds of the city’s underground water pipes are more than 60 years old, and most will reach the end of their useful lives within 15 years. This can cause a host of problems—from burst pipes and loss of water service to road closures, sinkholes, and even potential water contamination.
The PipeFish robot is intended to greatly simplify and facilitate the repair and upgrading of this infrastructure. A PipeFish enters the water system through an existing fire hydrant and follows the path of the pipeline to its final destination, where it is “caught” by a net. The captured footage is then uploaded and analyzed, providing operators with a 360-degree virtual-reality interior view without ever setting foot inside the pipe. Signs of damage inside the pipe, such as cracks, heavy corrosion, or rust, are indicators that can help authorities prioritize repairs without expensive and disruptive excavation or water service interruptions.
PipeFish features a 360-degree camera, lights, sensors, and navigation technology, controlled by an onboard computer. It measures 20 inches to 30 inches in length and 3 inches to 6 inches in diameter. Constructed of plastic and other rigid materials, PipeFish is designed to be fully autonomous and untethered, so it can freely explore the complex network of underground water pipes. PipeFish is also a modularized robot; i.e., multiple PipeFish robots can form a chain to handle the complex twists and turns in an underground water network.
In 2017, Shen and his team conducted “dry tests” in pipes at the Los Angeles Department of Water and Power’s Sylmar West Facility in the San Fernando Valley. In 2018, the team plans to add water to the system and test the robot in different pipes of various diameters under the streets of Los Angeles.
PipeFish could be equipped with additional sensors to collect more information, including water flow rate, gas space, illegally dumped chemicals, and flammable materials. Ultimately, our team hopes that a “school” of PipeFish robots can be programmed to quickly and inexpensively traverse specific paths.