Ewa Deelman, Ph.D.

Current projects:

Pegasus WMS-- developing workflow management technologies in support of science (PI), funded by NSF (OCI-1148515)
Open Science Grid-- providing a production facility for high-throughput science (Sr. Personnel), funded by NSF (OCI-1148698) 
ADAMANT- Adaptive Data-Aware Multi-Domain Application Network Topologies (PI), funded by NSF (OCI-1246057)
dV/dt Accelerating the Rate of Progress towards Extreme Scale Collaborative Science (co-PI), funded by DOE under the Scientific Collaborations at Extreme-Scale program.
rSeq-- providing robust RNA-Seq tools for the bioinformatics community, funded by NIH under the iSeqtools program
STAMPEDE-- monitoring and troubleshooting large-scale applications on distributed systems (PI), funded by NSF (OCI-0943705)
CorralWMS-- provisioning resources in distributed environments (PI), funded by NSF (OCI-0943725)
FutureGrid-- developing a Grid testbed for computer science research and domain science applications (Sr. Personnel), funded by NSF (OCI-0910812)
Center for Collaborative Genetic Studies of Mental Disorders, (co-PI), funded by NIH
Population Architecture using Genomics and Epidemiology (PAGE), (co-PI), funded by NIH
NSF SI2 PI meeting, January 17-18, 2013, funded by NSF (OCI-1256100)
EarthCube Community Workshop: Designing A Roadmap for Workflows in Geosciences (co-PI), funded by NSF (1238216)

In 2010 I co-organized a PI meeting for the OCI SDCI and STCI Programs, funded by NSF (OCI-1012131)
In 2006 I co-chaired the NSF-funded Workshop on Challenges of Scientific Workflows, http://vtcpc.isi.edu/wiki

Media citations
A Tale of 160 Scientists, Three Applications, One Workshop and A Cloud, November 7, 2012, http://www.isgtw.org/feed-item/tale-160-scientists-three-applications-one-workshop-and-cloud
Funding boost for US grid computing, June 27, 2012,  http://www.isgtw.org/spotlight/funding-boost-us-grid-computing 
$27 million award bolsters research computing grid, June 20, 2012, http://www.fnal.gov/pub/presspass/press_releases/2012/million-dollar-award-20120620.html 
The truth about clouds for science, June 6, 2012, http://www.isgtw.org/spotlight/truth-about-clouds-science 
NHGRI awards funding to develop tools for genome sequence analyses, May 31, 2012, http://www.genome.gov/27547564 
Grants to USC Faculty Top $100 Million, USC News, November 11, 2009, http://uscnews.usc.edu/university/grants_to_usc_faculty_top_100_million.html 
ISI Researchers Will Support New Brain Gene Expression Project, ISI News, November 2, 2009   http://www3.isi.edu/about-news_story.htm?s=220 
USC Neuroscientists to Map Gene Expression, USC News, October 5, 2009 http://uscnews.usc.edu/science_technology/usc_neuroscientists_to_map_gene_expression.html 
Viterbi School's ISI Part of FutureGrid Test Bed,  USC Viterbi News, September 2009 http://viterbi.usc.edu/news/news/2009/viterbi-school-s213206.htm 
ISI Part of FutureGrid Test Bed, ISI News, September 11, 2009 http://www3.isi.edu/about-news_story.htm?s=216 
Women of the Open Science Grid (video 1), May 25, 2009 http://www.isgtw.org/visualization/video-week-women-open-science-grid 
USC to head Global Genetic Effort, December 2, 2008 http://www.usc.edu/uscnews/stories/16018.html 
Image of the week - Earth-quaking science in Hollywood, International Science Grid This Week, iSGTW online, January 2008, http://www.isgtw.org/?pid=1000848
Feature - Montage a rising star in grid-enabled sky mosaics, International Science Grid This Week, iSGTW online, December 2007, http://www.isgtw.org/?pid=1000731
Feature - Pegasus invites new communities to saddle up, International Science Grid This Week, iSGTW online, September 2007, http://www.isgtw.org/?pid=1000664
ISI Leads $13.8 Million E-Science Effort to Tame Terabyte Torrents, ISI News, http://www.isi.edu/news/news.php?story=165 
Rensselaer Supercomputers Battle Lyme Disease, Rensselaer Polytechnic Institute Review Vol. 17 No. 18, June 14, 1996

Past projects:
SCEC PetaSHA3 Project (Sr. Personnel), funded by NSF
GPC--Genomic Psychiatry Cohort (Sr. Personnel), funded by NIH (info)
Designing Scientific Software One Workflow at a Time (PI), funded by NSF (CCF-0725332)
Southern California Earthquake Center (Sr. Personnel), funded by NSF
OOI Cyberinfrastructure design, funded by NSF
Pegasus Workflow Management System (PI), funded by NSF (OCI-0722019)
Brain Atlas, developing a reference atlas of the developmental brain, (co-PI) funded by NIH
Intelligent Data Placement in Support of Scientific Workflows, (co-PI), funded by NSF (IIS-0905032)
Ocean Modeling (PI), funded by JPL
Intelligent Optimization of Parallel and Distributed Applications (co-PI), funded by NSF
Windward  (co-PI), funded by AFRL
Cyberinfrastructure in Support of Research: A New Imperative (NCSA)  (Sr. Personnel)
Pegasus and LIGO (PI)
Towards Cognitive Grids: Knowledge-Rich Grid Services for Autonomous Workflow Refinement and Robust Execution
National Virtual Observatory (NVO)
International Virtual Data Grid Laboratory (iVDGL)
Grid Physics Network (GriPhyN)
CRCNS:  Assembling Visible Neurons for Simulation
Projects I worked on at UCLA

I used to be part of the DataONE project (OCI-0830944)
Pegasus and LIGO
National Science Foundation
February 2007-September 2007
Ewa Deelman, PI
Supporting Laser Interferometer Gravitational-Wave Observatory (LIGO) applications on the Open Science Grid. More about this effort can be found at http://pegasus.isi.edu and http://pegasus.isi.edu/applications.php
Air Force Research Laboratory (AFRL). 
Grant number FA8750-06-C-0210. September 2006 - September 2010. 
Yolanda Gil (PI). ISI co-PIs: Paul Cohen and Ewa Deelman.
Distributed workflows are emerging as a key technology to conduct large-scale and large-scope scientific applications in earthquake science, physics, astronomy, and many other sciences.  In this new project, we will investigate the use of workflow technologies for Artificial Intelligence applications with a particular focus on data analysis and knowledge discovery tasks.  

Based on the data to be analyzed, an initial workflow template is formed by selecting from a library of  known-to-work compositions of general-purpose machine learning algorithms.  The workflow template is specialized through knowledge-based selection and configuration of algorithms.  Finally, the workflow is mapped to available resources and restructured to improve execution time.  Data analysis and knowledge discovery applications will benefit from the automation, scale, and distributed data and resource integration supported by distributed workflow systems.  
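The pipeline above (template selection, knowledge-based specialization, resource mapping) can be sketched in a few lines. This is an illustrative toy, not the project's actual system; all names (TEMPLATE_LIBRARY, ALGORITHM_CHOICES, build_workflow, the host names) are hypothetical.

```python
# Hypothetical sketch of template-driven workflow construction:
# select a known-to-work template, specialize abstract steps by
# choosing a concrete algorithm, then map steps to resources.

TEMPLATE_LIBRARY = {
    "classification": ["clean_data", "extract_features", "learn_model"],
    "clustering": ["clean_data", "extract_features", "cluster"],
}

ALGORITHM_CHOICES = {
    "learn_model": ["decision_tree", "naive_bayes"],
    "cluster": ["k_means"],
}

def build_workflow(task, data_size, resources):
    """Select a template, specialize its abstract steps, map to resources."""
    template = TEMPLATE_LIBRARY[task]
    specialized = []
    for step in template:
        # Knowledge-based selection, reduced here to a trivial rule on data size.
        options = ALGORITHM_CHOICES.get(step, [step])
        choice = options[0] if data_size < 10_000 else options[-1]
        specialized.append(choice)
    # Map each concrete step to a resource (round-robin placeholder).
    return [(s, resources[i % len(resources)]) for i, s in enumerate(specialized)]

plan = build_workflow("classification", 500, ["hostA", "hostB"])
```

A real system would drive the selection and mapping steps with the knowledge bases and restructuring optimizations described above rather than these placeholder rules.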

We will also conduct new research in important aspects of workflow systems.  To what extent can we represent complex algorithms and their subtle differences so that they can be automatically selected and configured to satisfy the stated application requirements?  Can we develop learning techniques that improve the performance of the workflow system by exploiting an episodic memory of prior workflow executions?  What mechanisms will be needed to support autonomous and robust execution of concurrent workflows over continuously changing data?
Intelligent Optimization of Parallel and Distributed Applications
National Science Foundation (NSF). 
Grant number CSR-0615412. August 2006 - September 2009.
Principal Investigators: Mary Hall (PI), Kristina Lerman (co-PI), Ewa Deelman (co-PI), Aiichiro Nakano (co-PI), Joel Saltz (co-PI). ISI co-PI: Yolanda Gil.

This project will develop a domain-specific programming system supporting Petascale application optimization of molecular dynamics simulation, in which applications will be viewed as workflows consisting of composable components to be mapped to a diversity of machine resources. The application components will be viewed as dynamically adaptive algorithms for which there exist a set of variants and parameters that can be chosen to develop an optimized implementation. A variant describes a distinct implementation of a code segment, perhaps even a different algorithm. A parameter is an unbound variable that affects application performance. By encoding an application in this way, we can capture a large set of possible application mappings with a very compact representation. Because the space of mappings is prohibitively large, the system captures and utilizes domain knowledge from the domain scientists and the designers of the compiler, run-time, and performance models to prune most of the possible implementations. Knowledge representation and machine learning techniques utilize this domain knowledge and past experience to navigate the search space efficiently. By incorporating cognitive search techniques and taking advantage of parallel resources, the tools automatically search these alternative implementations to find a high-quality implementation.
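The variant-and-parameter encoding can be made concrete with a toy example. Everything here is an assumption for illustration: the variant names, the tile-size parameter, and the stand-in performance model are not from the project.

```python
# Illustrative sketch of encoding a code segment as a set of variants
# and tunable parameters, then searching the mapping space with a
# simple predictive performance model.
import itertools

variants = ["blocked_loop", "unrolled_loop"]  # distinct implementations
tile_sizes = [16, 32, 64]                     # an unbound parameter

def predicted_time(variant, tile):
    """Stand-in performance model; a real system would use measurements
    and domain knowledge to prune the space instead of exhaustive search."""
    base = 10.0 if variant == "blocked_loop" else 12.0
    return base + abs(tile - 32) * 0.1

# Exhaustively enumerate the (small) mapping space and pick the best.
best = min(itertools.product(variants, tile_sizes),
           key=lambda vt: predicted_time(*vt))
```

The compact representation is the point: two variants and three parameter values already describe six candidate mappings without writing six programs.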
Cyberinfrastructure in Support of Research: A New Imperative (NCSA)
National Science Foundation
Grant Number OCI-0438712, September 2006-June 2008
As part of this work, I extended and improved metadata services, in particular the Metadata Catalog Service (MCS).
Southern California Earthquake Center 
Community Modeling Environment
This project focuses in part on bringing workflow technologies to earthquake science applications. One of the most important applications is CyberShake. More information about CyberShake can be found at http://pegasus.isi.edu/applications.php and relevant papers can be found at: www.isi.edu/~deelman/papers.htm
NSF Workshop on Challenges of Scientific Workflows
National Science Foundation (NSF).  
Grant number IIS-0629361. May 2006 - October 2007. 
Yolanda Gil (PI) and Ewa Deelman (co-PI). 

In recent years, workflows have emerged as a paradigm for conducting large-scale scientific analyses. The structure of a workflow specifies what analysis routines need to be executed, the data flow amongst them, and relevant execution details. Workflows provide a systematic way to capture scientific methodology and provide provenance information for their results. Robust and flexible workflow creation, mapping, and execution are largely open research problems. Under this project, Ewa Deelman and Yolanda Gil chaired an invitation-only workshop on "Challenges of Scientific Workflows" at the National Science Foundation. The aim of this workshop was to bring together IT researchers and practitioners working on a variety of aspects of workflow management as well as domain scientists who use workflows for day-to-day data analysis and simulation. The National Science Foundation expects a final report with recommendations to the community regarding the challenges of scientific workflows and their role in cyberinfrastructure planning for 21st century science and engineering research and education.
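The workflow structure described above (routines plus the data flow amongst them) is commonly represented as a directed acyclic graph. A minimal sketch, with hypothetical step names, might look like this:

```python
# A workflow as a DAG: each step maps to the list of steps whose
# outputs it consumes; execution must follow the data flow.
workflow = {
    "extract":   [],             # no inputs
    "calibrate": ["extract"],
    "analyze":   ["calibrate"],
    "plot":      ["analyze"],
}

def execution_order(dag):
    """Topologically order the steps so every input is produced first."""
    order, done = [], set()
    def visit(step):
        if step in done:
            return
        for dep in dag[step]:
            visit(dep)
        done.add(step)
        order.append(step)
    for step in dag:
        visit(step)
    return order
```

A workflow system layers mapping, execution, and provenance capture on top of exactly this kind of dependency structure.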
Towards Cognitive Grids: Knowledge-Rich Grid Services for Autonomous Workflow Refinement and Robust Execution.
National Science Foundation (NSF) Shared Cyberinfrastructure program. 
Grant number SCI-0455361. December 2004 - November 2006. 
Ewa Deelman (PI), Yolanda Gil (co-PI).
This research combines Artificial Intelligence and Distributed Computing techniques to create knowledge-rich workflow services that can support the execution of large-scale scientific workflows. The main foundation will be provided by expressive formal representations of the application workflow and of the execution environment. These representations will support resource selection to enhance application performance, resource reservation based on anticipated workflow needs, and workflow repair in case of failures or when new resources come online.
As part of this work, I worked with the Montage scientists to fully parallelize the application using the workflow paradigm, and today Pegasus is used to generate science-grade mosaics of the sky. This work resulted in publications in both computer science and astronomy venues. Publications can be found at www.isi.edu/~deelman/papers.htm
In GriPhyN I led the effort to develop the methodology and tools to support the “virtual data” paradigm, in which a scientist asks for a particular data product and obtains the desired results; if the data were not yet instantiated, the system would produce them on the fly. The Pegasus software was developed as part of this effort and used by LIGO for gravitational-wave analyses.
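The core of the virtual data idea can be sketched in a few lines: a request names a data product, and the product is either fetched from a catalog of materialized data or derived on demand. The catalog contents and derivation rules here are made up for illustration.

```python
# Minimal sketch of the "virtual data" paradigm: return a product if it
# is already materialized, otherwise run its derivation on the fly and
# cache the result for future requests.

materialized = {"raw_events": [3, 1, 2]}      # data already on disk
derivations = {
    # product -> (input product, transformation)
    "sorted_events": ("raw_events", sorted),
}

def request(product):
    if product in materialized:               # already instantiated
        return materialized[product]
    source, transform = derivations[product]  # otherwise derive on demand,
    result = transform(request(source))       # recursively resolving inputs
    materialized[product] = result            # cache the new product
    return result
```

In a real virtual data system the catalog and derivations are persistent services, and the "transform" is a full workflow planned and executed by Pegasus rather than a Python function.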
" The GriPhyN (Grid Physics Network) collaboration is a team of experimental physicists and information technology (IT) researchers who plan to implement the first Petabyte-scale computational environments for data intensive science in the 21st century. Driving the project are unprecedented requirements for geographically dispersed extraction of complex scientific information from very large collections of measured data. To meet these requirements, which arise initially from the four physics experiments involved in this project but will also be fundamental to science and commerce in the 21st century, GriPhyN will deploy computational environments called Petascale Virtual Data Grids (PVDGs) that meet the data-intensive computational needs of a diverse community of thousands of scientists spread across the globe.
Our team is composed of IT research groups and members of four NSF-funded frontier physics experiments. Our integrated research effort provides the coordination and tight feedback from prototypes and tests that will enable both communities to meet their goals. The four physics experiments are about to enter a new era of exploration of the fundamental forces of nature and the structure of the universe. The CMS and ATLAS experiments at the Large Hadron Collider (LHC) at CERN will search for the origins of mass and probe matter at the smallest length scales; LIGO (Laser Interferometer Gravitational-wave Observatory) will detect the gravitational waves of pulsars, supernovae and in-spiraling binary stars; and SDSS (Sloan Digital Sky Survey) will carry out an automated sky survey enabling systematic studies of stars, galaxies, nebula, and large-scale structure.
The data analysis for these experiments presents enormous IT challenges. Communities of thousands of scientists, distributed globally and served by networks of varying bandwidths, need to extract small signals from enormous backgrounds via computationally demanding analyses of datasets that will grow from the 100 Terabyte to the 100 Petabyte scale over the next decade. The computing and storage resources required will be distributed, for both technical and strategic reasons, across national centers, regional centers, university computing centers, and individual desktops. The scale of this task far outpaces our current ability to manage and process data in a distributed environment. The GriPhyN collaboration proposes to carry out the necessary computer science and validate the concepts through a series of staged deployments, ultimately resulting in a set of production Data Grids. "
"The iVDGL is a global Data Grid that will serve forefront experiments in physics and astronomy. Its computing, storage and networking resources in the U.S., Europe, Asia and South America provide a unique laboratory that will test and validate Grid technologies at international and global scales. Sites in Europe and the U.S. will be linked by a multi-gigabit per second transatlantic link "
As part of iVDGL, Pegasus was transitioned into other NSF-funded projects and is regularly released and supported as part of iVDGL's Virtual Data Toolkit (VDT).
CRCNS:  Assembling Visible Neurons for Simulation 
NIH-funded, grant number 5 R01 NS046068-03
This project continues to advance efforts to enhance and accelerate the process of very large-field 3D laser-scanning light microscopy to increase the throughput of generating multi-resolution “visible” cells for computational neuroscience. For this workflow process, key data-, computation-, time-, and labor-intensive tasks are being identified, and solutions for increasing end-to-end performance, accelerating computation, enhancing visualization, and interfacing federated databases are being explored.
The Globus project is developing fundamental technologies needed to build computational grids. Grids are persistent environments that enable software applications to integrate instruments, displays, computational and information resources that are managed by diverse organizations in widespread locations.
NSF Next Generation Systems Project
This project will extend and adapt POEMS technology from model-based analysis of adaptive parallel and distributed systems to model-based control of such systems. This one-year project will involve a manually executed exploration and feasibility demonstration of model-based control of a distributed/parallel implementation of an adaptive application. The demonstration will bring to the surface further research problems and technology requirements for model-based control, and will explore the conditions under which basing control on more comprehensive system models is beneficial. Model-based control can address variability in resource availability as well as variability in resource requirements engendered by adaptive algorithms; both will be studied in this project.
POEMS: Performance Modeling of Parallel Systems
The POEMS project will create and demonstrate a capability for prediction of the end-to-end performance of parallel/distributed implementations of large-scale adaptive applications. POEMS modeling capability will span applications, operating systems including parallel I/O, and architecture. Effort will focus on areas where there is little conventional wisdom, such as the execution behaviors of adaptive algorithms on multi-level memory hierarchies and parallel I/O operations.
SESAME: System Software Measurement and Evaluation
The goal of this research is to identify major performance bottlenecks in supercomputer system software. This will be achieved by a coordinated measurement, modeling, simulation, architectural evaluation, and experimental alteration effort, taking a global view of the overall systems software environment.
Parsec: Parallel Simulation Language
Parsec is a C-based simulation language, developed by the Parallel Computing Laboratory at UCLA, for sequential and parallel execution of discrete-event simulation models. It can also be used as a parallel programming language.
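Parsec itself is a C-based language, but the discrete-event simulation model it implements can be illustrated with a minimal event-queue simulator. This sketch is not Parsec code; the event labels and structure are invented for illustration.

```python
# Minimal discrete-event simulation: events are (timestamp, label)
# pairs held in a priority queue and processed in timestamp order.
import heapq

def simulate(events):
    """Process events in increasing timestamp order and return the trace."""
    queue = list(events)
    heapq.heapify(queue)              # min-heap ordered by timestamp
    log = []
    while queue:
        time, label = heapq.heappop(queue)
        log.append((time, label))     # "execute" the event
    return log

trace = simulate([(5, "depart"), (1, "arrive"), (3, "service")])
```

Parsec adds to this core the notion of communicating entities, message-driven event scheduling, and parallel execution of the simulation itself.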