Plumb: Efficient Stream Processing of Multi-User Pipelines
Abdul Qadeer and John Heidemann

Citation

Abdul Qadeer and John Heidemann. Plumb: Efficient Stream Processing of Multi-User Pipelines. Software—Practice and Experience. 51, 2 (2020), 385–408. [DOI] [PDF] [alt PDF]

Abstract

Operational services run 24x7 and require analytics pipelines to evaluate performance. In mature services such as DNS, these pipelines often grow to many stages developed by multiple, loosely-coupled teams. Such pipelines pose two problems: first, computation and data storage may be duplicated across components developed by different groups, wasting resources. Second, processing can be skewed, with structural skew occurring when different pipeline stages need different amounts of resources, and computational skew occurring when a block of input data requires increased resources. Duplication and structural skew both decrease efficiency, increasing cost, latency, or both. Computational skew can cause pipeline failure or deadlock when resource consumption balloons; we have seen cases where pessimal traffic increases CPU requirements 6-fold. Detecting duplication is challenging when components from multiple teams evolve independently and require fault isolation. Skew management is hard due to dynamic workloads coupled with the conflicting goals of both minimizing latency and maximizing utilization. We propose Plumb, a framework to abstract stream processing as large-block streaming (LBS) for a multi-stage, multi-user workflow. Plumb users express analytics as a DAG of processing modules, allowing Plumb to integrate and optimize workflows from multiple users. Many real-world applications map to the LBS abstraction. Plumb detects and eliminates duplicate computation and storage, and it detects and addresses both structural and computational skew by tracking computation across the pipeline. We exercise Plumb using the analytics pipeline for B-Root DNS. Compared to a hand-tuned system, Plumb cuts latency to one-third of the original and requires 39% fewer container hours, while supporting more flexible, multi-user analytics and providing greater robustness to DDoS-driven demands.
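For illustration only, the sketch below suggests how a multi-user LBS workflow can be viewed as a DAG of processing modules keyed by their program, inputs, and outputs, so that stages declared identically by different users can be merged and run once. The names (Module, merge_pipelines, the stage and stream names) are hypothetical and are not Plumb's actual specification syntax or API.

# Hypothetical sketch: expressing analytics as a DAG of processing modules
# and detecting duplicate stages across users. Not Plumb's real interface.

from dataclasses import dataclass

@dataclass(frozen=True)
class Module:
    """One processing stage: a program consuming and producing named large blocks."""
    program: str      # e.g., a container image or executable name
    inputs: tuple     # named input streams (large blocks)
    outputs: tuple    # named output streams

def merge_pipelines(pipelines):
    """Integrate DAGs from multiple users, identifying duplicate stages.

    Two stages are treated as duplicates if they run the same program on the
    same inputs and produce the same outputs, so their computation and stored
    output can be shared rather than repeated per user."""
    shared = {}
    for user, modules in pipelines.items():
        for m in modules:
            shared.setdefault((m.program, m.inputs, m.outputs), []).append(user)
    return shared

# Two users who independently declare the same decompression stage:
user_a = [
    Module("pcap-decompress", ("pcap.xz",), ("pcap",)),
    Module("dns-parse",       ("pcap",),    ("dns.records",)),
]
user_b = [
    Module("pcap-decompress", ("pcap.xz",), ("pcap",)),
    Module("ddos-detect",     ("pcap",),    ("ddos.alerts",)),
]

merged = merge_pipelines({"a": user_a, "b": user_b})
for (program, _, _), users in merged.items():
    print(f"{program}: run once, shared by {users}")
# The decompression stage appears once, shared by both users; the other
# stages remain per-user.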

Bibtex Citation

@article{Qadeer20a,
  author = {Qadeer, Abdul and Heidemann, John},
  title = {Plumb: Efficient Stream Processing of Multi-User Pipelines},
  journal = {Software---Practice and Experience},
  year = {2020},
  sortdate = {2020-09-24},
  project = {ant, lacanic, gawseed},
  jsubject = {network_big_data},
  volume = {51},
  number = {2},
  pages = {385--408},
  jlocation = {johnh: pafile},
  keywords = {big data, hadoop, plumb, DNS, streaming data,
                    data processing, workflow},
  url = {https://ant.isi.edu/%7ejohnh/PAPERS/Qadeer20a.html},
  pdfurl = {https://ant.isi.edu/%7ejohnh/PAPERS/Qadeer20a.pdf},
  doi = {10.1002/spe.2909},
  blogurl = {https://ant.isi.edu/blog/?p=1524}
}
Copyright © by John Heidemann