Coarse-grain Pipelining for Multiple FPGA Architectures*

Heidi Ziegler**, Byoungro So, Mary Hall, and Pedro Diniz
University of Southern California
Information Sciences Institute
{ziegler, bso, mhall, pedro}@isi.edu

Abstract

Reconfigurable systems, and in particular FPGA-based custom computing machines, offer a unique opportunity to define application-specific architectures. These architectures offer performance advantages for application domains such as image processing, where the use of customized pipelines exploits the inherent coarse-grain parallelism. In this paper we describe a set of program analyses and an implementation that maps a sequential, un-annotated C program into a pipelined implementation running on a set of FPGAs, each with multiple external memories. Building on well-known parallel computing analysis techniques, our algorithms perform unrolling for operator parallelization, reuse and data layout analysis for memory parallelization, and precise communication analysis. We extend these techniques for FPGA-based systems to automatically partition the application data and computation into custom pipeline stages, taking into account the available FPGA and interconnect resources. We illustrate the analysis components by way of an example, a machine vision program. We present results, derived with minimal manual intervention, that demonstrate the potential of this approach for automatically deriving pipelined designs from high-level sequential specifications.
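To make the operator-parallelization step concrete, the following is a minimal hand-written sketch, not taken from the paper: a hypothetical image-thresholding loop in C, unrolled by a factor of four so that the independent operations in the unrolled body can be mapped to parallel operators in a pipelined FPGA datapath.

    /* Illustrative sketch only -- not code from the paper.  A simple
     * image-thresholding loop (hypothetical kernel) unrolled by a
     * factor of 4, the kind of operator-parallelizing transformation
     * described above. */
    #include <stddef.h>

    #define N 1024

    void threshold(const unsigned char in[N], unsigned char out[N],
                   unsigned char t)
    {
        size_t i;
        /* Unrolled body: the four independent compare/store operations
         * can execute concurrently in hardware. */
        for (i = 0; i + 3 < N; i += 4) {
            out[i]     = (in[i]     > t) ? 255 : 0;
            out[i + 1] = (in[i + 1] > t) ? 255 : 0;
            out[i + 2] = (in[i + 2] > t) ? 255 : 0;
            out[i + 3] = (in[i + 3] > t) ? 255 : 0;
        }
        /* Remainder loop for trip counts not divisible by 4. */
        for (; i < N; i++)
            out[i] = (in[i] > t) ? 255 : 0;
    }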

Keywords:

Coarse-grain Pipelining; FPGA-based Custom Computing Machines; Parallelizing Compiler Analysis Techniques.

* Funded by the Defense Advanced Research Projects Agency under contract number F30603-98-2-0113.

** Funded by a Boeing Satellite Systems Doctoral Scholars Fellowship.

