HDTV UltraGrid Pioneers Look Forward to Videoconferencing in 3D

April 13, 2006

After years of demonstrations of a highly successful technique for sending high-definition video over network connections, researchers at USC's Information Sciences Institute are preparing to go 3D. But it won't be easy.

Ladan Gharai of ISI's Arlington, Virginia campus co-developed the UltraGrid videoconferencing system with Colin Perkins, who is now at the University of Glasgow.

The pair will present a paper at the April 2006 Infocom High-Speed Networking Conference in Barcelona that puts forward a strategy for sending 3D visuals through a network with enough bandwidth, but notes that daunting technical challenges must be mastered to do so. Such challenges will confront many high-bandwidth applications, they note, and 3D teleconferencing stands out as a good place to try to develop terabit network solutions with potentially much wider applications.

Enough bandwidth is lots of bandwidth, according to the paper, which is entitled "Holographic and 3D Teleconferencing and Visualization: Implications for Terabit Networked Applications." The existing UltraGrid system converts 60-frame-per-second high-definition digital video into packets that can travel uncompressed over networks with a capacity of more than 1.2 gigabits per second (Gbps).
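That 1.2 Gbps figure can be reproduced with back-of-envelope arithmetic. The paper's exact video format isn't stated here, so the parameters below are illustrative assumptions: 1080-line interlaced video (60 fields, i.e. 30 full frames per second) with 10-bit 4:2:2 sampling, which averages 20 bits per pixel.

```python
# Back-of-envelope bitrate for uncompressed HD video.
# Assumptions (illustrative, not taken from the paper): 1920x1080 interlaced,
# 60 fields/s = 30 full frames/s, 10-bit 4:2:2 sampling = 20 bits per pixel.
WIDTH, HEIGHT = 1920, 1080
FRAMES_PER_SEC = 30      # 60 interlaced fields per second
BITS_PER_PIXEL = 20      # 10 bits luma + 10 bits shared chroma (4:2:2)

bits_per_second = WIDTH * HEIGHT * FRAMES_PER_SEC * BITS_PER_PIXEL
print(f"{bits_per_second / 1e9:.2f} Gbps")  # prints "1.24 Gbps"
```

Under these assumptions the raw video alone exceeds 1.2 Gbps, before any packet headers are added.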

An alternative technique compresses the signal slightly, to a flow just under 1 Gbps for networks limited to 1-Gbps transport, or more drastically (by a factor of 50) when one-way transmission is all that is required, as opposed to real-time back-and-forth interaction.

To go holographic, the bandwidth requirements increase dramatically, according to the paper: by at least an order of magnitude, to 10 Gbps, and perhaps by another order of magnitude for an uncompressed signal. But specialists are already planning terabit networks that will have these capacities.

But even if the bandwidth exists, a number of other challenges must be met, say Gharai and Perkins.

Autostereoscopic displays demand tightly interleaving the output of five or more cameras, and doing so in real time requires extremely tight synchronization of the signals. At the same time, a terabit network will probably work by assembling parallel components and pathways.

In this environment, they say, single-stream transport of the signals will likely be impossible. But to split the signal across multiple streams, a new Internet protocol will be necessary: a "multi-stream real-time transport protocol" (MRTP), which would allow computers on the network to assemble and reintegrate the display.
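The core idea behind such a protocol can be sketched in a few lines. MRTP is a proposal, not an existing protocol, so the packet format and function names below are invented for illustration: packets of one frame are striped round-robin across parallel streams, each tagged with a global sequence number, and the receiver uses those numbers to reintegrate the original order.

```python
# Toy sketch of the multi-stream transport idea (MRTP is proposed, not
# specified; this striping scheme and all names here are illustrative).

def stripe(packets, n_streams):
    """Split one ordered packet flow across n parallel streams, round-robin."""
    streams = [[] for _ in range(n_streams)]
    for seq, payload in enumerate(packets):
        # Tag each packet with a global sequence number before sending.
        streams[seq % n_streams].append((seq, payload))
    return streams

def reassemble(streams):
    """Merge the parallel streams back into the original packet order."""
    merged = [pkt for stream in streams for pkt in stream]
    return [payload for seq, payload in sorted(merged)]

frame = [f"pkt{i}" for i in range(8)]
streams = stripe(frame, n_streams=3)
assert reassemble(streams) == frame  # receiver reintegrates the frame intact
```

A real protocol would also have to handle loss, reordering across paths, and inter-stream timing, which is where the synchronization challenge above comes in.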

Effective use of this new MRTP won't be easy with existing programming tools. "High-dimensional video capture, compression and network transfer are inherently parallel processes," the authors write, "but existing programming languages and their supporting libraries provide only limited support for concurrency."

Accordingly, "it is clear that new programming languages and tools are needed…" These parallel programming tools, the authors say, will likely take off from some promising existing directions, notably PacLang, developed for programming network processors (specifically the Intel IXP2400), which ingeniously structures data so that each element is processed only once, even though multiple separate streams are going through separate processors.

Array processing languages, the authors say, also show promise. The goal: "programming languages that enforce unique ownership of data across threads of execution, provide the opportunity for automatic extraction of parallelism… [and which] can be embedded in existing general purpose languages."
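The "unique ownership" discipline the authors call for can be approximated, by analogy only, in ordinary code: if pipeline stages hand data objects to each other through queues and never keep references, exactly one thread "owns" each frame at a time and each element is processed exactly once. The sketch below is plain Python with invented names, not one of the specialized languages the paper envisions.

```python
# Analogy for unique ownership across threads: frames pass between pipeline
# stages via queues, so one thread at a time holds each frame and every frame
# is processed exactly once. (Stage names and transforms are invented.)
import queue
import threading

def stage(func, inbox, outbox):
    """Run one pipeline stage: take a frame, transform it, pass it downstream."""
    while True:
        item = inbox.get()
        if item is None:          # sentinel: propagate shutdown and exit
            outbox.put(None)
            return
        outbox.put(func(item))    # ownership moves downstream with the object

capture_q, compress_q, send_q = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda f: f + "-compressed", capture_q, compress_q)).start()
threading.Thread(target=stage, args=(lambda f: f + "-sent", compress_q, send_q)).start()

for i in range(3):
    capture_q.put(f"frame{i}")    # the "capture" stage feeds the pipeline
capture_q.put(None)

results = []
while (item := send_q.get()) is not None:
    results.append(item)
print(results)
# prints ['frame0-compressed-sent', 'frame1-compressed-sent', 'frame2-compressed-sent']
```

Languages that enforce this hand-off at compile time, rather than by queue convention, are what the authors argue is needed.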

These tasks won't be easy, the authors concede, but they are necessary: "to make effective use of future terabit networks," this development work must begin now.

The research was supported by the NSF.