Edge AI at ISI: Bringing Intelligence Closer to the Action

by Julia Cohen

Image credit: Nongnuch Pitakkorn/iStock

Artificial intelligence is shifting from data centers to our devices, enabling real-time processing on everything from smartphones to satellites. This shift, known as edge AI, decentralizes AI from the large-scale data centers that traditionally powered it. While AI has existed in devices for years, edge AI represents a significant evolution by embedding intelligence directly into devices at the “edge” of the network, where data is generated.

Edge AI allows devices to make real-time decisions without depending on a constant connection to the cloud; this increases speed and enhances privacy. By keeping sensitive data closer to where it is generated, it is less exposed to parties who shouldn't have it, such as cloud providers or bad actors who might intercept and decrypt it. As edge AI evolves, ISI researchers are advancing its capabilities with projects ranging from wildfire mapping to space exploration, helping to bring faster, smarter solutions where they're needed most.

Tackling Complexity by Making Trade-Offs

“The biggest challenge we encounter in AI at the edge is balancing computational efficiency with the ever-growing complexity of AI models,” said J.P. Walters, Research Team Leader at ISI’s Computational Systems and Technology Division. This complexity stems from the models' parameters—the variables that algorithms adjust to learn from data. As the number of parameters increases, so do the size of the models and the computing power required to run them. In short, larger models (with more parameters) generally produce better results (e.g., more accurate inferences).

Moore’s Law, a concept introduced in 1965, predicts that the number of transistors on a microchip will double approximately every 18 to 24 months, leading to exponential increases in computing power. However, Walters emphasized that "parameters are doubling faster than Moore's Law predicts for hardware advancements." In other words, AI models are advancing faster than the hardware needed to run them. This leads to one of the fundamental challenges of edge AI: managing the trade-offs inherent in processing data locally.
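The pressure Walters describes can be made concrete with back-of-the-envelope arithmetic. The sketch below (illustrative figures only, not from the article) estimates the raw weight storage a model needs from its parameter count:

```python
def model_memory_gb(num_params, bytes_per_param):
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

# A hypothetical 7-billion-parameter model stored as 32-bit floats
# needs ~28 GB for its weights alone, beyond most edge devices;
# 8-bit quantization cuts that to ~7 GB.
fp32_gb = model_memory_gb(7e9, 4)  # 28.0
int8_gb = model_memory_gb(7e9, 1)  # 7.0
```

Weights alone can exceed a small device's memory before any computation happens, which is why every new doubling of parameters outpaces what hardware improvements alone can absorb.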

Walters’ team works to balance power consumption, processing speed, and memory usage. "You can't just throw a massive AI model onto a small device without considering these factors," Walters explained. "We constantly aim to find the sweet spot where performance and efficiency meet without compromising too much on either side."
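The "sweet spot" Walters describes is, in effect, a constrained search: pick the most capable model variant that still fits the device. A minimal sketch, with entirely hypothetical model variants and budgets:

```python
# Hypothetical model variants: (name, accuracy, latency_ms, peak_memory_mb).
# These numbers are invented for illustration only.
CANDIDATES = [
    ("large",  0.94, 220, 900),
    ("medium", 0.91,  90, 350),
    ("small",  0.86,  30, 120),
]

def best_under_budget(candidates, max_latency_ms, max_memory_mb):
    """Keep only the variants that fit the device's latency and memory
    budgets, then return the name of the most accurate one (or None)."""
    feasible = [c for c in candidates
                if c[2] <= max_latency_ms and c[3] <= max_memory_mb]
    return max(feasible, key=lambda c: c[1])[0] if feasible else None

# A mid-range edge device might only be able to run the "medium" variant.
choice = best_under_budget(CANDIDATES, max_latency_ms=100, max_memory_mb=400)
```

Real systems weigh power draw and other factors too, but the shape of the decision is the same: constraints first, accuracy second.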

Another Level of Complexity: Adaptability

In addition to managing hardware constraints, edge AI must adapt to changing conditions in real time. This is known as adaptive AI, and it allows models to refine their behavior based on dynamic environments.

Connor Imes, a Research Computer Scientist in ISI’s Computational Systems and Technology Division who specializes in adaptive systems, explained: “In a data center, you always have power, cooling, and high-speed network connectivity. The systems there don’t usually need to respond to dynamic changes.” He continued, “In contrast, an edge environment is out in the real world—such as your phone or car—and it's subject to all the dynamics that you and I are accustomed to dealing with and responding to in real time."
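The real-world dynamics Imes mentions can be sketched as a simple runtime policy: the device checks its conditions and picks an appropriate mode. The states, thresholds, and mode names below are hypothetical, purely to illustrate the idea of adaptation:

```python
def select_inference_mode(battery_pct, has_connectivity):
    """Pick an inference strategy based on current device conditions.
    Thresholds and mode names are illustrative, not from any real system."""
    if has_connectivity and battery_pct > 50:
        return "full-model"       # plenty of power: run the largest model
    if battery_pct > 20:
        return "quantized-model"  # constrained: run a compressed model
    return "tiny-model"           # critical battery: minimal fallback

# Conditions change in real time, and the policy responds.
mode = select_inference_mode(battery_pct=15, has_connectivity=False)
```

A data-center system rarely needs logic like this; an edge device re-evaluates it constantly as its environment shifts.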

Andrew Rittenbach, a lead scientist at ISI, explained a recent ISI project in which adaptive AI was used for wildfire mapping, making on-the-fly adjustments to process data quickly while ensuring the accuracy needed for practical use. He said: “Our model takes an image that’s collected by a satellite—from a camera or radar—and as output, it turns this highly complex image into a map of ones and zeros, where ones highlight where there’s fire, and zeros are where there’s no fire.” 
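Rittenbach describes the output format precisely: a binary map with ones where there is fire and zeros where there is not. The ISI system uses a trained model, but a toy thresholding stand-in (hypothetical threshold and data) shows the input-to-output shape he describes:

```python
def fire_mask(image, threshold=0.5):
    """Turn a 2D grid of pixel intensities into a binary fire map:
    1 where the value exceeds the threshold (fire), 0 elsewhere.
    A stand-in for a learned model, illustrating only the output format."""
    return [[1 if pixel > threshold else 0 for pixel in row] for row in image]

# A tiny 2x2 "image": bright pixels become 1s, dim pixels become 0s.
print(fire_mask([[0.1, 0.9], [0.8, 0.2]]))  # [[0, 1], [1, 0]]
```

The compact ones-and-zeros map is far smaller than the raw satellite image, which is what makes it practical to transmit quickly from the edge.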

In this scenario, the ISI team tested the model on previously captured satellite and drone data, demonstrating how edge AI can balance speed and accuracy to deliver critical updates to ground-based incident commanders, a showcase of its potential in disaster management.

Pushing the Limits of Edge AI: From Earth to Mars

Edge AI has also taken flight beyond Earth, with ISI researchers collaborating with NASA to validate its potential in space. In 2021, NASA flew a miniature robotic helicopter on Mars, powered by a Qualcomm Snapdragon processor. "We worked on the next generation of the Snapdragon processor used on the Mars helicopter. Working with NASA, we prototyped various AI algorithms for this analog of the Mars helicopter," Walters explained.

The Snapdragon processor, which includes an embedded GPU and digital signal processors (DSPs), allowed the team to explore the limits of what it could achieve in space. "We made interesting trade-offs between the GPUs and the DSPs and found that we could eke out more performance from the DSPs than the GPU, which was a bit of a surprise," Walters noted, demonstrating how ISI researchers are pushing edge AI to its limits, even in the harshest and most remote environments.

Looking Forward

As edge AI continues to advance, it promises to play an even larger role in our everyday lives. By bringing computational power closer to the data source, edge AI can improve the efficiency and effectiveness of technologies across a wide range of industries, from autonomous vehicles to wildlife monitoring. This shift will not only make AI smarter and faster but also extend its reach to areas where cloud connectivity is limited or non-existent.
