USC ISI Research Shows a Promising Future for Animal-like Computer Vision  

by Avery Anderson

Photo credit: Grafissimo/Getty Images

When you pull out your smartphone to photograph a magical sunset or a loved one’s smile, your camera freezes that moment in time. But even though a camera lens can capture vivid colors, textures, and fine details in a single click, it will always fall short of the remarkable processing power of the human eye.

Humans and animals have the ability not only to see, but to perceive. Our eyes, or rather our retinas, process information about our surroundings in real time and send signals back to our brains. Through cameras, robots can capture the events happening around them as we do, but they have not been able to perceive as we do, until now. The latest ISI research, now available on bioRxiv as “IRIS: Integrated Retinal Functionality in Image Sensors,” bridges this gap and makes retina-inspired computer vision a tangible reality.

The IRIS project was born out of the conjecture that embedding retina-like computations into image-sensing camera technology could allow machines to see as human and animal eyes do.

“Turns out that the animal retina is a highly evolved processing unit that performs complex computations important for survival, such as escaping a predator,” explains Akhilesh Jaiswal, research assistant professor of electrical and computer engineering at USC Viterbi and researcher at ISI.

For example, the human eye can recognize when an object is moving toward us and quickly trigger a response. Similarly, it can distinguish moving objects against a moving background. IRIS technology advances vision-based decision-making in machines by mimicking these known biological retinal processes.
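To give a flavor of the kind of computation the retina performs, here is a minimal software sketch of one such process, differential (object) motion sensing: comparing how much a small patch of the scene changes against how much the background changes. This is an illustrative toy in Python with assumed parameters (the patch size and scoring scheme are invented for the example), not the IRIS circuit or the authors’ algorithm:

```python
import numpy as np

def object_motion_score(prev_frame: np.ndarray,
                        curr_frame: np.ndarray,
                        center: tuple[int, int],
                        patch: int = 8) -> float:
    """Toy object-motion-sensing computation, loosely inspired by
    retinal circuits (illustrative only; not the IRIS design).

    Compares the temporal change inside a small center patch against
    the average change over the whole frame. A high score means the
    patch is moving differently from the background, the cue a retinal
    object motion sensor responds to.
    """
    # Per-pixel change between consecutive frames.
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    r, c = center
    center_change = diff[r:r + patch, c:c + patch].mean()
    background_change = diff.mean()
    # Differential motion: local change relative to global change.
    return center_change - background_change

# Example: a bright object shifts one pixel while the background is static.
prev = np.zeros((64, 64)); prev[30:34, 30:34] = 255
curr = np.zeros((64, 64)); curr[30:34, 31:35] = 255
print(object_motion_score(prev, curr, center=(28, 28)))  # clearly positive
```

In the IRIS vision, computations of this kind would run inside the image sensor itself rather than in downstream software.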

Recognizing the multidisciplinary nature of the project, Dr. Jaiswal sought help immediately after its conceptualization from retinal neuroscientist Dr. Gregory Schwartz, Derrick T. Vail Professor of Ophthalmology and associate professor of neuroscience at Northwestern University, to model the processing in downstream retinal circuits. The team also leveraged the potential 3D stacking of semiconductor chips to chalk out a pathway toward developing new retina-inspired sensors. These circuits could form the next generation of image sensor technology for machines and robots, enabling them to make decisions based on what they see, as we do. “Evolution has done an amazing job of optimizing animal eyes for high-performance vision. Designing new sensors that draw inspiration from the biological retina has the potential to initiate a paradigm shift in machine vision,” adds Schwartz.

One aspect that made this project particularly complex was the overlap of several scientific spheres. Ajey Jacob, Director of the Application Specific Integrated Circuits Lab at ISI, emphasized the interdisciplinary nature of the research: “We had to build a solution that combines understanding of living-state physics, biological computation, electrical engineering, and computer science algorithms in a cohesive manner.”

As a result, the ISI research team collaborated with experts across a wide range of fields. “One of the major challenges involved building a cohesive team of individuals who understood the biological computation of the retina, the corresponding electrical circuits and hardware, and the appropriate algorithm necessary to accomplish the task,” Jacob added.

This is just the tip of the iceberg: ISI’s IRIS project acts as a catalyst for future developments in vision-based decision-making in machines and robots. “This paves the way for unconventional vision learning in resource- and bandwidth-constrained environments, and is a strong step forward in our community’s pursuit of robust, energy-efficient decision-making,” explains Maryam Parsa, assistant professor of electrical and computer engineering at George Mason University and the bio-inspired algorithms expert on the team.

Doctoral students Zihan Yin and Md Abdullah-al-Kaiser were also part of this project.

Jaiswal highlighted that the implications of these findings are remarkable. “Our present work shows a scalable, commercially manufacturable pathway towards the design of novel IRIS cameras. It serves as a solid starting point and has wide application in vision-based decision-making, ranging from high-speed autonomous drones and robotics to self-driving cars.”

Up next for the IRIS project is expanding on this research and exploring an end-to-end prototype in which a machine or robot completes a complex visual task using signals generated by IRIS cameras. This new technology is only the beginning of a future of improved machine vision, one with the potential to transform the artificial intelligence and robotics space.

Published on September 29th, 2022

