Cloudwalkers Panel Explores What’s Next in Tech

On March 11, 2025, an extraordinary group of internet pioneers came together to discuss the future of technology. The conversation followed the premiere of Cloudwalkers, a documentary chronicling the history of USC’s Information Sciences Institute (ISI)—from its early days as a small research lab to its emergence as a force of innovation that helped shape the modern internet.
After the screening, visionaries featured in the film spoke to a packed audience in USC’s Ginsburg Hall: Vint Cerf, co-creator of TCP/IP; Paul Mockapetris, inventor of the Domain Name System (DNS); Eve Schooler, an IoT pioneer; Kevin Knight, an AI and machine translation pioneer; and Steve Crago, a microelectronics leader and associate director of ISI. Here are key takeaways from their discussion.
Cloudwalkers, directed by Daniel Druhora, is available to stream on Amazon Prime.
How can we use technology responsibly?
Vint Cerf: Perhaps the most important takeaway that I have from this documentary is the absolute necessity of pursuing research that will improve our ability to make use of these technologies without damaging our society. It should be very clear to all of you that in 2025, the internet, the World Wide Web, and all the applications that are sitting on it have become central to a very significant fraction of daily life and work around the world. The problem we run into, however, is that this same technology, which is amplifying and enabling useful and productive work, is also amplifying and enabling harmful behaviors.
We need to teach people—young people and old people and everybody in between—how to think critically about what they're seeing and hearing. It's not easy. You have to think hard. You have to ask questions: where did this information come from? Is there any supporting or corroborating evidence for claims that are being made? This is the price we pay for having access to this enormous amount of information coming from billions of people.
What’s the next big leap in AI and language technology?
Kevin Knight: There are a couple. Just like we built a chess grandmaster, we're going to build a mathematics grandmaster—better than any human mathematician. That's going to lead to fundamental breakthroughs that apply to all sciences.
Another thing is, I work with AI across a range of projects. At first, I'm the boss. I parcel out little tests. Later, the AI starts suggesting the next step. Eventually, I feel like it starts guiding the project and just asks me for a little bit of help. At some point, AI will take over all projects. We're seeing that happening already.
Another big thing that's coming is that AI is going to need a bank account to do these projects. It needs resources—just like you need resources to do your projects. You'll give it a certain amount of money, and it might say, “I don't have enough money for this project, so I'm going to go out and earn more.” Now, you’ve got an AI with its own bank account and a very reasonable subtask, which is to turn $1,000 into $5,000. Once they become grandmasters at that sort of task, we're in a whole new ball game.
ISI helped democratize chip design with its MOSIS service. What lessons from MOSIS apply to the AI chip revolution we're seeing now?
Steve Crago: I think there are a lot of lessons, and they apply not only to the AI chip revolution, but to the semiconductor industry in general because we’re nearing the end of Moore's Law. We’re still getting faster and smaller silicon, but things are changing. One of the things that MOSIS did in its original incarnation was allow designers and people with creative ideas to get access to chip foundries so that their ideas could be realized. The same thing is happening now with MOSIS 2.0.
As we look for replacements for the traditional silicon-based computing model, we need new ways of connecting designers and researchers with creative ideas. MOSIS 2.0 changes how researchers get their hardware prototypes built—and lets them build quickly. It’s important to iterate on your ideas, and to have lots of ideas that fail, before finding the ones that work.
What cutting-edge research isn't getting enough attention?
Eve Schooler: The thing that's really captured me in the past 10 years is sustainability. I feel that sustainability—much like security—is one of the design principles that is often overlooked at the outset. If you open up any newspaper right now, you will read that because of the popularity of AI, we can't build data centers fast enough. Yet we also can't deploy renewable energy quickly enough to power them. Almost all of our renewable energy credits in the United States are already being used by our existing data centers. And if energy use continues on this trajectory, we'll need to completely rethink how we generate and consume electricity.
This is why I think we need to be teaching sustainability, whether it's sustainable networks—which is my focus—or sustainable AI, hardware design, and circularity of materials. We need to move in this direction, both in how we teach students and in how we embed sustainability into the culture of research institutes and corporations.
Paul Mockapetris: I’m interested in trying to figure out how we're going to regulate these technologies. We've had several examples that are massive failures. For example, I think that most people don't understand how ad networks work to invade their privacy, and if they did, they'd be horrified. I know how it works, but I don't know how to do anything about it.
Every time you move your cursor over an ad, it gets distributed 900 times to, among other places, several sovereign nations that find out where your cursor moved. Today, that stuff's running all the time, it's very scary, and we haven't been able to regulate it. I think figuring out how to regulate these technologies, or how to bend the arc so that the common good is taken into consideration, is something that I would really like to see happen.