The flock responds to your motion using a lightweight evolutionary loop: move again to re-seed the flock, and every burst of motion spawns a fresh lineage.
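The loop itself isn't spelled out on the page, so below is a minimal sketch of how such a motion-seeded evolutionary flock could work. Everything in it, the `Boid` and `Genome` types, the `evolve` function, the fitness and mutation rules, and the pointer handler, is an illustrative assumption, not the widget's actual code.

```typescript
// Hypothetical sketch of a motion-seeded evolutionary flock.
// None of these names come from the site's actual implementation.

type Genome = { cohesion: number; alignment: number; cursorPull: number };

interface Boid {
  x: number; y: number;   // position
  vx: number; vy: number; // velocity
  genome: Genome;         // heritable steering weights
}

type Point = { x: number; y: number };

// Fitness: boids that end the generation closer to the cursor score higher.
function fitness(b: Boid, cursor: Point): number {
  return -Math.hypot(b.x - cursor.x, b.y - cursor.y);
}

// Small random perturbation of one heritable weight.
function mutate(v: number, rate = 0.1): number {
  return v + (Math.random() * 2 - 1) * rate;
}

// One generation: rank by fitness, keep the top half as parents,
// and refill the flock with mutated offspring spawned at the cursor.
function evolve(flock: Boid[], cursor: Point): Boid[] {
  const ranked = [...flock].sort(
    (a, b) => fitness(b, cursor) - fitness(a, cursor)
  );
  const parents = ranked.slice(0, Math.max(1, Math.floor(ranked.length / 2)));
  return ranked.map((_, i) => {
    const p = parents[i % parents.length];
    return {
      x: cursor.x, // re-seed the lineage at the cursor
      y: cursor.y,
      vx: Math.random() - 0.5,
      vy: Math.random() - 0.5,
      genome: {
        cohesion: mutate(p.genome.cohesion),
        alignment: mutate(p.genome.alignment),
        cursorPull: mutate(p.genome.cursorPull),
      },
    };
  });
}

// Each burst of pointer motion triggers a new generation.
// (Assumes a browser environment; `flock` is module state here.)
let flock: Boid[] = Array.from({ length: 64 }, () => ({
  x: 0, y: 0, vx: 0, vy: 0,
  genome: { cohesion: 1, alignment: 1, cursorPull: 1 },
}));

window.addEventListener("pointermove", (e) => {
  flock = evolve(flock, { x: e.clientX, y: e.clientY });
});
```

Selection here is simple truncation (keep the fittest half, refill with mutated copies); any scheme with heritable, mutating steering weights would produce the same spawn-a-fresh-lineage behavior the caption describes.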
I am a third-year PhD student in the Camera Culture group at the MIT Media Lab, advised by Ramesh Raskar. I also work with Brian Cheung and Tomaso Poggio's group. I received my S.M. from MIT in 2023, where I worked on 3D vision and imaging, specifically on incorporating realistic physics of light propagation to improve neural rendering. I received my B.S. in ECE from the University of Illinois at Urbana-Champaign in 2019. Prior to MIT, I worked on vision and robotics and built the Software 2.0 stack at Optimus Ride, a self-driving startup.
My current research creates rich, physics-grounded worlds where AI agents interact and evolve, discovering visual strategies that give rise to intelligent behavior.
The goal is to understand and design intelligence, in both science and AI, by letting it emerge through embodiment and interaction. I am therefore broadly interested in artificial intelligence, computer vision (and vision science), evolution, and reinforcement learning.
I’m also passionate about public engagement and science advocacy.
Recently, I have wanted to show that AI can be more than a tool for automation: it can also be used to understand human and animal perception. I've exhibited work at the MIT Museum and the Museum of Science.
I am grateful to be the first in my extended family to be in a PhD program. To learn more about how to apply to PhD programs, please look at the Media Lab’s SOS Program and the EECS Graduate Application Assistance Program (GAAP). Reach out to chat or collaborate.