I am an Assistant Professor of Computer Science at Stanford University and director of the Interactive Perception and Robot Learning Lab. My research asks two central questions: What are the principles underlying robust sensorimotor coordination in humans, and how can we implement them in robots? Addressing these questions requires working at the intersection of robotics, machine learning, and computer vision. In my lab, we focus particularly on robotic grasping and manipulation.
Prospective students and post-docs, please see this page.
I want robots to be able to step out of the lab and become truly useful in the real world—helping in our homes, hospitals, warehouses, and beyond. To get there, we need to understand the principles behind robust sensorimotor coordination in humans and figure out how to build them into machines. That quest is what drives my research.

Today, I’m an Assistant Professor of Computer Science in the Stanford AI Lab and director of the Interactive Perception and Robot Learning Lab, where my team works at the intersection of robotics, machine learning, and computer vision. Before coming to Stanford, I led a research group in the Autonomous Motion Department at the Max Planck Institute for Intelligent Systems—home to my favorite robot, Apollo.

My path into robotics started with a Diploma in Computer Science from the Technical University of Dresden (think of it as today’s coterm) and an M.Sc. in Art and Technology from Chalmers University of Technology in Gothenburg—an incredibly fun and creatively inspiring detour. I then completed my Ph.D. in the Division of Robotics, Perception and Learning at KTH in Stockholm, where I developed new methods for multi-modal scene understanding in robotic grasping.

We still have a long way to go before robots can match the adaptability and skill of human hands—but every experiment brings us closer. And in my lab, we’ll keep searching until the day our robots can take on the messy, unpredictable, wonderfully complicated world outside.
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University, where she leads research at the intersection of robotics, machine learning, and computer vision with a focus on autonomous robotic manipulation. Her lab aims to uncover the principles of robust sensorimotor coordination and to implement them on real robots. Before joining Stanford, she was a group leader in the Autonomous Motion Department (AMD) at the Max Planck Institute for Intelligent Systems from 2012 to 2017. She earned her Ph.D. in the Division of Robotics, Perception and Learning (RPL) at KTH Royal Institute of Technology in Stockholm, where her thesis introduced novel methods for multi-modal scene understanding in robotic grasping. She also studied at Chalmers University of Technology in Gothenburg and the Technical University of Dresden, receiving an M.Sc. in Art and Technology and a Diploma in Computer Science, respectively. Bohg’s work has been recognized with multiple Early Career and Best Paper awards, including the 2019 IEEE Robotics and Automation Society Early Career Award, the 2020 Robotics: Science and Systems Early Career Award, and the 2023 Sloan Research Fellowship.