
Autonomy that interacts, that collaborates and coexists with humans, is becoming increasingly capable and is making its way out of research labs and into industry. Our goal is to weave interaction into the very fabric of this autonomy. We envision riding in a self-driving car as it effectively coordinates with other drivers on the road and with pedestrians. We envision people with disabilities seamlessly operating assistive devices to thrive independently. And we envision collaborative robots in the home or in the factory helping us with our tasks and even gently guiding us toward better ways of achieving them.

Autonomously generating such behavior demands a deep understanding of interaction. We work at the intersection of robotics, machine learning, and cognitive science with the goal of enabling this understanding.

Despite successes in robot function, robot interaction with people lags behind. We can plan or learn to autonomously navigate and even manipulate in the physical world, yet when it comes to people we still rely on hand-designed strategies: an autonomous car inching forward at a 4-way stop by just the right amount so that other drivers will let it pass, or a home robot tilting an object in a particular way to hand it over to a person. What makes interaction harder is that it is no longer about just one agent and the physical world. It involves a second agent, the human, whose physical and internal state should influence what the robot does.

Our thesis is that in order to autonomously generate behavior that works for interaction and not just function, robots need to formally integrate human state into their models, learners, and planners. We formalize interaction by building on game theory and the theory of dynamical systems, focusing on finding or learning models of human behavior, and computing with them tractably in continuous state and action spaces.

Accounting for Physical Human State

Robots will need to share physical space with people: share the road, a workspace, a home. Today, humans tend to be mere obstacles to robots. For instance, cars estimate a human-driven vehicle's motion and plan to stay out of its way; factory robots detect workers and plan around them. This model of people leads to avoidance, but not to coordination. It leads to ultra-conservative robots, always afraid of being in the way, responding to what people do but never expecting people to respond back. Yet people do respond: robot actions influence the physical actions that a person ends up taking, be it on the road or in a collaborative manipulation task. Our goal is to enable robots to leverage this influence for more effective coordination.

Viewed through this lens, interaction becomes an underactuated dynamical system: the robot's actions no longer affect only the robot's physical state; they indirectly affect the human's as well. A core part of our research is dedicated to formulating interaction this way, learning a dynamics model for the system -- a model of human behavior in response to robot actions -- and finding approximations that make planning in such a system tractable despite the continuous state and action spaces. We have shown that this makes robots less defensive, e.g. a car will sometimes merge in front of a human-driven vehicle, expecting that the driver will accommodate the merge by slightly slowing down. More interestingly, we have found that coordination strategies emerge from planning in this system that we would otherwise need to hand-craft, like inching backwards at a 4-way stop to encourage a human to proceed first.
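One minimal way to write this formulation down (the notation here is illustrative, not the exact one used in the papers): the joint state evolves under both agents' actions, the human's action is modeled as approximately optimizing their own reward given the robot's action, and the robot plans through that predicted response.

```latex
% Joint dynamics: robot action u_R and human action u_H both drive the state
x_{t+1} = f\big(x_t, u_R^t, u_H^t\big)

% Human response model: the human (approximately) optimizes their own reward
u_H^*(x, u_R) = \arg\max_{u_H} R_H(x, u_R, u_H)

% Robot planning: optimize the robot's reward through the predicted response
u_R^* = \arg\max_{u_R} R_R\big(x, u_R, \, u_H^*(x, u_R)\big)
```

Because $u_H^*$ depends on $u_R$, the human's state becomes an indirectly actuated part of the system, which is what lets strategies like merging in front of a driver or backing off at an intersection fall out of the optimization rather than being hand-crafted.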


  • We used this formalism to study the case in which the human and the robot have different objectives (e.g. an autonomous car interacting with other drivers on the road), showing that it makes the robot more efficient and better at coordinating with people.
  • For collaborative tasks, like robot-to-human handovers, we have shown that this formalism enables the robot to generate actions that help the person arrive at a better plan for the task.
  • We showed that the robot can leverage the person's reactions to its own actions to actively estimate the human behavior model, e.g. a car will plan to nudge into a person's lane or inch forward at an intersection to probe whether the person is distracted.
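As a concrete toy instance of the ideas above (the action sets and reward numbers below are hypothetical, chosen only for illustration), the robot can predict the human's best response to each candidate action and then pick the action that does best under that prediction:

```python
# Toy sketch of planning through a human-response model at a 4-way stop.
# Action sets and reward values are hypothetical, chosen for illustration.

ROBOT_ACTIONS = ["inch_back", "wait", "inch_forward"]
HUMAN_ACTIONS = ["go", "yield"]

def human_reward(u_r, u_h):
    # The human prefers to go when the robot backs off, and to yield
    # when the robot is already creeping into the intersection.
    return {
        ("inch_back", "go"): 2.0,     ("inch_back", "yield"): 0.0,
        ("wait", "go"): 1.0,          ("wait", "yield"): 0.5,
        ("inch_forward", "go"): -1.0, ("inch_forward", "yield"): 1.0,
    }[(u_r, u_h)]

def robot_reward(u_r, u_h):
    # The robot's best outcome here is resolving the standoff quickly
    # by getting the human to commit to going first.
    return {
        ("inch_back", "go"): 1.5,     ("inch_back", "yield"): -1.0,
        ("wait", "go"): 1.0,          ("wait", "yield"): -0.5,
        ("inch_forward", "go"): -2.0, ("inch_forward", "yield"): 1.0,
    }[(u_r, u_h)]

def predict_human(u_r):
    # Model the human as best-responding to the robot's action.
    return max(HUMAN_ACTIONS, key=lambda u_h: human_reward(u_r, u_h))

def plan():
    # The robot optimizes its reward *through* the predicted response.
    return max(ROBOT_ACTIONS, key=lambda u_r: robot_reward(u_r, predict_human(u_r)))

print(plan())  # -> 'inch_back': the coordination strategy emerges from the model
```

Under an obstacle-style model that ignores `predict_human`, the robot would never discover that backing off gets it through the intersection sooner; the strategy emerges only because the human's response is part of the system being optimized.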

Accounting for Internal Human State

In some cases, the robot needs to go beyond incorporating the physical human state into its model and start accounting for internal state that it cannot directly observe, like the human's preferences or their beliefs about the robot.
  • Our previous work enabled robots to estimate people's goals from their motion, and to convey their own goals to people via legible motion.
  • We are investigating the rich communication implicit in action: how motion, and robot action more generally, can convey not just goals, but also properties, objectives, and capabilities. We worked on inferring an operator's desired constraints during shared autonomy, and are currently working on expressing emotion under our CITRIS grant.
  • We are working on coordination of task plans: anticipating the human's plan and adapting to it, as well as generating robot plans that are easy to anticipate. [in progress]
  • Collaborative Learning

    Typically, robot learning assumes the robot passively observes a human expert performing the task. We are instead formulating learning as an interaction between a teacher and a learner.
    • We developed a two-player, partial-information game formulation that enables the robot to account for the fact that humans act differently when they teach than when they perform the task. We call it Cooperative Inverse Reinforcement Learning.
    • We are working on bidirectional learning, in which the robot also needs to teach the human -- about its capability, and about what it has learned. [in progress]
    • We developed tools for reducing the burden on the human teacher by asking for examples only when they are actually needed: not asking when the robot is confident, and not asking when analytical models still suffice.
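A minimal sketch of the "ask only when needed" idea in the last bullet (function names and feature values are hypothetical, not the papers' implementation): keep a set of reward hypotheses consistent with past demonstrations, and query the teacher only when those hypotheses disagree about the best action.

```python
# Sketch of querying the teacher only when needed (names and numbers
# are hypothetical). The robot keeps reward hypotheses consistent with
# past demonstrations and asks for a new example only when those
# hypotheses disagree about what to do.

def best_action(weights, actions, features):
    # Score each action under one linear reward hypothesis: w . phi(a).
    return max(actions, key=lambda a: sum(w * f for w, f in zip(weights, features[a])))

def needs_query(hypotheses, actions, features):
    # Confident -> every surviving hypothesis picks the same action.
    picks = {best_action(w, actions, features) for w in hypotheses}
    return len(picks) > 1

actions = ["left", "right"]
features = {"left": (1.0, 0.0), "right": (0.0, 1.0)}

# Two hypotheses that disagree: the robot should ask for a demonstration.
hypotheses = [(1.0, 0.2), (0.2, 1.0)]
print(needs_query(hypotheses, actions, features))      # True -> ask

# Once a demonstration rules out the second hypothesis, no query is needed.
print(needs_query(hypotheses[:1], actions, features))  # False -> stay quiet
```

The same gate generalizes beyond disagreement between hypotheses: the robot can also skip the query whenever an analytical model already predicts the answer, which is the other condition named in the bullet above.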