Mode Switch Assistance for Robotic Manipulation

In this work, we developed an algorithm for goal disambiguation with a shared-control assistive robotic arm. Assistive systems are often required to infer human intent, and this inference frequently becomes a bottleneck to providing assistance quickly and accurately. We introduce the notion of inverse legibility, in which the human-generated actions are legible enough for the robot to infer the human's intent confidently and accurately.

The proposed disambiguation paradigm seeks to elicit legible control commands from the human by selecting control modes that maximally disambiguate between the various goals in the scene. Simulations were conducted to study the robustness of our algorithm and the impact of the choice of confidence function on system performance. Our simulation results suggest that the disambiguating control mode computed by our algorithm produces more intuitive results when the confidence function is able to capture the “directedness” of motion towards a goal. A pilot study was also conducted to explore the efficacy of the algorithm on real hardware. Preliminary results indicated that the proposed assistance paradigm successfully decreased task effort, measured as the number of mode switches, across interfaces and tasks.
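The mode-selection idea above can be sketched in code. The snippet below is a minimal illustration, not the implementation used in this work: it assumes a cosine-similarity "directedness" confidence function and a simple spread-of-confidences disambiguation score, both of which are placeholders for whatever confidence function the system actually uses. The function names (`directedness_confidence`, `disambiguation_score`, `best_control_mode`) are invented for this sketch.

```python
import numpy as np

def directedness_confidence(ee_pos, velocity, goal, eps=1e-9):
    """Illustrative confidence function: cosine of the angle between the
    commanded velocity and the straight line from the end effector to a goal
    ("directedness"), clamped at zero for motion away from the goal."""
    to_goal = goal - ee_pos
    return max(0.0, float(np.dot(velocity, to_goal)) /
               (np.linalg.norm(velocity) * np.linalg.norm(to_goal) + eps))

def disambiguation_score(ee_pos, goals, axis, step=0.01):
    """Spread of goal confidences if the user nudges the end effector along
    +/- axis. A large spread means motion along this axis separates goals."""
    best = 0.0
    for sign in (+1.0, -1.0):
        v = sign * axis * step
        conf = [directedness_confidence(ee_pos, v, g) for g in goals]
        best = max(best, max(conf) - min(conf))
    return best

def best_control_mode(ee_pos, goals, modes):
    """Pick the control mode (a named set of control axes) whose axes
    maximally disambiguate among the candidate goals."""
    scores = {name: max(disambiguation_score(ee_pos, goals, ax) for ax in axes)
              for name, axes in modes.items()}
    return max(scores, key=scores.get), scores
```

For example, with two goals on either side of the end effector along the x-axis, an x-translation mode scores high (moving reveals the intended goal) while a y-translation mode scores zero, so the sketch selects the x mode, mirroring the behavior the paradigm is designed to elicit.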

Currently, a more robust inference engine based on Bayesian inference and Dynamic Field Theory is being designed to further improve system performance.


Illustration of goal disambiguation along various control dimensions. A and B indicate two point goal locations, and the robot end effector is at location C. Any motion of the end effector along the y-axis does not help the system disambiguate between the two goals; that is, the motion is not legible. Motion along the x-axis, however, allows the goal to be inferred immediately from the direction in which the robot moves.
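The figure's scenario can be checked numerically. The snippet below is an illustrative sketch with assumed coordinates (goals A and B on the x-axis, end effector C offset along y) and an assumed cosine-style directedness confidence; neither is taken from the paper itself.

```python
import numpy as np

# Assumed coordinates for the figure's scenario: goals A and B on the
# x-axis, end effector C offset along the y-axis, equidistant from both.
A = np.array([1.0, 0.0])
B = np.array([-1.0, 0.0])
C = np.array([0.0, 1.0])

def conf(v, goal):
    """Illustrative "directedness" of velocity v toward a goal from C:
    cosine similarity, clamped at zero for motion away from the goal."""
    d = goal - C
    return max(0.0, float(np.dot(v, d)) /
               (np.linalg.norm(v) * np.linalg.norm(d)))

# Motion along x separates the goals; motion along y keeps them tied.
for name, v in [("+x", np.array([1.0, 0.0])), ("-x", np.array([-1.0, 0.0])),
                ("+y", np.array([0.0, 1.0])), ("-y", np.array([0.0, -1.0]))]:
    print(name, "conf(A) =", round(conf(v, A), 2), "conf(B) =", round(conf(v, B), 2))
```

Motion along either y-direction yields identical confidences for A and B (the goals remain ambiguous), while motion along x immediately favors one goal over the other, which is exactly the legibility asymmetry the figure describes.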