Department for Automation, Biocybernetics and Robotics





Robot motion synthesis through human visuo-motor learning

Project duration: 2009 - 2012
Project area: Automation and Intelligent Control of Robots
Project type: Research
Project funding: Research Agency
Project leader: Jan Babič
Coworkers:
Luka Peternel

Abstract

Development of a novel framework for robot motion synthesis that exploits the human capacity for visuo-motor learning.

Project description

One of the most straightforward methods to achieve human-like motion on humanoid robots is to transfer the motion from a human demonstrator: one captures the desired motion of a human subject and maps it to the kinematic structure of the robot. Because the dynamical properties of the humanoid robot differ from those of the human demonstrator, the success of this approach with regard to the stability of the humanoid robot depends on an ad-hoc mapping implemented by the researcher. The majority of everyday human movements are not statically stable slow motions but fast, dynamically stable motions. To transfer such motions to a humanoid robot we have to consider, besides the kinematics of the motion, other parameters that are crucial for the balance and dynamic stability of the robot. As is usually the case, the researcher works out the details of the mapping so that the transferred action is stable on the robot.

Here we propose a very different approach. We use the human demonstrator's real-time action to control the humanoid robot and, in the process, to build an appropriate mapping between the human and the humanoid robot. This effectively creates a closed-loop system in which the human subject actively controls the humanoid robot's motion in real time, with the requirement that the robot stays stable. The human subject can readily satisfy this requirement because of the human brain's ability to learn to control novel tools: the robot controlled by the demonstrator is akin to a tool such as a car or a snowboard when one uses it for the first time.

This setup requires the humanoid robot's state to be fed back to the human. We envision different types of visual feedback that provide the demonstrator either with the humanoid robot's own view or with a view of the humanoid robot in isometric projection. In addition, a mechanical type of feedback will be established that enables the demonstrator to feel the balance of the humanoid robot while controlling it.
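To make the closed-loop architecture concrete, the sketch below outlines one possible structure for such a control loop in Python. It is only an illustration under assumed names: the joint set, the joint limits, and the capture, command, and feedback interfaces are hypothetical placeholders, not the interfaces actually used in the project.

    # Minimal sketch of the closed-loop motion transfer described above.
    # All names below (joints, limits, capture/command/feedback hooks) are
    # hypothetical placeholders, not the project's actual interfaces.
    import math
    import time

    # Hypothetical robot joint limits in radians; a real humanoid exposes these.
    ROBOT_JOINT_LIMITS = {
        "hip_pitch": (-1.0, 1.0),
        "knee_pitch": (0.0, 2.2),
        "ankle_pitch": (-0.8, 0.8),
    }

    def capture_human_posture():
        """Stand-in for a motion-capture read: joint angles in radians."""
        t = time.time()
        return {
            "hip_pitch": 0.4 * math.sin(t),
            "knee_pitch": 1.0 + 0.5 * math.sin(t),
            "ankle_pitch": 0.2 * math.sin(t),
        }

    def retarget(human_angles, gain=1.0):
        """Map captured human angles onto the robot's kinematic structure.

        Here the mapping is a per-joint scaling clamped to the robot's joint
        limits; in the proposed approach it is the demonstrator, closing the
        loop through feedback, who compensates for the dynamic mismatch.
        """
        return {joint: min(hi, max(lo, gain * human_angles[joint]))
                for joint, (lo, hi) in ROBOT_JOINT_LIMITS.items()}

    def control_loop(steps, dt=0.01):
        """One demonstrator-in-the-loop cycle per step: sense, map, act, feed back."""
        for _ in range(steps):
            command = retarget(capture_human_posture())
            # send_to_robot(command)        # hypothetical robot command interface
            # render_feedback(read_state()) # visual/haptic feedback to demonstrator
            time.sleep(dt)

    if __name__ == "__main__":
        control_loop(steps=5)

The point of the sketch is the loop structure rather than the mapping itself: because the demonstrator perceives the robot's state through the feedback channel and adapts, the initial mapping can be simple and be refined online.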

Notes

In collaboration with Erhan Oztop, ATR Brain Information Communication Research Laboratory Group

Publications

Article

  • Babič J., Hale J., Oztop E., Human sensorimotor learning for humanoid robot skill synthesis, 2011.