This work package will provide the methods for recognizing and interpreting the activities of a human partner and for transferring human skills to robots, as required for intuitive multimodal interaction and implicit coordination between a robot co-worker or companion and a human user. The tasks in this work package address three problems: gesture, motion, and interaction recognition, using stochastic models (e.g., HMMs) that capture both the spatial and the temporal variability of a movement and classify an observed sequence by finding the most likely primitive; future movement prediction, covering both the prediction of discrete motion primitives and of continuous trajectories; and context-based task reasoning, using a joint probabilistic model (e.g., a Bayesian logic network, BLN) together with a task model that associates objectives, actions, and perception.
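
As a minimal sketch of the recognition step described above, the following Python example trains one Gaussian-emission HMM per motion primitive and labels a new observation sequence with the primitive whose model assigns it the highest likelihood. It uses the hmmlearn library as one possible HMM implementation; the primitive names, data shapes, and helper functions are illustrative assumptions, not part of the work package itself.

```python
# Sketch: maximum-likelihood motion-primitive recognition with HMMs.
# One GaussianHMM is trained per primitive from demonstration trajectories;
# a new sequence is assigned to the primitive whose model scores it best.
# Primitive names ("reach", "wave") and data shapes are hypothetical.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_primitive_models(demos, n_states=4):
    """demos: dict mapping primitive name -> list of (T_i, D) trajectories."""
    models = {}
    for name, trajectories in demos.items():
        X = np.concatenate(trajectories)          # stack all demonstrations
        lengths = [len(t) for t in trajectories]  # per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def recognize(models, observation):
    """Return the primitive whose HMM gives the observation the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(observation))

# Illustrative usage with synthetic 3-D trajectories:
rng = np.random.default_rng(0)
demos = {
    "reach": [rng.normal(0.0, 0.1, size=(40, 3)) for _ in range(5)],
    "wave":  [rng.normal(1.0, 0.1, size=(40, 3)) for _ in range(5)],
}
models = train_primitive_models(demos)
print(recognize(models, rng.normal(1.0, 0.1, size=(40, 3))))  # likely "wave"
```

Because each primitive model captures both spatial variability (in its Gaussian emissions) and temporal variability (in its state transitions), the same per-model likelihoods could also serve as a starting point for predicting the next discrete primitive, although continuous trajectory prediction and BLN-based task reasoning would require separate machinery.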