Organisation: Technische Universität Darmstadt
Address: FG-IAS, Hochschulstr. 10, 64289 Darmstadt, Germany
Contact: Jan Peters (Tel: +49-6151-167351, mail(at)jan-peters.net)
- Oliver Kroemer (TU-Da)
- Rudolf Lioutikov (TU-Da)
- Guilherme Maeda (TU-Da)
The challenger team from the Intelligent Autonomous Systems Group (IAS) based at the Technische Universitaet Darmstadt has a clear goal of bringing advanced motor skills to robotics using techniques from machine learning and control.
The IAS team has focused on creating autonomous robots that can learn to assist humans in a variety of situations, both in industrial scenarios and at home. While this aim has been a long-standing vision of artificial intelligence and the cognitive sciences, we have yet to create robots that can learn to accomplish many different tasks triggered by environmental context or higher-level instruction. The goal of our robot learning laboratory is to investigate the ingredients for such a general approach to motor skill learning, moving closer to human-like performance in robotics. We thus focus on solving basic problems in robotics while developing domain-appropriate machine-learning methods.
We are particularly interested in reinforcement learning, where we strive to push the state of the art forward and have received tremendous support from the RL community. Much of our research relies on learning motor primitives that can be used to learn both elementary tasks and complex applications.
The team has a history of successful developments and applications of motor skill learning, particularly using reinforcement learning methods, which will be essential for solving the tasks proposed by the challenges. We have proposed and validated our methods for improving the skills of robots executing highly dynamic tasks, learning to compose tasks with primitives, and also for grasping and manipulation.
Closely related to the EuRoC goals, and particularly to Challenge 1, the participants of the IAS team have been investigating semi-autonomous assistive robots and interaction learning. We envision a wide range of applications with an industrial and manufacturing emphasis, such as the assembly of products in factories and shared control in tele-operated processes. We have recently shown how imitation learning of simple demonstrations can be sequenced to generate complex manipulation tasks. We also investigate novel methods to model collaborative interaction. Such a model is used to predict the intention and movement of the human while simultaneously generating collaborative control actions for the robot.
One of the main characteristics of the IAS lab is its vast experience in implementing the proposed algorithms on real robots using safe and compliant control. This expertise is of great importance for the challenge, as the evaluation of the solutions for the EuRoC will be based on real demonstrations. Together with this practical expertise, we believe that the composition of the IAS team brings together the necessary skills in motor skill learning, manipulation and grasping, and collaborative and semi-autonomous robots to solve Challenge 1 with novel and high-impact solutions. In particular, the IAS team has strong competencies to investigate and propose solutions for the following problems posed by the EuRoC: multi-role multi-arm collaborative robot systems, safe and effective human-robot collaboration, and compliant manipulation and grasping.
- Peters, J.; Schaal, S. (2008). Reinforcement learning of motor skills with policy gradients, Neural Networks, 21, 4, pp. 682-697.
- Peters, J.; Kober, J.; Muelling, K.; Kroemer, O.; Neumann, G. (2013). Towards Robot Skill Learning: From Simple Skills to Table Tennis, Proceedings of the European Conference on Machine Learning (ECML), Nectar Track.
- Kroemer, O.; Detry, R.; Piater, J.; Peters, J. (2010). Combining Active Learning and Reactive Control for Robot Grasping, Robotics and Autonomous Systems, 58, 9, pp. 1105-1116.
- Piater, J.; Jodogne, S.; Detry, R.; Kraft, D.; Krueger, N.; Kroemer, O.; Peters, J. (2011). Learning Visual Representations for Perception-Action Systems, International Journal of Robotics Research, 30, 3, pp. 294-307.
- Kroemer, O.; Ugur, E.; Oztop, E.; Peters, J. (2012). A Kernel-based Approach to Direct Action Perception, Proceedings of the International Conference on Robotics and Automation (ICRA).
- Lioutikov, R.; Kroemer, O.; Peters, J.; Maeda, G. (2014). Learning Manipulation by Sequencing Motor Primitives with a Two-Armed Robot, Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS).
- Ben Amor, H.; Neumann, G.; Kamthe, S.; Kroemer, O.; Peters, J. (2014). Interaction Primitives for Human-Robot Cooperation Tasks, Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA).