UNIBO-DEI-DISI

Organisation: DEI - Università di Bologna

Website: http://www-lar.deis.unibo.it/

Address: Viale del Risorgimento 2, 40136 Bologna, Italy

Contact: Prof. Claudio Melchiorri (Tel: +39 051 20 93034, claudio.melchiorri(at)unibo.it )


Other Staff:

  • Federico Tombari (DISI - CVLab)
  • Gianluca Palli (DEI - LAR)
  • Daniele De Gregorio (DISI - CVLab)
  • Alberto Pepe (DEI - LAR)
  • Lorenzo Moriello (DEI - LAR)
  • Umberto Scarcia (DEI - LAR)
  • Luigi Di Stefano (DISI - CVLab)


Team Profile:

The team is a cross-disciplinary group including people from the Laboratory of Automation and Robotics (LAR) at the Department of Electrical, Electronic and Information Engineering (DEI) as well as from the Computer Vision Lab (CVLab) at the Department of Computer Science and Engineering (DISI) of the University of Bologna. The team's expertise covers, on the one hand, robotic manipulator control and, on the other, computer vision and robotic perception.

In particular, the team members from LAR have wide experience in trajectory generation and control for industrial robots and robots with variable stiffness joints, in the design and control of dexterous manipulation devices such as robotic hands, and in telemanipulation and haptic interfaces. The team members from CVLab are particularly expert in fundamental computer vision tasks such as object detection, tracking and feature matching, as well as in specific robotic perception tasks such as 3D data representation, 3D object recognition, point cloud registration, 3D semantic segmentation and SLAM.

The UNIBO-DEI-DISI team is thus characterized by a strong complementarity of the knowledge needed to tackle the problems associated with Challenge 2. Indeed, this challenge requires the team to study and solve two main aspects: on one side, the identification of the scene and of the task-relevant parameters by means of the vision system, a problem that will be faced by the members with experience in robotic perception (CVLab); on the other side, the planning and control of the robot actions needed to accomplish the desired tasks, an aspect that will be addressed by the members with experience in robotic manipulator control (LAR).

In particular, the people from CVLab will exploit specific techniques relying on both object shape and color [1][2] to recognize the objects of interest in the scene, as required by task 1, and to reliably estimate the 6-degree-of-freedom (DoF) pose of the objects, as required by tasks 2 and 3. In task 4, the vision system will also be exploited to detect the obstacles in the scene, providing a 3D map of the forbidden workspace regions, whereas in task 5 the visual data will be used together with specific techniques [1] to localize the objects to be assembled, the walls delimiting the workspace and the desired object location. In task 6, the vision system will be used to estimate the object pose while on the conveyor belt, and a dedicated tracking method will be deployed to estimate the conveyor speed and to track the objects during the pick-and-place operations. A minimal sketch of the pose-alignment step underlying several of these tasks is given below.
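The 6-DoF pose estimation mentioned above ultimately reduces to rigidly aligning a known object model to the observed scene data. As a minimal illustrative sketch (not the pipeline of [1], which adds 3D feature matching and global hypothesis verification on top), the following Python snippet recovers a pose from already-matched 3D point correspondences via the standard SVD-based (Kabsch) least-squares solution; the function and variable names are ours, chosen for the example.

    import numpy as np

    def rigid_pose_from_correspondences(model_pts, scene_pts):
        # Least-squares rigid alignment (Kabsch/SVD) of matched 3D points:
        # both inputs are (N, 3) arrays, row i of one matched to row i of
        # the other. Returns (R, t) such that scene ~= model @ R.T + t.
        cm = model_pts.mean(axis=0)                # model centroid
        cs = scene_pts.mean(axis=0)                # scene centroid
        H = (model_pts - cm).T @ (scene_pts - cs)  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # reject reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = cs - R @ cm
        return R, t

    # Toy check: transform a synthetic model cloud and recover the pose.
    rng = np.random.default_rng(0)
    model = rng.normal(size=(100, 3))
    a = np.pi / 6                                  # 30 deg about z
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    t_true = np.array([0.5, -0.2, 0.8])
    scene = model @ R_true.T + t_true
    R_est, t_est = rigid_pose_from_correspondences(model, scene)
    assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)

In a real perception pipeline the correspondences would come from 3D feature matching on noisy data, and the resulting estimate would typically be refined (e.g. by ICP) and verified against the scene before being passed to the manipulation stage.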

The people from LAR will exploit their expertise in trajectory planning for robotic manipulators [3], together with techniques for evaluating the grasp conditions based on the manipulator joint torque sensors and the table surface friction, to solve the pick-and-place operations in tasks 1 and 2 [4]. In task 3, the motion of the arm base over the table surface will be exploited both to approach objects that lie outside the arm workspace and to reduce the task execution time and the effort of the arm, by optimizing the manipulability ellipsoids with respect to the planned trajectory and the object position relative to the arm. The arm redundancy (both in terms of base motion and of joint-space redundancy) will be exploited in task 4 to avoid the obstacles in the scene while satisfying the primary task, i.e. the pick-and-place operations [5]. In task 5, the manipulator joint torque sensors will be exploited to implement a workspace impedance or hybrid force/position controller for the execution of the assembly task [6]. In task 6, the manipulator trajectory will be designed according to the motion of the conveyor belt, and the grasp conditions will again be evaluated by exploiting the joint torque sensors and the surface friction. A sketch of the simplest trajectory-generation building block follows this paragraph.
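Reference [3] covers a broad family of trajectory-generation techniques; as a minimal illustration of the simplest building block (a rest-to-rest point-to-point motion, not the team's actual planner), the sketch below samples a quintic polynomial profile with zero velocity and acceleration at both ends.

    import numpy as np

    def quintic_trajectory(q0, q1, T, n_samples=100):
        # Rest-to-rest quintic polynomial from q0 to q1 over duration T:
        # zero velocity and acceleration at both ends (the classic
        # 10-15-6 profile). Returns time, position, velocity, acceleration.
        t = np.linspace(0.0, T, n_samples)
        s = t / T                                  # normalized time in [0, 1]
        h = q1 - q0                                # total displacement
        q = q0 + h * (10 * s**3 - 15 * s**4 + 6 * s**5)
        qd = (h / T) * (30 * s**2 - 60 * s**3 + 30 * s**4)
        qdd = (h / T**2) * (60 * s - 180 * s**2 + 120 * s**3)
        return t, q, qd, qdd

    # Example: move one joint from 0 rad to 1.2 rad in 2 s.
    t, q, qd, qdd = quintic_trajectory(0.0, 1.2, 2.0)
    assert np.isclose(q[0], 0.0) and np.isclose(q[-1], 1.2)
    assert np.isclose(qd[0], 0.0) and np.isclose(qd[-1], 0.0)

The same profile is applied per joint; coordinated multi-joint motions, obstacle avoidance and the conveyor-synchronized trajectories of task 6 build further machinery (time scaling, moving targets, redundancy resolution) on top of primitives of this kind.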

  1. A. Aldoma, F. Tombari, L. Di Stefano, M. Vincze, "A Global Hypothesis Verification Method for 3D Object Recognition", European Conference on Computer Vision (ECCV), 2012.
  2. RGB-D Object Recognition from Kinect Data (video), www.youtube.com/watch
  3. L. Biagiotti, C. Melchiorri, "Trajectory Planning for Automatic Machines and Robots", Springer-Verlag Heidelberg, ISBN: 9783540856283, 2008.
  4. G. Palli, C. Melchiorri, "Interaction Force Control of Robots with Variable Stiffness Actuation", 18th IFAC World Congress, Milan, Italy, August 28 - September 2, 2011.
  5. G. Palli, C. Melchiorri, "On the Control of Redundant Robots with Variable Stiffness Actuation", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Algarve, Portugal, October 7-12, 2012.
  6. G. Palli, C. Melchiorri, "Output-Based Control of Robots with Variable Stiffness Actuation", Journal of Robotics, doi: 10.1155/2011/735407, 2011.

Videos

  • Recognition
  • Grasping Pipeline