Deep learning neural networks for vision-based object detection

A flexible tool that uses an RGB-D visual sensor and its point cloud to locate an object's pose based on a mathematical model.

A feature-based method (the ORB algorithm) and a method for estimating the parameters of a mathematical model (RANSAC) are used to find the object's position in the RGB image and its pixel coordinates in the point cloud.
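A minimal sketch of this detection step, assuming OpenCV (cv2) and NumPy; the file names, the feature count, and the RANSAC reprojection threshold are illustrative choices, not values from this work. The Hamming-distance matching and the RANSAC homography mentioned in the list below appear here as well.

```python
import cv2
import numpy as np

# Placeholder file names: one sample image of the object and one RGB frame from the camera.
sample = cv2.imread("object_sample.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# ORB keypoints and binary descriptors for both images.
orb = cv2.ORB_create(nfeatures=1000)
kp_sample, des_sample = orb.detectAndCompute(sample, None)
kp_frame, des_frame = orb.detectAndCompute(frame, None)

# Brute-force matching with Hamming distance, suited to ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_sample, des_frame), key=lambda m: m.distance)

# RANSAC estimates the homography while rejecting outlier matches.
src_pts = np.float32([kp_sample[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# Project the sample image centre through the homography to obtain the
# object's pixel coordinates (u, v) in the camera frame.
h, w = sample.shape
u, v = cv2.perspectiveTransform(np.float32([[[w / 2.0, h / 2.0]]]), H)[0, 0]
print(f"Object centre in camera image: ({u:.1f}, {v:.1f})")
```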

After the object is detected, the grasping task is performed based on the position estimated from the point cloud and the position of the robotic arm.
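A hedged sketch of how the object's 3D position might be read from the point cloud and related to the arm. The organized HxWx3 cloud layout, the pixel coordinate (u, v) carried over from the detection step, and the arm position are assumptions for illustration; the actual sensor and arm interfaces are not specified here, so the motion command is left abstract.

```python
import numpy as np

def object_position(cloud: np.ndarray, u: float, v: float) -> np.ndarray:
    """Read the (x, y, z) of the detected object from an organized point cloud.

    `cloud` is assumed to be an HxWx3 array aligned with the RGB image, so the
    3D point is simply indexed by the pixel coordinates found during detection.
    """
    return cloud[int(round(v)), int(round(u))]

# Illustrative stand-ins: an organized point cloud and the pixel found earlier.
cloud = np.zeros((480, 640, 3), dtype=np.float32)   # placeholder for the sensor output
u, v = 320.0, 240.0                                  # placeholder for the detected pixel

target_xyz = object_position(cloud, u, v)

# The grasp is commanded from the object position and the current arm position.
# The arm position below is a placeholder, and orientation/frame handling is omitted.
arm_xyz = np.array([0.0, 0.0, 0.0], dtype=np.float32)
displacement = target_xyz - arm_xyz
print("Move the end effector by:", displacement)
```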

- Because ORB is invariant to rotation, the object can be detected at diverse angles.
- Feature-based object detection can be fast and reliable, requiring only a single image of the object.
- The point cloud can be a simple but accurate way of finding the object's coordinates in space without matrix arithmetic or calibration, which improves system performance and accuracy.
- Hamming distance is used to match the keypoints and descriptors.
- RANSAC is used to find the homography between the matched keypoints of the sample image and those of the camera image.

Contact

INESC P&D Brasil

andre.gustavo@ufba.br