Authors:
Philippe Pérez de San Roman 1; Pascal Desbarats 2; Jean-Philippe Domenger 2 and Axel Buendia 3
Affiliations:
1 Université de Bordeaux, Bordeaux, France; LaBRI, Talence, France; ITECA, Angoulême/Paris, France
2 Université de Bordeaux, Bordeaux, France; LaBRI, Talence, France
3 ITECA, Angoulême/Paris, France; SpirOps, Paris, France; CNAM-CEDRIC, Paris, France
Keyword(s):
Object Localization, Deep Learning, Metric Loss.
Related Ontology Subjects/Areas/Topics:
Applications and Services; Computer Vision, Visualization and Computer Graphics; Enterprise Information Systems; Human and Computer Interaction; Human-Computer Interaction
Abstract:
Localizing objects is a key challenge for robotics, augmented reality and mixed reality applications. Images taken in the real world feature many objects with challenging factors such as occlusions, motion blur and changing lights. In industrial manufacturing scenes, a large majority of objects are poorly textured or highly reflective. Moreover, they often present symmetries, which makes the localization task even more complicated. PoseNet is a deep neural network based on GoogLeNet that predicts camera poses in indoor rooms and outdoor streets. We propose to evaluate this method on the problem of industrial object pose estimation by training the network on the T-LESS dataset. Our experiments demonstrate that PoseNet is able to predict translation and rotation separately with high accuracy. However, they also show that it is not able to learn translation and rotation jointly: one of the two modalities is either not learned by the network, or forgotten during training while the other is being learned. This indicates that future work will require other formulations of the loss, as well as other architectures, to solve the general pose estimation problem.
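For context on what "learning translation and rotation jointly" means here, PoseNet trains both modalities through a single weighted loss. The sketch below is a minimal illustration of that beta-weighted formulation from Kendall et al.'s original PoseNet paper, not code from this work; the function name, tensor shapes and the default beta value are illustrative assumptions.

import torch

def posenet_loss(t_pred, q_pred, t_true, q_true, beta=500.0):
    # PoseNet-style joint loss (illustrative sketch, not this paper's code):
    #   L = ||t_pred - t_true||_2 + beta * ||q_pred - q_true / |q_true|||_2
    # Translation term: Euclidean error between predicted and true positions.
    loss_t = torch.norm(t_pred - t_true, dim=-1)
    # Rotation term: error against the normalized ground-truth quaternion.
    q_unit = q_true / torch.norm(q_true, dim=-1, keepdim=True)
    loss_q = torch.norm(q_pred - q_unit, dim=-1)
    # beta fixes the trade-off between the two terms; Kendall et al. report
    # values from roughly 120 (indoor) to 2000 (outdoor scenes).
    return (loss_t + beta * loss_q).mean()

Because a fixed beta makes the translation and rotation terms compete for the same gradient budget, one modality can dominate the other during training, which is consistent with the abstract's observation that one of the two is not learned or is forgotten when both are trained jointly.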