Authors:
Jakob Nazarenus 1; Simon Reichhuber 2; Manuel Amersdorfer 3; Lukas Elsner 4; Reinhard Koch 1; Sven Tomforde 2 and Hossam Abbas 4
Affiliations:
1 Multimedia Information Processing Group, Kiel University, Hermann-Rodewald-Str. 3, 24118 Kiel, Germany
2 Intelligent Systems, Kiel University, Hermann-Rodewald-Str. 3, 24118 Kiel, Germany
3 Digital Process Engineering Group, Karlsruhe Institute of Technology, Hertzstr. 16, 76187 Karlsruhe, Germany
4 Chair of Automation and Control, Kiel University, Kaiserstr. 2, 24143 Kiel, Germany
Keyword(s):
Vision and Perception, Robot and Multi-Robot Systems, Simulation, Neural Networks, Classification, Autonomous Systems.
Abstract:
In many applications, robotic systems are monitored via camera systems. This supports the supervision of automated production processes, anomaly detection, and the refinement of the robot's estimated pose via optical tracking systems. While such systems provide high precision and flexibility, their main limitation is the line-of-sight constraint. In this paper, we propose a lightweight solution for automatically learning this occluded space to provide continuously observable robot trajectories. This is achieved by an initial autonomous calibration procedure and the subsequent training of a simple neural network. During operation, this network predicts the visibility status with a balanced accuracy of 90% and provides a gradient that leads the robot toward a better-observed area. The prediction and gradient computations run with sub-millisecond latency and allow for modular integration into existing dynamic trajectory-planning algorithms to ensure high visibility of the desired target.
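The core mechanism the abstract describes — a small network that maps a robot position to a visibility probability, whose input gradient points toward better-observed space — can be sketched as follows. This is an illustrative toy, not the paper's architecture: the layer sizes, activations, and random weights are assumptions, and a trained model would replace them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP: 3-D end-effector position -> visibility probability.
# Weights are random placeholders standing in for a trained model.
W1 = rng.normal(size=(16, 3)); b1 = np.zeros(16)
W2 = rng.normal(size=(1, 16)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def visibility(p):
    """Predicted probability that position p is observable by the cameras."""
    h = np.tanh(W1 @ p + b1)
    return sigmoid(W2 @ h + b2)[0]

def visibility_gradient(p):
    """Analytic gradient d(visibility)/dp, backpropagated by hand."""
    h = np.tanh(W1 @ p + b1)
    y = sigmoid(W2 @ h + b2)[0]
    dy_dh = (y * (1.0 - y)) * W2[0]        # through the output sigmoid
    dh_dp = (1.0 - h**2)[:, None] * W1     # through the tanh layer
    return dy_dh @ dh_dp                   # shape (3,)

# One gradient-ascent step nudges the robot toward better-observed space;
# a trajectory planner could consume this gradient as a soft constraint.
p = np.array([0.2, -0.1, 0.5])
p_next = p + 0.05 * visibility_gradient(p)
```

Both the forward pass and the gradient are a handful of small matrix products, which is consistent with the sub-millisecond latency the abstract reports.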