Abstract
Visually detecting a well-defined object such as a soccer ball should be a simple problem. Under good lighting conditions it can be considered solved, but lighting conditions are not always optimal. In those circumstances it is valuable to have a shape and reflectance model of the ball, so that its appearance under changed lighting can be predicted. This is a prerequisite for playing soccer outdoors, under direct sunlight and clouds. The predicted appearance is used to fine-tune an existing ball detector based on the classic YOLO algorithm.
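To make the idea of appearance prediction concrete, the sketch below (not the authors' code) relights a ball from a per-pixel albedo and normal map, the kind of factorization a NeRFactor-style model recovers, under a chosen light direction. The purely Lambertian shading, the function and variable names, and the synthetic sphere are simplifying assumptions; such renders could then be added to the training set of a YOLO-style detector.

```python
# Minimal relighting sketch, assuming a Lambertian BRDF and a factorized
# albedo/normal representation of the ball (illustrative only).
import numpy as np

def relight_lambertian(albedo, normals, light_dir, light_rgb=(1.0, 1.0, 1.0), ambient=0.1):
    """albedo: (H, W, 3) in [0, 1]; normals: (H, W, 3) unit vectors;
    light_dir: (3,) direction towards the light; returns an (H, W, 3) image."""
    l = np.asarray(light_dir, dtype=np.float32)
    l /= np.linalg.norm(l)
    # Cosine term of the Lambertian BRDF, clamped to the upper hemisphere.
    n_dot_l = np.clip(np.einsum("hwc,c->hw", normals, l), 0.0, None)
    shading = ambient + (1.0 - ambient) * n_dot_l[..., None] * np.asarray(light_rgb)
    return np.clip(albedo * shading, 0.0, 1.0)

if __name__ == "__main__":
    # Synthetic example: a sphere's normals and a checkered albedo, relit from
    # two directions to mimic direct sunlight versus diffuse, overcast light.
    h = w = 128
    ys, xs = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    mask = xs**2 + ys**2 <= 1.0
    zs = np.sqrt(np.clip(1.0 - xs**2 - ys**2, 0.0, None))
    normals = np.stack([xs, ys, zs], axis=-1) * mask[..., None]
    albedo = np.where(((xs * 4).astype(int) + (ys * 4).astype(int)) % 2 == 0, 0.9, 0.2)
    albedo = np.repeat(albedo[..., None], 3, axis=-1) * mask[..., None]
    sunny = relight_lambertian(albedo, normals, light_dir=(0.5, -0.7, 0.5))
    overcast = relight_lambertian(albedo, normals, light_dir=(0.0, 0.0, 1.0), ambient=0.5)
```

In practice the factorized model would also provide specular reflectance and visibility terms; the Lambertian term here only illustrates how a new light direction changes the predicted image.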
Notes
- https://github.com/AlexeyAB, maintained by Alexey Bochkovskiy.
References
Bolt, L., Klein Gunnewiek, F., Lekanne gezegd Deprez, H., van Iterson, L., Prinzhorn, D.: Dutch Nao Team - Technical report, December 2022
Borkman, S., et al.: Unity perception: generate synthetic data for computer vision, July 2021. arXiv 2107.04259
Chown, E., Lagoudakis, M.G.: The standard platform league. In: Bianchi, R., Akin, H., Ramamoorthy, S., Sugiura, K. (eds.) RoboCup 2014: Robot World Cup XVIII. RoboCup 2014. LNCS, vol. 8992, pp. 636–648. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-18615-3_52
Fiedler, N., Bestmann, M., Hendrich, N.: Imagetagger: an open source online platform for collaborative image labeling. In: Holz, D., Genter, K., Saad, M., von Stryk, O. (eds.) RoboCup 2018: Robot World Cup XXII. RoboCup 2018. LNCS, vol. 11374, pp. 162–169. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-27544-0_13
Hayes-Roth, B.: Architectural foundations for real-time performance in intelligent agents. Real-Time Syst. 2(1–2), 99–125 (1990)
Hess, T., Mundt, M., Weis, T., Ramesh, V.: Large-scale stochastic scene generation and semantic annotation for deep convolutional neural network training in the RoboCup SPL. In: Akiyama, H., Obst, O., Sammut, C., Tonidandel, F. (eds.) RoboCup 2017: Robot World Cup XXI. RoboCup 2017. LNCS, vol. 11175, pp. 33–44. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00308-1_3
Jiao, L., et al.: A survey of deep learning-based object detection. IEEE Access 7, 128837–128868 (2019)
Kahlefendt, C.: A Comparison and Evaluation of Neural Network-based Classification Approaches for the Purpose of a Robot Detection on the Nao Robotic System. Master’s thesis, Technische Universität Hamburg-Harburg, April 2017
Li, C., et al.: Yolov6 v3.0: a full-scale reloading, January 2023. arXiv 2301.05586
Li, G., Song, Z., Fu, Q.: A new method of image detection for small datasets under the framework of yolo network. In: 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), pp. 1031–1035, October 2018
Lin, T.Y., et al.: Microsoft coco: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) Computer Vision – ECCV 2014. ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48
Martin-Brualla, R., Radwan, N., Sajjadi, M.S.M., Barron, J.T., Dosovitskiy, A., Duckworth, D.: Nerf in the wild: neural radiance fields for unconstrained photo collections. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7210–7219, June 2021
Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: Nerf: representing scenes as neural radiance fields for view synthesis. Commun. ACM 65(1), 99–106 (2021)
Monté, X.: Neural factorization of shape and reflectance of a football under an unknown illumination. Bachelor thesis, University of Amsterdam, February 2023
Mungan, C.: Bidirectional reflectance distribution functions describing first-surface scattering. AFOSR Final Report for the Summer Faculty Research Program (Summer 1998)
Narayanaswami, S.K., et al.: Towards a real-time, low-resource, end-to-end object detection pipeline for robot soccer. In: Eguchi, A., Lau, N., Paetzel-Prusmann, M., Wanichanon, T. (eds.) RoboCup 2022: Robot World Cup XXV. RoboCup 2022. LNCS, vol. 13561, pp. 62–74. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-28469-4_6
Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788, June 2016
Redmon, J., Farhadi, A.: Yolo9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7263–7271, July 2017
Redmon, J., Farhadi, A.: Yolov3: an incremental improvement, April 2018. arXiv 1804.02767
Schönberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4104–4113, June 2016
Specchi, G., et al.: Structural pruning for real-time multi-object detection on NAO robots. In: RoboCup 2023: Robot World Cup XXVI, July 2023
Tang, J., et al.: Delicate textured mesh recovery from nerf via adaptive surface refinement, March 2023. arXiv 2303.02091
Terven, J., Cordova-Esparza, D.: A comprehensive review of yolo: from yolov1 to yolov8 and beyond, April 2023. arXiv 2304.00501
Wang, C.Y., Bochkovskiy, A., Liao, H.Y.M.: Scaled-yolov4: scaling cross stage partial network. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 13029–13038, June 2021
Wang, C.Y., Bochkovskiy, A., Liao, H.Y.M.: Yolov7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors, July 2022. arXiv 2207.02696
van der Weerd, R.: Real-time object detection and avoidance for autonomous NAO robots performing in the standard platform league. Project report, University of Amsterdam, July 2021
Zaidi, S.S.A., Ansari, M.S., Aslam, A., Kanwal, N., Asghar, M., Lee, B.: A survey of modern deep learning based object detection models. Digit. Signal Process. 126, 103514 (2022)
Zhang, X., Srinivasan, P.P., Deng, B., Debevec, P., Freeman, W.T., Barron, J.T.: Nerfactor: neural factorization of shape and reflectance under an unknown illumination. ACM Trans. Graph. 40(6) (2021)
Zou, Z., Chen, K., Shi, Z., Guo, Y., Ye, J.: Object detection in 20 years: a survey. Proc. IEEE 111(3), 257–276 (2023)
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Monté, X., van der Kaaij, J., van der Weerd, R., Visser, A. (2024). Using Neural Factorization of Shape and Reflectance for Ball Detection. In: Buche, C., Rossi, A., Simões, M., Visser, U. (eds) RoboCup 2023: Robot World Cup XXVI. RoboCup 2023. Lecture Notes in Computer Science, vol. 14140. Springer, Cham. https://doi.org/10.1007/978-3-031-55015-7_11
DOI: https://doi.org/10.1007/978-3-031-55015-7_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-55014-0
Online ISBN: 978-3-031-55015-7