
UnrealGT: Using Unreal Engine to Generate Ground Truth Datasets

  • Conference paper
  • Published in: Advances in Visual Computing (ISVC 2019)
  • Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11844)

Abstract

Large amounts of data have become an essential requirement in the development of modern computer vision algorithms, e.g. for the training of neural networks. Due to data protection laws, overflight permissions for UAVs, or expensive equipment, data collection is often a costly and time-consuming task, especially if the ground truth is generated by manually annotating the collected data. By means of synthetic data generation, large amounts of image data and metadata can be extracted directly from a virtual scene, which in turn can be customized to meet the specific needs of the algorithm or the use case. Furthermore, the use of virtual objects avoids problems that might arise from data protection issues and does not require expensive sensors. In this work we propose a framework for synthetic test data generation utilizing the Unreal Engine. The Unreal Engine provides a simulation environment that allows one to simulate complex situations in a virtual world, such as data acquisition with UAVs or autonomous driving. Moreover, our process is agnostic to the computer vision task for which the data is generated and can thus be used to create generic datasets. We evaluate our framework by generating synthetic test data with which a CNN for object detection as well as a V-SLAM algorithm are trained and evaluated. The evaluation shows that our generated synthetic data can be used as an alternative to real data.
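The core idea of the abstract, extracting ground truth directly from the virtual scene rather than annotating by hand, can be illustrated with a minimal sketch: given an object's 3D bounding-box corners and the virtual camera's pose and intrinsics, a 2D detection label follows from a pinhole projection. The function names and the toy camera below are illustrative assumptions for this sketch, not UnrealGT's actual API.

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project 3D world points into the image plane of a pinhole camera.

    points_world: (N, 3) array of 3D points, e.g. the corners of an
    object's bounding box queried from the virtual scene.
    K: (3, 3) camera intrinsics; R, t: world-to-camera rotation/translation.
    Returns (N, 2) pixel coordinates.
    """
    cam = R @ points_world.T + t.reshape(3, 1)  # world frame -> camera frame
    uv = K @ cam                                # camera frame -> image plane
    return (uv[:2] / uv[2]).T                   # perspective divide

def bbox_from_corners(pixels, width, height):
    """Axis-aligned 2D box from projected corners, clipped to the image."""
    x0, y0 = np.clip(pixels.min(axis=0), 0, [width, height])
    x1, y1 = np.clip(pixels.max(axis=0), 0, [width, height])
    return float(x0), float(y0), float(x1), float(y1)

# Toy example: a unit cube 5 m in front of a 640x480 camera at the origin.
corners = np.array([[x, y, 5.0 + z] for x in (-0.5, 0.5)
                    for y in (-0.5, 0.5) for z in (-0.5, 0.5)])
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
box = bbox_from_corners(project_points(corners, K, R, t), 640, 480)
```

Because the scene graph supplies exact object geometry and camera pose at every frame, such labels are pixel-accurate and free, which is the advantage over manual annotation the abstract argues for.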


Notes

  1. www.unrealengine.com.
  2. http://unrealgt.github.io.


Acknowledgment

This work has received funding from the European Union’s Horizon 2020 research and innovation program in the context of the VICTORIA project under grant agreement No. 740754.

Author information


Corresponding author

Correspondence to Thomas Pollok.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Pollok, T., Junglas, L., Ruf, B., Schumann, A. (2019). UnrealGT: Using Unreal Engine to Generate Ground Truth Datasets. In: Bebis, G., et al. (eds.) Advances in Visual Computing. ISVC 2019. Lecture Notes in Computer Science, vol. 11844. Springer, Cham. https://doi.org/10.1007/978-3-030-33720-9_52

  • DOI: https://doi.org/10.1007/978-3-030-33720-9_52

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33719-3

  • Online ISBN: 978-3-030-33720-9

  • eBook Packages: Computer Science, Computer Science (R0)
