{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T14:50:16Z","timestamp":1740149416231,"version":"3.37.3"},"reference-count":63,"publisher":"MDPI AG","issue":"13","license":[{"start":{"date-parts":[[2020,7,2]],"date-time":"2020-07-02T00:00:00Z","timestamp":1593648000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"Behavioural research of pigs can be greatly simplified if automatic recognition systems are used. Systems based on computer vision in particular have the advantage that they allow an evaluation without affecting the normal behaviour of the animals. In recent years, methods based on deep learning have been introduced and have shown excellent results. Object and keypoint detector have frequently been used to detect individual animals. Despite promising results, bounding boxes and sparse keypoints do not trace the contours of the animals, resulting in a lot of information being lost. Therefore, this paper follows the relatively new approach of panoptic segmentation and aims at the pixel accurate segmentation of individual pigs. A framework consisting of a neural network for semantic segmentation as well as different network heads and postprocessing methods will be discussed. The method was tested on a data set of 1000 hand-labeled images created specifically for this experiment and achieves detection rates of around 95% (F1 score) despite disturbances such as occlusions and dirty lenses.<\/jats:p>","DOI":"10.3390\/s20133710","type":"journal-article","created":{"date-parts":[[2020,7,3]],"date-time":"2020-07-03T10:51:20Z","timestamp":1593773480000},"page":"3710","source":"Crossref","is-referenced-by-count":24,"title":["Panoptic Segmentation of Individual Pigs for Posture Recognition"],"prefix":"10.3390","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5118-145X","authenticated-orcid":false,"given":"Johannes","family":"Br\u00fcnger","sequence":"first","affiliation":[{"name":"Department of Computer Science, Kiel University, 24118 Kiel, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4769-9539","authenticated-orcid":false,"given":"Maria","family":"Gentz","sequence":"additional","affiliation":[{"name":"Department of Animal Sciences, Livestock Systems, Georg-August-University G\u00f6ttingen, 37075 G\u00f6ttingen, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9761-0291","authenticated-orcid":false,"given":"Imke","family":"Traulsen","sequence":"additional","affiliation":[{"name":"Department of Animal Sciences, Livestock Systems, Georg-August-University G\u00f6ttingen, 37075 G\u00f6ttingen, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4398-1569","authenticated-orcid":false,"given":"Reinhard","family":"Koch","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Kiel University, 24118 Kiel, Germany"}]}],"member":"1968","published-online":{"date-parts":[[2020,7,2]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1016\/j.tvjl.2016.09.005","article-title":"Early detection of health and welfare compromises through automated detection of behavioural changes in pigs","volume":"217","author":"Matthews","year":"2016","journal-title":"Vet. 
J."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.applanim.2008.08.001","article-title":"A review of environmental enrichment for pigs housed in intensive housing systems","volume":"116","author":"Day","year":"2009","journal-title":"Appl. Anim. Behav. Sci."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"125","DOI":"10.1016\/j.livsci.2016.07.009","article-title":"Influence of raw material on the occurrence of tail-biting in undocked pigs","volume":"191","author":"Veit","year":"2016","journal-title":"Livest. Sci."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"30","DOI":"10.1016\/j.applanim.2017.06.015","article-title":"Using automated image analysis in pig behavioural research: Assessment of the influence of enrichment substrate provision on lying behaviour","volume":"196","author":"Nasirahmadi","year":"2017","journal-title":"Appl. Anim. Behav. Sci."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"25","DOI":"10.1016\/j.livsci.2017.05.014","article-title":"Implementation of machine vision for detecting behaviour of cattle and pigs","volume":"202","author":"Nasirahmadi","year":"2017","journal-title":"Livest. Sci."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"164","DOI":"10.1016\/j.compag.2012.09.015","article-title":"The automatic monitoring of pigs water use by cameras","volume":"90","author":"Kashiha","year":"2013","journal-title":"Comput. Electron. Agric."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"57","DOI":"10.1016\/j.compag.2014.03.010","article-title":"Image feature extraction for classification of aggressive interactions among pigs","volume":"104","author":"Viazzi","year":"2014","journal-title":"Comput. Electron. Agric."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Lee, J., Jin, L., Park, D., and Chung, Y. (2016). Automatic Recognition of Aggressive Behavior in Pigs Using a Kinect Depth Sensor. Sensors, 16.","DOI":"10.3390\/s16050631"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"295","DOI":"10.1016\/j.compag.2016.04.022","article-title":"Automatic detection of mounting behaviours among pigs using image analysis","volume":"124","author":"Nasirahmadi","year":"2016","journal-title":"Comput. Electron. Agric."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"184","DOI":"10.1016\/j.compag.2015.10.023","article-title":"Using machine vision for investigation of changes in pig group lying patterns","volume":"119","author":"Nasirahmadi","year":"2015","journal-title":"Comput. Electron. Agric."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"132","DOI":"10.1016\/j.livsci.2013.12.011","article-title":"Automated video analysis of pig activity at pen level highly correlates to human observations of behavioural activities","volume":"160","author":"Ott","year":"2014","journal-title":"Livest. Sci."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"141","DOI":"10.1016\/j.livsci.2013.11.007","article-title":"Automatic monitoring of pig locomotion using image analysis","volume":"159","author":"Kashiha","year":"2014","journal-title":"Livest. Sci."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"187","DOI":"10.1007\/BF01215814","article-title":"Segmentation and tracking of piglets in images","volume":"8","author":"McFarlane","year":"1995","journal-title":"Mach. Vis. 
Appl."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"15","DOI":"10.1016\/j.compag.2007.09.006","article-title":"A real-time computer vision assessment and control of thermal comfort for group-housed pigs","volume":"62","author":"Shao","year":"2008","journal-title":"Comput. Electron. Agric."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"121","DOI":"10.1049\/iet-cvi.2017.0085","article-title":"Tracking of group-housed pigs using multi-ellipsoid expectation maximisation","volume":"12","author":"Mittek","year":"2018","journal-title":"IET Comput. Vis."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"59","DOI":"10.1016\/j.compag.2018.06.043","article-title":"Model-based detection of pigs in images under sub-optimal conditions","volume":"152","author":"Traulsen","year":"2018","journal-title":"Comput. Electron. Agric."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Ju, M., Choi, Y., Seo, J., Sa, J., Lee, S., Chung, Y., and Park, D. (2018). A Kinect-Based Segmentation of Touching-Pigs for Real-Time Monitoring. Sensors, 18.","DOI":"10.3390\/s18061746"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Zhang, L., Gray, H., Ye, X., Collins, L., and Allinson, N. (2018). Automatic individual pig detection and tracking in surveillance videos. arXiv.","DOI":"10.3390\/s19051188"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Nasirahmadi, A., Sturm, B., Edwards, S., Jeppsson, K.H., Olsson, A.C., M\u00fcller, S., and Hensel, O. (2019). Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs. Sensors, 19.","DOI":"10.3390\/s19173738"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Psota, E., Mittek, M., P\u00e9rez, L., Schmidt, T., and Mote, B. (2019). Multi-Pig Part Detection and Association with a Fully-Convolutional Network. Sensors, 19.","DOI":"10.3390\/s19040852"},{"key":"ref_21","unstructured":"Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., and LeCun, Y. (2014). OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks. arXiv."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv.","DOI":"10.1109\/CVPR.2014.81"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., and Berg, A.C. (2016). SSD: Single Shot MultiBox Detector. arXiv.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Ren, S., He, K., Girshick, R., and Sun, J. (2016). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv.","DOI":"10.1109\/TPAMI.2016.2577031"},{"key":"ref_25","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An Incremental Improvement. arXiv."},{"key":"ref_26","unstructured":"Pinheiro, P.O., Collobert, R., and Dollar, P. (2015). Learning to Segment Object Candidates. arXiv."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Pinheiro, P.O., Lin, T.Y., Collobert, R., and Doll\u00e0r, P. (2016). Learning to Refine Object Segments. arXiv.","DOI":"10.1007\/978-3-319-46448-0_5"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Doll\u00e1r, P., and Girshick, R. (2018). Mask R-CNN. 
arXiv.","DOI":"10.1109\/ICCV.2017.322"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Long, J., Shelhamer, E., and Darrell, T. (2015). Fully Convolutional Networks for Semantic Segmentation. arXiv.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_31","unstructured":"Chen, L.C., Papandreou, G., Schroff, F., and Adam, H. (2017). Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Li, Y., Qi, H., Dai, J., Ji, X., and Wei, Y. (2017). Fully Convolutional Instance-aware Semantic Segmentation. arXiv.","DOI":"10.1109\/CVPR.2017.472"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Romera-Paredes, B., and Torr, P.H.S. (2016). Recurrent Instance Segmentation. arXiv.","DOI":"10.1007\/978-3-319-46466-4_19"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Ren, M., and Zemel, R.S. (2017). End-to-End Instance Segmentation with Recurrent Attention. arXiv.","DOI":"10.1109\/CVPR.2017.39"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Uhrig, J., Cordts, M., Franke, U., and Brox, T. (2016). Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling. arXiv.","DOI":"10.1007\/978-3-319-45886-1_2"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"De Brabandere, B., Neven, D., and Van Gool, L. (2017). Semantic Instance Segmentation with a Discriminative Loss Function. arXiv.","DOI":"10.1109\/CVPRW.2017.66"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Kirillov, A., He, K., Girshick, R., Rother, C., and Doll\u00e1r, P. (2019). Panoptic Segmentation. arXiv.","DOI":"10.1109\/CVPR.2019.00963"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Liu, S., Qi, L., Qin, H., Shi, J., and Jia, J. (2018). Path Aggregation Network for Instance Segmentation. arXiv.","DOI":"10.1109\/CVPR.2018.00913"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Li, Q., Arnab, A., and Torr, P.H.S. (2019). Weakly- and Semi-Supervised Panoptic Segmentation. arXiv.","DOI":"10.1007\/978-3-030-01267-0_7"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Xiong, Y., Liao, R., Zhao, H., Hu, R., Bai, M., Yumer, E., and Urtasun, R. (2019). UPSNet: A Unified Panoptic Segmentation Network. arXiv.","DOI":"10.1109\/CVPR.2019.00902"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Papandreou, G., Zhu, T., Kanazawa, N., Toshev, A., Tompson, J., Bregler, C., and Murphy, K. (2017). Towards Accurate Multi-person Pose Estimation in the Wild. arXiv.","DOI":"10.1109\/CVPR.2017.395"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Papandreou, G., Zhu, T., Chen, L.C., Gidaris, S., Tompson, J., and Murphy, K. (2018). PersonLab: Person Pose Estimation and Instance Segmentation with a Bottom-Up, Part-Based, Geometric Embedding Model. arXiv.","DOI":"10.1007\/978-3-030-01264-9_17"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Cao, Z., Simon, T., Wei, S.E., and Sheikh, Y. (2017). Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. arXiv.","DOI":"10.1109\/CVPR.2017.143"},{"key":"ref_44","unstructured":"Yakubovskiy, P. (2019). Segmentation Models, GitHub."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Chaurasia, A., and Culurciello, E. (2017, January 10\u201313). 
LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation. Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA.","DOI":"10.1109\/VCIP.2017.8305148"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Doll\u00e1r, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. (2017). Feature Pyramid Networks for Object Detection. arXiv.","DOI":"10.1109\/CVPR.2017.106"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Zhao, H., Shi, J., Qi, X., Wang, X., and Jia, J. (2017). Pyramid Scene Parsing Network. arXiv.","DOI":"10.1109\/CVPR.2017.660"},{"key":"ref_48","unstructured":"Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A.L. (2016). Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. arXiv."},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A.L. (2017). DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. arXiv.","DOI":"10.1109\/TPAMI.2017.2699184"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. (2018). Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. arXiv.","DOI":"10.1007\/978-3-030-01234-2_49"},{"key":"ref_51","unstructured":"Li, H., Xiong, P., An, J., and Wang, L. (2018). Pyramid Attention Network for Semantic Segmentation. arXiv."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., and Lu, H. (2019). Dual Attention Network for Scene Segmentation. arXiv.","DOI":"10.1109\/CVPR.2019.00326"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Huang, Z., Wang, X., Huang, L., Huang, C., Wei, Y., and Liu, W. (2018). CCNet: Criss-Cross Attention for Semantic Segmentation. arXiv.","DOI":"10.1109\/ICCV.2019.00069"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., and Terzopoulos, D. (2020). Image Segmentation Using Deep Learning: A Survey. arXiv.","DOI":"10.1109\/TPAMI.2021.3059968"},{"key":"ref_55","first-page":"160","article-title":"Density-Based Clustering Based on Hierarchical Density Estimates","volume":"Volume 7819","author":"Hutchison","year":"2013","journal-title":"Advances in Knowledge Discovery and Data Mining"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Fitzgibbon, A.W., and Fisher, R.B. (1996). A Buyer\u2019s Guide to Conic Fitting, University of Edinburgh, Department of Artificial Intelligence.","DOI":"10.5244\/C.9.51"},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep Residual Learning for Image Recognition. arXiv.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. (2016). Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. arXiv.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"ref_59","unstructured":"Kingma, D.P., and Ba, J. (2017). Adam: A Method for Stochastic Optimization. arXiv."},{"key":"ref_60","unstructured":"Jung, A.B., Wada, K., Crall, J., Tanaka, S., Graving, J., Reinders, C., Yadav, S., Banerjee, J., Vecsei, G., and Kraft, A. (2020). Imgaug, GitHub."},{"key":"ref_61","unstructured":"Bradski, G. (2020, July 01). 
The OpenCV Library. Dr. Dobb\u2019s Journal of Software Tools. Available online: https:\/\/github.com\/opencv\/opencv\/wiki\/CiteOpenCV."},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"205","DOI":"10.21105\/joss.00205","article-title":"hdbscan: Hierarchical density based clustering","volume":"2","author":"McInnes","year":"2017","journal-title":"J. Open Source Softw."},{"key":"ref_63","unstructured":"Tan, M., and Le, Q.V. (2019). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/13\/3710\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,6,30]],"date-time":"2024-06-30T16:27:03Z","timestamp":1719764823000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/13\/3710"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,7,2]]},"references-count":63,"journal-issue":{"issue":"13","published-online":{"date-parts":[[2020,7]]}},"alternative-id":["s20133710"],"URL":"https:\/\/doi.org\/10.3390\/s20133710","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2020,7,2]]}}}