{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T19:05:02Z","timestamp":1732043102299},"reference-count":24,"publisher":"Institution of Engineering and Technology (IET)","issue":"2","license":[{"start":{"date-parts":[[2023,10,25]],"date-time":"2023-10-25T00:00:00Z","timestamp":1698192000000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0\/"}],"content-domain":{"domain":["ietresearch.onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["IET Image Processing"],"published-print":{"date-parts":[[2024,2]]},"abstract":"Abstract<\/jats:title>The accurate segmentation of breast tumours is important for the diagnosis and treatment of breast cancer. When using the classic U\u2010Net, Attention\u2010UNet, and UNet++ to segment the tumour, there are problems of oversegmentation, incorrect segmentation, and poor edge continuity. In this paper, an effective tumour segmentation method, EfficientUNet, is proposed. EfficientUNet adopts a step\u2010by\u2010step enhancement method, combining ResNet18, a channel attention mechanism and deep supervision. ResNet18, as the encoder of the whole network, solves the problem of gradient disappearanceand improves the feature extraction ability of the model. The channel attention module makes the model more accurate in tumour edge processing. The deep supervision technology accelerates the model training and provides the convergence direction for the model. In addition, it is found that when adjusting the size of the image, the method of image filling before clipping (or zooming) is more conducive to model learning than the direct interpolation method. And a comparative experiment wasperformed\u00a0on dataset B. Compared to U\u2010Net, Attention\u2010UNet and UNet++, EfficientUNet has the highest performance. 
Finally, the ablation experiment also demonstrated the effectiveness of each module in EfficientUNet.<\/jats:p>","DOI":"10.1049\/ipr2.12966","type":"journal-article","created":{"date-parts":[[2023,10,26]],"date-time":"2023-10-26T03:12:10Z","timestamp":1698289930000},"page":"523-534","update-policy":"http:\/\/dx.doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["EfficientUNet: An efficient solution for breast tumour segmentation in ultrasound images"],"prefix":"10.1049","volume":"18","author":[{"ORCID":"http:\/\/orcid.org\/0000-0003-0047-2692","authenticated-orcid":false,"given":"Guizeng","family":"You","sequence":"first","affiliation":[{"name":"Faculty of Information Technology, Beijing University of Technology, Beijing, China"}]},{"given":"Xinwu","family":"Yang","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology, Beijing University of Technology, Beijing, China"}]},{"ORCID":"http:\/\/orcid.org\/0000-0003-3205-7979","authenticated-orcid":false,"given":"Xuanbo","family":"Lee","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology, Beijing University of Technology, Beijing, China"}]},{"given":"Kongqiang","family":"Zhu","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology, Beijing University of Technology, Beijing, China"}]}],"member":"265","published-online":{"date-parts":[[2023,10,25]]},"reference":[{"key":"e_1_2_9_2_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cmpb.2021.106313"},{"key":"e_1_2_9_3_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2018.11.024"},{"key":"e_1_2_9_4_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10396-017-0811-8"},{"key":"e_1_2_9_5_1","doi-asserted-by":"crossref","unstructured":"Daoud M.I. Baba M.M. Awwad F. Al\u2010Najjar M. Tarawneh E.S.:Accurate segmentation of breast tumors in ultrasound images using a custom\u2010made active contour model and signal\u2010to\u2010noise ratio variations. In:Proceedings of the 8th International Conference on Signal Image Technology and Internet Based Systems (SITIS 2012) pp.137\u2013141.IEEE Piscataway NJ(2012)","DOI":"10.1109\/SITIS.2012.30"},{"key":"e_1_2_9_6_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2014.07.026"},{"key":"e_1_2_9_7_1","doi-asserted-by":"crossref","unstructured":"Long J. Shelhamer E. Darrell T.:Fully convolutional networks for semantic segmentation. In:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp.3431\u20133440.IEEE Piscataway NJ(2015)","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"e_1_2_9_8_1","doi-asserted-by":"crossref","unstructured":"Ronneberger O. Fischer P. Brox T.:U\u2010net: convolutional networks for biomedical image segmentation. In:Medical Image Computing and Computer\u2010Assisted Intervention \u2013 MICCAI 2015 pp.234\u2013241.Springer Cham(2015)","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"e_1_2_9_9_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46976-8_19"},{"key":"e_1_2_9_10_1","article-title":"Attention U\u2010Net: learning where to look for the pancreas","author":"Oktay O.","year":"2018","journal-title":"arXiv:1804.03999"},{"key":"e_1_2_9_11_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-00889-5_1"},{"key":"e_1_2_9_12_1","doi-asserted-by":"crossref","unstructured":"Zhou L. Zhang C. Wu M.:D\u2010LinkNet: LinkNet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction.
In:CVPR Workshops pp.182\u2013186.IEEE Piscataway NJ(2018)","DOI":"10.1109\/CVPRW.2018.00034"},{"key":"e_1_2_9_13_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.bspc.2020.102027"},{"key":"e_1_2_9_14_1","first-page":"2672","article-title":"Generative adversarial nets","volume":"27","author":"Goodfellow I.J.","year":"2014","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_2_9_15_1","article-title":"Conditional generative adversarial nets","author":"Mirza M.","year":"2014","journal-title":"arXiv:1411.1784"},{"key":"e_1_2_9_16_1","article-title":"Semantic segmentation using adversarial networks (NIPS Workshop on Adversarial Training)","author":"Luc P.","year":"2016","journal-title":"arXiv:1611.08408"},{"key":"e_1_2_9_17_1","unstructured":"He K. et\u00a0al.:Deep residual learning for image recognition. In:2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp.770\u2013778.IEEE Piscataway NJ(2016)"},{"key":"e_1_2_9_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/JBHI.2017.2731873"},{"key":"e_1_2_9_19_1","article-title":"Deep residual learning for image recognition","author":"He K.","year":"2015","journal-title":"arXiv:1512.03385"},{"key":"e_1_2_9_20_1","article-title":"Training deeper convolutional networks with deep supervision","author":"Wang L.","year":"2015","journal-title":"arXiv:1505.02496"},{"key":"e_1_2_9_21_1","article-title":"Attention U\u2010Net: learning where to look for the pancreas","author":"Oktay O.","year":"2018","journal-title":"arXiv:1804.03999"},{"key":"e_1_2_9_22_1","unstructured":"Hu J. et\u00a0al.:Squeeze\u2010and\u2010excitation networks. In:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp.7132\u20137141.IEEE Piscataway NJ(2018)"},{"key":"e_1_2_9_23_1","doi-asserted-by":"publisher","DOI":"10.1613\/jair.953"},{"key":"e_1_2_9_24_1","doi-asserted-by":"crossref","unstructured":"Sudre C.H. Li W. Vercauteren T. Ourselin S. Cardoso M.J.:Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. In:Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support pp.240\u2013248.Springer Cham(2017)","DOI":"10.1007\/978-3-319-67558-9_28"},{"key":"e_1_2_9_25_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-00889-5_1"}],"container-title":["IET Image Processing"],"original-title":[],"language":"en","deposited":{"date-parts":[[2024,7,11]],"date-time":"2024-07-11T12:18:44Z","timestamp":1720700324000},"score":1,"resource":{"primary":{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/10.1049\/ipr2.12966"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,25]]},"references-count":24,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,2]]}},"alternative-id":["10.1049\/ipr2.12966"],"URL":"https:\/\/doi.org\/10.1049\/ipr2.12966","archive":["Portico"],"relation":{},"ISSN":["1751-9659","1751-9667"],"issn-type":[{"value":"1751-9659","type":"print"},{"value":"1751-9667","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,25]]},"assertion":[{"value":"2021-08-18","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-09-26","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-10-25","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}