Abstract
In the healthcare industry, developing an efficient diagnostic system to classify liver cancer cells is a challenging task. Several recent studies demonstrate that deep ensemble classifiers can achieve better predictive accuracy than individual deep learning classifiers. Deep ensemble learners combine more than one individual deep learner to achieve better classification results and improved generalization performance. When implementing an ensemble learning (multiple classifier) approach, selecting the optimum learners from a pool is a critical issue, and an effective learner selection strategy is needed to achieve better results. Researchers have applied different approaches (e.g., rule-based algorithms, evolutionary computing, and simulated annealing) to determine the optimal learners that can increase the performance of a diagnostic system. This work proposes a new classifier selection strategy for constructing an ensemble, called the contribution-based iterative base learner removal algorithm (CIBRA). The proposed algorithm finds the best subset of individual learners by considering both prediction accuracy and diversity. CIBRA gives each base learner in the pool multiple chances to participate in a selection iteration, and a classifier is dropped only when it has no remaining chances. This procedure is repeated until no learner in the pool has any remaining chance to participate in a selection round. In this study, we also test various decision synthesis techniques to increase the performance of the ensemble classifier. To assess the performance of CIBRA, eight standard cancer databases are used. Extensive simulation results show that two base classifiers are sufficient to classify liver cancer cells from hematoxylin and eosin (H&E) scans successfully.
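The selection loop described above can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal, assumed reading of a CIBRA-style loop in which a learner's contribution is measured as the change in majority-vote ensemble accuracy when it is added, each learner starts with a fixed number of chances (the count is a hypothetical parameter), and a learner is removed from consideration only once its chances are exhausted.

```python
import numpy as np

def cibra_select(preds, y_true, max_chances=2):
    """Sketch of a contribution-based iterative base-learner removal loop.

    preds: dict mapping learner name -> 0/1 prediction vector.
    y_true: 0/1 ground-truth vector.
    A learner that fails to improve the ensemble loses one chance per
    round and is dropped only when its chances reach zero.
    """
    chances = {m: max_chances for m in preds}
    selected = []

    def ensemble_acc(members):
        # Majority-vote accuracy of the given subset (0.0 for empty set).
        if not members:
            return 0.0
        votes = np.mean([preds[m] for m in members], axis=0)
        return float(np.mean((votes >= 0.5) == y_true))

    pool = list(preds)
    # Repeat until every unselected learner has used all its chances.
    while any(chances[m] > 0 for m in pool if m not in selected):
        for m in pool:
            if m in selected or chances[m] <= 0:
                continue
            if ensemble_acc(selected + [m]) > ensemble_acc(selected):
                selected.append(m)          # learner contributes: keep it
            else:
                chances[m] -= 1             # one chance spent; drop at zero
    return selected
```

Measuring contribution against the current ensemble (rather than individual accuracy alone) is what lets the subset stay diverse: a learner that merely duplicates an already-selected one adds no vote-level improvement and eventually runs out of chances.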
Based on the results obtained from this study, we construct an ensemble classifier from a Dropout Extreme Learning Machine (DrpXLM) and an Enhanced Convolutional Block Attention Module (ECBAM) based residual network to classify liver cancer images. Moreover, CIBRA produces better results when it uses average probability as the decision synthesis technique.
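Average-probability decision synthesis, the fusion rule the abstract reports as best, can be stated in a few lines: the per-class probability vectors of the base classifiers are averaged and the class with the highest mean probability wins. The two probability matrices in the usage example are made-up stand-ins for the outputs of the two base models, not real DrpXLM or ECBAM-ResNet outputs.

```python
import numpy as np

def average_probability_fusion(prob_list):
    """Fuse base-classifier outputs by averaging class probabilities.

    prob_list: sequence of (n_samples, n_classes) probability arrays,
    one per base classifier. Returns the per-sample predicted class.
    """
    mean_probs = np.mean(prob_list, axis=0)   # element-wise average
    return np.argmax(mean_probs, axis=1)      # highest mean probability wins
```

For example, with two base classifiers scoring two samples over two classes, `average_probability_fusion([[[0.9, 0.1], [0.4, 0.6]], [[0.6, 0.4], [0.2, 0.8]]])` averages to `[[0.75, 0.25], [0.3, 0.7]]` and predicts class 0 for the first sample and class 1 for the second.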



Data availability
Datasets for this research are retrieved from https://www.cancerimagingarchive.net/.
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
This article is part of the topical collection “Research Trends in Computational Intelligence” guest edited by Anshul Verma, Pradeepika Verma, Vivek Kumar Singh and S. Karthikeyan.
Cite this article
Sabitha, P., Meeragandhi, G. Optimizing the Selection of Base Learners for Multiple Classifier System in Liver Cancer Identification Using Contribution-based Iterative Removal Algorithm. SN COMPUT. SCI. 4, 493 (2023). https://doi.org/10.1007/s42979-023-01936-5