Authors:
Samya Amiri (1); Mohamed Ali Mahjoub (1) and Islem Rekik (2)
Affiliations:
(1) University of Sousse, Tunisia; (2) University of Dundee, United Kingdom
Keyword(s):
Structured Random Forest, Bayesian Network, Deep Cooperative Network, Autocontext Model, Brain Tumor Segmentation, MRIs.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Bayesian Networks; Biomedical Engineering; Biomedical Signal Processing; Data Manipulation; Enterprise Information Systems; Health Engineering and Technology Applications; Human-Computer Interaction; Methodologies and Methods; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Soft Computing
Abstract:
Brain cancer phenotyping and treatment are highly informed by radiomic analyses of medical images.
Specifically, the reliability of radiomics, which refers to extracting features from the tumor image intensity,
shape, and texture, depends on the accuracy of the tumor boundary segmentation. Hence, fully automated
brain tumor segmentation methods are highly desired for processing large imaging datasets. In this
work, we propose a cooperative learning framework for multi-label brain tumor segmentation, which
leverages Structured Random Forest (SRF) and Bayesian Networks (BN). In essence, we embed both
strong SRF and BN classifiers into a multi-layer deep architecture, where they cooperate to better learn
tumor features for our multi-label classification task. The proposed SRF-BN cooperative learning integrates
the complementary merits of the two classifiers: while SRF exploits structural and contextual image
information to perform classification at the pixel level, BN represents the statistical dependencies between
image components at the superpixel level. To further improve this SRF-BN cooperative learning, we
‘deepen’ this cooperation by proposing a multi-layer framework, wherein, at each layer, the BN takes as input the
original multi-modal MR images along with the probability maps generated by the SRF. Through this transfer of
learning from SRF to BN, the performance of BN improves. In turn, at the next layer, the SRF also benefits
from the learning of the BN by taking as input the BN segmentation maps along with the original multi-modal
images. With the exception of the first layer, both classifiers use the output segmentation maps produced
by the previous layer, in the spirit of auto-context models. We evaluated our framework on 50 subjects
with multi-modal MR images (FLAIR, T1, T1-c) to segment the whole tumor, its core, and the enhanced tumor.
Our segmentation results outperformed those of several comparison methods, including the independent
(non-cooperative) learning of SRF and BN.
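
The following is a minimal sketch of the layered SRF-BN cooperation described in the abstract, under stated assumptions: scikit-learn's RandomForestClassifier stands in for the Structured Random Forest (the actual SRF predicts structured label patches, which scikit-learn does not implement), BayesianNetworkSegmenter is a hypothetical placeholder for the superpixel-level BN classifier, and feature extraction and superpixel construction are omitted.

import numpy as np
from sklearn.ensemble import RandomForestClassifier


class BayesianNetworkSegmenter:
    """Hypothetical stand-in for the superpixel-level BN classifier.
    A real implementation would model the statistical dependencies
    between superpixels and run probabilistic inference over that graph."""

    def fit(self, features, labels):
        self.classes_ = np.unique(labels)
        return self

    def predict_proba(self, features):
        # Placeholder output: uniform class probabilities per voxel.
        n, k = features.shape[0], len(self.classes_)
        return np.full((n, k), 1.0 / k)


def train_cooperative_cascade(modalities, labels, n_layers=3):
    """Train a multi-layer SRF-BN cascade in the auto-context spirit.

    modalities: (n_voxels, n_features) per-voxel multi-modal intensities;
    labels: (n_voxels,) tumor labels. Layer 1 sees only the original
    images; from layer 2 onward, the SRF also sees the BN maps of the
    previous layer, while the BN always sees the SRF probability maps
    of the current layer (transfer of learning in both directions)."""
    layers, bn_maps = [], None
    for layer in range(n_layers):
        srf_in = modalities if layer == 0 else np.hstack([modalities, bn_maps])
        srf = RandomForestClassifier(n_estimators=100).fit(srf_in, labels)
        srf_maps = srf.predict_proba(srf_in)
        # The BN inputs the original images plus the SRF probability maps.
        bn_in = np.hstack([modalities, srf_maps])
        bn = BayesianNetworkSegmenter().fit(bn_in, labels)
        bn_maps = bn.predict_proba(bn_in)
        layers.append((srf, bn))
    return layers

At test time, the same cascade would presumably be run forward layer by layer, feeding each classifier's probability maps to the next, with the final maps converted into the whole-tumor, core, and enhanced-tumor labels.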