The use of magnetic resonance-ultrasound fusion targeted biopsy improves diagnosis of aggressive prostate cancer. Fusion of ultrasound and magnetic resonance imaging (MRI) requires accurate prostate segmentations. In this paper, we developed a 2.5-dimensional deep learning model, ProGNet, to segment the prostate on T2-weighted MRI. ProGNet is an optimized U-Net model that weights three adjacent slices in each MRI sequence to segment the prostate in a 2.5D context. We trained ProGNet on 529 cases in which experts annotated the whole gland (WG) on axial T2-weighted MRI prior to targeted prostate biopsy. In 132 cases, experts also annotated the central gland (CG). After five-fold cross-validation, we found that for WG segmentation, ProGNet had a mean Dice similarity coefficient (DSC) of 0.91±0.02, sensitivity of 0.89±0.03, specificity of 0.97±0.00, and accuracy of 0.95±0.01. For CG segmentation, ProGNet achieved a mean DSC of 0.86±0.01, sensitivity of 0.84±0.03, specificity of 0.99±0.01, and accuracy of 0.96±0.01. We then tested the generalizability of the model on the 60-case NCI-ISBI 2013 challenge dataset and on a local, independent 61-case test set. We achieved DSCs of 0.81±0.02 and 0.72±0.02 for WG and CG segmentation on the NCI-ISBI 2013 challenge dataset, and 0.83±0.01 and 0.75±0.01 for WG and CG segmentation on the local dataset. ProGNet outperformed state-of-the-art U-Net and holistically nested edge detector (HED) networks on all three datasets.
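As a minimal illustration of the two ideas named above, the sketch below shows (a) how a 2.5D input can be formed by stacking each axial slice with its two neighbors as channels, and (b) how the Dice similarity coefficient is computed between binary masks. The function names, the boundary-clamping convention at the first and last slices, and the toy array shapes are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def make_25d_input(volume, idx):
    """Stack the axial slice at `idx` with its two neighbors as channels.
    Clamping at the volume boundaries is an assumed convention."""
    n = volume.shape[0]
    below = volume[max(idx - 1, 0)]
    center = volume[idx]
    above = volume[min(idx + 1, n - 1)]
    return np.stack([below, center, above], axis=0)  # shape: (3, H, W)

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a synthetic T2-weighted volume and a perfect prediction
volume = np.random.rand(24, 256, 256).astype(np.float32)
x = make_25d_input(volume, idx=10)            # (3, 256, 256) network input
mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:150, 100:150] = 1
print(x.shape, dice_coefficient(mask, mask))  # (3, 256, 256) 1.0
```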