Guidance Image-Based Enhanced Matched Filter with Modified Thresholding for Blood Vessel Extraction

1 Department of Electronics and Communication Engineering, Raghu Institute of Technology (A), Visakhapatnam 531162, India
2 Department of Computer Science and Engineering, Chandigarh University, Mohali 140413, India
3 Bio and Health Informatics Research Lab, Chandigarh University, Mohali 140413, India
4 Machine Learning and Data Science Research Lab, Chandigarh University, Mohali 140413, India
5 School of IT and Engineering, Melbourne Institute of Technology, Melbourne 3000, Australia
6 Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
7 Department of Computer Science, College of Arts and Science, Prince Sattam Bin Abdul Aziz University, Wadi Ad-Dwasir 11991, Saudi Arabia
8 Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
* Authors to whom correspondence should be addressed.
Symmetry 2022, 14(2), 194; https://doi.org/10.3390/sym14020194
Submission received: 13 November 2021 / Revised: 2 December 2021 / Accepted: 9 January 2022 / Published: 19 January 2022

Abstract

Fundus images have been established as an important factor in analyzing and recognizing many cardiovascular and ophthalmological diseases. Consequently, precise segmentation of blood vessels using computer vision is vital in the recognition of ailments. Although clinicians have adopted computer-aided diagnostics (CAD) in day-to-day diagnosis, it is still quite difficult to conduct fully automated analysis based exclusively on information contained in fundus images. In fundus image applications, one of the methods for conducting an automatic analysis is to ascertain symmetry/asymmetry details from corresponding areas of the retina and investigate their association with positive clinical findings. In the field of diabetic retinopathy, matched filters have been shown to be an established technique for vessel extraction. However, matched filters lose efficiency on noisy images. In this work, a joint model of a fast guided filter and a matched filter is suggested for enhancing abnormal retinal images containing low vessel contrasts. Extracting all information from an image correctly is one of the important factors in the process of image enhancement. A guided filter has an excellent edge-preserving property, but still tends to suffer from halo artifacts near the edges. Fast guided filtering is a technique that subsamples the filtering input image and the guidance image and calculates the local linear coefficients for upsampling. In short, the proposed technique applies a fast guided filter and a matched filter to attain improved performance measures for vessel extraction. The recommended technique was assessed on the DRIVE and CHASE_DB1 datasets and achieved accuracies of 0.9613 and 0.960, respectively, both of which are higher than the accuracy of the original matched filter and of other suggested vessel segmentation algorithms.

1. Introduction

As the retina is an extended version of the brain, it shares an embryological origin with the central nervous system. Therefore, automatic analysis of retinal images helps in understanding retinal and cardiovascular diseases and, in many cases, has significantly improved their clinical management. For the correct quantification of asymmetric or irregular patterns of vessel structures in retinal images, the symmetry or evenness of the vessel must be correctly recognized, either manually or automatically.
The automatic analysis and study of fundus images can identify important features related to alterations in blood vessel networks. The analysis of fundus images can be broadly categorized into two types of segmentation: optic disc (OD) segmentation and vessel segmentation. This analysis helps in detecting and managing several diseases, such as cardiovascular diseases [1], vessel occlusion [2], diabetic retinopathy [3], glaucoma [4,5], retinal disorders [6] and hypertension [7].
Therefore, vessel segmentation is a vital stage in algorithms for detecting retinal structures such as the fovea and the optic disc, and it supports the registration of images captured at various positions of the retina. These registered images are used in the automatic observation of the progression of certain ailments. Understanding the structure of the retinal vasculature decreases false positive identifications of lesions. The identification of vasculature divergence and crossover points assists ophthalmologists in carrying out improved analysis.
Fundus images are mainly derived from one of two methods: a color fundus camera or fluorescein angiography (FA) [8,9,10]. The edges of the vessels are distinctly visible in FA images, compared to images taken with a color fundus camera. However, FA is more complicated, and most available algorithms presume that a fundus image is noise- and lesion-free; they perform unsatisfactorily on anomalous fundus images and rely on various parameter configurations. Manual segmentation is complicated and uneconomical. Therefore, a low-cost automatic algorithm that does not rely on parameter configuration is required for screening greater numbers of vessel anomalies.
To solve the disadvantages and shortcomings of manual segmentation, a novel technical approach to segmenting blood vessels is suggested, which can efficiently decrease doctors’ obligations and mitigate the negative effects of segmenting manually. Simultaneously, in comparison with other existing algorithms, the suggested technique is superior in segmentation accuracy and in determining connectivity in the extraction of blood vessels.
Recently, convolutional neural networks have been applied increasingly in computer vision for biomedical image analysis [11,12,13,14,15,16,17,18,19,20,21,22,23,24]. However, precise segmentation is difficult for low-contrast blood vessels in the retina and requires access to a huge database for training, whereas an unsupervised learning technique requires no such extensive training.
In the literature, many image enhancement techniques are available. Among them is the guided filter, used for image enhancement and edge preservation. Many research studies have successfully implemented extensions of the guided filter. Rajput and Gaul recommended a guided filter technique based on a local linear model [25]. He et al. suggested a novel explicit-image guided filter that is used for edge-preserving via a smoothing operator [26]. He and Sun then extended the original guided filter to a fast guided filter [27]. Hasegawa et al. suggested that a reflected IR image can be used as a guide image derived from a reflected image [28].
Chierchia et al. used the guided filter to improve resolution in cases of small forgeries [29]. Zhu and Yu extended the guided filter to a self-guided filter, designed as an effective and simple method for denoising a single noisy image without introducing an additional guidance image [30]. Lu et al. extended the guided filter to effective guided image-filtering by incorporating local variances for all pixels, and then computing an amplification factor for the detail layer [31]. Cheng et al. suggested a novel technique for optic disc (OD) analysis by computing the cup-to-disc ratio (CDR) values, reporting that the guided filter is capable of improving fundus images [32].
Various techniques have been suggested in the literature for the segmentation of blood vessels. These techniques can mostly be categorized according to mathematical morphology, machine learning, tracing, edging, and filtering.
A prior knowledge of vessel shape is required for morphological approaches; a filtering process must be performed to separate the vessel from its background segment. Zana and Klein integrated morphological filters with curvature assessments to detach vessels from retinal images [33]. Fraz et al. utilized bit planes and central lines in the extraction of vessels requiring a high processing time [34].
There are two types of machine learning-based approaches: supervised and unsupervised [35,36,37]. Supervised approaches use certain previous labeling information to determine the association of pixels with a vessel or a non-vessel. Vessel segmentation in unsupervised approaches is carried out without any awareness of previous labeling.
After the detection of an edge, trace-based approaches map out the global network of the blood vessels by outlining the center lines of vessels. These approaches depend on the quality of the edge-detection result and incur a timing overhead [38].
Detectors such as the Prewitt, Canny, and Sobel operators are edge-based approaches. Tchindaa et al. proposed a classical edge-detection filter based on classical operators combined with artificial neural networks; however, the results obtained were not promising compared to other algorithms [39].
Many different types of filters, such as matched filters, Gabor filters, homomorphic filters, Jerman filters, and Coye filters, are used for vessel segmentation. Chaudhuri et al. recommended matched filters for segmenting blood vessels [40]. Dash et al. suggested an enhancement filter, using the Gabor filter to extract a blood vessel [41]. Dash et al. recommended illumination normalization, using a homomorphic filter for the extraction of blood vessels [42]. Cui et al. recommended an enhancement method using a Jerman filter to extract retinal vessels [43]. Ooi et al. recommended a vessel extraction approach, using a Canny edge detector [44]. Jiang et al. suggested a bilateral network with scale attention for the segmentation of vessels [45]. Dash et al. suggested a hybrid technique for the extraction of thin and thick vessels [46]. Kovacs and Fazekas recommended a new baseline for blood vessel segmentation [47]. Dash and Senapati improved the performance of the Coye filter by integrating it with discrete wavelet transform (DWT) for vessel extraction [48].
There have been many proposed extensions of matched filters. For example, Mudassar and Butt suggested four different techniques, one of which is blood vessel extraction using a modified matched filter [49]. Subudhi et al. improved the matched filter by combining it with a first-order derivative of Gaussian and particle swarm optimization (PSO) for vessel extraction [50]. Memari et al. reduced the noise of retinal images by using a morphological concept that combined a matched filter, a Gabor filter, and a Frangi filter, and used fuzzy C-means clustering and a level set method for segmenting the vessels [51]. AlSaeed et al. extended a matched filter to a multiscale matched filter by combining it with local features for the extraction of retinal vasculature [52]. Sreejini and Govindan implemented particle swarm optimization to determine optimal filter parameters for a multiscale Gaussian matched filter, to attain enhanced vessel segmentation [53]. Chakraborti et al. suggested a self-adaptive matched filter that combines a vesselness filter with high sensitivity and a matched filter with high specificity, designed via an orientation histogram [54]. Mohammad et al. suggested a matched filter with better filter parameters, using optimization to detect blood vessels [55]. L-Rawi and Karajeh suggested a genetic algorithm-based matched filter to optimize the parameters of the matched filter [56]. Hoover et al. recommended threshold probing by using the local and global vessel features of a matched filter for vessel segmentation [57]. Cinsdikici and Aydin considered an ant colony algorithm for the improvement of matched filter parameters for the extraction of the vessels [58]. Zhang et al. suggested a modified matched filter that employs local vessel cross-section analysis, using double-sided thresholding for vessel extraction [59].
A matched filter uses previous knowledge regarding vessel features, which can be approximated by a Gaussian function. However, the main drawback of a matched filter is that it responds to both vessel edges and non-vessel edges, such as the bright blob edges and lesions in fundus images.
Two common edge-preserving filters are the bilateral filter and the guided filter. Although both filters perform similar tasks, the bilateral filter has higher computational complexity. In contrast, the guided image filter has linear computational complexity. However, the original guided filter depends on a guidance image that slows the denoising process. Therefore, in this work, a fast guided filter is used to speed up the denoising process without any visible degradation.
Analyzing the strengths and drawbacks of recent techniques for improving low-contrast and small retinal vasculature, this work proposes a new segmentation algorithm based on a combined model of a fast guided filter and a matched filter, with mean-C thresholding. The algorithm combines the two filters, using the strengths of each to improve the delineation of the retinal vasculature. Extensive computations and comparisons demonstrate that the suggested approach for segmentation of retinal vasculature is better and more robust in comparison to many existing approaches.
The important contributions of this study are the following:
  • Initially, an edge-preserving guided filter was employed on the fundus image to enhance and preserve the blood vessels.
  • One of the special properties of a retinal image is that the vessels generally have small curvatures and can be approximated by piecewise linear segments. The matched filter concept identifies these piecewise linear segments of blood vessels. Therefore, in subsequent stages, a matched filter was employed on the guided enhanced images to detect small curvatures.
  • Mean-C thresholding was used in segmentation.
  • The experiment showed that the suggested method is simultaneously able to enhance and identify the curvatures of retinal images by properly preserving the edges, thereby achieving better results than those of the original matched filter.
Section 2 describes the materials and methodology. Section 3 provides and discusses the experimental results. Section 4 provides the conclusions based on the work.

2. Materials and Methods

2.1. Materials

2.1.1. Dataset

Publicly accessible datasets, Digital Retinal Images for Vessel Extraction (DRIVE) and the Child Heart and Health Study in England (CHASE_DB1), were used for the verification of the suggested segmentation approach [60,61].
These datasets also provided information on equivalent vessel segmentations that were carried out manually by various professionals; in this paper, such manual segmentation information is regarded as the ground truth.
Forty color fundus images are available in the DRIVE (DRV) database, arranged in two sets: a training set and a testing set. As the proposed approach is unsupervised, the training set was not considered here. The testing set contains 20 color fundus images, corresponding FOV masks, and manually segmented vessel structures (the ground truth). The resolution of the retinal images is 768 × 584 pixels.
The CHASE_DB1 (CDB) dataset contains 28 color fundus images. Each image in the database was manually segmented to its equivalent vessel tree by two professionals. The resolutions of the retinal images were 1280 × 960 pixels.

2.2. Methods

Generally, thin vessels are unclear in grayscale retinal images. The major limitation in vessel segmentation is the low contrast in local intensity. Vessel width and intensity vary significantly across images and obscure the boundaries of a vessel. Similarly, tiny vessels are mixed with Gaussian noise. Therefore, most of the previously suggested techniques in the literature have failed to identify vessels accurately. This shortcoming makes vessel segmentation a challenging task.
From the description of the fast guided filter and the sets of equations, it can be observed that a pixel from a high-variance region will preserve its value, whereas a pixel from an even region will be smoothed by its neighboring pixels. Thus, a few fine structures in nearly flat areas are smoothed away, to a degree determined by the averaging window.
A matched filter is an effective and simple technique for vessel extraction. A matched filter will respond to both vessel and non-vessel edges. Conversely, a guided filter is an operator with smoothing and preserving qualities that acts better when closer to the edges. These qualities suggest that an integrated model of a fast guided filter and a matched filter will provide vessel enhancement and extract vessels with precision by using the advantages of each filter. Figure 1 displays the three stages of the recommended approach.

2.2.1. Matched Filter

To identify the blood vessels, a Gaussian matched filter can be used, since the gray-level profile of the vasculature can be approximated by a Gaussian-shaped curve. The particulars of the matched filter can be found in [45]; a short explanation is set out below. The matched filter based on the Gaussian kernel function is described as follows:

$$P(m, n) = \exp\!\left(-\frac{m^2}{2\sigma^2}\right), \qquad |n| \le \frac{L}{2}, \quad |m| \le t\sigma, \quad t = 3,$$

where the normalized, zero-mean matched filter is defined as

$$Q(m, n) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{m^2}{2\sigma^2}\right) - B, \qquad |n| \le \frac{L}{2}, \quad |m| \le t\sigma,$$

$$B = \frac{1}{2t\sigma}\int_{-t\sigma}^{t\sigma} \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{m^2}{2\sigma^2}\right) dm.$$

Here, L is the length of the vessel segment over which the noise is smoothed, σ describes the spread of the intensity profile, and t is a constant fixed at 3.

For vessel identification, the kernel P(m, n) is rotated to various orientations and the maximum response of the filter bank is retained. Twelve kernels at 15° intervals are sufficient to identify the vessels with suitable accuracy. Since a Gaussian curve has infinitely long tails, the double-sided tails are truncated at u = ±3σ, with the neighborhood N denoted as

$$N = \left\{ (u, v) : |u| \le 3\sigma, \ |v| \le \frac{L}{2} \right\}.$$

The weights in kernel i (i = 1, …, 12, the number of kernels) are specified by

$$p_i(m, n) = \exp\!\left(-\frac{u^2}{2\sigma^2}\right), \qquad Z_i \in N.$$

If A is the number of points in N, the kernel mean value is calculated as

$$s_i = \frac{1}{A}\sum_{Z_i \in N} p_i(m, n).$$

Therefore, the convolution mask is

$$p_i'(m, n) = p_i(m, n) - s_i, \qquad Z_i \in N.$$
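A minimal numpy/scipy sketch of this filter bank follows; it is an illustration under stated assumptions, not the authors' released code. The defaults σ = 3 and L = 11 anticipate the parameter choices reported in Section 2.2.5, and the kernel is negated on the assumption that vessels appear darker than the background in the green channel (the zero-mean subtraction above makes the polarity a matter of convention).

```python
import numpy as np
from scipy.ndimage import convolve

def matched_filter_kernel(sigma=3.0, L=11, theta_deg=0.0, t=3):
    """Zero-mean Gaussian matched-filter kernel at one orientation."""
    half = int(np.ceil(t * sigma))
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.deg2rad(theta_deg)
    u = x * np.cos(theta) + y * np.sin(theta)    # across the vessel
    v = -x * np.sin(theta) + y * np.cos(theta)   # along the vessel
    support = (np.abs(u) <= t * sigma) & (np.abs(v) <= L / 2)
    kernel = np.where(support, -np.exp(-u**2 / (2 * sigma**2)), 0.0)
    kernel[support] -= kernel[support].mean()    # zero response on flat regions
    return kernel

def matched_filter_response(image, n_orientations=12, **kw):
    """Maximum response over 12 kernels rotated at 15 degree intervals."""
    img = image.astype(float)
    responses = [convolve(img, matched_filter_kernel(theta_deg=15 * i, **kw))
                 for i in range(n_orientations)]
    return np.max(responses, axis=0)
```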

2.2.2. Fast Guided Filter

A guided filter may be considered analogous to a bilateral filter, but with better performance near the edges. Theoretically, a guided filter can be linked to the Laplacian matrix. Thus, a guided filter has the additional advantage of using the guidance structure to generate an improved image, in contrast to an ordinary smoothing operator. Furthermore, its computational complexity does not depend on the kernel size of the filter, due to the fast and non-approximate linear-time algorithm of the guided filter.
In the fields of computer vision and computer graphics, the guided filter is highly powerful and effective for numerous types of applications, such as noise elimination, HDR compression, enhancement, haze removal, and joint upsampling.
A local linear model containing I as the guidance image, P as the filtering input image, and Q as the filtering output image provides the main hypothesis of a guided filter. Q is assumed to be a linear transform of I in a window $w_k$ centered at pixel k. The definition of the guided filter and its coefficients are set out below:

$$Q_i = c_k I_i + d_k, \qquad \forall i \in w_k,$$

where $(c_k, d_k)$ are linear coefficients assumed constant in the square window $w_k$ of radius r indexed by k. With input image P, minimizing the reconstruction error between P and Q yields the following result:

$$c_k = \frac{\frac{1}{|w|}\sum_{i \in w_k} I_i P_i - \mu_k \bar{P}_k}{\sigma_k^2 + \varepsilon},$$

$$d_k = \bar{P}_k - c_k \mu_k,$$

where $\mu_k$ and $\sigma_k^2$ are the mean and variance of I in the window, $\bar{P}_k$ is the mean of P in the window, and ε is the regularization parameter controlling the degree of smoothness.

After calculating $(c_k, d_k)$ for all windows $w_k$ in the image, the filter output is calculated as set out below:

$$Q_i = \frac{1}{|w|}\sum_{k : i \in w_k} \left( c_k I_i + d_k \right),$$

$$Q_i = \bar{c}_i I_i + \bar{d}_i,$$

where $\bar{c}_i = \frac{1}{|w|}\sum_{k \in w_i} c_k$ and $\bar{d}_i = \frac{1}{|w|}\sum_{k \in w_i} d_k$ are the average coefficients of all windows centered at i.
Algorithm 1 illustrates the steps followed by the guided filter. $f_{mean}$ denotes a mean filter, for which a wide variety of O(N)-time implementations exist.
However, the traditional guided filter depends heavily on the guidance image and fails to attain fast computation when performing image denoising. Therefore, a fast guided filter is recommended. A fast guided filter accelerates the computation from O(N) time to O(N/s²) time for a subsampling ratio s, achieving a speed-up of more than 10× in numerous applications without any visible degradation.
Algorithm 1. Algorithm for Guided Filter
  Input parameters: A is the input filtering image, P is the guidance image, r is the radius, and ε is the regularization.
  Output parameter: Q is the filtering output.
1. mean_P = f_mean(P)
   mean_A = f_mean(A)
   corr_P = f_mean(P .* P)
   corr_PA = f_mean(P .* A)
2. var_P = corr_P − mean_P .* mean_P
   cov_PA = corr_PA − mean_P .* mean_A
3. x = cov_PA ./ (var_P + ε)
   y = mean_A − x .* mean_P
4. mean_x = f_mean(x)
   mean_y = f_mean(y)
5. Q = mean_x .* P + mean_y
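The following is a minimal numpy/scipy sketch of the fast guided filter, following He and Sun [27]: the inputs are subsampled by the ratio s, the linear coefficients of Algorithm 1 are computed at low resolution, and the averaged coefficients are upsampled before being applied to the full-resolution guidance image. Notation follows Section 2.2.2 (I is the guidance, P the input); images are assumed to be float arrays in [0, 1]. This is an illustration under these assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def box_mean(img, r):
    # f_mean of Algorithm 1: mean over a (2r+1) x (2r+1) window.
    return uniform_filter(img, size=2 * r + 1, mode='reflect')

def fast_guided_filter(I, P, r=8, eps=0.1**2, s=2):
    I_s, P_s = I[::s, ::s], P[::s, ::s]      # subsample guidance and input
    r_s = max(r // s, 1)                     # scale the window radius too
    mean_I, mean_P = box_mean(I_s, r_s), box_mean(P_s, r_s)
    var_I = box_mean(I_s * I_s, r_s) - mean_I * mean_I
    cov_IP = box_mean(I_s * P_s, r_s) - mean_I * mean_P
    c = cov_IP / (var_I + eps)               # local linear coefficients
    d = mean_P - c * mean_I
    mean_c, mean_d = box_mean(c, r_s), box_mean(d, r_s)
    # Upsample the averaged coefficients back to full resolution (bilinear).
    scale = (I.shape[0] / mean_c.shape[0], I.shape[1] / mean_c.shape[1])
    mean_c = zoom(mean_c, scale, order=1)
    mean_d = zoom(mean_d, scale, order=1)
    return mean_c * I + mean_d
```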

2.2.3. Preprocessing

Retinal fundus images were used to detect ocular diseases automatically. However, the most challenging aspect of analyzing a fundus image is that the image quality is frequently degraded for several reasons. For instance, fundus image quality is reduced by a cataract in the human lens, just as a photograph's quality is reduced by a cloudy camera lens. Different databases contain fundus images of various pathological conditions, and the contents and characteristics of the images vary accordingly. Consequently, the overall image quality must be improved during the pre-processing stages. Therefore, the recommended technique is a unique combination of a guided filter with a matched filter for enhancing the performance measures of the retinal vessels.
Initially, the fast guided filter was employed to improve the overall qualities of the image. As retinal vessels are more distinguishable in the green component, with good contrast as compared to the blue and red components [36], in the next step only the green channel was applied to the matched filter and considered in the overall operation of extracting a vessel.
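Putting the two stages together, a short sketch of this preprocessing order is given below, reusing the matched_filter_response and fast_guided_filter sketches from the previous subsections. The choice of the image's own green channel as the guidance image is an assumption made here for illustration; the paper does not specify the guidance image for this stage.

```python
import numpy as np

def enhance_and_respond(rgb, r=10, s=2, eps=0.1**2):
    img = rgb.astype(float) / 255.0
    green = img[..., 1]   # green channel: best vessel contrast [36]
    # Self-guided filtering (guidance = input) is assumed here.
    enhanced = fast_guided_filter(green, green, r=r, eps=eps, s=s)
    return matched_filter_response(enhanced, sigma=3.0, L=11)
```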

2.2.4. Parameters of the Fast Guided Filter

As discussed in Section 2.2.2, parameters such as the radius of the window r, the regularization parameter ε , and the subsampling ratio s, were fixed as follows. The parameters were chosen empirically:
The regularization parameter was fixed as ε = 0.1 2 .
The subsampling ratio, s, of the fast guided filter was selected as s = r/4. Accordingly, the values of s and r were selected as follows.
For s = 1, r = 5, 6, 7, 8, 9, and 10; for s = 2, r = 6, 8, 10, 12, 14, and 16; for s = 3, r = 12, 15, 18, 21, 24, and 27;
for s = 4, r = 12, 16, 20, 24, 28, and 32; and for s = 5, r = 15, 20, 25, 30, 35, and 40.

2.2.5. Parameters of the Matched Filter

The matched filter was discussed in Section 2.2.1, where the important parameters were introduced: σ, the spread of the intensity profile, and L, the length of the segment for which the vessel orientation is presumed to be fixed. The value of L was determined by experimentally analyzing the vessels of both normal and abnormal retinas. In the suggested technique, L = 11 and σ = 3 were established as the best parameters, delivering the maximum response.

2.2.6. Mean Global-Based Hysteresis Thresholding

The new thresholding technique combines mean-C thresholding, hysteresis thresholding, and Otsu thresholding, as discussed below in detail.
I. Mean-C thresholding
Segmentation of a vessel and the elimination of background are the two stages carried out by mean-C thresholding. Initially, a mean image was produced through a convolution process to eliminate the background: the enhanced image was convolved with a mean filter of window size w. This average filter smoothed the background of images with low illumination. The filtered image was then subtracted from the enhanced image to generate a difference image. In the next step, binarization of the image was achieved by selecting a suitable threshold value c. The values of the parameters w and c were chosen empirically. A minimal sketch of this stage follows.
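This sketch assumes the enhanced image (the matched-filter response) is a float array; the defaults w = 13 and c = 0.03 anticipate the values selected in Section 3.1.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mean_c_threshold(enhanced, w=13, c=0.03):
    background = uniform_filter(enhanced, size=w)  # mean image
    difference = enhanced - background             # suppress slow background
    return (difference > c).astype(np.uint8)       # binarize at offset c
```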
II. Hysteresis thresholding
The global thresholding technique provides limited information about the neighborhood of the vessel; hence, it can result in poor detection of thin vessels, small disconnected vessels, and other distortions. To address these limitations, a local thresholding method can be used to consider the influence of neighboring vessel pixels. Heneghan et al. [50] applied hysteresis thresholding to extract the vessels from the background. Hysteresis thresholding segments an image using two threshold values, Tlow and THigh. Pixel values higher than THigh were assigned 1 and pixel values lower than Tlow were assigned 0; pixel values higher than Tlow that had at least one neighboring pixel greater than THigh were also set to 1; the remaining isolated pixels greater than Tlow were set to 0. Unlike mean-C thresholding, hysteresis thresholding considers the connectedness among neighboring pixels. As vessels are tree-like structures, consideration of the neighborhood relationship improved the segmentation result. The two threshold values, Tlow and THigh, were selected experimentally, with the constraint Imin < Tlow < THigh < Imax, where Imin and Imax were the lowest and highest intensity values, respectively, of the enhanced image. A connected-component sketch of this idea is given below.
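The sketch below implements hysteresis thresholding via connected components: weak pixels (> Tlow) survive only if their component contains at least one strong pixel (> THigh). Note that this standard formulation propagates through chains of weak pixels and is therefore slightly more permissive than the single-neighbor rule stated above; scikit-image ships the same behavior as skimage.filters.apply_hysteresis_threshold.

```python
import numpy as np
from scipy.ndimage import label

def hysteresis_threshold(img, t_low, t_high):
    weak = img > t_low
    strong = img > t_high                               # strong implies weak
    labels, n = label(weak, structure=np.ones((3, 3)))  # 8-connectivity
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True   # components touching a strong pixel
    keep[0] = False                          # background label
    return keep[labels].astype(np.uint8)
```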
III. Gray thresholding
The gray, global, or Otsu thresholding method is a simple, robust, and straightforward procedure for finding an optimal threshold value for images that have a bimodal histogram. As a fundus image contains the vessels and the background, many researchers have used the Otsu threshold value to extract the vessel. The method finds a suitable threshold value Tglobal by maximizing the between-class variance (11) between the foreground (C0) and the background (C1):

$$T_{global} = \arg\max \sigma_B^2.$$
The between-class variance $\sigma_B^2$ is defined as

$$\sigma_B^2 = w_0 (\mu_0 - \mu_T)^2 + w_1 (\mu_1 - \mu_T)^2,$$

where $w_0$ and $w_1$ are the class probabilities, defined as

$$w_0 = \sum_{i=1}^{T_{global}} p_i \qquad \text{and} \qquad w_1 = \sum_{i=T_{global}+1}^{L} p_i,$$

where L is the maximum intensity value.

The class means $\mu_0$, $\mu_1$ and the total mean $\mu_T$ are defined as follows:

$$\mu_0 = \frac{1}{w_0}\sum_{i=1}^{T_{global}} i\, p_i, \qquad \mu_1 = \frac{1}{w_1}\sum_{i=T_{global}+1}^{L} i\, p_i, \qquad \mu_T = \sum_{i=1}^{L} i\, p_i.$$
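A compact histogram-based sketch of this maximization, assuming a grayscale image scaled to [0, 1] and a 256-bin histogram:

```python
import numpy as np

def otsu_threshold(img):
    hist, bin_edges = np.histogram(img.ravel(), bins=256, range=(0.0, 1.0))
    p = hist.astype(float) / hist.sum()
    omega0 = np.cumsum(p)                  # class probability w0(t)
    mu = np.cumsum(p * np.arange(256))     # cumulative first moment
    mu_T = mu[-1]
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_B = (mu_T * omega0 - mu) ** 2 / (omega0 * (1.0 - omega0))
    t = int(np.nanargmax(sigma_B))
    return 0.5 * (bin_edges[t] + bin_edges[t + 1])  # threshold in image units
```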
IV. Mean Global-Based Hysteresis (MGBH) thresholding
The mean global-based hysteresis (MGBH) thresholding process binarizes the image by setting the vessel pixels to 1 and the background pixels to 0. The proposed method first eliminated the background and then estimated the vessel. Although gray values were unevenly distributed in an image, pixels lying in a uniform neighborhood could still be considered background. A mean filter of window size w was convolved with the optic disc segmented image, and the filtered output was subtracted from the latter to generate a difference image. Therefore, the background pixels' intensity values in the difference image were close to 0 (i.e., the background appeared darker compared to the vessels). Next, to extract the vessel, two threshold values were used, denoted Tglobal and Tlow. Tglobal is the gray threshold value calculated using the Otsu method, as discussed above; Tlow was chosen empirically. To estimate the vessel, each pixel intensity of the difference image was compared with Tglobal and Tlow. If the pixel's intensity value was higher than Tglobal, it was set to 1; if the pixel intensity was lower than Tlow, it was replaced by 0. If the pixel intensity was between Tlow and Tglobal, and there was at least one pixel in its 8-neighborhood greater than Tglobal, then that pixel was replaced by 1; otherwise it was set to 0.
The algorithm of the proposed thresholding method is described below:
  • An averaging filter of window size w × w was applied on the fast guided and matched filter transformed image.
  • The difference image was generated by subtracting the average filtered image from the enhanced image.
  • Two threshold values, Tglobal and Tlow, were generated.
  • Tglobal was the gray threshold value calculated by the Otsu method, and Tlow was experimentally selected.
  • Each pixel value of the difference image was compared with Tglobal and Tlow:
    • If the pixel gray value was higher than Tglobal, it was replaced by 1;
    • If the pixel gray value was lower than Tlow, it was replaced by 0;
    • If the pixel value was between Tlow and Tglobal and had at least one pixel in its 8-neighborhood greater than Tglobal, then it was also replaced by 1;
    • Otherwise, the pixel value was replaced by 0.
Algorithm 2 represents the steps followed by the suggested MGBH thresholding technique; a vectorized sketch follows the listing.
Algorithm 2. Algorithm of MGBH thresholding
Input: Optic disc removed image (Iod)
Output: Vessel segmented image (Iseg)
Parameters: H: Horizontal dimension of image (Iod)
      V: Vertical dimension of image (Iod)
      Tlow: 0.013
      Thigh: 0.155
Start
i = 1, j = 1
for i < H do
  for j < V do
    if Iod(i, j) < Tlow
      Iseg(i, j) ← 0;
    else if Iod(i, j) > Thigh
      Iseg(i, j) ← 1;
    else if Iod(i, j) > Tlow && any pixel of NB8(Iod(i, j)) > Thigh
      Iseg(i, j) ← 1;
    else
      Iseg(i, j) ← 0;
    j ← j + 1;
  end for
  i ← i + 1;
end for
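A vectorized sketch of Algorithm 2 follows. It is equivalent to the raster scan above because the 8-neighborhood test reads the original image Iod, not the evolving output; the neighborhood check is implemented with a 3 × 3 maximum filter. Thigh here plays the role of Tglobal from the MGBH description.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def mgbh_threshold(i_od, t_low=0.013, t_high=0.155):
    strong = i_od > t_high
    weak = i_od > t_low
    # True where at least one pixel in the 3x3 neighborhood exceeds T_high.
    near_strong = maximum_filter(strong.astype(np.uint8), size=3) > 0
    return (strong | (weak & near_strong)).astype(np.uint8)
```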

2.2.7. Post-Processing

In the post-processing step, morphological cleaning operations were carried out; area opening was used to remove non-vessel components. The area opening used the 8-connectivity rule and was independent of structuring elements, so none were used in this step. Area opening removed from the binary image all connected components containing fewer pixels than a fixed threshold. The same operation was applied to all the images. A one-line sketch with scikit-image is given below.
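A minimal sketch of this cleaning step using scikit-image's remove_small_objects; the min_size value is an assumed placeholder, as the paper only states that a fixed pixel count was used.

```python
from skimage.morphology import remove_small_objects

def clean_mask(binary_mask, min_size=30):   # min_size: assumed value
    # connectivity=2 corresponds to the 8-connectivity rule in the text.
    return remove_small_objects(binary_mask.astype(bool),
                                min_size=min_size, connectivity=2)
```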
Figure 2 and Figure 3 illustrate the enhanced images obtained for the DRV and CDB datasets by using the fast guided filter with different values of the subsampling ratio and radius, respectively. Figure 4a and Figure 5a represent the green channel images of the DRV and CDB datasets, respectively. Figure 4b and Figure 5b represent the vessel extracted via the matched filter for DRV and CDB databases, respectively. Figure 4c–g and Figure 5c–g display the vessel extracted via the integrated technique of fast guided filter and the matched filter for DRV and CDB databases, respectively. From the figures, it is clearly observed that the suggested integrated model delivered better enhanced images than did the traditional matched filter.

3. Results and Discussion

As in most of the previous studies, in this work the ground-truth segmentation images given by the first observer in the CDB database and in the test set of the DRV database were considered for comparison with the resulting segmented images of the suggested approach. As FOV masks were unavailable in the CDB dataset, they were generated automatically so that the suggested technique was well-matched across databases; the FOV masks provided with the datasets were not used in the suggested work. The performance of the traditional matched filter on the DRV dataset is given in Table 1, and its performance on the CDB dataset is displayed in Table 3. Table 2 shows the performance of the suggested integrated approach on the DRV database, and Table 4 displays the performance of the suggested integrated scheme on the CDB database.
The selected parameters for the matched filter have been discussed in Section 2.2.5, and the fixed parameters for the fast guided filter in Section 2.2.4. As it was difficult to allocate the best values to the parameters of the fast guided filter, the values were chosen experimentally based on knowledge of the types of data. The best parameters of the fast guided filter were decided according to the highest results achieved for the performance measures.

3.1. Performance Measures

For quantitative evaluation and comparison with other segmentation techniques, the performance measures of the suggested algorithm included sensitivity (Sn), detection accuracy (Ac), and specificity (Spc). As most of the previous research considered these three measures, state-of-the-art methods could be easily compared by selecting them. Several other performance metrics exist, such as the Jaccard index, the Dice coefficient, and the Matthews correlation coefficient. Performance measures were evaluated on each of the DRV and CDB databases. For each image, the evaluation metrics were computed, and the average over all images was calculated.
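For reference, a short sketch of the three reported measures computed from a binary segmentation and its ground truth, restricted to the FOV mask:

```python
import numpy as np

def performance(seg, gt, fov):
    seg, gt = seg[fov].astype(bool), gt[fov].astype(bool)
    tp = np.sum(seg & gt)          # vessel pixels correctly detected
    tn = np.sum(~seg & ~gt)        # background correctly rejected
    fp = np.sum(seg & ~gt)
    fn = np.sum(~seg & gt)
    sn = tp / (tp + fn)                       # sensitivity
    spc = tn / (tn + fp)                      # specificity
    ac = (tp + tn) / (tp + tn + fp + fn)      # accuracy
    return sn, ac, spc
```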
For the DRIVE database, the original matched filter had a sensitivity of 0.6025, an accuracy of 0.9301, and a specificity of 0.9537, as shown in Table 1. The suggested technique attained an improved sensitivity of 0.7043, an accuracy of 0.9613, and a specificity of 0.9890 on the DRIVE database, as shown in Table 2. These results demonstrate that the performance measures of the proposed approach are better than those of the traditional matched filter.
For the CDB dataset, the original matched filter achieved a sensitivity of 0.6156, an accuracy of 0.9296, and a specificity of 0.9426, as shown in Table 3. The recommended method accomplished an improved sensitivity of 0.7153, an accuracy of 0.9605, and a specificity of 0.9816 on the CDB database, as shown in Table 4. The results for the CDB database likewise establish that the performance measures of the proposed method improve upon those of the traditional matched filter.
The different parameters for the suggested thresholding technique are discussed below. The value of THigh was fixed at 0.155 for both datasets; the value of Tlow must be less than THigh. For the selection of c and w, c was varied from 0.02 to 0.04 and the window size w from 11 to 15, while Tlow was varied from 0.010 to 0.037 in increments of 0.003. An experiment was conducted to determine the values of Tlow, c, and w, and the accuracy was computed for each setting. Table 5 presents the computed accuracies for the different parameter values. The parameters were fixed at Tlow = 0.013, c = 0.03, and w = 13.

3.2. Comparison with State-of-the-Art Methods

Compared with previous approaches, the suggested approach based on a combined matched filter and fast guided filter was more effective and delivered better outputs. In the normal gradient vector field, a difference appeared between the vessel pixels and the pathological tissues. Accordingly, pixels that gave a brighter impression than their neighbors were treated as non-vessel pixels, and better performance was observed in specificity but not in sensitivity. Table 6 illustrates the comparison of the suggested approach with other state-of-the-art methods. Because the objective of the recommended approach was to improve the original matched filter, most of the comparisons were carried out with the existing matched filters available in the literature. It is clear that the recommended approach delivers higher performance measures than the state-of-the-art methods.
Figure 6a, Figure 7a and Figure 8a represent the original images of retinas 2 and 4 from the DRV database and retina 1 from the CDB database. Figure 6b, Figure 7b and Figure 8b represent the corresponding ground truth images. Figure 6c, Figure 7c and Figure 8c show the vessels extracted using the original matched filter. Figure 6d–h, Figure 7d–h and Figure 8d–h display the vessels extracted using the proposed method for various parameter values of the fast guided filter. From the figures, it is observed that the suggested technique is capable of identifying more thin vessels, with fewer false detections, compared with the performance of the original matched filter.

4. Conclusions

This paper has proposed a novel extension of the matched filter technique by integrating a fast guided filter with a matched filter for the enhancement of fundus images. A new thresholding technique is also proposed, which combines hysteresis thresholding, mean-C thresholding, and Otsu thresholding for vessel extraction, assessed on the DRV and CDB databases. The results are promising and compare favorably with those achieved using similar existing methods. The results illustrate that the recommended approach can identify neovascular nets and remove various non-vessel edges produced by brighter lesions, as required for screening of proliferative diabetic retinopathy (PDR), since the long curved edges of brighter lesions are otherwise prone to being incorrectly segmented as neovascular nets.
In the future, a deep learning technique will be combined with different enhancement techniques to locate the thin vessels more prominently and to improve the robustness of the algorithm.

Author Contributions

Conceptualization, S.D., S.V., K. and S.B.; data curation, K., S.B., M.W. and J.S.; formal analysis, S.D., J.S., S.V., M.F.I. and M.W.; funding acquisition, M.F.I. and M.W.; investigation, S.D., S.V., K. and S.B.; methodology, S.D., S.V., K., S.B., M.W., J.S. and M.F.I.; project administration, M.F.I. and M.W.; resources, M.F.I., M.W. and J.S.; software, S.D., S.V., K. and S.B.; supervision, J.S., S.V., M.F.I. and M.W.; validation, S.D., S.V., K., S.B., J.S., M.F.I. and M.W. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge contribution to this project from the Rector of the Silesian University of Technology under a proquality grant no. 09/010/RGJ22/0068.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analysed in this study.

Acknowledgments

Jana Shafi would like to thank the Deanship of Scientific Research, Prince Sattam Bin Abdul Aziz University, for supporting this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kanski, J.J. Clinical Ophthalmology: A Systematic Approach; Butterworth Heinemann: Boston, MA, USA, 2007; Volume 629. [Google Scholar]
  2. Leung, H.; Wang, J.J.; Rochtchina, E.; Wong, T.Y.; Klein, R.; Mitchell, P. Impact of current and past blood pressure on retinal arteriolar diameter in an older population. J. Hypertens. 2004, 22, 1543–1549. [Google Scholar] [CrossRef] [PubMed]
  3. Ciulla, T.; Amador, A.; Zinman, B. Diabetic retinopathy and diabetic macular edema pathophysiology, screening, and novel therapies. Diabetes Care 2003, 26, 2653–2664. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Nath, M.K.; Dandapat, S. Differential entropy in wavelet sub-band for assessment of glaucoma. Int. J. Imaging Syst. Techol. 2012, 22, 161–165. [Google Scholar] [CrossRef]
  5. Nath, M.K.; Dandapat, S. Multiscale ICA for fundus image analysis. Int. J. Imaging Syst. Technol. 2013, 23, 327–337. [Google Scholar] [CrossRef]
  6. Nath, M.K.; Dandapat, S.; Barna, C. Automatic detection of blood vessels and evaluation of retinal disorder from colour fundus images. J. Intell. Fuzzy Syst. 2020, 38, 1–12. [Google Scholar]
  7. Mitchell, P.; Leung, H.; Wang, J.J.; Rochtchina, E.; Lee, A.J.; Wong, T.Y.; Klein, R. Retinal vessel diameter and open-angle glaucoma: The blue mountains eye study. Ophthalmology 2005, 112, 245–250. [Google Scholar] [CrossRef] [PubMed]
  8. Giancardo, L.; Meriaudeau, F.; Karnowski, T.P.; Li, Y.; Tobin, K.W.; Chaum, E. Automatic retina exudates segmentation without a manually labelled training set. In Proceedings of the 2011 IEEE International Symposium Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 1396–1400. [Google Scholar] [CrossRef] [Green Version]
  9. Sreejini, K.S.; Govindan, V.K. Severity grading of DME from retina images: A combination of PSO and FCM with bayes classifier. Int. J. Comput. Appl. 2013, 81, 11–17. [Google Scholar]
  10. Sreejini, K.S.; Govindan, V.K. Automatic grading of severity of diabetic macular edema using color fundus images. In Proceedings of the Third International Conference on Advances in Computing and Communications (ICACC), Cochin, India, 12 December 2013; pp. 177–180. [Google Scholar]
  11. Ghosh, G.; Kavita; Anand, D.; Verma, S.; Rawat, D.B.; Shafi, J.; Marszałek, Z.; Woźniak, M. Secure Surveillance Systems Using Partial-Regeneration-Based Non-Dominated Optimization and 5D-Chaotic Map. Symmetry 2021, 13, 1447. [Google Scholar] [CrossRef]
  12. Sood, M.; Verma, S.; Panchal, V.K.; Kavita. Optimal Path Planning using Swarm Intelligence based Hybrid Techniques. J. Comput. Theor. Nanosci. (JCTN) 2019, 16, 3717–3727. [Google Scholar] [CrossRef]
  13. Li, Z.; Verma, S.; Jin, M. Power Allocation in Massive MIMO-HWSN Based on the Water-Filling Algorithm. Wirel. Commun. Mob. Comp. 2021, 2021, 8719066. [Google Scholar] [CrossRef]
  14. Chudhery, M.A.Z.; Safdar, S.; Huo, J.; Rehman, H.-U.; Rafique, R. Proposing and Empirically Investigating a Mobile-Based Outpatient Healthcare Service Delivery Framework Using Stimulus–Organism–Response Theory. IEEE Trans. Eng Mang. 2021, 1–14. [Google Scholar] [CrossRef]
  15. Panigrahi, R.; Borah, S.; Bhoi, A.K.; Ijaz, M.F.; Pramanik, M.; Jhaveri, R.H.; Chowdhary, C.L. Performance Assessment of Supervised Classifiers for Designing Intrusion Detection Systems: A Comprehensive Review and Recommendations for Future Research. Mathematics 2021, 9, 690. [Google Scholar] [CrossRef]
  16. Rani, G.; Oa, M.G.; Dhaka, V.S.; Pradhan, N.; Verma, S.; Joel, J.P.C. Applying deep learning-based multi-modal for detection of coronavirus. Multi. Syst. 2021, 18, 1–24. [Google Scholar] [CrossRef]
  17. Sharma, T.; Verma, S. Kavita. Prediction of heart disease ussing Cleveland dataset: A machine learning approach. Int. J. Rec. Res. Asp. 2017, 4, 17–21. [Google Scholar]
  18. Sharma, T.; Srinivasu, P.N.; Ahmed, S.; Alhumam, A.; Kumar, A.B.; Ijaz, M.F. An AW-HARIS based automated segmentation of human liver using CT images. CMC-Comp. Mater. Contin. 2021, 69, 3303–3319. [Google Scholar]
  19. Singh, A.P.; Pradhan, N.R.; Luhach, A.K.; Agnihotri, S.; Jhanjhi, N.Z.; Verma, S.; Ghosh, U.; Roy, D.S. A novel patientcentric architectural frame work for blockchain-enabled health care applications. IEEE Trans. Ind. Inform. 2021, 17, 5779–5789. [Google Scholar] [CrossRef]
  20. Li, W.; Chai, Y.; Khan, F.; Jan, S.R.U.; Verma, S.; Menon, V.G.; Li, X.A. Comprehensive survey on machine learning-based big data analytics for IOT-enabled healthcare system. Mob. Netw. Appl. 2021, 26, 234–252. [Google Scholar] [CrossRef]
  21. Ijaz, M.F.; Attique, M.; Son, Y. Data-Driven cervical cancer prediction model with outlier detection and over sampling methods. Sensors 2020, 20, 2809. [Google Scholar] [CrossRef] [PubMed]
  22. Srinivasu, P.N.; SivaSai, J.G.; Ijaz, M.F.; Bhoi, A.K.; Kim, W.; Kang, J.J. Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM. Sensors 2021, 21, 2852. [Google Scholar] [CrossRef] [PubMed]
  23. Mandal, M.; Singh, P.K.; Ijaz, M.F.; Shafi, J.; Sarkar, R. A Tri-stage Wrapper-Filter feature selection framework for disease classification. Sensors 2021, 21, 5571. [Google Scholar] [CrossRef] [PubMed]
  24. Ijaz, M.F.; Alfian, G.; Syafrudin, M.; Rhee, J. Hybrid Prediction Model for Type 2 diabetes and hypertension using DBSCAN based outlier detection. Synthetic Minority Over Sampling Technique (SMOTE), and Random Forest. Appl. Sci. 2018, 8, 1325. [Google Scholar] [CrossRef] [Green Version]
  25. Rajput, N.; Gaul, D.S. Guided filter technique: Various aspects in image processing. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 28, 1768–1783. [Google Scholar]
  26. He, K.; Sun, J.; Tang, X. A technique for guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1–13. [Google Scholar] [CrossRef]
  27. He, K.; Sun, J. Fast guided filter. arXiv 2015, arXiv:1505.00996. [Google Scholar]
  28. Hasegawa, T.; Tomizawa, R.; Yamauchi, Y.; Yamashita, T.; Fujiyoshi, H. Guided filtering using reflected IR image for improving quality of depth image. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP 2016), Rome, Italy, 27–29 February 2016; pp. 33–39. [Google Scholar] [CrossRef] [Green Version]
  29. Chierchia, G.; Cozzolino, D.; Poggi, G.; Sansone, C.; Verdoliva, L. Guided filtering for PRNU-based localization of small-size image forgeries. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2014), Florence, Italy, 4–9 May 2014; pp. 6231–6235. [Google Scholar]
  30. Zhu, S.; Yu, Z. Self-guided filter for image denoising. IET Imaging Process. 2020, 14, 2561–2566. [Google Scholar] [CrossRef]
  31. Lu, Z.; Long, B.; Li, K.; Lu, F. Effective guided image filtering for contrast enhancement. IEEE Signal Proc. Lett. 2018, 25, 1585–1589. [Google Scholar] [CrossRef]
  32. Cheng, J.; Li, Z.; Gu, Z.; Fu, H.; Wong, D.W.K.; Li, J. Structure-preserving guided retinal image filtering and its application for optic disc analysis. IEEE Trans. Med. Imaging 2018, 10, 2536–2546. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Zana, F.; Klein, J.-C. Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation. Imaging Process. IEEE Trans. 2001, 10, 1010–1019. [Google Scholar] [CrossRef] [Green Version]
  34. Fraz, M.M.; Barman, S.A.; Remagnino, P.; Hoppe, A.; Basit, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, G.C. An approach to localize the retinal blood vessels using bit planes and centerline detection. Comput. Methods Prog. Biomed. 2012, 108, 600–616. [Google Scholar] [CrossRef]
  35. Mahapatra, S.; Jena, U.; Dash, S. Curvelet Transform and ISODATA thresholding for retinal vessel extraction. In Proceedings of International Conference on Communication, Circuits, and Systems; Springer: Berlin/Heidelberg, Germany, 2021; pp. 195–203. [Google Scholar] [CrossRef]
  36. Kande, G.B.; Venkata, P.S.; Savithri, S.T. Unsupervised fuzzy based vessel segmentation in pathological digital fundus images. J. Med. Syst. 2010, 4, 849–858. [Google Scholar] [CrossRef]
  37. Vlachos, M.; Dermatas, E. Multi-scale retina vessel segmentation using line tracking. Comp. Med. Imaging Graph. 2010, 34, 213–227. [Google Scholar] [CrossRef]
  38. Tchindaa, B.S.; Tchiotsopa, D.; Noubomb, M.; Dorrc, V.L.; Wolf, D. Retinal blood vessels segmentation using classical edge detection filters and the neural network. Inform. Med. Unlocked 2021, 23, 100521. [Google Scholar] [CrossRef]
  39. Chaudhuri, S.; Chatterjee, S.; Katz, N.; Nelson, M.; Goldbaum, M. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans. Med. Imaging 1989, 8, 263–269. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Dash, J.; Parida, P.; Bhoi, N. Retinal vessel extraction from fundus images using enhancement filtering and clustering. Electron. Comp. Vision Imaging Anal. 2020, 19, 38–52. [Google Scholar]
  41. Dash, S.; Senapati, M.R.; Sahu, P.K.; Chowdary, P.S.R. Illumination normalized based technique for retinal blood vessel segmentation. Int. J. Imaging Syst. Technol. 2020, 31, 1–13. [Google Scholar] [CrossRef]
  42. Cui, H.; Xia, Y.; Zhang, Y. 2D and 3D vascular structures enhancement via improved vesselness filter and vessel enhancing diffusion. IEEE Access. 2019, 7, 123969–123979. [Google Scholar] [CrossRef]
  43. Dash, S.; Senapati, M.R. Enhancing detection of retinal blood vessels by combined approach of DWT, Tyler Coye and Gamma correction. Biomed. Signal Process. Control 2020, 57, 101740. [Google Scholar] [CrossRef]
  44. Ooi, A.Z.H.; Embong, Z.; Hamid, A.I.A.; Zainon, R.; Wang, S.L.; Ng, T.F.; Hamzah, R.A.; Teoh, S.S.; Ibrahim, H. Interactive blood vessel segmentation from retinal fundus image based on canny edge detector. Sensors 2021, 21, 6380. [Google Scholar] [CrossRef]
  45. Jiang, Y.; Yao, H.; Ma, Z.; Zhang, J. Bi-SANet—Bilateral network with scale attention for retinal vessel segmentation. Symmetry 2021, 13, 1820. [Google Scholar] [CrossRef]
  46. Dash, S.; Verma, S.; Khan, M.; Wozniak, M.; Shafi, J.; Ijaz, M.F. A hybrid method to enhance thick and thin vessels for blood vessel segmentation. Diagnostics 2021, 11, 2017. [Google Scholar] [CrossRef]
  47. Kovacs, G.; Fazekas, A. A new baseline for retinal vessel segmentation: Numerical identification and correction of methodological inconsistencies affecting 100+ papers. Med. Imaging Anal. 2021, 75, 102300. [Google Scholar] [CrossRef] [PubMed]
  48. Mudassar, A.A.; Butt, S. Extraction of blood vessels in retinal image using four different techniques. J. Med. Image 2013, 2013, 408120. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Subudhi, A.; Pattnaik, S.; Sabut, S. Blood vessel extraction of diabetic retinopathy using optimized enhanced images and matched filter. J. Med. Imaging 2016, 3, 044003. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Memari, N.; Ramli, A.R.; Saripan, M.I.B.; Mashohor, S.; Moghbel, M. Retinal blood vessel segmentation by using matched filtering and fuzzy C-means clustering with integrated level set method. J. Med. Biolog. Eng. 2019, 39, 713–731. [Google Scholar] [CrossRef] [Green Version]
  51. AlSaeed, D.; Bouridane, A.; Jafri, R.; Al-Ghreimil, N.; Al-Baity, H.H.; Alhudhud, G. A novel blood vessel extraction using multiscale matched filters with local features and adaptive thresholding. Biosci. BioTechol. Res. Commun. 2020, 13, 1104–1113. [Google Scholar] [CrossRef]
  52. Sreejini, K.S.; Govindan, V.K. Improved multiscale matched filter for retinal vessel segmentation using PSO algorithm. Egypt. Inform. J. 2015, 16, 253–260. [Google Scholar] [CrossRef] [Green Version]
  53. Chakraborti, T.; Jha, D.K.; Chowdhury, A.S.; Jiang, X. A self-adaptive matched filter for retinal blood vessel detection. Mach. Vision Appl. 2014, 26, 1–14. [Google Scholar] [CrossRef]
  54. Mohammad, A.R.; Munib, Q.; Muhammad, A. An improved matched filter for blood vessel detection of digital retinal images. Comp. Biol. Med. 2007, 37, 262–267. [Google Scholar]
  55. L-Rawi, M.; Karajeh, H. Genetic algorithm matched filter optimization for automated detection of blood vessels from digital retinal images. Comp. Methods Programs Biomed. 2007, 87, 248–253. [Google Scholar] [CrossRef]
  56. Hoover, A.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210. [Google Scholar] [CrossRef] [Green Version]
  57. Cinsdikici, M.G.; Aydin, D. Detection of blood vessels in ophthalmoscope images using mf/ant (matched filter/ant colony) algorithm. Comp. Methods Programs Biomed. 2009, 96, 85–95. [Google Scholar] [CrossRef] [PubMed]
  58. Zhang, L.; Li, Q.; You, J.; Zhang, D. A modified matched filter with double-sided thresholding for screening proliferative diabetic retinopathy. IEEE Trans. Inform. Techol. Biomed. 2009, 13, 528–534. [Google Scholar] [CrossRef] [PubMed]
  59. Heneghan, C.; Flynn, J.; O’Keefe, M.; Cahill, M. Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis. Med. Image Anal. 2002, 6, 407–429. [Google Scholar] [CrossRef]
  60. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef]
  61. Owen, C.G.; Rudnicka, A.R.; Mullen, R.; Barman, S.A.; Monekosso, D.; Whincup, P.H.; Ng, J.; Paterson, C. Measuring retinal vessel tortuosity in 10-year-old children: Validation of the computer-assisted image analysis of the retina (CAIAR) program. Ophthalmol. Vis. Sci. 2009, 50, 2004–2010. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Block diagram of the recommended approach.
Figure 2. Samples of the enhanced images generated for retina 2 from the DRV dataset by applying the fast guided filter with different values: (a) original image, (b) guided image at s = 1, r = 6, (c) guided image at s = 2, r = 10, (d) guided image at s = 3, r = 18, (e) guided image at s = 4, r = 28, (f) guided image at s = 5, r = 25.
Figure 3. Samples of the enhanced images generated for retina 1 from the CDB dataset by applying the fast guided filter with different values: (a) original image, (b) guided image at s = 1, r = 6, (c) guided image at s = 2, r = 10, (d) guided image at s = 3, r = 18, (e) guided image at s = 4, r = 28, (f) guided image at s = 5, r = 25.
Figure 4. Samples of the effect of the integrated model of a fast guided filter and a matched filter for retina 2 from the DRV dataset with different values: (a) green channel image, (b) vessel extracted from the traditional matched filter, (c–g) vessel extracted from the fast guided filter at s = 1, r = 6; s = 2, r = 10; s = 3, r = 18; s = 4, r = 28; and s = 5, r = 25.
Figure 5. Examples of the effect of the integrated model of a fast guided filter and a matched filter for retina 1 from the CDB dataset with different values: (a) green channel image, (b) vessel extracted from the traditional matched filter, (c–g) vessel extracted from the fast guided filter at s = 1, r = 6; s = 2, r = 10; s = 3, r = 18; s = 4, r = 28; and s = 5, r = 25.
Figure 6. Resulting segmented images on retina 2 from the DRV dataset, obtained through the integrated technique of the fast guided filter and the matched filter for various values: (a) original image; (b) ground truth image; (c) original matched filter; (d–h) segmented images for various values, such as at s = 1, r = 6; s = 2, r = 10; s = 3, r = 18; s = 4, r = 28; and s = 5, r = 25.
Figure 7. Resulting segmented images on retina 4 from the DRV dataset, obtained through the integrated technique of the fast guided filter and the matched filter for various values: (a) original image; (b) ground truth image; (c) original matched filter; (d–h) segmented images for various values, such as at s = 1, r = 6; s = 2, r = 10; s = 3, r = 18; s = 4, r = 28; and s = 5, r = 25.
Figure 8. Resulting segmented images on retina 1 from the CDB dataset, obtained through the integrated technique of the fast guided filter and the matched filter for various values: (a) original image; (b) ground truth image; (c) original matched filter; (d–h) segmented images for various values, such as at s = 1, r = 6; s = 2, r = 10; s = 3, r = 18; s = 4, r = 28; and s = 5, r = 25.
Table 1. Evaluated results of the performance measures for the DRV dataset using the original matched filter.

Retinal Image   Sn         Ac         Spc
1               0.652615   0.935422   0.955843
2               0.648358   0.932319   0.954067
3               0.586325   0.92253    0.952038
4               0.577753   0.932474   0.95722
5               0.540858   0.931416   0.961093
6               0.602628   0.924282   0.960513
7               0.578639   0.927306   0.95283
8               0.564393   0.919887   0.949272
9               0.567892   0.930607   0.960517
10              0.583006   0.93401    0.956754
11              0.572955   0.925894   0.948486
12              0.620253   0.927919   0.949866
13              0.554698   0.925467   0.956691
14              0.662106   0.930816   0.948576
15              0.611671   0.931461   0.942728
16              0.543486   0.930143   0.954699
17              0.592309   0.927191   0.953884
18              0.61276    0.93425    0.950661
19              0.709035   0.94322    0.957427
20              0.668411   0.93571    0.952476
Average         0.60250    0.93011    0.95372
Table 2. Evaluated results of the performance measures for the DRV dataset using the recommended technique.

Retinal Image   Sn         Ac         Spc
1               0.72697    0.964186   0.988121
2               0.741699   0.959064   0.991171
3               0.656036   0.959321   0.989514
4               0.688212   0.962248   0.996576
5               0.674313   0.959749   0.994222
6               0.688274   0.960888   0.98844
7               0.673521   0.95813    0.991324
8               0.677735   0.95905    0.980273
9               0.691451   0.960676   0.988787
10              0.680402   0.959322   0.991595
11              0.696131   0.963739   0.993439
12              0.695683   0.96474    0.98635
13              0.667311   0.959558   0.992345
14              0.728905   0.968353   0.984411
15              0.785297   0.961192   0.988134
16              0.682723   0.960867   0.991298
17              0.677656   0.959843   0.983896
18              0.743536   0.958771   0.988546
19              0.786161   0.966266   0.989534
20              0.7245     0.960565   0.983755
Average         0.7043     0.9613     0.9890
Table 3. Evaluated results of the performance measures for the CDB dataset using the original matched filter.

Retinal Image   Sn         Ac         Spc
1               0.597623   0.934228   0.959463
2               0.603603   0.922594   0.937231
3               0.610407   0.928822   0.952509
4               0.588445   0.915251   0.934539
5               0.616166   0.925796   0.918664
6               0.629086   0.922185   0.944564
7               0.591414   0.937424   0.966651
8               0.621842   0.932651   0.925845
9               0.633895   0.931858   0.941342
10              0.621896   0.938575   0.927733
11              0.668227   0.928021   0.937018
12              0.615647   0.925638   0.948131
13              0.608667   0.934479   0.95321
14              0.61161    0.936905   0.950602
Average         0.6156     0.9296     0.9426
Table 4. Evaluated results of the performance measures for the CDB dataset using the recommended technique.

Retinal Image   Sn         Ac         Spc
1               0.698532   0.959214   0.980642
2               0.68785    0.960164   0.976853
3               0.678693   0.962176   0.979854
4               0.68954    0.958357   0.980634
5               0.758635   0.967277   0.978648
6               0.698491   0.960751   0.984740
7               0.695218   0.958514   0.97771
8               0.728492   0.962301   0.976691
9               0.757438   0.958165   0.981965
10              0.686517   0.963421   0.980794
11              0.767439   0.960126   0.985526
12              0.688618   0.957816   0.977582
13              0.724268   0.961421   0.97387
14              0.7547     0.958142   0.976738
Average         0.7153     0.9605     0.9816
Table 5. Variations in accuracy (Ac) with the various values of Tlow, c, and w.

Tlow    Ac (c = 0.02, w = 11)   Ac (c = 0.03, w = 13)   Ac (c = 0.04, w = 15)
0.010   0.9556                  0.9533                  0.9501
0.013   0.9599                  0.9615                  0.9565
0.016   0.9598                  0.9611                  0.9600
0.019   0.9590                  0.9601                  0.9601
0.022   0.9585                  0.9587                  0.9532
0.025   0.9576                  0.9601                  0.9599
0.028   0.9533                  0.9600                  0.9577
0.031   0.9600                  0.9557                  0.9585
0.034   0.9547                  0.9546                  0.9600
0.037   0.9564                  0.9564                  0.9601
Table 6. Comparison in performance of the suggested method with state-of-the-art methods.

Approach                      Year   Sn (DRV)   Sn (CDB)   Spc (DRV)       Spc (CDB)   Ac (DRV)   Ac (CDB)
Dash et al. [41]              2020   0.7203     0.6454     0.9871          0.9799      0.9581     0.9609
Dash and Senapati [43]        2020   0.7403     --         0.9905          --          0.9661     --
AlSaeed et al. [51]           2020   0.6312     --         0.9817          --          0.9353     --
Memari et al. [50]            2019   0.761      0.738      0.981           0.968       0.961      0.93
Subudhi et al. [49]           2016   0.3451     --         0.9716          --          0.911      --
Sreejini and Govindan [52]    2015   0.7132     --         0.9866          --          0.9633     --
Chakraborti et al. [53]       2014   0.7205     --         0.9579          --          0.9370     --
Cinsdikici and Aydin [57]     2009   --         --         --              --          0.9407     --
Mohammad et al. [54]          2007   --         --         0.9513          --          --         --
Rawi and Karajeh [55]         2007   --         --         0.9422/0.9582   --          --         --
Original matched filter       --     0.60250    0.6156     0.95372         0.9426      0.93011    0.9296
Proposed approach             --     0.7043     0.7153     0.9890          0.9816      0.9613     0.9605
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
