Abstract
The paper presents a patch classification method for the segmentation of flooded areas in aerial images. The color fractal dimension and color local binary patterns are proposed as patch descriptors, so that color and texture information are combined. The remote images were acquired with an Unmanned Aircraft System (MUROS) implemented by the authors' team. The segmentation algorithm has two phases: a learning phase and a segmentation phase. The class representative, created in the learning phase, consists of a set of intervals. The classification is made by a voting criterion that takes into account weights calculated from both descriptors. The results were obtained on 100 high-resolution images from the orthophotoplan created with images taken in a real mission. The segmentation accuracy was better than that of the separate approaches (fractal descriptor alone or local binary patterns alone).
1 Introduction
Floods cause significant and frequent economic damage in many domains and, especially, in agriculture (rural zones). For flood detection and evaluation over small areas, an Unmanned Aircraft System (UAS) is an excellent solution [1,2,3,4]. Compared with alternatives such as satellites, manned aircraft and helicopters, it has the following advantages: high ground resolution, low cost and the possibility of operating in cloudy weather. Therefore, we proposed UAS remote imaging tasks to detect flooded zones and to evaluate the damage in agriculture. The system is able to perform the following functions: (a) mission planning and its transmission on board the unmanned aerial platform, (b) pre-flight system configuration and verification, (c) UAV launching, (d) video data acquisition from cameras installed on board, (e) real-time transmission of video data to the GCS (Ground Control Station) at distance via the internet, (f) storage and display of video data at the GCS, (g) aircraft command, (h) control in several modes (manual, semiautomatic and automatic), (i) control data transmission from the GCS to the UAV and (j) mission review and on-board transmission during the flight. By programming an optimal flight path and using a high-resolution commercial camera, great accuracy can be obtained.
Aerial image interpretation is often based on separating the image into regions of interest that have a specific meaning for the application. This process is named image segmentation, and it is achieved as a result of pixel classification or, sometimes, patch classification [4]. Images from a UAV (Unmanned Aerial Vehicle) are characterized by textural aspects at different scales (water, buildings, forest, crops, soil, etc.) and therefore image segmentation by textural properties can be used. Many works address aerial texture segmentation with different methods, but relatively few deal with chromatic textures. The original Local Binary Pattern (LBP), the fractal dimension, and co-occurrence matrices do not encode color information, which is essential for flood detection. Color information can improve classification tasks [5,6,7,8]. Most authors compute texture features on separate color channels and interpret them separately. Color textural information can be successfully applied to a range of applications, including the segmentation of natural images, medical imaging and product inspection; this approach produces far superior results when it encompasses both color and texture information in an adaptive manner. Relatively recently, inter-correlated information from color channels was considered [5,6,7,8]. Thus, in [5, 6] the authors proposed algorithms to evaluate Haralick's features [9] from a special co-occurrence matrix that takes into account the occurrence of pixel values from two different color channels. For example, the authors in [8] use a so-called color co-occurrence matrix to extract second-order characteristics for road segmentation. The fractal dimension is widely used to capture intensity and texture information. In the past, the computation of the fractal dimension of images was defined only for binary and gray-scale images. Recently, a color version was proposed by Ivanovici and Richard [7] that captures information from the three color channels. The color fractal dimension was applied with good results in medical applications such as skin images of psoriasis lesions, prostate cancer [10] or melanoma.
The algorithm for the color fractal dimension [7] was considered as an extension of the differential box-counting algorithm for gray levels [11]. Based on the fact that DBC is an important descriptor of local texture [12], we used the color fractal dimension (CFD) to characterize local color texture in remote sensing. On the other hand, another important local descriptor of texture is the LBP histogram [13]. The authors in [14] extended the classical gray-scale LBP method and proposed a new methodology for LBP histogram evaluation for pairs of complementary color information.
In this paper we propose a new method, based on combining the spatial and color information provided by the fusion of fractal and LBP descriptors, for flood detection and segmentation on small areas such as farmland. The images are acquired by a UAS, implemented by the authors in the MUROS project [15], and processed at distance via GSM/internet. The contributions refer to the fractal dimension evaluation for color texture in a 5D space (color fractal dimension, CFD) and to flood segmentation based on a voting scheme which uses both CFD and color LBP. We extend LBP to the entire color space representation of the images.
2 Proposed Method
For flooded area detection we propose a method that decomposes the orthophotoplan, created from images taken by the UAV, into non-overlapping patches. Each patch is assigned a classification score that combines texture information from the color fractal dimension and the color LBP.
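As an illustration of the decomposition step, a minimal sketch in Python is given below, using the 128 × 128 patch size mentioned in Sect. 2.3; the function name and the discarding of incomplete border patches are illustrative assumptions, not the authors' implementation.

```python
def split_into_patches(image, patch_size=128):
    """Split an H x W x 3 image (e.g. a NumPy array) into non-overlapping square patches.

    Rows and columns that do not fill a complete patch are discarded (assumed handling).
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches
```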
2.1 Color Fractal Dimension
The color fractal dimension (CFD) can be considered as an extension of the Gray Level Fractal Dimension (GFD) used for the characterization of gray-level fractals and also of gray-level textures. For example, an algorithm to evaluate the GFD is the Differential Box-Counting (DBC) algorithm [11], which derives from the classical box-counting algorithm. Some works [4, 12] show that the GFD has good efficiency in the classification and segmentation of monochromatic textures. For the GFD calculation, a gray-level relief is created: the base of the relief is the 2D (matrix) spatial position of the pixels and the height is the gray level. This relief is covered with 3D boxes of size \( r \times r \times s \), where r is the division factor and s is the box height given by Eq. (1),
where \( I_{max} \) represents the maximum gray-level value and \( n \times n \) is the matrix dimension. Let \( p(i,j) \) be the maximum intensity value, Eq. (2), on the square \( S_r(i,j) \) of dimension \( r \times r \) centered in \( (i,j) \).
Similarly, we considered the minimum intensity value \( q(i,j) \), Eq. (3), on the square \( S_r(i,j) \) of dimension \( r \times r \) centered in \( (i,j) \).
For each r, the differences, Eq. (4), are summed over the entire image, Eq. (5), and DBC is calculated as in Eq. (6).
Next, the DBC algorithm behaves the same as the classical binary box-counting algorithm (log-log representation of \( N_r \) versus r; DBC is the slope of the regression line). Usually, n and r are powers of 2 and the logarithm has base 2.
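Since Eqs. (1)-(6) are referenced but not displayed here, a hedged reconstruction, consistent with the description above and with the standard DBC formulation of [11], is the following (the exact rounding convention in Eq. (4) is an assumption):

\begin{align}
  s &= \frac{r \, I_{max}}{n} \tag{1}\\
  p(i,j) &= \max_{(k,l) \in S_r(i,j)} I(k,l) \tag{2}\\
  q(i,j) &= \min_{(k,l) \in S_r(i,j)} I(k,l) \tag{3}\\
  n_r(i,j) &= \left\lceil \frac{p(i,j) - q(i,j)}{s} \right\rceil + 1 \tag{4}\\
  N_r &= \sum_{i,j} n_r(i,j) \tag{5}\\
  DBC &= \text{slope of the regression line of } \log N_r \text{ versus } \log(1/r) \tag{6}
\end{align}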
Let us consider the RGB color space. A pixel of the image \( [I] \) in this color space can be considered as a five-dimensional vector \( P(i,j) \), Eq. (7),
where \( (i,j) \) represents the pixel position in the matrix representation and \( R(i,j) \), \( G(i,j) \) and \( B(i,j) \) are the color components on the red, green and blue channels, respectively. The color-level relief is covered with hyper-boxes of size \( r \times r \times s_{R} \times s_{G} \times s_{B} \), where r is the division factor and \( s_{K} \), Eq. (8), is the box height on the color channel K (K = R, G, B).
In Eq. (8), \( I_{Kmax} \) represents the maximum intensity level on channel K and \( n \times n \) is the matrix dimension. Similarly to Eqs. (2) and (3), we considered the maximum, Eq. (9), and the minimum, Eq. (10), values on all color channels, on the square \( S_r(i,j) \).
The difference, Eq. (11), is used to calculate CBC, Eq. (12).
In the case of local texture classification by the fractal approach, CBC is approximated as the slope of the regression line in the log-log representation. So, CFD is considered as a local feature to classify the patches into flood, denoted water (W), and non-flood (nW), for example vegetation (V) and soil (S).
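As an illustration, the following sketch computes a color DBC-style dimension for a patch along the lines described above. Since Eqs. (7)-(12) are not reproduced here, the per-channel box counts and their aggregation over channels (a product is used below) are assumptions, not the authors' exact formulation.

```python
import numpy as np

def color_fractal_dimension(patch):
    """Estimate a color fractal dimension (CFD) of an n x n x 3 patch
    by a differential box-counting extension to the RGB channels.

    Assumption: the per-channel box counts are multiplied to obtain
    the number of hyper-boxes per spatial cell.
    """
    n = patch.shape[0]                       # patch is n x n x 3, n a power of 2
    i_max = 255.0                            # maximum intensity value per channel (8-bit)
    log_inv_r, log_counts = [], []
    r = 2
    while r <= n // 2:
        s = r * i_max / n                    # box height per channel, cf. Eq. (8)
        n_r = 0.0
        for y in range(0, n, r):
            for x in range(0, n, r):
                block = patch[y:y + r, x:x + r, :].reshape(-1, 3).astype(float)
                p = block.max(axis=0)        # per-channel maxima, cf. Eq. (9)
                q = block.min(axis=0)        # per-channel minima, cf. Eq. (10)
                boxes = np.floor((p - q) / s) + 1   # boxes per channel (assumed form)
                n_r += boxes.prod()                  # aggregation over channels (assumption)
        log_inv_r.append(np.log2(1.0 / r))
        log_counts.append(np.log2(n_r))
        r *= 2
    # CFD is the slope of the regression line in the log-log representation
    slope, _ = np.polyfit(log_inv_r, log_counts, 1)
    return slope
```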
2.2 Color Local Binary Patterns
It is well known that the LBP histogram is an efficient descriptor of textures [13]. Like the fractal dimension, the LBP histogram can be extended to the color representation of images. So, the color LBP histogram is considered as a local descriptor and is used for patch classification. The color histogram is obtained by concatenating the LBP histograms on the R, G and B color channels [14]. If the gray-level histogram is represented in 10 points, then the color LBP histogram (CLBP) contains 30 points.
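A minimal sketch of such a concatenated descriptor is given below, assuming scikit-image's local_binary_pattern with the uniform LBP variant (8 neighbors give 10 possible codes, hence a 10-bin histogram per channel and 30 bins in total); the LBP variant and the normalization are assumptions made for illustration.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def color_lbp_histogram(patch, points=8, radius=1):
    """Concatenate per-channel LBP histograms of an n x n x 3 patch.

    With the 'uniform' variant and points=8 there are 10 possible codes,
    so the result has 30 bins (10 per color channel).
    """
    histograms = []
    for channel in range(3):
        codes = local_binary_pattern(patch[:, :, channel], points, radius, method="uniform")
        hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
        histograms.append(hist)
    return np.concatenate(histograms)        # 30-point CLBP descriptor
```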
2.3 Image Segmentation
For image segmentation, both color local descriptors, CLBP and CFD, are taken into account by a joint voting scheme. The aerial images are first integrated into an orthophotoplan and then decomposed into non-overlapping boxes (patches) of 128 × 128 pixels. For each patch, CLBP and CFD are calculated.
There are two phases in the classification process: the learning phase and the testing phase. In the learning phase, for each class C, an interval between the minimum and maximum values, [Cmin, CMax], is created from the characteristics (feature values) calculated on a learning set of patches. In the CFD case this is a single interval, representative for the class C, while in the CLBP case it is a set of 30 intervals (one for each position in the concatenated histogram) which creates a channel inside the cumulative histogram representation (Fig. 1). The upper segments in the histogram from Fig. 1 represent the intervals [min, max] obtained from the learning patch values. In the cumulative histogram, the grouping interval is of 25 values.
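A hedged sketch of the learning phase follows: it builds the class representative as per-feature [min, max] intervals from the descriptors of the learning patches, reusing the hypothetical helpers color_fractal_dimension and color_lbp_histogram sketched above.

```python
import numpy as np

def learn_class_representative(learning_patches):
    """Build the class representative from a set of learning patches.

    Returns a single [min, max] interval for CFD and a set of 30
    [min, max] intervals (one per CLBP histogram bin).
    """
    cfd_values = np.array([color_fractal_dimension(p) for p in learning_patches])
    clbp_values = np.array([color_lbp_histogram(p) for p in learning_patches])
    cfd_interval = (cfd_values.min(), cfd_values.max())
    clbp_intervals = np.stack([clbp_values.min(axis=0), clbp_values.max(axis=0)], axis=1)
    return cfd_interval, clbp_intervals      # clbp_intervals has shape (30, 2)
```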
Based on these intervals, a voting scheme for classifying patches as belonging to class C is considered. First, for the CLBP criterion, a score counting the matches in the representative channel of this class, \( S_{CLBP}(C) \), is established for a testing patch P. Obviously, \( S_{CLBP}(C) \in [0,30] \), or \( S_{CLBP}(C) \in [0,1] \) in the normalized representation of Eq. (13).
A value of 1 represents a total matching of the patch CLBP histogram in the representative of the class C (30 matches).
For the CFD criterion we consider a maximum weight of 1 at the middle of the interval [min, max] and weights of 0.5 at both ends of the interval, min and max. So we can consider linear weights on the sub-intervals \( [\min, \frac{\min + \max}{2}] \), Eq. (14), and \( [\frac{\min + \max}{2}, \max] \), Eq. (15).
In Eqs. (14) and (15), x represents the CFD value of the tested patch. It can be seen that the maximum value \( S_{CFD} = 1 \) is obtained if \( x = \frac{\min + \max}{2} \). The total score for the class C is considered as the average of the two partial scores, Eq. (16).
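Since Eqs. (13)-(16) are referenced but not displayed here, a hedged reconstruction consistent with the textual description (the exact normalization and weight expressions are assumptions) is:

\begin{align}
  S_{CLBP}(C) &= \frac{\text{number of histogram positions matching the class intervals}}{30} \in [0,1] \tag{13}\\
  S_{CFD}(C)  &= 0.5 + \frac{x - \min}{\max - \min}, \qquad x \in \Big[\min, \tfrac{\min+\max}{2}\Big] \tag{14}\\
  S_{CFD}(C)  &= 0.5 + \frac{\max - x}{\max - \min}, \qquad x \in \Big[\tfrac{\min+\max}{2}, \max\Big] \tag{15}\\
  S(C)        &= \frac{S_{CLBP}(C) + S_{CFD}(C)}{2} \tag{16}
\end{align}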
If the total score for the class C is greater than 0.5 (the maximum score is 1), then P belongs to class C (W, V or S).
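Putting the pieces together, a minimal sketch of the voting classification, reusing the hypothetical helpers above and the 0.5 decision threshold stated in the text, could look as follows (the handling of a CFD value falling outside the class interval is an assumption):

```python
def classify_patch(patch, cfd_interval, clbp_intervals, threshold=0.5):
    """Return True if the patch is assigned to the class described by the intervals."""
    # CLBP score: fraction of the 30 histogram bins falling inside the class intervals
    hist = color_lbp_histogram(patch)
    matches = sum(lo <= value <= hi for value, (lo, hi) in zip(hist, clbp_intervals))
    s_clbp = matches / len(clbp_intervals)

    # CFD score: linear weight, 1 at the interval middle and 0.5 at both ends
    lo, hi = cfd_interval
    x = color_fractal_dimension(patch)
    if lo <= x <= hi:
        mid = (lo + hi) / 2
        s_cfd = 0.5 + (x - lo) / (hi - lo) if x <= mid else 0.5 + (hi - x) / (hi - lo)
    else:
        s_cfd = 0.0                          # outside the class interval (assumed handling)

    total = (s_clbp + s_cfd) / 2             # average of the two partial scores
    return total > threshold
```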
3 Experimental Results
For image acquisition, a fixed-wing UAV, designed by the authors in the MUROS project [15, 16], was used. The basic structure of the entire system (UAS) contains the following elements: MUROS unmanned aircraft (UAV), Ground Control Station (GCS) with internet connection, Ground Data Terminal (GDT) with internet connection to the GCS, data link and launcher (L). Its main features are: gyro-stabilized payload, automatic navigation, GIS support, extended operational range using multiple GCSs and GDTs, remote control via the internet and a mission planning software application. The camera characteristics are: 50 mm lens, 24.3 megapixels and 10 fps. The experimental model of the MUROS UAV and the payload with the camera for image acquisition are presented in Fig. 2.
In order to segment and evaluate the flood damage, the images taken from the UAV are concatenated into an orthophotoplan and then decomposed into patches of 128 × 128 pixels. In the learning phase, from the learning images (Fig. 3), a set of 20 patches (10 patches with flood, class W, and 10 patches with other regions of interest like vegetation (V) and soil (S), representing non-flood, class nW), as in Fig. 4, is considered for defining the class W in the segmentation process.
CFD and CLBP are calculated for each learning patch, and the corresponding intervals [min, max] for the W class are presented in Table 1 (CFD) and Table 3 (CLBP), respectively.
It can be observed that, in the learning phase, the classes W and nW are well separated. For the testing phase, a set of 100 patches in the W class and 100 patches in the nW class was used. A set of 15 example patches is presented in Fig. 5 and the corresponding CFD values in Table 2. Table 4 presents the result of the segmentation based on the separate scores (\( S_{CFD} \) and \( S_{CLBP} \)) and the total score (S(W)) for the patches presented in Fig. 5.
It can be seen that if only the separate scores are considered, with the same threshold of 0.5, the patches W2_T and S1_T are misclassified (gray color), but the total score classifies them correctly. If the patches identified as belonging to the class W are marked with white, the flood segmentation result is as presented in Fig. 6 (DSC4412_S, DSC4494_S).
4 Conclusions
Color information and texture analysis can be successfully used together for the segmentation and evaluation of small flooded areas from UAV-based images. The support for image acquisition was a UAS implemented by the authors' team. We introduced two local descriptors: the color fractal dimension, in a 5D space, and the color local binary patterns histogram, obtained by concatenating the LBP histograms on the color channels (R, G and B). For each descriptor, a representative of the flood class was established as a set of intervals. The number of matches or the position inside the intervals defines the score of the testing patch. The classification, which is based on a voting scheme that combines both scores, one based on CFD and the other on CLBP, gives better results than the separate criteria.
References
Feng, Q., Liu, J., Gong, J.: Urban flood mapping based on unmanned aerial vehicle remote sensing and random forest classifier-a case of Yuyao. China. Water 7, 1437–1455 (2015). doi:10.3390/w7041437
Tamminga, A.D., Eaton, B.C., Hugenholtz, C.H.: UAS-based remote sensing of fluvial change following an extreme flood event. Earth Surf. Proc. Land. 40, 1464–1476 (2015). doi:10.1002/esp.3728
Ahmad, A., Tahar, K.N., Udin, W.S., Hashim, K.A., Darwin, N., Hafis, M., Room, M., Hamid, N.F.A., Azhar, N.A.M., Azmi, S.M.: Digital aerial imagery of unmanned aerial vehicle for various applications. In: IEEE International Conference on Control System, Computing and Engineering (ICCSCE 2013), pp. 535–540 (2013)
Popescu, D., Ichim, L.: Image recognition in UAV application based on texture Analysis. In: Battiato, S., Blanc-Talon, J., Gallo, G., Philips, W., Popescu, D., Scheunders, P. (eds.) ACIVS 2015. LNCS, vol. 9386, pp. 693–704. Springer, Cham (2015). doi:10.1007/978-3-319-25903-1_60
Khelifi, R., Adel, M., Bourennane, S.: Multispectral texture characterization: application to computer aided diagnosis on prostatic tissue images. EURASIP J. Adv. Sig. Proc. 118, 1–13 (2012). doi:10.1186/1687-6180-2012-118
Losson, O., Porebski, A., Vandenbroucke, N., Macaire, L.: Color texture analysis using CFA chromatic co-occurrence matrices. Comput. Vis. Image Underst. 117, 747–763 (2013)
Ivanovici, M., Richard, N.: Fractal dimension of color fractal images. IEEE Trans. Image Process. 20, 227–235 (2011)
Popescu, D., Ichim, L., Gornea, D., Stoican, F.: Complex image processing using correlated color information. In: Blanc-Talon, J., Distante, C., Philips, W., Popescu, D., Scheunders, P. (eds.) ACIVS 2016. LNCS, vol. 10016, pp. 723–734. Springer, Cham (2016). doi:10.1007/978-3-319-48680-2_63
Haralick, R., Shanmugam, K., Dinstein, I.: Textural features for image classification. IEEE Trans. Syst. Man Cybern. 3, 610–621 (1973)
Yu, E., Monaco, J.P., Tomaszewski, J., Shih, N., Feldman, M., Madabhushi, A.: Detection of prostate cancer on histopathology using color fractals and probabilistic pairwise Markov models. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3427–3430 (2011)
Sarkar, N., Chaudhuri, B.B.: An efficient differential box-counting approach to compute fractal dimension of image. IEEE Trans. Syst. Man Cybern. 24, 115–120 (1994)
Chaudhuri, B.B., Sarkar, N.: Texture segmentation using fractal dimension. IEEE Trans. Pattern Anal. Mach. Intell. 17, 72–77 (1995)
Ojala, T., Pietikäinen, M., Harwood, D.: A comparative study of texture measures with classification based on feature distributions. Pattern Recogn. 29, 51–59 (1996)
Porebski, A., Vandenbroucke, N., Hamad, D.: LBP histogram selection for supervised color texture classification. In: IEEE International Conference on Image Processing, Melbourne, VIC, pp. 3239–3243 (2013)
MUROS - Teamnet International. http://www.teamnet.ro/grupul-teamnet/cercetare-si-dezvoltare/muros/
Popescu, D., Ichim, L., Stoican, F.: Unmanned aerial vehicle systems for remote estimation of flooded areas based on complex image processing. Sensors 17(3), 1–24 (2017). doi:10.3390/s17030446
Acknowledgements
The work has been funded by the Romanian National Authority for Scientific Research and Innovation, UEFISCDI, project SIMUL, number BG49/2016, and by the Data4Water H2020 TWINN 2015 project.