Abstract
Vine disease detection (VDD) is an important asset for predicting a probable contagion by viruses or fungi. Diseases that spread through the vineyard have a huge economic impact and are therefore considered a challenge for viticulture. Automatic detection and mapping of vine disease at an early stage can help limit its impact and reduce the use of chemicals. This study deals with the problem of locating symptomatic areas in images from an unmanned aerial vehicle (UAV) using the visible and infrared domains. This paper proposes a new method, based on segmentation by the SegNet convolutional neural network and a depth map (DM), to delineate the symptomatic regions in the vine canopy. The results obtained show that SegNet combined with the depth information gives better accuracy than a SegNet segmentation alone.
1 Introduction
In recent years, remote sensing with unmanned aerial vehicles (UAVs) for precision agriculture [1, 2] has become a rapidly progressing field of research, covering different agricultural applications [3], several types of data (visible, multispectral or hyperspectral) [4], and several crop types, notably viticulture [5].
Precision viticulture is an area of research that includes many applications, such as estimating growth [6], estimating evapotranspiration and crop coefficients [7], vigor evaluation [8], water stress localization [9] and disease detection [10,11,12,13,14,15,16,17,18,19].
Vine disease detection (VDD) is a key issue for reducing the use of phytosanitary products and increasing grape production. So far, several imaging systems have been investigated for VDD. Certain studies use images taken at the vine leaf level [10,11,12,13,14], with cameras that can be mounted on mobile robots. Other research is carried out on aerial images taken by drones at plot scale, targeting the vine canopies [15,16,17,18,19]. VDD at the canopy level requires isolating the vine from the background. In some research works, the isolation of the vine is carried out using vegetation indices (NDVI, ExG, ExGR, ...) and machine learning methods. However, these methods are not always effective, especially when the vine inter-rows are covered with green grass, which can be confused with the green color of the vine and leads to misclassification of the vine and the soil. To solve this problem, the authors in [20,21,22] have used 3-dimensional (3D) information to separate the soil and the vine by depth, using the digital surface model (DSM). Their results show the importance of 3D information. However, the combination of deep learning and 3D information remains little explored.
This paper presents a method based on deep learning segmentation, combined with 3D depth information, for VDD in UAV images of vineyards partially or totally covered with green or yellow grass. This method reduces confusion between the different classes (healthy vine, diseased vine, green grass, yellow grass, ...) and keeps detections only on the vine vegetation.
This paper is organized as follows: materials and methods are described in Sect. 2, experiments and results are presented in Sect. 3, discussion and interpretation in Sect. 4, and the conclusion in Sect. 5.
2 Materials and Methods
This section details the material and methods used for data acquisition, the proposed system, the design of the vine/non-vine mask (depth map, DM), the construction of the rasters, the deep learning segmentation method and, finally, the correction of the segmentation output.
2.1 Materials and Acquisition
The UAV which acquires the image data is a quadcopter-type drone. It has a flight autonomy of 20 min and embeds a GPS module and two image sensors of the MAPIR Survey2 model. The first sensor operates in the visible spectrum (RGB), and the second in the near-infrared spectrum, recording 3-band images (near infrared (NIR), red and Normalized Difference Vegetation Index (NDVI)). Both sensors have a high resolution of 16 megapixels (4608 \(\times \) 3456).
Data acquisition is performed by flying over the vineyard parcel at an altitude between 20 and 25 m, with an average speed of 10 km/h. The drone takes images of each area of the vine plot with a resolution of 1 cm\(^{2}\)/pixel and an overlap of 70% between successive images. The acquired images are recorded with their GPS coordinates.
2.2 Method Overview
The system proposed in this study (Fig. 1) consists of three phases. The first creates the visible and infrared mosaic images for a continuous view of the vineyard, and the depth map (DM). The second segments the visible and infrared images with the SegNet architecture and merges the VDD information. Finally, the last phase corrects the segmentation result using the DM.
2.3 Depth Map
The design of the DM is carried out in two main stages, performed on the images acquired by the drone in the two spectra (visible and infrared). Using the Agisoft Metashape software (version 1.5.5), the first stage generates the DSM and the visible and infrared rasters (mosaic images), through the following steps: sparse point cloud (tie points), dense point cloud, digital surface model (DSM) and orthomosaic (raster).
The second stage extracts the DM from the DSM. It uses the same procedure as in [21], which consists of three steps:

- DSM filtering: the DSM is filtered by a 20 \(\times \) 20 low-pass filter, chosen to smooth the image and keep only the digital terrain model (DTM).
- Subtraction of the DTM from the DSM: this step eliminates variations in the terrain and keeps only the height of the vines.
- Contrast enhancement and thresholding: the result of the subtraction has low contrast dynamics. For this reason, a histogram-based contrast enhancement method is applied to increase the level difference between vine and soil. Then, automatic thresholding (Otsu's method) is applied to obtain a binary DM.
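The three steps above can be sketched as follows, assuming the DSM has been exported as a 2D array; `uniform_filter`, `equalize_hist` and `threshold_otsu` stand in for the paper's low-pass filtering, histogram-based contrast enhancement and Otsu thresholding (an illustrative sketch, not the authors' code):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.exposure import equalize_hist
from skimage.filters import threshold_otsu

def depth_map_from_dsm(dsm):
    """Binary vine/non-vine depth map from a DSM (illustrative)."""
    # 1. A 20x20 low-pass (mean) filter smooths the DSM, approximating
    #    the digital terrain model (DTM).
    dtm = uniform_filter(dsm.astype(np.float64), size=20)
    # 2. Subtracting the DTM removes terrain variations and keeps
    #    only the height of the vines above the ground.
    height = dsm - dtm
    # 3. Histogram-based contrast enhancement, then automatic
    #    Otsu thresholding, yield the binary DM.
    stretched = equalize_hist(height)
    return stretched > threshold_otsu(stretched)
```

Any histogram-based stretch (e.g. percentile clipping) could replace `equalize_hist`; the paper does not specify which variant was used.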
2.4 Segmentation
In our previous study [16], a VDD method was proposed using a SegNet architecture. That study gave good results on two parcels with non-grassy (brown) soil. However, false disease detections were observed in the segmentation results. The aim of the present study is to introduce depth information to separate the vine vegetation from the soil (whatever the type of soil), filtering out the soil and thus reducing the detection errors.
2.5 Correction and Fusion
The aim of the correction phase is to reduce the errors of SegNet by using the fusion. Indeed, the segmentation result often presents confusion between the green grassy soil and the vine vegetation, or between the discolored grassy soil and the diseased vine. The correction phase proposed in this study takes as input the result of the SegNet segmentation by fusion and the binary DM (vine and non-vine). Each pixel of the two images is compared and corrected (if necessary) according to the rules described in Table 1.
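Since Table 1 is not reproduced here, the sketch below assumes the simplest correction rule consistent with the text: pixels the DM marks as non-vine are forced to a soil/background class, while vine pixels keep their fused SegNet label. The class codes are hypothetical:

```python
import numpy as np

# Hypothetical class codes; the actual rules are given in Table 1.
SOIL, HEALTHY, DISEASED = 0, 1, 2

def correct_segmentation(seg, dm):
    """Correct a fused SegNet label map with the binary depth map.

    seg: 2D integer label map from the SegNet fusion.
    dm:  2D boolean map, True where the DM indicates vine.
    Assumed rule: non-vine pixels are relabeled as soil; vine
    pixels keep their SegNet label.
    """
    corrected = seg.copy()
    corrected[~dm] = SOIL
    return corrected
```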
3 Experimentation and Results
This section presents the experimental procedures and the quantitative and qualitative results. The experiments were carried out in Python 2.7, using the Tensorflow 1.8.0 library for the SegNet architecture and GDAL 3.0.3 for reading and writing the rasters (whole view of the plot) and their GPS information. The operating system used was Linux Ubuntu 16.04 LTS (64 bits). The programs were executed on a machine with an Intel Xeon 3.60 GHz \(\times \) 8 processor, 32 GB of RAM, and an NVidia GTX 1080 Ti graphics card with 11 GB of internal RAM.
3.1 Depth Map
To compute the depth information, or relief, we used a depth-from-motion approach. The acquisition step is followed by a processing step, which consists in matching points between overlapping images. The matched points are represented by a sparse 3D point cloud, followed by a second processing step to obtain a dense 3D point cloud. The DM is obtained by processing the DSM, which is created from the dense point cloud. Figure 2 shows an example of a DM, a visible image and an infrared image.
3.2 Correction of SegNet Segmentation
SegNet segmentation is performed on raster images of size 12000 \(\times \) 20000 pixels. A non-overlapping sliding window of 360 \(\times \) 480 pixels is applied over the entire raster to segment each area of the parcel (visible and infrared spectra); this takes 45 min on average for each of them. Once the two rasters (visible and infrared) are segmented, they are merged by segmentation fusion. Then, the DM is applied to the segmented image to isolate the background class. One could instead subtract the background before the segmentation phase, but we found that SegNet is more precise when soil classes are used. Table 2 shows the quantitative results of the SegNet segmentation by fusion and of its correction by the DM. Figures 3 and 4 give examples of qualitative results on a healthy area with green grassy soil (Fig. 3) and on a diseased area with discolored grassy soil (Fig. 4).
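The non-overlapping sliding-window pass over a raster can be sketched as below; `model` is any callable mapping a 360 \(\times \) 480 window to a label map, standing in for the trained SegNet (which is not reproduced here):

```python
import numpy as np

def segment_raster(raster, model, win_h=360, win_w=480):
    """Segment a large raster with a non-overlapping sliding window.

    raster: (H, W, C) image array.
    model:  callable mapping a (win_h, win_w, C) window to a
            (win_h, win_w) label map -- stand-in for SegNet.
    Border strips smaller than one window are skipped in this sketch.
    """
    h, w = raster.shape[:2]
    labels = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h - win_h + 1, win_h):
        for x in range(0, w - win_w + 1, win_w):
            window = raster[y:y + win_h, x:x + win_w]
            labels[y:y + win_h, x:x + win_w] = model(window)
    return labels
```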
4 Discussion
This research work set out to develop efficient methods for vine disease detection. Table 2 shows the quantitative results obtained for the SegNet segmentation by fusion and for the segmentation corrected by the DM. The results are presented in terms of recall, precision, F1-score/Dice and accuracy, expressed as percentages. As shown in the accuracy column, the corrected method gives a better rate than the uncorrected one. This improvement is due to the correction of the soil areas and the vine vegetation, and to the reduction of the over-detection of diseased areas, which can be observed in the individual results of each class.
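The four scores can be computed per class from a predicted and a ground-truth label map as follows (an illustrative sketch; the values in Table 2 come from the authors' own evaluation):

```python
import numpy as np

def per_class_metrics(pred, truth, cls):
    """Recall, precision, F1/Dice and accuracy for one class."""
    tp = np.sum((pred == cls) & (truth == cls))  # true positives
    fp = np.sum((pred == cls) & (truth != cls))  # false positives
    fn = np.sum((pred != cls) & (truth == cls))  # false negatives
    tn = np.sum((pred != cls) & (truth != cls))  # true negatives
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return recall, precision, f1, accuracy
```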
Figures 3 and 4 show the qualitative results of applying the DM on a healthy area and on a diseased area, respectively. As can be seen in Fig. 3.c, the SegNet result by fusion produced several segmentation errors, in particular detections of vine and of symptoms on the soil. These errors are mainly due to the presence of green grass mixed with the light brown color of the soil, which resembles the color of vine disease. Figure 3.d shows an improvement of the segmentation after correction by the 3D information. Indeed, the correction brings a better distinction of the vine rows and reduces false detections of symptoms and vine vegetation on the grassy soil.
The second SegNet result, on an area contaminated by mildew (see Fig. 4.c), shows an over-detection of the symptomatic areas, which overflows onto the soil. This problem does not allow the real diseased area to be evaluated and, in some cases, can cause confusion between contaminated vine rows. After the correction (Fig. 4.d), the result shows a better interpretation and distinction of the vine rows, and symptoms are detected only on the vine, not on the soil.
5 Conclusion
This research work set out to develop efficient methods for vine disease detection. We have developed a new method based on a deep learning segmentation approach and a 3D depth map (DM). The method consists of three steps. The first mosaicks the visible and infrared pictures to obtain a whole view of the vineyard and its DM. The second segments and merges the visible and infrared rasters using the SegNet architecture. Finally, the third corrects the SegNet result using the DM. This study showed that the proposed method reduces false detections of the vine vegetation, the vine symptoms and the soil, and therefore gives a better precision and estimation of the disease map.
References
Sona, G., et al.: UAV multispectral survey to map soil and crop for precision farming applications. ISPRS - Int. Arch. Photogram. Remote Sens. Spat. Inf. Sci. XLI-B1(June), 1023–1029 (2016)
Barbedo, J.G.A.: A review on the use of unmanned aerial vehicles and imaging sensors for monitoring and assessing plant stresses. Drones 3(2), 40 (2019)
Otto, A., Agatz, N., Campbell, J., Golden, B., Pesch, E.: Optimization approaches for civil applications of unmanned aerial vehicles (UAVs) or aerial drones: a survey. Networks 72(4), 411–458 (2018)
Teke, M., Deveci, H.S., Haliloglu, O., Gurbuz, S.Z., Sakarya, U.: A short survey of hyperspectral remote sensing applications in agriculture. In: RAST 2013 - Proceedings of 6th International Conference on Recent Advances in Space Technologies, pp. 171–176, June 2013
Santesteban, L.G., Di Gennaro, S.F., Herrero-Langreo, A., Miranda, C., Royo, J.B., Matese, A.: High-resolution UAV-based thermal imaging to estimate the instantaneous and seasonal variability of plant water status within a vineyard. Agric. Water Manag. 183, 49–59 (2017)
Terrón, J.M., Blanco, J., Moral, F.J., Mancha, L.A., Uriarte, D., Marques Da Silva, J.R.: Evaluation of vineyard growth under four irrigation regimes using vegetation and soil on-the-go sensors. Soil 1(1), 459–473 (2015)
Vanino, S., Pulighe, G., Nino, P., de Michele, C., Bolognesi, S.F., D’Urso, G.: Estimation of evapotranspiration and crop coefficients of tendone vineyards using multi-sensor remote sensing data in a mediterranean environment. Remote Sens. 7(11), 14708–14730 (2015)
Mathews, A.J.: Object-based spatiotemporal analysis of vine canopy vigor using an inexpensive unmanned aerial vehicle remote sensing system. J. Appl. Remote Sens. 8(1), 085199 (2014)
Bellvert, J., Zarco-Tejada, P.J., Girona, J., Fereres, E.: Mapping crop water stress index in a ‘Pinot-noir’ vineyard: comparing ground measurements with thermal remote sensing imagery from an unmanned aerial vehicle. Precis. Agric. 15(4), 361–376 (2014). https://doi.org/10.1007/s11119-013-9334-5
Al-Saddik, H., Simon, J.C., Brousse, O., Cointault, F.: Multispectral band selection for imaging sensor design for vineyard disease detection: case of Flavescence Dorée. Adv. Anim. Biosci. 8(2), 150–155 (2017)
Al-Saddik, H., Laybros, A., Billiot, B., Cointault, F.: Using image texture and spectral reflectance analysis to detect Yellowness and ESCA in grapevines at leaf-level. Remote Sens. 10(4), 618 (2018)
Al-Saddik, H., Laybros, A., Simon, J.C., Cointault, F.: Protocol for the definition of a multi-spectral sensor for specific foliar disease detection: case of “Flavescence dorée". Methods Mol. Biol. 1875, 213–238 (2019)
Rançon, F., Bombrun, L., Keresztes, B., Germain, C.: Comparison of SIFT encoded and deep learning features for the classification and detection of ESCA disease in Bordeaux vineyards. Remote Sens. 11(1), 1–26 (2019)
MacDonald, S.L., Staid, M., Staid, M., Cooper, M.L.: Remote hyperspectral imaging of grapevine leafroll-associated virus 3 in cabernet sauvignon vineyards. Comput. Electron. Agric. 130, 109–117 (2016)
Kerkech, M., Hafiane, A., Canals, R.: Deep leaning approach with colorimetric spaces and vegetation indices for vine diseases detection in UAV images. Comput. Electron. Agric. 155(October), 237–243 (2018)
Kerkech, M., Hafiane, A., Canals, R.: Vine disease detection in UAV multispectral images using optimized image registration and deep learning segmentation approach. Comput. Electron. Agric. 174(Apr), 105446 (2020). https://doi.org/10.1016/j.compag.2020.105446
Albetis, J., et al.: Detection of Flavescence dorée grapevine disease using Unmanned Aerial Vehicle (UAV) multispectral imagery. Remote Sens. 9(4), 1–20 (2017)
Albetis, J., et al.: On the potentiality of UAV multispectral imagery to detect Flavescence dorée and Grapevine Trunk Diseases. Remote Sens. 11(1), 0–26 (2019)
de Gennaro, S.F., et al.: Unmanned Aerial Vehicle (UAV)-based remote sensing to monitor grapevine leaf stripe disease within a vineyard affected by esca complex. Phytopathologia Mediterranea 55(2), 262–275 (2016)
de Castro, A.I., Jiménez-Brenes, F.M., Torres-Sánchez, J., Peña, J.M., Borra-Serrano, I., López-Granados, F.: 3-D characterization of vineyards using a novel UAV imagery-based OBIA procedure for precision viticulture applications. Remote Sens. 10(4), 584 (2018)
Burgos, S., Mota, M., Noll, D., Cannelle, B.: Use of very high-resolution airborne images to analyse 3D canopy architecture of a vineyard. Int. Arch. Photogram. Remote Sens. Spat. Inf. Sci. - ISPRS Arch. 40(3W3), 399–403 (2015)
Cinat, P., Gennaro, S.F.D., Berton, A., Matese, A.: Comparison of unsupervised algorithms for vineyard canopy segmentation from UAV multispectral images. Remote Sens. 11(9), 1023 (2019)
Acknowledgment
This work is part of the VINODRONE project supported by the Region Centre-Val de Loire (France). We gratefully acknowledge Region Centre-Val de Loire for its support.
© 2020 Springer Nature Switzerland AG
Kerkech, M., Hafiane, A., Canals, R., Ros, F. (2020). Vine Disease Detection by Deep Learning Method Combined with 3D Depth Information. In: El Moataz, A., Mammass, D., Mansouri, A., Nouboud, F. (eds) Image and Signal Processing. ICISP 2020. Lecture Notes in Computer Science(), vol 12119. Springer, Cham. https://doi.org/10.1007/978-3-030-51935-3_9