Abstract
The use of thermal images of selected areas of the head in screening systems, which perform fast and accurate analysis of the temperature distribution of individual areas, requires profiled image analysis methods. Methods for automated face analysis exist that are used at airports or train stations to detect people with fever; however, they do not enable automatic separation of specific areas of the face. This paper presents an image analysis algorithm which localizes characteristic areas of the face in thermograms. The algorithm is robust to inter-subject variability as well as to changes in the position and orientation of the head. In addition, an attempt was made to eliminate the impact of the background and of interference caused by hair and the hairline. The algorithm automatically adjusts its operating parameters to the prevailing room conditions. Compared to previous studies (Marzec et al., J Med Inform Tech 16:151–159, 2010), the set of thermal images was expanded by 34 images. As a result, the research material comprised a total of 125 patients' thermograms acquired in the Department of Pediatrics and Child and Adolescent Neurology in Katowice, Poland. The images were taken interchangeably with several thermal cameras: AGEMA 590 PAL (sensitivity of 0.1 °C), ThermaCam S65 (sensitivity of 0.08 °C), A310 (sensitivity of 0.05 °C) and T335 (sensitivity of 0.05 °C), each with a detector resolution of 320 × 240 pixels, maintaining the principles of medical thermography. The approach presented in (Marzec et al., J Med Inform Tech 16:151–159, 2010) has been extended and modified. A comparison with other methods in the literature demonstrated that this method is more comprehensive, as it determines the approximate areas of selected parts of the face using anthropometry. This comparison yielded better localization accuracy for the centers of the eye sockets and nostrils: 87 % for the eyes and 93 % for the nostrils.
1 Introduction
The issue of detecting characteristic face areas in thermovision concerns the analysis of thermal images of the head in screening systems which make a rapid initial selection of people with fever. Thermal cameras indicate that the currently observed individual has an elevated temperature, which may mean that the person is ill. Such systems may be used at airports, ports, train stations, border crossings and schools in order to detect people with fever and limit the spread of diseases [25]. The first attempts to apply the method in practice took place at airports in China and Singapore during the SARS epidemic in the years 2002/2003. A similar solution was also introduced in studies conducted in hospitals in the United States, where thermovision was used to confirm whether a patient had a fever [26]. Thermal imaging is now widely used in various branches of medicine, both in disease diagnosis and in the treatment process [6, 27].
2 Related works
In the literature, there exist studies concerning the use of head thermograms in medical applications, for example in localizing disease-related changes in the head [19]. In these applications, localization of the face and its characteristic points is always the first step of a complex process, and the effectiveness of the whole system depends largely on its results. Face segmentation is also a crucial element of biometric systems [8, 24].
Therefore, several studies have addressed the issue of face segmentation using algorithms for image analysis and processing. Their authors, however, consistently point out that face segmentation is more difficult in thermograms than in visible-light images [11, 21].
In paper [22], the authors present a method for locating the eyes and nostrils in a thermogram. The algorithm uses the Haar wavelet to detect pixel clusters around the eyes and nostrils, and SVM or GentleBoost classifiers (better results were obtained for the latter) to classify points. In this analysis, the face area is initially divided into smaller "sub-windows" of a pre-specified size: 16 × 16 pixels for the detector and 32 × 32 pixels for the classifier. Next, the windows are analyzed using wavelets of appropriately selected sizes. The final stage described by the authors is the classification of the results using the GentleBoost classifier. The nostril localization process is similar; however, the authors draw attention to the difficulties in detecting these points. To determine the position of the nostrils, they again apply the Haar wavelet and the GentleBoost classifier, and the whole process is the same as for eye detection, only with a different training set for the nostrils. In the presented solution, the areas of interest are marked only in the form of rectangles. In another paper [35], the authors assume that, in the case of thermograms, the most important information in the image lies where the signal is the strongest, i.e. in places with brightness changes. To detect these points, a special detector based on the intensity of image pixels was used. On the basis of its results, groups of points with the greatest intensity changes are designated. In the next step, these points are clustered (using the k-means method) and their assignment to particular characteristic regions is determined. On the basis of their coordinates, the authors marked rectangular areas of interest including the eyes and nostrils.
In conclusion, a significant problem in the described systems is fully automatic image analysis (performed without operator intervention) that determines characteristic points of the face in combination with precise identification of areas of interest. Therefore, the primary objectives of this study were to present the major features of the proposed method, evaluate the efficiency of the proposed algorithm (using the evaluation method described in [22]) and compare the results to [22]. Another element was to consider the use of this method for fever detection (whether the tested subject has a fever) by measuring temperature in selected areas. The possibility of automatic segmentation of face areas (defined both by characteristic points and their corresponding regions) makes it possible to consider using this method in screening systems, which could meaningfully improve the process of selecting people with elevated temperature in the head area.
The first version of the proposed method and its preliminary results were presented by the authors in [23], where the problems that may occur during thermogram segmentation, and attempts to eliminate them, were signaled. In this paper, some of the algorithm blocks have been developed and modified so as to achieve the greatest efficiency and versatility. In order to compare the results with reference solutions, the method for evaluating the effectiveness of facial characteristic point localization from [22] was implemented, which made it possible to assess the effectiveness on the test set of thermograms. Moreover, the use of the proposed solution as a comprehensive system for detecting people with elevated temperature in the head area was considered. For this reason, the greater part of the Methodology section is similar to [23].
3 Material–collection of thermograms
The research material was a total of 125 patients' thermograms acquired in the Department of Pediatrics and Child and Adolescent Neurology in Katowice, Poland. The patients' ages ranged between 15 and about 40 years, and their weight from 15 to 98 kg. The number of patients was 98; in the case of 22 people, 2–4 images were taken per person, whereas in the other cases one thermogram was taken per individual. The images were acquired interchangeably with several thermal cameras: AGEMA 590 PAL (sensitivity of 0.1 °C), ThermaCam S65 (sensitivity of 0.08 °C), A310 (sensitivity of 0.05 °C) and T335 (sensitivity of 0.05 °C), each with a detector resolution of 320 × 240 pixels. The principles related to taking thermal images for medical thermography were maintained [29], i.e.:
- adequate preparation of the test room (temperature 21 °C, humidity 45–55 %, no radiation sources or air flow),
- measurements at a distance of about 1 m from the tested patient,
- proper preparation of the patient for the test: taking off any hair bands, washing off any makeup, etc.
The method of taking images is presented in Fig. 1. The optical axis of the camera was positioned at eye level and the camera was placed at a distance of 1 m from the tested patient.
Figure 1 illustrates the adopted assumptions concerning the geometry of the face and the localization of the areas subject to processing. The red hatched areas of the image (Fig. 1) are determined automatically based on the designated head height yG and on the analysis performed with a detector hereinafter referred to as TPattern (described in detail later in this paper).
Appropriate values rSK, x0 and y0 were determined according to known anthropometric relationships (the golden ratio) in conjunction with the automatically designated head height yG. Then, localization of the individual parts of the face was carried out within the detected head area.
In order to verify the sensitivity of the proposed algorithm to affine transformations (translation and rotation), additional tests were performed, increasing the analyzed group of images to more than 7,000 by introducing artificial rotation and translation of the figure in the image. Thermal images in this paper are presented using an artificial color palette, which is often used for thermograms: the brightness of each pixel depends on the temperature of the corresponding point of the tested object, with bright yellow pixels indicating high temperature and dark blue ones low temperature.
Figure 2 presents the selected areas of the face subject to the temperature measurements and analysis presented later in this paper. The characteristic points and face areas in Fig. 2 were marked manually; these are the elements localized by the algorithm, as described in the subsequent sections. Based on the selected areas, such properties as the minimum, maximum and mean temperature in the area as well as the perimeter length are calculated.
A block diagram with examples from an analyzed thermogram is presented in Fig. 3. The main steps are shown in the form of images; the individual blocks are described below.
4 Methodology
4.1 Initial image processing
Initial image processing is applied to determine the approximate position of the head, its potential center, shape and height. In previously published studies on similar subjects, e.g. [19], initial image standardization and elimination of background interference were carried out based on a binary image obtained with the constant binarization threshold TU = 28.3 °C. When this value was applied directly in the studies presented here, in individual cases there were problems with determining a proper outline of the head, its size and the temperature threshold needed to avoid the impact of hair. Attempts to apply Otsu's method, used in the literature [19], were also not fully satisfactory. Consequently, the process of temperature selection was automated, taking into account the properties of the designated image areas after the thresholding operation.
Automatic selection of the temperature binarization threshold rests on the assumption that the analyzed area of the patient's head should be the largest possible area containing no holes and covering no more than 60 % of the entire image surface. In the process of determining the binarization threshold, the algorithm analyzes the specified temperature range with the step ∆t = 0.25 °C. For each i-th binary image LB(i) designated in this way, a coefficient w(i) is determined (Eq. (1)), where:
- P0 – surface of the tested image sized M rows × N columns,
- PM(i) – surface of the biggest area created after thresholding at the temperature t(i),
- E(i) – Euler number of the image created after thresholding,
- I(i) – number of areas in the analyzed image after the thresholding operation.
As a result of calculating the values w(i), the algorithm determines the area of the patient's head as the smallest area without holes (minimum value of w(i)). The adopted temperature t(i) ranges from the minimum value occurring in the image to TU. The process is fully automatic; the operator does not set any parameter value (in this case the binarization threshold). The position of the head determined in this way (the lowest value of w(i)) represents the final stage of initial image processing. The ideal situation is the case when only one area without holes is formed after binarization and the value of I(i) equals 1. The algorithm attempts to obtain results (or similar ones) in which the value of PM(i) is minimal (without holes), and the values of E(i) and I(i) also reach their minima. Cases in which E(i) equals zero are eliminated; the algorithm ignores them and continues searching over further values of the index i. In some cases, several areas of similar size may remain after this operation (but only one is the area of the head). TPattern, described below, is then used to classify the relevant area: the characteristic distribution of intensity (temperature) consistent with the assumptions adopted when designing TPattern determines the correct position of TPattern at the maximum matching.
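As an illustration, the sketch below implements such a threshold sweep in Python with scikit-image. Since Eq. (1) is not reproduced above, the scoring expression `w` combining P0, PM(i), E(i) and I(i) is an assumption, not the paper's exact formula.

```python
import numpy as np
from skimage.measure import label, regionprops

def select_head_threshold(temp_img, t_max=28.3, dt=0.25):
    """Sweep binarization thresholds and score each candidate mask."""
    p0 = temp_img.size                      # P0: image surface (M x N)
    best_w, best_t = np.inf, None
    for t in np.arange(temp_img.min(), t_max, dt):
        labels = label(temp_img > t)        # connected areas after thresholding
        regions = regionprops(labels)
        if not regions:
            continue
        biggest = max(regions, key=lambda r: r.area)
        pm = biggest.area                   # PM(i): biggest region surface
        e = biggest.euler_number            # E(i): 1 for a region without holes
        n = len(regions)                    # I(i): number of regions
        if e == 0 or pm > 0.6 * p0:         # reject holes / oversized masks
            continue
        w = (pm / p0) * abs(e) * n          # assumed form of w(i)
        if w < best_w:
            best_w, best_t = w, t
    return best_t
```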
The next step is to determine the orientation of the head and various areas of the face.
4.2 Analysis of the head area by TPattern
Based on the data related to the size of the head, the anthropometric relationships of the face and the analysis of examples shown in the literature [3, 9, 19, 32], a detector hereinafter referred to as TPattern was prepared (Fig. 4). The TPattern detector (first version proposed in [23]) was expanded by adding the analysis of a number of possible positions of the eye area (represented by extra arms/sections of TPattern), whose positions are determined by rSK·DLO, where DLO = {0.15, 0.2, 0.25} (previously, the authors used the single value of 0.2 and a single segment representing the eye line). With this modification, the results obtained by means of TPattern have been improved and the algorithm has worked correctly on the entire set of thermograms.
The basic parameter of TPattern is its arm length rSK (Fig. 4). This value is determined based on the head height yG and anthropometric data as rSK = 0.3·yG. The arm rSK enables automatic detection of the areas of the eyebrows, nose and eye sockets. In the construction of this detector, it was assumed that the characteristic areas comply with the following relationships (a sketch of the detector geometry follows the list below):
- the brightness of the eyebrow area is lower than that of the upper part of the eye sockets,
- the distribution of line brightness of the eyebrow and eye socket areas is symmetric,
- the brightness of the nose area is lower than that of the eye socket area.
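A minimal sketch of the TPattern geometry is given below. Only the arm length rSK = 0.3·yG and the eye-line offsets rSK·DLO are taken from the text; the exact layout of the arms is an assumption based on Fig. 4.

```python
import numpy as np

def tpattern_points(center, y_g, angle_deg, d_lo=(0.15, 0.2, 0.25), n=20):
    """Sample points along hypothetical TPattern arms for one orientation."""
    r_sk = 0.3 * y_g                            # arm length from head height
    a = np.deg2rad(angle_deg)
    up = np.array([np.sin(a), -np.cos(a)])      # head axis direction
    right = np.array([np.cos(a), np.sin(a)])    # eye-line direction
    c = np.asarray(center, dtype=float)
    arms = []
    # vertical arm running down toward the nose area
    arms.append([c - up * s for s in np.linspace(0.0, r_sk, n)])
    # candidate horizontal eye-line segments at offsets r_SK * D_LO above center
    for d in d_lo:
        o = c + up * (r_sk * d)
        arms.append([o + right * s for s in np.linspace(-r_sk, r_sk, n)])
    return arms
```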
When the eyebrow line and the nose symmetry axis are precisely determined, the search area for the eye sockets and nose can be narrowed down. Figure 4b and c show correct detection results of the head orientation for different inclination values relative to the axis 0y. As mentioned above, based on the orientation of the segment yG and its length, TPattern adopts initial values for further analysis: rSK and, additionally, the angle αG. To reduce computation time, the analysis with TPattern is performed at two image resolutions:
- 1:2 scale, preliminary stage: the angle αTS1 is pre-determined, i.e. αG − 10° ≤ αTS1 ≤ αG + 10° with the step ∆αTS1 = 2°,
- 1:1 scale, verification stage: the value αTS2 is determined, i.e. αTS1 − 5° ≤ αTS2 ≤ αTS1 + 5° with the step ∆αTS2 = 1°,
where:
- αG – orientation of the head determined from the segment representing the head height,
- αTS1 – TPattern orientation angle in 1:2 scale (preliminary stage),
- αTS2 – TPattern orientation angle in 1:1 scale (verification stage).
The operation principle of this part of the algorithm is shown in Fig. 5.
Initially, TPattern (in 1:2 scale) analyzes the image section (marked with white dots in Fig. 5) in the range of 20° with respect to αG. Then, in the final step of determining the orientation in 1:1 scale, TPattern analyzes the range of 10° with respect to the angle αTS1 found in the first stage. Based on the head inclination angle αTS2 determined in this way and the point OG (Fig. 4), the head orientation and position are standardized. As a result, in the subsequent stages the algorithm analyzes the head area in an upright position, which greatly accelerates the analysis.
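This two-stage, coarse-to-fine search can be sketched as follows; `match_score(angle, scale)` is a hypothetical callback returning the TPattern matching score for a given orientation and image scale.

```python
import numpy as np

def coarse_to_fine_angle(match_score, alpha_g):
    """Two-stage orientation search over the angle ranges given above."""
    # stage 1: 1:2 scale, alpha_G +/- 10 deg, step 2 deg
    coarse = np.arange(alpha_g - 10, alpha_g + 11, 2)
    alpha_ts1 = max(coarse, key=lambda a: match_score(a, scale=0.5))
    # stage 2: 1:1 scale, alpha_TS1 +/- 5 deg, step 1 deg
    fine = np.arange(alpha_ts1 - 5, alpha_ts1 + 6, 1)
    alpha_ts2 = max(fine, key=lambda a: match_score(a, scale=1.0))
    return alpha_ts2
```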
The active contour (balloon model) used at this stage of the algorithm was built from 36 nodes. It was initialized as a circle whose center is at the point marked in blue in Fig. 3c. The active contour grows until the nodes reach the contour of the head; the nodes move as shown in Fig. 3d. The value of the active contour energy is determined by the simple Eq. (2), and the algorithm looks for the case in which this energy is maximal.
where:
- Econtour(i) – Euclidean distance D between subsequent active contour nodes (Fig. 6),
- Eimage(i) – pixel values in the binary image after edge detection,
- i – index of the active contour nodes (Fig. 6).
A further constraint on the active contour is its stiffness, determined by the values of the angles between three subsequent nodes. Following the golden-ratio proportions of the ideal face [13], an ideal ellipse representing the head was prepared. Based on the angle values for this ideal ellipse, it was assumed that the angles α(i) between three subsequent nodes of the active contour should lie between 165° and 180°.
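A minimal sketch of the contour energy and the stiffness check is given below. Since Eq. (2) is not reproduced above, the sum of the two terms is an assumed combination of Econtour and Eimage.

```python
import numpy as np

def contour_energy(nodes, edge_img):
    """Energy of a closed 36-node balloon contour (assumed form of Eq. (2))."""
    nodes = np.asarray(nodes, dtype=float)          # (36, 2) array of (row, col)
    nxt = np.roll(nodes, -1, axis=0)
    e_contour = np.linalg.norm(nodes - nxt, axis=1).sum()   # distances D
    rows, cols = nodes[:, 0].astype(int), nodes[:, 1].astype(int)
    e_image = edge_img[rows, cols].sum()            # edge-map values at the nodes
    return e_contour + e_image                      # the algorithm maximizes this

def stiffness_ok(nodes, lo=165.0, hi=180.0):
    """Check the 165-180 degree angle constraint between node triples."""
    nodes = np.asarray(nodes, dtype=float)
    prv, nxt = np.roll(nodes, 1, axis=0), np.roll(nodes, -1, axis=0)
    v1, v2 = prv - nodes, nxt - nodes
    cosang = (v1 * v2).sum(axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    angles = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return bool(np.all((angles >= lo) & (angles <= hi)))
```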
4.3 Detection of the eye socket areas
After successful detection of the eyebrows, nose and head orientation, the next step is detection of the eye socket areas and their centers. Since the interior of the eye sockets is warmer than its surroundings [19, 31], an edge detection method can be used to determine their outline; the best results were obtained with the Canny edge detector. The previously obtained information about the size and position of the eyebrow lines made it possible to narrow down the potential search area for the eye sockets. The Hough transform was used to detect the eye sockets. It is commonly used in such tasks, though most often in visible light [16, 28, 30, 34]. Its biggest disadvantages, namely problems with the selection of ellipse sizes and with their analysis under rotation, were eliminated here: in the calculations, the eye socket ellipse size ranges were adopted depending on the value rSK. The axis sizes of the ellipses, in the ranges xO (longer, horizontal) and yO (shorter, vertical) (Fig. 1), were calculated in the following way:
As a result of segmentation of the area including the eyes and eyebrows (Fig. 1b), the algorithm determines the eye socket centers and the maximum sizes of the ellipses. On this basis, the localization and outline of the areas (WOL, WOP) are marked, as shown in Fig. 2b. The algorithm presented in [23] has been improved with respect to the curve calculation method. To obtain precise segmentation of the curves representing the eyebrows, the algorithm uses a portion of the ellipses representing the eye sockets. The sum of the intensities of the image points lying on the curve corresponding to the right and left eyebrow is analyzed in order to find its minimum value; on this basis, the final position of the eyebrows is determined (Fig. 9a, b and c, bottom images).
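The Canny-plus-Hough step can be sketched as below with scikit-image. The ellipse size bounds tied to rSK are assumptions for illustration; the paper's exact ranges for xO and yO are not reproduced above.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_ellipse

def detect_eye_socket(roi, r_sk):
    """Detect the best ellipse candidate in a pre-narrowed eye region."""
    edges = canny(roi, sigma=1.0)
    # restrict ellipse axis sizes to a range proportional to r_SK (assumed)
    candidates = hough_ellipse(edges, accuracy=2, threshold=10,
                               min_size=int(0.1 * r_sk),
                               max_size=int(0.5 * r_sk))
    if len(candidates) == 0:
        return None
    candidates.sort(order='accumulator')        # best-supported ellipse last
    acc, yc, xc, a, b, orientation = list(candidates[-1])
    return (yc, xc), (a, b), orientation
```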
4.4 Nose detection
Detection of the characteristic points of the nose in thermal images is a more complex process than in visible light, since methods proposed for visible light, such as detection of the nostrils [32] or of their axis of symmetry [33], cannot be applied here. The detection examples used in thermovision, based on the brightness (temperature) in the nose area [35], were also not ideal. The described TPattern detector makes it possible to pre-determine the position of the nose with a precision close to manual localization (Fig. 1, the red rectangle covering the nose). Therefore, the height and symmetry axis of the nose (Fig. 7), and thus the search area, can be relatively easily determined using the structural elements SENL and SENP shown in Fig. 7d. The extreme lower left and right points of the nose were taken as the characteristic ones (Fig. 7a). Figure 7b shows the points determined by the algorithm and two further points which mark the upper range of the paranasal sinuses, i.e. the points ONLG and ONPG. The next image (Fig. 7c) shows the detection of the characteristic points along with the marked sinus area. The specially prepared structural elements SENL and SENP (Fig. 7d), mentioned above, are used for the detection of the point pairs ONL and ONP; the brightness (temperature) distribution pattern and the relative position of the left and right elements (the distance D) are taken into account. The analyzed ranges are determined based on the localization of, and distance between, the eye socket centers.
In order to detect the remaining points required for marking the sinuses, the algorithm also determines the points ONLG and ONPG, which are the intersections of the segment AB (passing through the lowest points of the eye sockets) with the nose side axes joining the point OG with ONL and OG with ONP (Fig. 7c). The outline of the sinuses is defined by a spline curve. Proper detection of the positions of the nose points (ONL, ONP) makes it possible to determine the area of the paranasal sinuses; additionally, it enables verification of the symmetry axis of the face pre-determined by TPattern.
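The points ONLG and ONPG follow from elementary line geometry; a minimal sketch:

```python
import numpy as np

def line_intersection(p1, p2, q1, q2):
    """Intersection of line p1-p2 (segment AB through the lowest points of
    the eye sockets) with line q1-q2 (nose side axis O_G-O_NL or O_G-O_NP);
    used to find O_NLG and O_NPG."""
    p1, p2, q1, q2 = (np.asarray(p, dtype=float) for p in (p1, p2, q1, q2))
    d1, d2 = p2 - p1, q2 - q1
    denom = d1[0] * d2[1] - d1[1] * d2[0]    # 2-D cross product
    if np.isclose(denom, 0.0):
        return None                          # parallel lines
    t = ((q1[0] - p1[0]) * d2[1] - (q1[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1
```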
In relation to the research presented earlier [23], after expanding the set of analyzed thermograms it was necessary to modify the structural elements SENL and SENP used to locate the characteristic points of the nostrils (ONL and ONP). Additionally, in the studies presented here, the algorithm has been extended to enable segmentation and analysis of the nasal areas WNL and WNP (Fig. 2), which was not considered in the previous studies; this part is new compared to [23].
4.5 Forehead detection
Forehead detection is based on information obtained in the previous steps of the algorithm concerning the localization of TPattern. The forehead detection algorithm receives as input parameters the designated brow line and the upper pre-determined outline of the forehead area (Fig. 1b), determined according to the size of TPattern (rSK). Based on histogram analysis, the most common pixel values occurring in the forehead area were identified and threshold values were set; these values pre-determine the area of the forehead without hair. Then, in order to ultimately determine the forehead area and the hairline, verification was performed based on the standard deviation of a [5 × 5] neighborhood of each pixel of the potential forehead area. Once the hairline is properly detected, it is also important to determine symmetrical areas of the forehead so that they can be compared with each other (e.g. to determine temperature differences between the left and right sides of the forehead). The symmetry axis of the forehead is accurately determined based on the coordinates of the points ONL, ONP (Fig. 7a). Then, as a result of an AND operation between the images of the right and left sides of the forehead, its proper common outline is determined. Proper segmentation of the forehead may be useful in a number of potential applications.
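A sketch of this two-step selection is given below; the parameters `t_lo`, `t_hi` and `std_max` are hypothetical, whereas the paper derives its thresholds from the histogram of the forehead region.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def forehead_mask(temp_img, t_lo, t_hi, std_max=0.5):
    """Histogram-derived thresholding followed by a 5x5 local-deviation check."""
    candidate = (temp_img >= t_lo) & (temp_img <= t_hi)
    # local standard deviation over a 5x5 window
    mean = uniform_filter(temp_img, size=5)
    mean_sq = uniform_filter(temp_img ** 2, size=5)
    local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    # hair produces high local variation; keep only smooth skin pixels
    return candidate & (local_std <= std_max)
```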
Examples of applying the analysis of the forehead areas in lie detection using thermal imaging and in the improvement of the algorithms for recognition and even determination of the subjects’ gender in visible light can be found in the literature [15, 37, 38].
5 Results and discussion
The different areas of the face designated according to the presented relationships were the basis for the analysis of temperature distributions. When the temperatures of symmetrical areas of the face were compared, the results were reproducible across the studied cases and for variable head positions. Examples of the calculated mean values for the characteristic areas are given in Fig. 8. These results indicate that, despite the variable setting of the head and the variable distance of the patient from the camera, the difference between the mean temperatures of symmetrical areas (WCL and WCP, WOL and WOP, WZNL and WZNP, WNL and WNP, respectively; Fig. 2) does not exceed 0.3 °C.
Figure 8 shows the results of the mean temperature measurements for the designated areas, for thermograms obtained in different ways; the designation of areas follows Fig. 2b. Figure 8 confirms the following characteristics of the algorithm:
- correct localization of medically important areas despite changes in the position of the head (rotation and distance of the patient from the thermal camera),
- reproducibility of calculations for medically significant temperature differences of symmetrical areas lower than 0.5 °C.
Below, there is a comparison with other authors’ results.
5.1 Comparison with other authors’ results
The presented method for the analysis of thermal images of the head with fully automatic determination of characteristic areas was compared with two other studies (Martinez B., Binefa X. [22]; Trujillo L., Olague G., Hammoud R., Hernandez B. [35]). These are the only papers found that are closely related to the discussed issue; other known studies do not cover a complete analysis of the above-mentioned areas of the face, are not fully automatic or operate in the visible spectrum [3, 19, 32]. Table 1 presents the solutions similar in subject to the presented algorithm along with a brief discussion of their possibilities.
A detailed description of each method is shown below.
- Method 1. In paper [35], the authors did not specify a method for assessing the effectiveness of localization of the facial characteristic points, because the results were applied directly to the recognition of emotions from a thermogram. Therefore, the comparison criterion may be the fact that the solution proposed here accurately determines both the facial characteristic points and the corresponding areas with regard to their anthropometric size, which was not shown for method 1 [35].
- Method 2. In paper [22], the authors focused on the localization of facial characteristic points in thermograms, which made a comparison of results possible. Their studies used a smaller set of images and did not describe the algorithm's behavior under changes of head position or orientation. However, the authors proposed a method for evaluating the effectiveness of the algorithm and determined the criteria accepted as correct localizations. To evaluate the effectiveness, the points (eye centers and nostrils) were marked manually (treated as the reference) and then compared with the results obtained with the segmentation algorithm. In Method 2, the localization error of the eyes and nostrils δen is evaluated with the following formula (5), after standardization of the variables according to the adopted nomenclature (Fig. 2a):
$$ \delta_{en\text{-}eye}=\frac{\max\left(\left\Vert O_{OL}-\widehat{O}_{OL}\right\Vert,\;\left\Vert O_{OP}-\widehat{O}_{OP}\right\Vert\right)}{\left\Vert O_{OL}-O_{OP}\right\Vert},\qquad \delta_{en\text{-}nostril}=\frac{\max\left(\left\Vert O_{NL}-\widehat{O}_{NL}\right\Vert,\;\left\Vert O_{NP}-\widehat{O}_{NP}\right\Vert\right)}{\left\Vert O_{NL}-O_{NP}\right\Vert} \tag{5} $$

where:
OOL/NL, OOP/NP are the reference positions marked manually as the centers of the left and right eye/nostril, the symbol "^" denotes the corresponding coordinates determined automatically by the algorithm, and ||·|| is the Euclidean norm.
The authors of the compared method 2 treated a localization as correct when the error satisfied δen ≤ 0.15. In order to compare the algorithm presented here (method 3) with the results for method 2 (described in [22]), this error measure was implemented. To carry out a full comparison, it was necessary to specify manually the reference coordinates of the points in the collected set of 125 thermograms.
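For one pair of points, Eq. (5) reduces to a few lines of code:

```python
import numpy as np

def delta_en(o_left, o_right, o_left_auto, o_right_auto):
    """Localization error of Eq. (5) for one pair of points
    (eye centers or nostrils); inputs are (x, y) coordinates."""
    o_l, o_r = np.asarray(o_left), np.asarray(o_right)
    a_l, a_r = np.asarray(o_left_auto), np.asarray(o_right_auto)
    worst = max(np.linalg.norm(o_l - a_l), np.linalg.norm(o_r - a_r))
    return worst / np.linalg.norm(o_l - o_r)

# a localization counts as correct when delta_en(...) <= 0.15
```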
The results are shown in Table 2. It can be observed that the algorithm developed in this paper (method 3) achieves greater efficiency for the criterion δen ≤ 0.15. It is worth emphasizing that the values presented in Table 2 (obtained with the described algorithm) were determined for various images, including cases of variable orientation and scale of the patient.
In addition, it should be noted that, apart from the significantly improved localization of the facial characteristic points, the algorithm of Method 3 also segments the areas corresponding to the detected points. Correct sizing of these areas was achieved by taking into account the anthropometric properties and by determining the size and area of the head in the early stages of the algorithm operation.
In summary, the proposed algorithm offers new possibilities in comparison to the other methods:
- it enables selection of face areas and characteristic points at different scales and orientations,
- it does not require any prepared training sets,
- the size and range of the extracted areas reflect changes in the size of the examined object,
- evaluation of the results based on the equation proposed in [22] indicates the advantage of the proposed method for the value δeye/nostril ≤ 0.15 adopted in [22] (Eq. (5)).
Figure 9 shows the final segmentation results for three thermograms. High effectiveness and accuracy of the presented method under changes of the patient's position and orientation can be observed in the images. The upper section of Fig. 9 presents the TPattern detection for each thermogram; the bottom section presents the head region with precisely marked areas and facial characteristic points after normalization of position, scale and rotation. These are examples of the thermograms from Fig. 8a, b and c in two variants: with the head in a position close to vertical (Fig. 9a, b) and at α = −21° (Fig. 9c). The presented method for detection and localization of facial elements can be used not only as a tool to measure the temperature in thermograms. It also seems possible to use it in related fields where attempts to analyze thermograms are already being made, e.g. as the first step in the process of face recognition in biometric systems [2, 5, 14, 17, 18, 31], where the main task could be localization of the face features for comparison and recognition. Moreover, it can be used to improve methods of face analysis in visible light [1, 4, 10, 12, 36] to enable observation in poor illumination, and finally in the design of computer-human interfaces [7, 20] to track human faces in infrared images.
6 Conclusions
The paper presents a method for automatic segmentation of faces in thermograms, used to determine the location of characteristic areas of the human head. Owing to the applied solutions, the process is completely automatic.
The algorithm does not require user control and calculates the threshold temperature values in order to extract an optimal human silhouette. At this stage, the head area is determined and the influence of clothing or hairstyle is eliminated. The TPattern detector prepared by the authors accurately determines the head orientation and locates its center, which enables normalization of the analyzed head area for further calculations and thus considerably simplifies the segmentation of the characteristic face areas. The effectiveness of the algorithm was calculated based on criteria proposed in the literature and reached higher values than the reference solution; the results are presented in graphical form. The diversified collection of thermal images made it possible to test and observe the method in typical real situations. Based on the above findings, we believe that the proposed method may be useful in a number of areas such as medicine and biometrics.
References
Arandjelovic O, Hammoud R, Cipolla R (2006) On person authentication by fusing visual and thermal face biometrics. Video and Signal Based Surveillance. IEEE International Conference, Sydney, pp 50–56
Bauer J, Mazurkiewicz J, Podbielska H (2006) Thermovision in biometrics: face recognition based on thermal imaging [Termowizja w biometrii – rozpoznawanie twarzy na podstawie obrazu termicznego]. Inżynieria Biomedyczna, Acta Bio-Optica et Informatica Medica 12(2):85–88
Chan YH, Abu-Bakar SAR (2004) Face detection system based on feature-based chrominance color information, Proceedings of the International Conference on Computer Graphics, Imaging and Visualization (CGIV’04), pp. 153–158
Chen X, Flynn PJ, Bowyer K (2005) IR and visible light face recognition. Computer Vision and Image Understanding 99(3):332–358
Chen YT, Wang MS (2002) Human face recognition using thermal image. Journal of Medical and Biological Engineering, 22(2), Department of Engineering Science, National Cheng Kung University, pp.97–102
Diakides NA, Balcerak R, Lupo J, Diakides N, Diakides M, Paul JP (2006) Advances in medical infrared imaging. Medical Devices and Systems, CRC Press, Chapter 19, pp.19.1–19.14
Eveland CK, Socolinsky DA, Wolff LB (2003) Tracking human faces in infrared video. Image and Vision Computing 21(7):579–590
Filipe S, Alexandre LA (2013) Thermal infrared face segmentation: A new pose invariant method. 6th Iberian Conference on Pattern Recognition and Image Analysis, Springer LNCS, Madeira
Guangda H, Du CS (2003) Feature points extraction from faces. Research Institute of Image and Graphics, Department of Electronic Engineering, Tsinghua University, Beijing, pp 154–158
Hanif M, Ali U (2007) Optimized visual and thermal image fusion for efficient face recognition. Proceedings of 9-th International Conference on Information Fusion Florence, Italy, pp 1–6
He X, Yan S, Hu Y, Niyogi P, Zhang H (2005) Face recognition using Laplacian faces. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(3):328–340
Heo J (2003) Fusion of visual and thermal face recognition techniques: A comparative study. The University of Tennessee, Knoxville, http://imaging.utk.edu/publications/papers/dissertation
Hjelmas E (2001) Face detection: A survey. Computer Vision and Image Understanding 83(3):236–274
Hu H (2008) Orthogonal neighborhood preserving discriminant analysis for face recognition. Pattern Recognition 41(6):2045–2054
Ji Z, Lian XC, Lu BL (2009) Gender classification by information fusion of hair and face, state of the art in face recognition. INTEH 2009:215–230
Kawaguchi T, Rizon M, Hidaka D (2005) Detection of eyes from human faces by Hough transform and separability filter, Electronics and communications in Japan (Part II Electronics), vol. 88, no. 5, pp. 29–39
Kobel J, Suchwałko A, Podbielska H (2002) Application of thermal imaging for human face recognition. Opt Appl 32(4):653–664
Kong SG, Heo J, Abidi BR, Paik J, Abidi MA (2005) Recent advances in visual and infrared face recognition—a review. The Journal of Computer Vision and Image Understanding 97:103–135
Koprowski R, Wojaczyńska-Stanek K, Wróbel Z (2007) Automatic segmentation of characteristic areas of the human head on thermographic images. Machine Graphics & Vision 16(3/4):251–274
Krotosky SJ, Cheng SY, Trivedi MM (2004) Face detection and head tracking using stereo and thermal infrared cameras for “Smart” airbags: A comparative analysis. Intelligent Transportation Systems. Proceedings. The 7th International IEEE Conference on, pp.17–22
Liu Q, Tang X, Lu H, Ma S (2006) Face recognition using kernel scatter-difference-based discriminant analysis. IEEE Transactions Neural Network 17(4):1081–1085
Martinez B, Binefa X, Pantic M (2010) Facial component detection in thermal imagery. Computer vision and pattern recognition workshops (CVPRW). IEEE Computer Society Conference, San Francisco, pp 48–54
Marzec M, Koprowski R, Wróbel Z (2010) Detection of selected face areas on thermograms with elimination of typical problems. Journal of medical informatics & technologies 16:151–159
Mekyska J, Espinosa-Duro V, Faundez-Zanuy M (2010) Face segmentation: A comparison between visible and thermal images. IEEE 44th international Carnahan conference on security technology ICCST 2010, San Jose
NG EYK (2010) Advanced integrative thermography in identification of human elevated temperature. Advances in biomedical research. The 7th WSEAS International Conference on Mathematical Biology and Ecology (MABE'10), University of Cambridge, pp. 190–196
Nguyen V, Cohen NJ, Lipman H, Brown CM, Molinari NA, Jackson WL, Kirking H, Szymanowski P, Wilson TW, Salhi BA, Roberts RR, Stryker DW, Fishbein DB (2010) Comparison of 3 infrared thermal detection systems and self-report for mass fever screening. Emerging Infectious Diseases 16(11):1710–1717
Nowakowski AZ (2006) Advances of quantitative IR-thermal imaging in medical diagnostics. 8th International Conference on QIRT, Padova
Rajpathak T, Kumar R, Schwartz E (2009) Eye detection using morphological and color image processing, 2009 Florida conference on recent advances in robotics. FCRAR 2009:1–6
Ring EF, Ammer K, Jung A, Murawski P, Więcek B, Żuber J, Zwolenik S, Plassmann P, Jones C, Jones B (2004) Standardization of infrared imaging. 26th Annual International Conference of the IEEE Engineering in Medicine & Biology Society 2004, San Francisco, pp. 1183–1183
Rizon M, Yazid H, Saad P, Shakaff YA (2005) Object detection using circular Hough transform. American Journal of Applied Sciences 2(12):1606–1609
Socolinsky D, Selinger A (2004) Thermal face recognition in an operational scenario. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol.2, pp. II/1012–II/1019
Sohail ASM, Bhattacharya P (2006) Detection of facial feature points using anthropometric face model, IEEE International Conference on Signal-Image Technology and Internet-Based Systems, Tunisia, Springer Lecture Note Series in Computer Science (LNCS), pp 656–665.
Szlavik Z, Sziranyi T (2004) Face analysis using CNN-UM, Proceedings IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA 2004), pp. 190–195
Toennies KD, Behrens F, Aurnhammer M (2002) Feasibility of hough-transform-based iris localization for real-time-application. Proceedings of International Conference on Pattern Recognition (ICPR’02), volume II, pp.1053–1056
Trujillo L, Olague G, Hammoud R, Hernandez B (2005) Automatic feature localization in thermal images for facial expression recognition. Computer Vision and Pattern Recognition – Workshops, IEEE Computer Society Conference, pp.14–21
Wang J, Sung E (2007) Facial feature extraction in an infrared image by proxy with a visible face image. IEEE Transactions on Instrumentation and Measurement 56:2057–2066
Yacoob Y, Davis L (2005) Detection, analysis and matching of hair, Computer vision, 2005. ICCV 2005. Tenth IEEE International Conference on, vol. 1. pp. 741–748
Zhu Z, Tsiamyrtis P, Pavlidis I (2007) Forehead thermal signature extraction in lie detection, engineering in medicine and biology society, EMBS 2007. 29th Annual International Conference of the IEEE, Lyon, pp. 243–246
Acknowledgments
This work was supported by the European Union, Innovative Economy Programme, European Fund for Regional Development “Intelligent Information System for Global Monitoring, Detection and Threat Identification”, Project number: POIG 01.01.02-00-062/09-00.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.