Article

Real-Time Multi-Target Localization from Unmanned Aerial Vehicles

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(1), 33; https://doi.org/10.3390/s17010033
Submission received: 3 September 2016 / Revised: 14 December 2016 / Accepted: 19 December 2016 / Published: 25 December 2016
(This article belongs to the Special Issue UAV-Based Remote Sensing)

Abstract: In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multiple targets are calculated using homogeneous coordinate transformations. On this basis, two methods are proposed to improve the accuracy of multi-target localization: (1) a real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using the Monte Carlo method. In an actual flight, the UAV flight altitude was 1140 m, and the multi-target localization results were within the range of allowable error. After applying the lens distortion correction to a single image, the circular error probability (CEP) of multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data over multiple images. Compared with multi-target localization based on a single image, the CEP of multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board and operates in real time. This research is expected to significantly benefit small UAVs that need multi-target geo-location functions.

1. Introduction

Real-time multi-target localization plays an essential role in applications such as disaster emergency rescue and border security. Over the past two decades, considerable research effort has been devoted to multi-target localization. UAV electro-optical stabilized imaging systems are equipped with many kinds of sensors, including visible-light cameras, infrared thermal imagers, laser range finders and angle sensors. Target localization requires measuring the attitude of the UAV, the attitude of the electro-optical stabilized imaging system and the distance between the electro-optical stabilized imaging system and the target.
Target localization methods from UAVs fall into two categories. One category is target localization using a group of UAVs [1,2,3,4,5]. The other is target localization using a single UAV [6,7,8,9,10]. This research aims to improve the effectiveness and efficiency of target localization from a single UAV. In particular, it proposes a new hybrid target localization scheme that integrates zoom lens distortion correction and an RLS filtering method. The proposed scheme has many unique features designed to geo-locate targets rapidly.
Previous studies on geo-locating targets from a fixed-wing UAV have several limitations. Deming [1] described a probabilistic technique for performing multiple target detection and localization based on data from a swarm of flying optical sensors. Minaeian [2] described a vision-based crowd detection and Geographic Information System (GIS) localization algorithm for a cooperative team of one UAV and several unmanned ground vehicles (UGVs). Morbidi [3] described an active target-tracking strategy to deploy a team of unmanned aerial vehicles along paths that minimize the uncertainty about the position of a moving target. Qu [4] described a multiple-UAV cooperative localization method using azimuth angle information shared between the UAVs. Kwon [5] described a robust, improved mobile target localization method which incorporates the Out-Of-Order Sigma Point Kalman Filter (O3SPKF) technique.
In [1,2,3,4,5], target location methods based on data fusion technology have to use a group of UAVs. Target localization using a group of UAVs has some issues, including the high computational complexity of data association, complexity of UAV flight plans, difficulties in efficient data communication between UAVs and high maintenance costs due to the use of multiple UAVs. This paper presents a method for determining the location of objects using a gimbaled EO camera on-board a fixed-wing unmanned aerial vehicle (UAV). We focus on geo-locating targets using a single fixed-wing UAV due to the low maintenance costs. A single fixed-wing UAV (as opposed to rotary wing aircraft) has unique benefits including adaptability to adverse weather, good durability and high fuel efficiency.
In [6], Yan used the absolute height of the UAV above sea level to geo-locate targets. In contrast, our system focuses on geo-locating targets in the video stream and does not require the UAV's absolute height above sea level. In [7], Ha used the scale-invariant feature transform (SIFT) to extract feature points of the same target in different frames and calculated the relative height between the target and the UAV by three-dimensional reconstruction. The location accuracy depends on the accuracy of this three-dimensional reconstruction, so the method requires a large amount of computation. The accuracy of our system is almost the same as that described in [7], but our system has a great advantage in reduced computational complexity, so it can geo-locate 50 targets at the same time and improve the efficiency of multi-target localization. This is very important in military reconnaissance and disaster monitoring applications which require good real-time performance.
In [8], Barber introduced a system for vision-based target geo-localization from a fixed-wing micro air vehicle; in flight tests, the UAV geo-locates a stationary target while orbiting it. In [9], the UAV flies in an orbit in order to improve the geo-location accuracy. In [10], the authors assume that the UAV's altitude above the target is known, and the target's altitude is obtained from a geo-referenced database made available by the Perspective View Nascent Technology (PVNT) method. In [11], target location needs an accurate geo-referenced terrain database. In [12], all information collected by an aerial camera is accurately geo-located through registration with pre-existing geo-referenced imagery. In contrast, our system focuses on geo-locating a specific object in the video stream and does not require any pre-existing geo-referenced imagery.
In all the above references, the UAVs are equipped with fixed-focal-length lenses, and the authors do not take into account the effect of zoom lens distortion on multi-target localization. Many electro-optical stabilized imaging systems are equipped with zoom lenses. The focal length of a zoom lens is adjusted to track targets at different distances during the flight, and the zoom lens distortion varies with the changing focal length. Correcting zoom lens distortion in real time with conventional calibration methods is impractical, because the large amount of transformation calculation has to be repeated whenever the focal length changes.
The primary contributions of this paper are: (1) the accuracy of multi-target localization has been improved due to the combination of a real-time zoom lens distortion correction method and a RLS filtering method using embedded hardware (a multi-target geo-location and tracking circuit board); (2) UAV geo-locates targets using embedded hardware (the multi-target geo-location and tracking circuit board) in real-time without orbiting the targets; (3) 50 targets can be located at the same time using only one UAV; (4) the UAV can geo-locate targets without any pre-existing geo-referenced imagery, or a terrain database; (5) the circuit board is small, and therefore, can be applied to many kinds of small UAVs; (6) multi-target localization and tracking techniques are combined, therefore, we can geo-locate multiple moving targets in real-time and obtain the target motion parameters such as velocity and trajectory. This is very important for UAVs performing reconnaissance and attack missions.
The rest of the paper is organized as follows: Section 2 briefly presents the overall framework of the multi-target localization system. Section 3.1 presents the reference frames and transformations required for the multi-target localization system. Section 3.2 presents our multi-target geo-location model. Section 4 presents the methods to improve the accuracy of multi-target localization. Section 4.1 presents the distortion correction method. Section 4.2 presents the RLS filter method. Section 5 presents the results of multi-target localization for aerial images captured during a flight test and evaluates their accuracy. Section 6 presents the conclusions.

2. Overall Framework

The real-time multi-target geo-location algorithm in this paper is programmed and implemented on a multi-target geo-location and tracking circuit board (model: THX-IMAGE-PROC-02, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China, see Figure 1a) with a TMS320DM642 (Texas Instruments Incorporated, Dallas, TX, USA) at a 720 MHz clock rate and 32-bit instructions/cycle, and 1 GB of double data rate synchronous dynamic random access memory (DDR SDRAM). This circuit board also performs the proposed zoom lens distortion correction and the RLS filtering in real time. The multi-target geo-location and tracking circuit board is mounted on an electro-optical stabilized imaging system (see Figure 1b). This aerial electro-optical stabilized imaging system consists of a visible-light camera, a laser range finder, an inertial measurement unit (IMU), a global positioning system (GPS), and a photoelectric encoder. They are mounted in the same gimbal, so they rotate together about each axis.
The electro-optical stabilized imaging system is mounted on the UAV (model: Changguang 1, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China) to stabilize the video and eliminate jitter caused by the UAV, thereby greatly reducing the impact of external factors.
The UAV system incorporates the electro-optical stabilized imaging system, the UAV, a data transmission module and a ground station, as shown in Figure 2. In traditional target geo-location algorithms [1,2,3,4,5,6,7,8,9,10,11,12], the image and UAV attitude information are transmitted to a ground station, and the target geo-location is calculated on a computer in the ground station. However, the data transmission module sends data in a time-division mode, so the image and the UAV attitude information are transmitted at different times from the UAV to the ground station; consequently, it is not guaranteed that the image and the UAV attitude information are obtained at the same time at the ground station. Therefore, the traditional target geo-location algorithm running on the ground-station computer has poor real-time ability and unreliable target geo-location accuracy.
To overcome the shortcomings of traditional target geo-location algorithms such as algorithm complexity, unreliable geo-location accuracy and poor real-time ability, in this paper, the target geo-location algorithm is implemented on a multi-target geo-location and tracking circuit board on the UAV in real-time. Real-time ability is very important for urgent response in applications such as military reconnaissance and disaster monitoring.
The overall framework of the multi-target geo-location method is shown in Figure 3. The detailed workflow of the abovementioned multi-target geo-location method is as follows: we use the UAV to search for the ground targets, which are selected by an operator at the ground station. The coordinates of the multiple targets in the image are transmitted to the UAV through the data transmission module. Then, all the selected targets are tracked automatically by the multi-target geo-location and tracking circuit board using the improved tracking method based on [13]. The electro-optical stabilized imaging system locks the main target in the field of view (FOV) center; other targets in the FOV are referred to as sub-targets. The electro-optical stabilized imaging system measures the distance between the main target and the UAV using the laser range finder.
In order to ensure that the image, UAV attitude information, electro-optical stabilized imaging system’s azimuth and elevation angle, laser range finder value, and camera focal length are obtained at the same time, the frame synchronization signal of the camera is used as the external trigger signal for data acquisition by the above sensors, so we don’t need to implement sensor data interpolation algorithms in the system except for the GPS data. The UAV coordinates interpolation algorithm is shown in Equations (35) and (36).
The multi-target geo-location and tracking circuit board computes the multi-target geo-location after lens distortion correction in real time. Then, the board applies the moving target detection algorithm [14,15,16,17,18] to the tracked targets. If a tracked target is stationary, the multi-target geo-location and tracking circuit board uses the RLS filter to improve the target geo-location accuracy. The multi-target geo-location results are superimposed on each frame on the UAV and downlinked to a portable image receiver and the ground station.
This research aims to address the issues of real-time multi-target localization in UAVs by developing a hybrid localization model. In detail, the proposed scheme integrates the following improvements:
(a)
The multi-target localization accuracy is improved due to the combination of the zoom lens distortion correction method and the RLS filtering method. A real-time zoom lens distortion correction method is implemented on the circuit board in real time. In this paper, we analyse the effect of lens distortion on target geo-location accuracy. Many electro-optical stabilized imaging systems are equipped with zoom lenses. The focal length of a zoom lens can be adjusted to track targets at different distances during the flight, and the zoom lens distortion varies with the changing focal length. Real-time distortion correction of a zoom lens is impossible with conventional calibration methods, because the tedious calibration process has to be repeated whenever the focal length changes.
(b)
The target geo-location algorithm is implemented on a circuit board in real time. The circuit board is very small and can therefore be applied to many kinds of small UAVs. The target geo-location algorithm has low computational complexity and good real-time performance. The UAV can geo-locate targets without pre-existing geo-referenced imagery, terrain databases or the relative height between the UAV and the targets, and it can geo-locate targets using the embedded hardware in real time without orbiting the targets.
(c)
The multi-target geo-location and tracking circuit board uses the moving target detection algorithm [14,15,16,17,18] on the tracked targets. If a tracked target is stationary, the multi-target geo-location and tracking circuit board uses the RLS filter to automatically improve the target geo-location accuracy.
(d)
The multi-target localization, target detection and tracking techniques are combined. Therefore, we can geo-locate multiple moving targets in real-time and obtain target motion parameters such as velocity and trajectory. This is very important for UAVs performing reconnaissance and attack missions.
The real output rate of the geo-location results is 25 Hz. The reasons are as follows:
(a)
The data acquisition frequency of all the sensors is 25 Hz: the visible-light camera's frame rate is 25 Hz, and the frame synchronization signal of the camera is used as the external trigger signal for all sensors except GPS (the UAV coordinate interpolation algorithm is shown in Equations (35) and (36)).
(b)
Lens distortion correction is implemented in real-time, and the output rate of target location results after the lens distortion correction is 25 Hz.
(c)
When it is necessary to locate a new stationary target, the RLS algorithm needs 3–5 s to converge to a stable value (within 5 s, lens distortion correction is implemented in real-time, the output rate is 25 Hz). After 5 s, the geo-location errors of the target have converged to a stable value. We can obtain a more accurate location of this stationary target immediately (it is no longer necessary to run RLS). That output rate is 25 Hz, too.
Our geo-location algorithm can geo-locate at least 50 targets simultaneously. The reasons are as follows:
(a)
For a moving target we only use lens distortion correction to improve the target geo-location accuracy. It consumes 0.4 ms on average to calculate the geo-location of a single target while correcting zoom lens distortion (Section 5.4), and 0.4 ms to track the multiple targets (Section 3.3). The image frame rate is 25 fps, so the duration of a frame is 40 ms; therefore our geo-location algorithm can geo-locate at least 50 targets simultaneously.
(b)
For a stationary target, only when it is necessary to locate a new stationary target, the RLS algorithm needs 3–5 s to converge to a stable value (within 5 s, lens distortion correction is implemented in real-time, 50 targets can be located simultaneously). After 5 s, the geo-location errors of the target have converged to a stable value. We no longer need to run RLS, so our geo-location algorithm can geo-locate at least 50 targets simultaneously after lens distortion correction and RLS.
Therefore, this algorithm has great advantages in geo-location accuracy and real-time performance. The multiple target location method in this paper can be widely applied in many areas such as UAVs and robots.

3. Real-Time Target Geo-Location and Tracking System

3.1. Coordinate Frames and Transformation

Five coordinate frames (camera frame, body frame, vehicle frame, ECEF frame and geodetic frame) are used in this study. The relative relationships between the frames are shown in Figure 4. All coordinate frames follow a right-hand rule.

3.1.1. Camera Frame

The origin is the camera projection center. The x-axis x_c is parallel to the horizontal column pixel direction of the CCD sensor (i.e., the u direction in Figure 4). The y-axis y_c is parallel to the vertical row pixel direction of the CCD sensor (i.e., the v direction in Figure 4). The positive z-axis z_c represents the optical axis of the camera.

3.1.2. Body Frame

The origin is the mass center of the attitude measuring system. The x-axis x_b is the 0° direction of the attitude measuring system. The y-axis y_b is the 90° direction of the attitude measuring system. The z-axis z_b completes the right-handed orthogonal axis set. The azimuth Θ, the elevation angle ψ and the distance λ_1 output by the electro-optical stabilized imaging system are relative to this coordinate frame.

3.1.3. Vehicle Frame v

A north-east-down (NED) coordinate frame. The origin is the mass center of attitude measuring system. The aircraft yaw β , pitch ε and roll angle γ output by the attitude measuring system are relative to this coordinate frame.

3.1.4. ECEF Frame

The origin is the Earth's center of mass. The z-axis z_e points to the Conventional Terrestrial Pole (CTP) defined by the International Time Bureau (BIH) 1984.0, and the x-axis x_e points to the intersection of the prime meridian (defined in BIH 1984.0) and the CTP equator. The y-axis y_e completes the right-handed orthogonal axis set.

3.1.5. WGS-84 Geodetic Frame

The origin and three axes are the same as in the ECEF. Geodetic longitude L , geodetic latitude M and geodetic height H are used here to describe spatial positions, and the aircraft coordinates ( L 0 , M 0 , H 0 ) , output by GPS are relative to this coordinate frame.
The relation between the camera frame and the body frame is shown in Figure 4. Two steps are required. First, transformation from the camera frame to the intermediate frame int1: rotate 90° (the elevation angle ψ) about the y-axis y_c. The next step is the transformation from the intermediate frame int1 to the body frame: rotate by the azimuth angle Θ about the z-axis z_int1. In Figure 4a, ψ_1 represents 90°.
The relation between the body frame and the vehicle frame is shown in Figure 5. Three steps are required. First, transformation from the body frame to the intermediate frame mid1: rotate by the roll angle γ about the x-axis x_b. The next step is the transformation from the intermediate frame mid1 to the intermediate frame mid2: rotate by the pitch angle ε about the y-axis y_mid1. The final step is the transformation from the intermediate frame mid2 to the vehicle frame: rotate by the yaw angle β about the z-axis z_mid2.
The relation between vehicle frame and earth centered earth fixed (ECEF) is shown in Figure 6.

3.2. Multi-Target Geo-Location Model

As shown in Figure 4a, the main target is at the camera field of view (FOV) center, and its homogeneous coordinates in the camera frame are [x_c, y_c, z_c, 1]^T = [0, 0, λ_1, 1]^T. Through the transformations among the five coordinate frames, from the camera frame to the WGS-84 geodetic frame, the geodetic coordinates of the main target in the WGS-84 geodetic frame can be determined, as shown in Figure 7.
First, we calculate the coordinates of the main target in the ECEF:
$$
\begin{bmatrix} x_e \\ y_e \\ z_e \\ 1 \end{bmatrix} =
\begin{bmatrix}
-c_{L_0} s_{M_0} & -s_{L_0} & -c_{L_0} c_{M_0} & (N+H_0)\, c_{M_0} c_{L_0} \\
-s_{L_0} s_{M_0} & c_{L_0} & -c_{M_0} s_{L_0} & (N+H_0)\, c_{M_0} s_{L_0} \\
c_{M_0} & 0 & -s_{M_0} & \bigl(N(1-e^2)+H_0\bigr) s_{M_0} \\
0 & 0 & 0 & 1
\end{bmatrix}
\times
\begin{bmatrix}
c_\varepsilon c_\beta & s_\gamma s_\varepsilon c_\beta - c_\gamma s_\beta & c_\gamma s_\varepsilon c_\beta + s_\gamma s_\beta & 0 \\
c_\varepsilon s_\beta & s_\gamma s_\varepsilon s_\beta + c_\gamma c_\beta & c_\gamma s_\varepsilon s_\beta - s_\gamma c_\beta & 0 \\
-s_\varepsilon & s_\gamma c_\varepsilon & c_\gamma c_\varepsilon & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\times
\begin{bmatrix}
c_\Theta s_\Psi & -s_\Theta & c_\Theta c_\Psi & 0 \\
s_\Theta s_\Psi & c_\Theta & s_\Theta c_\Psi & 0 \\
-c_\Psi & 0 & s_\Psi & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}
\tag{1}
$$
where c * = cos ( * ) , s * = sin ( * ) .
Then we derive the geodetic coordinates of main target from earth centered earth fixed-world geodetic system (ECEF-WGS) transformation equations [19]:
$$U = \arctan\frac{a\,z_e}{b\sqrt{x_e^2 + y_e^2}} \tag{2}$$
$$L = \begin{cases} \arctan\dfrac{y_e}{x_e}, & x_e > 0 \\[4pt] \dfrac{\pi}{2}, & x_e = 0,\ y_e > 0 \\[4pt] -\dfrac{\pi}{2}, & x_e = 0,\ y_e < 0 \\[4pt] \pi + \arctan\dfrac{y_e}{x_e}, & x_e < 0,\ y_e > 0 \\[4pt] -\pi + \arctan\dfrac{y_e}{x_e}, & x_e < 0,\ y_e < 0 \end{cases} \tag{3}$$
$$M = \arctan\frac{z_e + b\,e'^2 \sin^3 U}{\sqrt{x_e^2 + y_e^2} - a\,e^2 \cos^3 U} \tag{4}$$
$$H = \frac{\sqrt{x_e^2 + y_e^2}}{\cos M} - N \tag{5}$$
In Equations (1)–(5), the semi-major axis of the ellipsoid is a = 6378137.0 m, the semi-minor axis of the ellipsoid is b = 6356752.0 m, the first eccentricity of the spheroid is e = √(a² − b²)/a, the second eccentricity of the spheroid is e′ = √(a² − b²)/b, and the radius of curvature of the spheroid in the prime vertical is N = a/√(1 − e² sin²M).
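To make Equations (2)–(5) concrete, the following minimal sketch converts ECEF coordinates to WGS-84 geodetic coordinates (the function and constant names are ours; the single-pass evaluation follows the equations above directly):

```python
import math

# WGS-84 ellipsoid constants used in Equations (1)-(5)
A = 6378137.0                      # semi-major axis a (m)
B = 6356752.0                      # semi-minor axis b (m)
E2 = (A**2 - B**2) / A**2          # first eccentricity squared e^2
EP2 = (A**2 - B**2) / B**2         # second eccentricity squared e'^2

def ecef_to_geodetic(xe, ye, ze):
    """Convert ECEF coordinates (m) to geodetic (L, M, H) per Equations (2)-(5)."""
    p = math.hypot(xe, ye)                             # sqrt(xe^2 + ye^2)
    U = math.atan2(A * ze, B * p)                      # Equation (2)
    L = math.atan2(ye, xe)                             # Equation (3); atan2 covers the sign cases
    M = math.atan2(ze + B * EP2 * math.sin(U)**3,      # Equation (4)
                   p - A * E2 * math.cos(U)**3)
    N = A / math.sqrt(1.0 - E2 * math.sin(M)**2)       # prime-vertical radius of curvature
    H = p / math.cos(M) - N                            # Equation (5)
    return math.degrees(L), math.degrees(M), H
```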
The key to sub-target geo-location is to build a geometrical geo-location model. The coordinates (x_c, y_c, z_c) of the sub-targets in the camera frame are solved from their pixel coordinates, and their geodetic coordinates are then calculated with the coordinate transformations of Equations (1)–(5). Suppose the ground area corresponding to a single image is flat and the relative altitudes between the targets and the electro-optical stabilized imaging system are the same. Based on the imaging principles of a single-plane-array charge coupled device (CCD) sensor, a multi-target geo-location model can be established, as shown in Figure 8.
Suppose that no image distortion exists, that the image point F of the main target P is at the image center, and that the three points, namely the projection center G, the sub-target Q and its image point T, are on the same line. Then a pinhole imaging model is formed and the altitude of a target relative to the electro-optical stabilized imaging system is:
$$h = \lambda_1 \cos\alpha = \lambda_2 \cos\beta \tag{6}$$
where: h is the relative altitude, λ 1 is the distance from electro-optical stabilized imaging system to main target, and λ 2 is the distance from electro-optical stabilized imaging system to a sub-target.
Suppose the line-of-sight (LOS) vectors of the main target P , the sub-target Q and the point K beneath the camera are s = G F , t = G T , j = G J respectively, α is the angle between s and j , and β is the angle between t and j , then [20]:
$$\cos\alpha = \frac{\mathbf{s} \cdot \mathbf{j}}{\|\mathbf{s}\|\,\|\mathbf{j}\|} \tag{7}$$
$$\cos\beta = \frac{\mathbf{t} \cdot \mathbf{j}}{\|\mathbf{t}\|\,\|\mathbf{j}\|} \tag{8}$$
F_c denotes the basis vectors of the camera frame in three-dimensional vector space. The coordinates of the LOS vectors s and t in the camera frame are given by Equations (9) and (10):
$$\mathbf{s} = \mathcal{F}_c^{\,T} \begin{bmatrix} 0 \\ 0 \\ f \end{bmatrix} \tag{9}$$
$$\mathbf{t} = \mathcal{F}_c^{\,T} \begin{bmatrix} u - u_0 \\ v - v_0 \\ f \end{bmatrix} \tag{10}$$
where f is the camera focal length in mm, (u_0, v_0) are the pixel coordinates of the point F, and (u, v) are the pixel coordinates of the point T.
F_v denotes the basis vectors of the vehicle frame in three-dimensional vector space. In the vehicle frame, the LOS vector j points down the z_v axis, so the coordinates of j in the vehicle frame are given by Equation (11):
$$\mathbf{j} = \mathcal{F}_v^{\,T} \mathbf{j}_v = \mathcal{F}_v^{\,T} \begin{bmatrix} 0 \\ 0 \\ j_{vz} \end{bmatrix} \tag{11}$$
The coordinates of j in the camera frame are solved as:
$$
\mathbf{j}_c = R_{cv}\,\mathbf{j}_v = R_{cb} R_{bv}\,\mathbf{j}_v =
\begin{bmatrix}
c_\Theta c_\Psi & c_\Psi s_\Theta & -s_\Psi \\
-s_\Theta & c_\Theta & 0 \\
c_\Theta s_\Psi & s_\Theta s_\Psi & c_\Psi
\end{bmatrix}
\begin{bmatrix}
c_\varepsilon c_\beta & c_\varepsilon s_\beta & -s_\varepsilon \\
s_\gamma s_\varepsilon c_\beta - c_\gamma s_\beta & s_\gamma s_\varepsilon s_\beta + c_\gamma c_\beta & s_\gamma c_\varepsilon \\
c_\gamma s_\varepsilon c_\beta + s_\gamma s_\beta & c_\gamma s_\varepsilon s_\beta - s_\gamma c_\beta & c_\gamma c_\varepsilon
\end{bmatrix}
\mathbf{j}_v
\tag{12}
$$
where c * = cos ( * ) , s * = sin ( * ) .
R b v is the rotation matrix transformation from the vehicle frame to the body frame. R c b is the rotation matrix transformation from the body frame to the camera frame. R c v is the rotation matrix transformation from the vehicle frame to the camera frame.
σ is the angle between the z v axis of the vehicle frame and the z c axis of the camera frame. According to the geometric relationship in Figure 6, we obtain:
$$\|\mathbf{j}\| = j_{vz} \tag{13}$$
$$f = j_{vz} \cos\sigma \tag{14}$$
Using Euler parameters, or quaternions, we have the definition:
$$\eta = \cos\frac{\sigma}{2} \tag{15}$$
It can also be shown that [21]:
$$\eta = \pm\frac{1}{2}\bigl(1 + \operatorname{tr} C_{cv}\bigr)^{1/2} \tag{16}$$
This may be manipulated into:
$$2\eta^2 - 1 = \frac{\operatorname{tr} C_{cv} - 1}{2} \tag{17}$$
Therefore:
$$j_{vz} = \frac{f}{\cos\sigma} = \frac{f}{2\cos^2(\sigma/2) - 1} = \frac{f}{2\eta^2 - 1} \tag{18}$$
and it follows that:
$$j_{vz} = \frac{2f}{\operatorname{tr} C_{cv} - 1} \tag{19}$$
By substituting the j v z value into Equations (11) and (12), the coordinate j c of j in the camera frame can be obtained. Then j c is substituted into Equations (7) and (8), to obtain cos α and cos β . Finally, according to the known main target distance λ 1 and Equation (6), the relative altitude h and the sub-target distance λ 2 can be determined. Based on the sub-target distance λ 2 and the LOS vector t of the sub-target in the camera frame, the coordinates of the sub-target in this frame can be determined:
$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = \lambda_2 \frac{\mathbf{t}}{\|\mathbf{t}\|} \tag{20}$$
Finally, the geodetic longitude L , the geodetic latitude M and geodetic height H of the sub-target can be calculated by substituting x c , y c and z c into Equations (1)–(5).
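As an illustration of Equations (6)–(20), the following sketch computes a sub-target's camera-frame coordinates from its pixel position (our own variable names; R_cv is the camera-from-vehicle rotation of Equation (12), and scaling the pixel offsets by the pixel sizes dx_mm, dy_mm is our assumption, since Equation (10) leaves the units implicit). The result can then be passed through Equations (1)–(5), e.g. with the ecef_to_geodetic sketch above, to obtain the sub-target's geodetic coordinates.

```python
import numpy as np

def locate_subtarget(u, v, u0, v0, f_mm, dx_mm, dy_mm, lam1, R_cv):
    """Estimate a sub-target's camera-frame coordinates from its pixel position.

    (u, v)   : sub-target pixel coordinates; (u0, v0): image center (main target)
    f_mm     : focal length (mm); dx_mm, dy_mm: pixel sizes (mm/pixel), our assumption
    lam1     : laser range from the imaging system to the main target (m)
    R_cv     : 3x3 rotation from the vehicle frame to the camera frame
    """
    s = np.array([0.0, 0.0, f_mm])                                   # main-target LOS, Eq. (9)
    t = np.array([(u - u0) * dx_mm, (v - v0) * dy_mm, f_mm])         # sub-target LOS, Eq. (10)

    # LOS of the point beneath the camera, expressed in the camera frame, Eqs. (11)-(19)
    j_vz = 2.0 * f_mm / (np.trace(R_cv) - 1.0)                       # Eq. (19)
    j_c = R_cv @ np.array([0.0, 0.0, j_vz])                          # Eq. (12)

    cos_a = s @ j_c / (np.linalg.norm(s) * np.linalg.norm(j_c))      # Eq. (7)
    cos_b = t @ j_c / (np.linalg.norm(t) * np.linalg.norm(j_c))      # Eq. (8)

    h = lam1 * cos_a                                                 # relative altitude, Eq. (6)
    lam2 = h / cos_b                                                 # sub-target slant range, Eq. (6)
    return lam2 * t / np.linalg.norm(t)                              # camera-frame coords, Eq. (20)
```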

3.3. Targets Tracking

The operator selects multiple targets in the first image, and these targets are then tracked using the tracking algorithm. In recent years, many excellent tracking algorithms have been proposed [22,23,24,25,26,27,28,29,30,31]. Due to the limited hardware resources of the TMS320DM642 (Texas Instruments Incorporated, Dallas, TX, USA), a tracking algorithm for UAV applications must be simple as well as highly efficient to meet the performance demands of real-time multiple target tracking. We use a simple two-stage method to improve the real-time performance of the correlation tracking algorithm described in [13]. The main improvements are as follows:
In the low resolution stage, we calculate the average of four adjacent pixels in the original image to generate a low resolution image, whose resolution is half that of the original image. The low resolution template is generated in the same way. The formula of the normalized cross correlation (NCC) algorithm is as follows:
$$R(u,v) = \frac{\displaystyle\sum_{x=1}^{m}\sum_{y=1}^{m} T(x,y)\,S(x+u,\,y+v)}{\sqrt{\displaystyle\sum_{x=1}^{m}\sum_{y=1}^{m} T^2(x,y)}\ \sqrt{\displaystyle\sum_{x=1}^{m}\sum_{y=1}^{m} S^2(x+u,\,y+v)}} \tag{21}$$
where the size of the low resolution template T is m × m, the size of the low resolution search area S is n × n, and (u, v) is the left corner point coordinate in the search area, with 0 ≤ u ≤ n − m and 0 ≤ v ≤ n − m. T moves over S during the matching operation. When R(u, v) reaches its maximum value R(u_0, v_0), the point (u_0, v_0) is the best matching point in the low resolution search area.
In the original stage, we only need to search a small area in the original image. The size of the template is 2   m × 2   m . The size of the small search area is ( 2   m + 2 ) × ( 2   m + 2 ) . The left corner point coordinate in the search area is ( 2 u 0 1 , 2 v 0 1 ) . Then, the best matching point in the original image can be calculated using the NCC algorithm. In our implementation, m is set to 28, n is set to 46.
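The two-stage matching can be sketched as follows (an illustrative outline in Python rather than the DSP implementation; it assumes grayscale numpy arrays, a 56 × 56 full-resolution template and a 92 × 92 full-resolution search area, consistent with m = 28 and n = 46 above):

```python
import numpy as np

def ncc_match(T, S):
    """Return the offset (u, v) in S that maximizes the NCC of Equation (21)."""
    m = T.shape[0]
    H, W = S.shape
    T = T.astype(np.float64)
    t_norm = np.sqrt((T ** 2).sum())
    best, best_uv = -1.0, (0, 0)
    for v in range(H - m + 1):
        for u in range(W - m + 1):
            patch = S[v:v + m, u:u + m].astype(np.float64)
            r = (T * patch).sum() / (t_norm * np.sqrt((patch ** 2).sum()) + 1e-12)
            if r > best:
                best, best_uv = r, (u, v)
    return best_uv

def two_stage_track(template, search):
    """Coarse NCC match at half resolution, then refinement in a small full-resolution window."""
    # Stage 1: half-resolution images formed by averaging each 2x2 block (m = 28, n = 46)
    t_lo = template.reshape(28, 2, 28, 2).mean(axis=(1, 3))
    s_lo = search.reshape(46, 2, 46, 2).mean(axis=(1, 3))
    u0, v0 = ncc_match(t_lo, s_lo)
    # Stage 2: search only a (2m + 2) x (2m + 2) window starting at (2*u0 - 1, 2*v0 - 1)
    u1, v1 = max(2 * u0 - 1, 0), max(2 * v0 - 1, 0)
    window = search[v1:v1 + 58, u1:u1 + 58]
    du, dv = ncc_match(template, window)
    return u1 + du, v1 + dv                 # best matching point in the original image
```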
In [13], template image matching takes 0.62 ms. After our improvement, it takes 0.4 ms on the multi-target localization circuit board. The improved method can meet the real-time requirements of multi-target tracking.

4. Methods to Improve the Accuracy of Multi-Target Localization

4.1. Distortion Correction

The above multi-target geo-location model is established under the assumption that image distortion does not exist. In fact, due to lens design and manufacturing errors of imaging systems, the image is distorted [32,33,34,35], so the projection rays between image points and object points cannot completely satisfy the requirement of linear propagation over the total field of view (TFOV) and instead bend to some extent. As shown in Figure 6, the image point of the main target moves from its ideal position T to a distorted position T′, and the three points, namely the projection center G, the image point T′ and the object point Q, are no longer in a straight line. This does not conform to the ideal pinhole imaging model, and calculating target geo-location data based on the ideal pinhole imaging model leads to a large error, so the lens distortion must be corrected. Distortion correction involves first deriving the position T of the target in the ideal image from its image point T′ in the distorted image according to the distortion model of the camera, and then calculating the geodetic coordinates of the target by using the pixel coordinates of the ideal image point T.
Real-time distortion correction of a zoom lens is impossible by using the calibration methods because the tedious calibration process has to be repeated again if the focal length is changed. In this research, we divide the zoom lens distortion procedures into two steps: lens distortion parameter estimation in the laboratory and real-time zoom lens distortion correction on the UAV.

4.1.1. Lens Distortion Parameter Estimation in the Laboratory

Lens distortion parameter estimation is performed in the laboratory. We use the UAV electro-optical stabilized imaging system to capture images of a chessboard pattern in the laboratory. The lines of the chessboard pattern are straight in the real world, but the images generally contain curved lines caused by the lens distortion. We capture many images at different focal lengths of the zoom lens and use them to construct the distortion parameter table, which is shown in Figure 3.
We extract the chessboard image edges using the Canny edge detector. The thresholds of the Canny edge detector are provided in terms of percentages of the gradient norm of the image.
For a zoom lens, the typical range of the distortion coefficient k_1 is [−1/D², 1/D²], where D is the diagonal of the image [36]. In pixel coordinates, the image size is w × h pixels, the distortion center is (u_0, v_0), and the image center is (u_c, v_c), where u_c = 0.5w and v_c = 0.5h. The range of u_0 is [u_c − 0.05w, u_c + 0.05w], and the range of v_0 is [v_c − 0.05h, v_c + 0.05h] [37,38].
We sample N_2 values of u_0 in the range [u_c − 0.05w, u_c + 0.05w], N_3 values of v_0 in the range [v_c − 0.05h, v_c + 0.05h], and N_1 values of k_1 in the range [−1/D², 1/D²] for each distortion center (u_0, v_0), so N_1 × N_2 × N_3 candidate distortion parameters are generated from these samples at a given camera focal length. The distortion parameters (k_1^i, u_0^j, v_0^p) are given by Equations (22)–(24):
$$k_1^i = -\frac{1}{D^2} + i\,\delta_{k_1} \tag{22}$$
$$u_0^j = 0.45w + j\,\delta_{u_0} \tag{23}$$
$$v_0^p = 0.45h + p\,\delta_{v_0} \tag{24}$$
where i = 1, 2, …, N_1; j = 1, 2, …, N_2; p = 1, 2, …, N_3; δ_{k_1} = 2/(N_1 D²); δ_{u_0} = 0.1w/N_2; δ_{v_0} = 0.1h/N_3.
For each distortion parameter set (k_1^i, u_0^j, v_0^p), the pixel coordinates (u_n, v_n) of the corrected chessboard image's edge points are computed using Equations (25)–(28):
$$x_d = (u_d - u_0)\,d_x \tag{25}$$
$$y_d = (v_d - v_0)\,d_y \tag{26}$$
$$u_n = u_0 + \frac{x_d}{d_x}\bigl(1 + k_1 x_d^2 + k_1 y_d^2\bigr) \tag{27}$$
$$v_n = v_0 + \frac{y_d}{d_y}\bigl(1 + k_1 x_d^2 + k_1 y_d^2\bigr) \tag{28}$$
where d_x and d_y are the pixel sizes in µm, (u_0, v_0) is the distortion center in pixels, (u_d, v_d) are the pixel coordinates in the distorted image, (u_n, v_n) are the pixel coordinates in the undistorted (corrected) image, and (x_d, y_d) are the projection coordinates of a distorted point in µm.
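A minimal sketch of applying Equations (25)–(28) to a single pixel (our own function name; k_1 must be expressed in units consistent with x_d and y_d):

```python
def undistort_pixel(ud, vd, k1, u0, v0, dx, dy):
    """Map a distorted pixel (ud, vd) to its corrected position (un, vn), Equations (25)-(28)."""
    xd = (ud - u0) * dx                      # Equation (25)
    yd = (vd - v0) * dy                      # Equation (26)
    r2 = xd * xd + yd * yd                   # k1*xd^2 + k1*yd^2 = k1*r^2
    un = u0 + (xd / dx) * (1.0 + k1 * r2)    # Equation (27)
    vn = v0 + (yd / dy) * (1.0 + k1 * r2)    # Equation (28)
    return un, vn
```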
The gradient direction α ( u n , v n ) of the corrected chessboard image’s edge points ( u n , v n ) are computed using Equation (29) to Equation (31):
$$G_u = \frac{I_{v_n,\,u_n+1} - I_{v_n,\,u_n} + I_{v_n+1,\,u_n+1} - I_{v_n+1,\,u_n}}{2} \tag{29}$$
$$G_v = \frac{I_{v_n+1,\,u_n} - I_{v_n,\,u_n} + I_{v_n+1,\,u_n+1} - I_{v_n,\,u_n+1}}{2} \tag{30}$$
$$\alpha(u_n, v_n) = \arctan\left(\frac{G_v}{G_u}\right) \tag{31}$$
where I is the chessboard image brightness and G_u, G_v are the first-order derivatives of the brightness at the corrected image's edge points.
We compute the Hough transform of the corrected chessboard image. The N strongest peaks in the Hough transform correspond to the most distinct lines. The distance between a line N q and the origin is d i s t ( q ) . The orientation of a line N q is β ( q ) , where q = 1,2,..., N .
If the angular difference between the edge point orientation α(u_n, v_n) and the orientation β(q) of line N_q is less than a threshold (2° in our implementation, which meets the distortion correction accuracy requirements), we compute the distance d_q from the edge point (u_n, v_n) to the line N_q:
$$d_q = \bigl| u_n \cos(\beta(q)) + v_n \sin(\beta(q)) - dist(q) \bigr| \tag{32}$$
If d_q is less than a threshold (2 pixels in our implementation, which meets the distortion correction accuracy requirements), the edge point (u_n, v_n) votes for the line N_q; the vote of the edge point (u_n, v_n) is:
$$\text{votes} = \frac{1}{1 + d_q} \tag{33}$$
We compute the sum of the votes of all edge points. For this focal length, the best distortion parameters (k_1, u_0, v_0) are obtained by maximizing the straightness measure function:
$$\max\left\{ \sum_{q=1}^{N} \text{votes}\bigl(dist(q), \beta(q), k_1^i, u_0^j, v_0^p\bigr) \right\} \tag{34}$$
where votes(dist(q), β(q), k_1^i, u_0^j, v_0^p) is the total vote for line N_q in the chessboard image corrected with the distortion parameters (k_1^i, u_0^j, v_0^p).
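The laboratory search over candidate parameters can be summarized by the following sketch (an outline under our assumptions: the edge points, their gradient directions and the N strongest Hough lines are taken as precomputed inputs, whereas Equations (29)–(31) recompute the gradient directions on each corrected image):

```python
import numpy as np

def estimate_distortion(edge_pts, edge_angles, lines, k1_grid, u0_grid, v0_grid, dx, dy,
                        angle_thresh_deg=2.0, dist_thresh_px=2.0):
    """Grid search of Equations (22)-(34): choose (k1, u0, v0) maximizing line straightness.

    edge_pts    : (P, 2) array of distorted edge pixel coordinates (u_d, v_d)
    edge_angles : (P,) gradient directions of the edge points (radians)
    lines       : list of (dist, beta) pairs for the N strongest Hough lines
    """
    best_score, best_params = -np.inf, None
    for k1 in k1_grid:
        for u0 in u0_grid:
            for v0 in v0_grid:
                # correct all edge points with Equations (25)-(28)
                xd = (edge_pts[:, 0] - u0) * dx
                yd = (edge_pts[:, 1] - v0) * dy
                r2 = xd ** 2 + yd ** 2
                un = u0 + (xd / dx) * (1.0 + k1 * r2)
                vn = v0 + (yd / dy) * (1.0 + k1 * r2)
                score = 0.0
                for dist, beta in lines:
                    # only edge points roughly aligned with the line may vote
                    aligned = np.abs(np.degrees(edge_angles - beta)) < angle_thresh_deg
                    dq = np.abs(un * np.cos(beta) + vn * np.sin(beta) - dist)   # Equation (32)
                    voters = aligned & (dq < dist_thresh_px)
                    score += np.sum(1.0 / (1.0 + dq[voters]))                   # Equation (33)
                if score > best_score:                                           # Equation (34)
                    best_score, best_params = score, (k1, u0, v0)
    return best_params
```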
We apply the above algorithm to calibrate the best distortion parameters (k_1, u_0, v_0) for different lens focal lengths. Then, the best zoom lens distortion parameters (k_1, u_0, v_0) over all focal lengths are obtained through curve fitting using Matlab tools. We store the distortion parameter table for all focal lengths in the flash chip on the multi-target geo-location and tracking circuit board.

4.1.2. Real-Time Lens Distortion Correction on the UAV

The zoom lens is connected to a potentiometer through gears, and the relationship between focal length and resistance has been calibrated in the laboratory, so we can obtain the focal length by measuring the resistance of the potentiometer in real time. During the flight of the UAV, we use the focal length measuring sensor (model: S10HP-3 3-turn potentiometer, SAKAE, Nagoya, Japan, resistance error: ±1%) to measure the camera focal length and look up the corresponding distortion parameters (k_1^i, u_0^j, v_0^p) in the flash chip on the multi-target localization circuit board. The pixel coordinates (u_n, v_n) of the corrected real-time image are computed using Equations (25)–(28), and we use (u_n, v_n) to calculate the geodetic coordinates of the targets.

4.2. RLS Filter

For stationary targets on the ground, the location result in different frames should be the same. Therefore, a popular technique to remove the estimation error is to use a recursive least squares (RLS) filter. The RLS filter minimizes the average squared error of the estimate. The RLS filter uses an algorithm that only requires a scalar division at each step, making the RLS filter suitable for real-time implementation, so we use RLS to reduce the standard deviation and improve the accuracy of multiple stationary target localization.
Suppose the original geo-location data of t images are x_k (k = 1, 2, …, t). The RLS algorithm flowchart is shown in Figure 9; in Figure 9, I_{1×1} is a 1 × 1 unit matrix. After RLS filtering of the original data x_k, the obtained data are X_k (k = 1, 2, …, t), where X_k can be the longitude L, the latitude M or the geodetic height H.
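Because a stationary target's coordinate is a constant observed with noise, the RLS recursion reduces to a scalar update per step, which is what makes it cheap enough for the circuit board. A minimal sketch of this scalar form (our own formulation, with P initialized from the 1 × 1 unit matrix mentioned above and a forgetting factor of 1):

```python
def rls_filter(measurements, x0=None, p0=1.0):
    """Recursively estimate a constant (e.g. a target's longitude) from noisy samples x_k."""
    estimate = measurements[0] if x0 is None else x0
    P = p0                                   # 1x1 covariance, initialized to the unit matrix
    history = []
    for x_k in measurements:
        K = P / (P + 1.0)                    # gain; only a scalar division per step
        estimate = estimate + K * (x_k - estimate)
        P = (1.0 - K) * P                    # covariance update
        history.append(estimate)
    return history                           # filtered data X_k, k = 1, 2, ..., t
```

Applied independently to the longitude, latitude and height sequences of each tracked target, the estimate converges toward the average of the measurements, which is consistent with the rapid decrease in dispersion reported in Section 5.5.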
The GPS data (coordinates of the UAV) refresh rate is 1 Hz, but the video frame rate is usually above 25 Hz. To raise the convergence rate of the RLS algorithm, when the UAV speed is known, the coordinates of the UAV at the corresponding time can be determined through dead reckoning. In the WGS-84 ECEF, the coordinates of the UAV are [39]:
$$x_e = x_{e0} + \int_{0}^{n} V_x \, dt \tag{35}$$
$$y_e = y_{e0} + \int_{0}^{n} V_y \, dt \tag{36}$$
where ( x e 0 , y e 0 ) are the coordinates at the initial time, and V x is UAV speed in direction X , V y is UAV speed in direction Y . The influence of the UAV geodetic coordinate position and speed on the reckoned coordinates of the UAV is analyzed as follows:
(1)
In the WGS-84 geodetic frame, the higher the latitude of the UAV is, the smaller the projection of 1° longitude onto the horizontal direction. Therefore, in the high latitude area, the measurement accuracy of GPS is high, and the accuracy of the reckoned coordinates is high.
(2)
The lower the UAV speed, the shorter the distance the UAV moves in the same time interval, and the higher the accuracy of the reckoned coordinates.
According to Equations (35) and (36), the error resulting from the low update rate of the GPS data can be compensated, so that the RLS algorithm converges rapidly to a stable value. Therefore, we can geo-locate multiple stationary ground targets quickly and accurately.
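A minimal sketch of the dead reckoning in Equations (35) and (36), used to propagate the 1 Hz GPS fix to each 25 Hz video frame (our own function; it assumes constant speed components between fixes):

```python
def dead_reckon(xe0, ye0, vx, vy, t_frame, t_fix):
    """Propagate the last GPS fix (xe0, ye0) at time t_fix to the frame time t_frame.

    vx, vy are the UAV speed components in the ECEF X and Y directions (m/s).
    """
    dt = t_frame - t_fix                     # elapsed time since the 1 Hz GPS fix (s)
    xe = xe0 + vx * dt                       # Equation (35) with constant speed
    ye = ye0 + vy * dt                       # Equation (36) with constant speed
    return xe, ye
```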

5. Experiments and Discussion

The target location data were obtained during a UAV flight in real time. The evaluation is based on UAV videos captured over the Changji highway from 9:40 to 11:10. The resolution of the videos is 1024 × 768 and the frame rate is 25 frames per second (fps).

5.1. The Zoom Lens Distortion Parameter Estimation Results

To evaluate the proposed lens distortion parameter estimation approach, we use a planar chessboard pattern and a zoom lens camera, which are shown in Figure 10. The size of the pattern is 450 mm × 450 mm. We capture images containing the chessboard pattern at several focal lengths: f = 5, 8, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, 90, 100 mm. Then, we apply the lens distortion parameter estimation approach of Section 4.1 to estimate the lens distortion parameters.
The relationships between the distortion coefficient k 1 and the focal length f are shown in Figure 11a. The relationships between the distortion center ( u 0 , v 0 ) and the focal length f are shown in Figure 11b.
We fit curves to the data. For 5.8 mm ≤ f ≤ 20 mm, the relationship between the distortion coefficient k_1 and the focal length f is given by Equation (37):
$$k_1(f) = \rho f^2 + \sigma f + \tau \tag{37}$$
where ρ = 1.369 × 10⁻¹⁰, σ = 1 × 10⁻⁹, τ = 1.108 × 10⁻⁷.
For 20 mm ≤ f ≤ 100 mm, the relationship between the distortion coefficient k_1 and the focal length f is given by Equation (38):
$$k_1(f) = \frac{\delta}{f + \eta} + \psi \tag{38}$$
where δ = 5.245 × 10⁻⁵, η = 2.768, ψ = 2.564 × 10⁻⁸.
We also fit curves for the distortion center. The relationship between the distortion center (u_0, v_0) and the focal length f is given by Equations (39) and (40):
$$u_0(f) = \lambda_1 f + \mu_1 \tag{39}$$
$$v_0(f) = \lambda_2 f + \mu_2 \tag{40}$$
where λ 1 = 0.5722 , μ 1 = 500.4904 , λ 2 = 0.2897 , μ 2 = 363.5341 .
Based on the above fitting formulas, we calculate the zoom lens distortion parameters (k_1^i, u_0^j, v_0^p) for all focal lengths to construct the distortion parameter table in the laboratory. We store the distortion parameter table in the flash chip on the multi-target localization circuit board.
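The fitting and table construction can be sketched as follows (a sketch under our assumptions: the segment boundary at 20 mm and the functional forms of Equations (37)–(40) are taken from above, and scipy.optimize.curve_fit stands in for the Matlab fitting tools actually used):

```python
import numpy as np
from scipy.optimize import curve_fit

def build_distortion_table(f_samples, k1_samples, u0_samples, v0_samples, f_table):
    """Fit Equations (37)-(40) to the laboratory estimates and tabulate them over f_table."""
    f_samples = np.asarray(f_samples, dtype=float)
    lo = f_samples <= 20.0                                    # short-focal-length segment

    # Equation (37): quadratic k1(f) for 5.8 mm <= f <= 20 mm
    quad = np.polyfit(f_samples[lo], np.asarray(k1_samples)[lo], 2)

    # Equation (38): k1(f) = delta / (f + eta) + psi for 20 mm <= f <= 100 mm
    rational = lambda f, d, e, p: d / (f + e) + p
    (d, e, p), _ = curve_fit(rational, f_samples[~lo], np.asarray(k1_samples)[~lo],
                             p0=(1e-5, 1.0, 0.0))

    # Equations (39)-(40): linear fits for the distortion center
    lu = np.polyfit(f_samples, u0_samples, 1)
    lv = np.polyfit(f_samples, v0_samples, 1)

    table = []
    for f in f_table:
        k1 = np.polyval(quad, f) if f <= 20.0 else rational(f, d, e, p)
        table.append((f, k1, np.polyval(lu, f), np.polyval(lv, f)))
    return table   # stored in flash and indexed by the measured focal length in flight
```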

5.2. Targets Location Experimental Design and Instrument Description

This test is divided into the following four parts:
(1)
Monte Carlo simulation analysis of multi-target geo-location error. Through this analysis, the expected error of multi-target geo-location can be determined.
(2)
Geo-location test of a single aerial image. We substitute an actual aerial image and its position/attitude data into the multi-target geo-location program for target location resolution. We use a high-precision GPS receiver to measure the geo-location data of various ground targets as the nominal values for target geo-location. We compare the calculated values with these nominal values to obtain the multi-target geo-location accuracy of the image. We then correct the geo-location error arising from lens distortion and compare the calculated geodetic coordinates of each target with its nominal geodetic coordinates to determine the multi-target geo-location accuracy after distortion correction.
(3)
Geo-location test of multi-frame aerial images. For many stationary ground targets, we use the RLS algorithm to adaptively estimate the multi-frame image geo-location data and then by comparing the RLS filtration results with nominal values, we determine the target geo-location accuracy after the RLS filtration.
(4)
Real-time geo-location and tracking of multiple ground-based moving targets. We derive the motion trail of each target from the geo-location data and time interval of every image. Then we calculate the speed of each target.
Here, a GPS receiver of the Geo Explorer 3000 series is used for the ground measurements. This instrument has 14 channels, including 12 L1 code and carrier channels and two satellite-based augmentation system (SBAS) channels. It is integrated with real-time dual-channel SBAS tracking technology and supports real-time differential correction. It can achieve real-time sub-meter geo-location accuracy; an accuracy of 50 cm is available through Trimble Delta Phase postprocessing.

5.3. Test 1: Monte Carlo Analysis of Multi-Target Geo-Location Accuracy Error

Error analysis is an important step in judging whether a geo-location method is good or not. It is very difficult to analyze the target geo-location error through complete differentials of the measurement equations of the airborne electro-optical stabilized imaging system, so Monte Carlo analysis is introduced to analyze the multi-target geo-location error. The Monte Carlo method is based on the law of large numbers and Bernoulli's theorem [40]. On the basis of this method, a model of the multi-target geo-location error can be established:
$$\begin{bmatrix} \Delta L & \Delta M & \Delta H \end{bmatrix}^T = F(X) - F(X - \Delta X) \tag{41}$$
where ΔL, ΔM and ΔH are the geo-location errors of each target, and ΔX is the geo-location parameter error.
We use five aerial images (32 targets) for the multi-target geo-location and distortion correction tests (eight targets in the 1st image, six targets in the 2nd image, five targets in the 3rd image, five targets in the 4th image, and eight targets in the 5th image). We use the eight targets in the 1st image in Test 1. The image size is 1024 pixels × 768 pixels, and the pixel size is 5.5 μm × 5.5 μm. The position/attitude data of the electro-optical stabilized imaging system collected through GPS, attitude measurement and laser ranging at the time of image capture are shown in Table 1. The root-mean-square error (RMSE) of each parameter is determined in accordance with the maximum nominal error stipulated in the relevant measurement equipment specifications.
Based on the nominal values and errors of above parameters, the sample model for 10,000 random variable arrays can be established in the Matlab software. By using the Equations (1)–(20) and (41), the geodetic coordinate RMSE of every target in this test can be determined through the Monte Carlo method, as shown in Table 2.
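The Monte Carlo procedure of Equation (41) can be sketched as follows (our own outline: geolocate is a placeholder for the full Equations (1)–(20) pipeline mapping the sensor parameter vector to a target's geodetic coordinates, and the parameter names are hypothetical):

```python
import numpy as np

def monte_carlo_error(geolocate, nominal_params, param_rmse, n_runs=10000, seed=0):
    """Estimate the geodetic-coordinate RMSE of a target per Equation (41).

    nominal_params : dict of nominal sensor values (attitude, gimbal angles, range, ...)
    param_rmse     : dict of the RMSE of each parameter from the equipment specifications
    """
    rng = np.random.default_rng(seed)
    nominal = geolocate(nominal_params)                       # F(X)
    errors = []
    for _ in range(n_runs):
        perturbed = {k: v + rng.normal(0.0, param_rmse.get(k, 0.0))
                     for k, v in nominal_params.items()}
        errors.append(np.asarray(nominal) - np.asarray(geolocate(perturbed)))  # F(X) - F(X - dX)
    errors = np.asarray(errors)                               # rows of [dL, dM, dH]
    return np.sqrt((errors ** 2).mean(axis=0))                # RMSE of L, M, H
```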

5.4. Test 2: Multi-Target Geo-Location Using a Single Aerial Image with Distortion Correction

The data in Table 1 are substituted into Equations (1)–(20) to calculate the geodesic coordinates of each target in a single image, as shown in Table 3.
The calculated geodetic coordinates of each target in Table 3 are compared with the nominal geodetic coordinates measured by the GPS receiver on the ground, in order to obtain the geo-location error of each target in a single image, as shown in Table 4. The comparison between the data in Table 4 and those in Table 2 shows that the geo-location error of each target is within the expected error range. The geodetic height geo-location errors of all the targets are about 18 m, which is basically in line with the assumption in Section 3.2 that "the ground area corresponding to a single image is flat and the relative altitudes between targets and electro-optical stabilized imaging system are the same".
The latitude and longitude geo-location errors of sub-targets are bigger than those of main targets for the following reasons: (1) the error arising from the slope distance difference between a main target and a sub-target. The longer the slope distance of a target to the image border, the bigger the geo-location error [39]; (2) the coordinate transformation error caused by the attitude measurement error and the angle measurement error (measured by the electro-optical stabilized imaging system) during the calculation of altitude and distance of a sub-target relative to the electro-optical stabilized imaging system; (3) the pixel coordinate error of a sub-target found in target detection; and (4) the pixel coordinate error of a sub-target caused by image distortion.
The geo-location errors of a target can be reduced in three ways: by reduction in the flight height H 0 or an increase in the platform elevation angle Ψ (the horizontal forward direction is 0°) to shorten the slope distance, which needs to consider the flight conditions; the selection of a high-precision attitude measuring system and the improvement in angle measurement accuracy of the electro-optical stabilized imaging system, for which one needs to consider the hardware cost; and the distortion correction. Therefore, the influence of distortion correction on multi-target geo-location accuracy will be mainly discussed.
Sub-target pixel coordinate error is mainly caused by image distortion. The correction method is discussed in Section 4.1 and Section 5.1. We calculate the corrected pixel coordinates using Equations (25)–(28) and then use the corrected pixel coordinates to calculate the geodetic coordinates of the targets. The multi-target geo-location errors after distortion correction are shown in Table 5.
It can be seen through the comparison between the data in Table 5 and those in Table 4 that, after the distortion correction, the latitude and longitude errors of each target are generally smaller than those before the correction, while the geo-location error of geodetic height remains basically unchanged. For the sub-targets farther from the image center, a more significant reduction in longitude and latitude errors can be obtained. Therefore, lens distortion correction can improve the geo-location accuracy of sub-targets and thus raise the overall accuracy of multi-target geo-location.
Target geo-location accuracy and missile hit accuracy are usually evaluated through the circular error probability (CEP) [41]. CEP is defined as the radius of a circle centered on the target point within which the hit probability is 50%. In the ECEF frame, the geo-location error along the X direction is x and the geo-location error along the Y direction is y; x and y can be calculated from the longitude and latitude errors listed in Table 4 and Table 5. Supposing that both x and y follow a normal distribution, the joint probability density function of (x, y) can be expressed as:
$$f(x,y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \frac{(x-\mu_x)^2}{\sigma_x^2} - \frac{2\rho (x-\mu_x)(y-\mu_y)}{\sigma_x \sigma_y} + \frac{(y-\mu_y)^2}{\sigma_y^2} \right] \right\} \tag{42}$$
where μ_x and μ_y are the mean geo-location errors along the X and Y directions, respectively; σ_x and σ_y are the standard deviations of the geo-location errors along the X and Y directions, respectively; and ρ is the correlation coefficient of the geo-location errors along the X and Y directions, with 0 ≤ |ρ| ≤ 1.
Let x = r cos θ, y = r sin θ and r = √(x² + y²); the R satisfying the following equation is the CEP:
$$\frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}} \int_0^R \!\!\int_0^{2\pi} r \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \frac{(r c_\theta - \mu_x)^2}{\sigma_x^2} - \frac{2\rho (r c_\theta - \mu_x)(r s_\theta - \mu_y)}{\sigma_x \sigma_y} + \frac{(r s_\theta - \mu_y)^2}{\sigma_y^2} \right] \right\} d\theta\, dr = 0.5 \tag{43}$$
where: c θ = cos θ , s θ = sin θ .
If the mean geo-location errors μ x , μ y are unknown, they can be substituted by the sample mean geo-location errors μ ^ x , μ ^ y respectively. If the standard deviations of the geo-location errors σ x , σ y are unknown, they can be substituted by the sample standard deviation of the geo-location errors σ ^ x , σ ^ y respectively. If the correlation coefficients of the geo-location errors ρ are unknown, they can be substituted by the sample correlation coefficients of the geo-location errors ρ ^ . If the total mean geo-location error μ r is unknown, it can be substituted by the total sample mean geo-location error μ ^ r . If the total standard deviation of the geo-location error σ r is unknown, it can be substituted by the total sample standard deviation of the geo-location errors σ ^ r .
Suppose the number of geo-location error samples is n : ( x 1 , y 1 ) , ( x 2 , y 2 ) , ( x 3 , y 3 ) , … , ( x n , y n ) . The sample mean geo-location errors along X and Y directions are:
$$\hat{\mu}_x = \frac{1}{n}\sum_{i=1}^{n} x_i \tag{44}$$
$$\hat{\mu}_y = \frac{1}{n}\sum_{i=1}^{n} y_i \tag{45}$$
The total sample mean geo-location error is:
$$\hat{\mu}_r = \frac{1}{n}\sum_{i=1}^{n} r_i \tag{46}$$
The sample standard deviations of the geo-location errors along X and Y directions are:
$$\hat{\sigma}_x = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} (x_i - \hat{\mu}_x)^2} \tag{47}$$
$$\hat{\sigma}_y = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} (y_i - \hat{\mu}_y)^2} \tag{48}$$
The total sample standard deviations of the geo-location errors is:
$$\hat{\sigma}_r = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} (r_i - \hat{\mu}_r)^2} \tag{49}$$
The sample correlation coefficient of the geo-location errors is:
$$\hat{\rho} = \frac{\sum_{i=1}^{n} (x_i - \hat{\mu}_x)(y_i - \hat{\mu}_y)}{\sqrt{\sum_{i=1}^{n} (x_i - \hat{\mu}_x)^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \hat{\mu}_y)^2}} \tag{50}$$
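A minimal sketch of the CEP evaluation (our own implementation: the sample statistics of Equations (44)–(50) are computed first, and R is then found by bisection so that the numerically integrated probability of Equation (43) equals 0.5):

```python
import numpy as np

def cep_from_samples(x, y, n_grid=400):
    """Compute the circular error probability from geo-location error samples (x_i, y_i)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()                          # Equations (44)-(45)
    sx, sy = x.std(ddof=1), y.std(ddof=1)                # Equations (47)-(48)
    rho = np.corrcoef(x, y)[0, 1]                        # Equation (50)

    theta = np.linspace(0.0, 2.0 * np.pi, n_grid)
    def prob(R):
        # numerically integrate the bivariate normal of Equation (43) over the disc of radius R
        r = np.linspace(0.0, R, n_grid)
        rr, tt = np.meshgrid(r, theta, indexing="ij")
        dx_, dy_ = rr * np.cos(tt) - mx, rr * np.sin(tt) - my
        q = dx_**2 / sx**2 - 2 * rho * dx_ * dy_ / (sx * sy) + dy_**2 / sy**2
        f = rr * np.exp(-q / (2 * (1 - rho**2))) / (2 * np.pi * sx * sy * np.sqrt(1 - rho**2))
        return np.trapz(np.trapz(f, theta, axis=1), r)

    lo, hi = 0.0, 10.0 * max(abs(mx) + 3 * sx, abs(my) + 3 * sy)
    for _ in range(60):                                   # bisection for prob(R) = 0.5
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if prob(mid) < 0.5 else (lo, mid)
    return 0.5 * (lo + hi)
```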
When the number of geo-location error samples is more than 30, the confidence of the CEP calculation result can reach 90% [41]. Therefore, through the multi-target geo-location and distortion correction tests on five aerial images, 32 samples of target geo-location errors are obtained before and after the distortion correction, respectively, including eight in the 1st image, six in the 2nd image, five in the 3rd image, five in the 4th image, and eight in the 5th image. The normality test and independence test of the sample data reveal that the samples conform to a normal distribution but are not independent (the sample correlation coefficient before the distortion correction is ρ̂₁ = 0.6618, and the sample correlation coefficient after the distortion correction is ρ̂₂ = 0.5607).
Geo-location errors of the eight targets in the 1st image (which are parts of 32 targets) before the correction are shown in Table 4. Geo-location errors of the eight targets in the 1st image (which are parts of 32 targets) after the correction are shown in Table 5.
The multi-target geo-location errors before the distortion correction are processed through Equations (43)–(50) to calculate the CEP, where the mean geo-location errors along the X and Y directions are 17.41 m and 20.34 m, respectively, the standard deviations of the geo-location errors along the X and Y directions are 7.77 m and 10.05 m, respectively, and the total mean geo-location error and total standard deviation are 28.98 m and 6.08 m, respectively. Numerical integration of Equation (43) finds that, among the 32 samples, 16 are in a circle with a radius of 28.74 m, which means the probability is 50%. The numbers of samples inside and outside the solid circle in Figure 12 also show that the CEP1 of multi-target geo-location before the distortion correction is 28.74 m. Then the multi-target geo-location errors after the distortion correction are processed through Equations (43)–(50) to calculate the CEP, where the mean geo-location errors along the X and Y directions are 17.04 m and 18.63 m, respectively, the standard deviations of the geo-location errors along the X and Y directions are 6.86 m and 8.25 m, respectively, and the total mean geo-location error and total standard deviation are 26.91 m and 5.31 m, respectively. Numerical integration of Equation (43) finds that, among the 32 samples, 16 are in a circle with a radius of 26.80 m, which means the probability is 50%. The numbers of samples inside and outside the dotted circle in Figure 12 also show that the CEP2 of multi-target geo-location after the distortion correction is 26.80 m, 7% smaller than that before the distortion correction. Note: we use the original geo-location errors (32 samples scattered over the four quadrants) to calculate the CEP circle. Because the 32 samples scattered over the four quadrants are not conducive to error analysis, we take the absolute value of each geo-location error, so all points lie in the first quadrant in Figure 12.
In order to compare the performance of our multi-target geo-location algorithm with that of other algorithms, these localization accuracy results have been compared with the accuracies of the geo-location methods reported in [7,8,9], as shown in Table 6. It can be seen from Table 6 that the target geo-location accuracy in this paper is close to that reported in [7]. However, the geo-location accuracy in [7] depends on the distance between the projection centers of two consecutive images (namely the baseline length). To obtain higher geo-location accuracy, the baseline must be longer, so in [7] the time interval between two consecutive images used for geo-location is quite large. Meanwhile, as the SIFT algorithm is needed to extract feature points from multi-frame images for the purpose of 3D reconstruction, the algorithm in [7] has a heavy computational load that hinders real-time implementation. In [8], the geo-location accuracy is about 20 m before filtering. The real-time geo-location accuracy of the two methods is almost the same; however, our UAV flight altitude is much higher than that in [8]. In [9], the geo-location accuracy is about 39.1 m before compensation, so our real-time geo-location accuracy is higher than that in [9]. In [9], after the UAV flies many circuits in an orbit, the geo-location accuracy can be increased to 8.58 m; however, this accuracy cannot be obtained in real time. The algorithms in [7,8,9] are implemented on a computer in the ground station. Since the images are transmitted to, and processed on, a computer in the ground station, a delay occurs between the data capture moment and the time of completion of processing. In comparison, the geo-location algorithm in this paper is programmed and implemented on the multi-target localization circuit board (model: THX-IMAGE-PROC-02, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China) with a TMS320DM642 at a 720 MHz clock rate and 32-bit instructions/cycle, and 1 GB DDR. It consumes 0.4 ms on average to calculate the geo-location of a single target while, at the same time, correcting zoom lens distortion. Therefore, this algorithm has great advantages in both geo-location accuracy and real-time performance. The multi-target location method in this paper can be widely applied in many areas such as UAVs and robots.

5.5. Test 3: RLS Filter for Geo-Location Data of Multiple Stationary Ground Targets

The targets in the above tests are all stationary ground targets. After lens distortion correction, the 1st aerial image (eight targets) is used as the initial frame for target tracking. For the 150 frames starting from the 1st image, the eight targets are tracked respectively. The coordinates of the UAV are calculated using Equations (25) and (26), and the update rate of the UAV coordinates is synchronized with the camera frame rate. The geo-location data of each target are adaptively estimated by the RLS algorithm. The geo-location results of the eight targets before and after RLS filtering are shown in Figure 13 (the dots “•” in different colors represent the original geo-location data of the eight targets, while □, ◁, ▷, ☆, ▽, ○, ◇ and △ represent the geo-location data of the main target and sub-targets 1, 2, …, 7 after RLS filtering). It can be observed from Figure 13 that, after RLS filtering, the dispersion of the geo-location data decreases sharply for each target, converging rapidly to a small area adjacent to the true position of the target. Figure 14 shows how the plane geo-location error of sub-target 2 changes with the number of image frames (and the corresponding time) during RLS filtering. It can be seen from Figure 14 that after filtering the geo-location data of 100–150 images (corresponding to 3–5 s), the geo-location errors of the target have converged to a stable value, so a more accurate stationary target location is available after 150 images and the RLS filter no longer needs to run.
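To make the filtering step concrete, the following minimal sketch (Python; the class name, forgetting factor and initial covariance are assumed tuning choices, and the formulation is a generic per-axis RLS estimator for a constant position rather than the exact on-board implementation) shows how a sequence of noisy per-frame geo-location fixes for one stationary target can be smoothed recursively:

```python
import numpy as np

class RLSPositionFilter:
    """Recursive least squares estimate of an (assumed constant) target
    position, applied independently to each coordinate axis."""
    def __init__(self, dim=3, lam=1.0, p0=1e6):
        self.x = np.zeros(dim)        # position estimate (weak zero prior)
        self.P = np.full(dim, p0)     # large initial covariance: trust early fixes
        self.lam = lam                # forgetting factor (1.0 = no forgetting)

    def update(self, z):
        z = np.asarray(z, dtype=float)
        K = self.P / (self.lam + self.P)          # per-axis RLS gain
        self.x = self.x + K * (z - self.x)        # correct the estimate
        self.P = (1.0 - K) * self.P / self.lam    # propagate the covariance
        return self.x

# Hypothetical usage: one noisy geo-location fix per video frame (values invented).
flt = RLSPositionFilter()
for z in [(10.2, -3.1, 171.9), (9.7, -2.8, 172.3), (10.0, -3.0, 171.6)]:
    est = flt.update(z)
print(est)   # smoothed position estimate after the last frame
```

With the forgetting factor set to 1, this update reduces to a covariance-weighted running average, which is why the filtered estimates in Figure 13 collapse toward the true target positions as more frames are processed.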
By comparing the results after the stabilization of RLS filtration with the nominal geodetic coordinates of each target, the geo-location errors of geodetic coordinates of the targets after RLS filtration can be determined, as shown in Table 7.
Comparing the data in Table 7 with those in Table 5 shows that, after RLS filtering, the longitude and latitude errors of each target are much smaller than the multi-target geo-location errors obtained from a single image processed only by lens distortion correction; the geo-location error of the geodetic height also decreases slightly.
After lens distortion correction, the other four aerial images (24 targets) are used as initial frames for target tracking (eight targets in the 1st image, six targets in the 2nd image, five targets in the 3rd image, five targets in the 4th image, and eight targets in the 5th image).
For the 150 frames starting from the 2nd image, the six targets are tracked and the geo-location data of each target are adaptively estimated by the RLS algorithm; the same procedure is applied to the five targets tracked from the 3rd image, the five targets tracked from the 4th image, and the eight targets tracked from the 5th image.
The multi-target geo-location data after RLS filtering are processed through Equations (43)–(50) to calculate the CEP. The mean geo-location errors along the X and Y directions are 13.78 m and 14.53 m, respectively, and the standard deviations along the X and Y directions are 5.79 m and 7.34 m, respectively. Numerical integration of Equation (43) shows that 17 of the 32 samples fall within a circle of radius 21.52 m, i.e., a probability of 53%. The numbers of samples inside and outside the dotted circle in Figure 15 confirm that the CEP3 of multi-target geo-location after RLS filtering is 21.52 m, 25% smaller than the CEP1 of multi-target geo-location from a single image. As before, the CEP circle is computed from the original geo-location errors, whose 32 samples are scattered over all four quadrants; for display, the absolute value of each geo-location error is taken, so all points lie in the first quadrant of Figure 15.

5.6. Test 4: Real-Time Geo-Location and Tracking of Multiple Moving Ground Targets

This test localizes and tracks four targets moving on the Changji highway in video taken by the UAV, as shown in Figure 16 (part of the image). The size of each image is 1024 pixels × 768 pixels, the pixel size is 5.5 μm × 5.5 μm, and the focal length is f = 73.6 mm. The target in the image center is chosen as the main target, and the three targets at other positions are chosen as sub-targets. The pixel coordinates of each target in the 1st image and the location and attitude data of the electro-optical stabilized imaging system at the corresponding time are shown in Table 8.
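For orientation, the sketch below (Python; the pinhole model, the assumption that the principal point lies at the image centre, and the axis convention are simplifications that may differ from the camera-frame definition used in the paper) illustrates how a target's pixel coordinates, the 5.5 μm pixel pitch and the 73.6 mm focal length combine into a unit line-of-sight vector in the camera frame:

```python
import numpy as np

PIXEL_SIZE = 5.5e-6      # sensor pixel pitch in metres (5.5 um)
FOCAL_LEN = 73.6e-3      # focal length in metres
IMG_W, IMG_H = 1024, 768 # image size in pixels

def pixel_to_los(u, v):
    """Unit line-of-sight vector for pixel (u, v) under a pinhole model,
    with the principal point assumed at the image centre and the optical
    axis taken along +Z (illustrative convention)."""
    x = (u - IMG_W / 2.0) * PIXEL_SIZE   # horizontal offset on the sensor
    y = (v - IMG_H / 2.0) * PIXEL_SIZE   # vertical offset on the sensor
    los = np.array([x, y, FOCAL_LEN])    # ray through the projection centre
    return los / np.linalg.norm(los)

print(pixel_to_los(453, 342))   # e.g. sub-target 1 pixel coordinates from Table 8
```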
Nine chronological images are selected from the video to calculate the geo-location data of each target in every image. The resulting spatial position distribution of all targets is shown in Figure 17a. With the main target position in the 1st image as the origin of coordinates, the spatial positions of all targets are projected onto the ground to obtain their planar motion trails, as shown in Figure 17b. The time span of these nine frames is 30 s.
Since neither the UAV yaw nor the electro-optical stabilized imaging system azimuth is 0 in general, the aerial images are somewhat rotated and distorted. The aerial orthographic projection of the abovementioned highway, shown in Figure 18, was acquired from Google Maps. Figure 18 shows that the actual direction of the highway is northwest–southeast. In China, cars drive on the right side of the road, and the house in Figure 18 lies beside the carriageway running from northwest to southeast; since this house is also visible in Figure 16b, all the tracked cars are travelling on the road in the northwest-to-southeast direction. This coincides with the localization results in Figure 17.
On the basis of Figure 17, the motion of each target is analyzed as follows: in the 1st image, the targets, ordered from front to back, are sub-target 3, sub-target 2, sub-target 1 and the main target; in the 3rd–4th images, sub-target 2 begins to catch up with and overtake sub-target 3; in the 4th–6th images, sub-target 1 begins to catch up with and overtake sub-target 3; in the 7th–9th images, the main target begins to catch up with and overtake sub-target 3; finally, the targets, ordered from front to back, are sub-target 2, sub-target 1, the main target and sub-target 3. This coincides completely with the motion of the targets in the video images shown in Figure 16, demonstrating that this geo-location algorithm can correctly locate and track multiple moving targets. The speed of each target can also be determined, for example as sketched below. This test further verifies the correctness of our multi-target geo-location model.
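As an illustration of the speed estimate mentioned above, a minimal sketch (Python; the track data in the example are hypothetical, not the measured trajectories) divides the length of a target's projected planar path by the elapsed time of the selected frames:

```python
import numpy as np

def mean_speed(xy, total_time_s):
    """Approximate mean ground speed of a target from its projected planar
    positions in consecutive selected frames.  xy is an (N, 2) array of
    planar positions in metres; total_time_s is the time span of the N frames."""
    xy = np.asarray(xy, dtype=float)
    segment_lengths = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # metres per segment
    return segment_lengths.sum() / total_time_s

# Hypothetical planar track (metres) over a 30 s span, not measured data:
track = [(0, 0), (190, 310), (380, 615), (570, 925)]
print("mean speed ≈ %.1f m/s" % mean_speed(track, 30.0))
```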

6. Conclusions

In order to improve the reconnaissance efficiency of UAV electro-optical stabilized imaging systems, a multi-target localization system based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model and ways to improve the accuracy of multi-target localization are studied. Then, the geodetic coordinates of multiple targets are calculated using the homogeneous coordinate transformation. On this basis, two methods which improve the precision of target localization are proposed: (1) a lens distortion correction method based on the distortion ratio; (2) an RLS filtering method based on UAV dead reckoning. The localization error model is established using Monte Carlo theory, the multi-target location algorithm is analyzed, and the range of the localization error is obtained. In the actual flight, the UAV flight altitude is 1140 m, and the multi-target localization results are within the range of allowable error. After applying the lens distortion correction method to a single image, the CEP of the multi-target localization is reduced by 7%. The RLS algorithm can adaptively estimate the location data over multiple frames; compared with multi-target localization based on a single frame, the CEP of the multi-target localization using RLS is reduced by 25%.
The average time to calculate the location data and the distortion correction for a single target is 0.4 ms. At the normal video rate of 25 fps, the frame period is 40 ms, so processing 50 targets requires only about 20 ms per frame; the proposed localization algorithm can therefore locate 50 targets simultaneously in real time. The proposed method significantly reduces the image data processing time, and it is convenient to implement the multi-target localization on other embedded systems [42].
However, when a target leaves the field of view and re-enters it, the operator has to identify the target again. Future research will aim to address this problem: we will try to apply tracking-learning-detection (TLD) [43] for automatic target re-detection when a target re-enters the field of view. Owing to the difficulty of implementing TLD on the TMS320DM642 (Texas Instruments Incorporated, Dallas, TX, USA), we leave these problems for our future research.

Acknowledgments

This work was financially supported by the National Defense Pre-Research Foundation of China (Grant No. 402040203). We thank the Academic Editor for the careful revision of the language and grammatical structures of this article.

Author Contributions

Xuan Wang, Qianfei Zhou and Jinghong Liu initiated the research and designed the experiments. Xuan Wang and Qianfei Zhou wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Deming, R.W.; Perlovsky, L.I. Concurrent multi-target localization, data association, and navigation for a swarm of flying sensors. Inf. Fusion 2007, 8, 316–330.
2. Minaeian, S.; Liu, J.; Son, Y.-J. Vision-based target detection and localization via a team of cooperative UAV and UGVs. IEEE Trans. Syst. Man Cybern. Syst. 2016, 46, 1005–1016.
3. Morbidi, F.; Mariottini, G.L. Active target tracking and cooperative localization for teams of aerial vehicles. IEEE Trans. Control Syst. Technol. 2013, 21, 1694–1707.
4. Qu, Y.; Wu, J.; Zhang, Y. Cooperative Localization Based on the Azimuth Angles among Multiple UAVs. In Proceedings of the 2013 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 28–31 May 2013; pp. 818–823.
5. Kwon, H.; Pack, D.J. A robust mobile target localization method for cooperative unmanned aerial vehicles using sensor fusion quality. J. Intell. Robot. Syst. Theory Appl. 2012, 65, 479–493.
6. Yan, M.; Du, P.; Wang, H.L.; Gao, X.J.; Zhang, Z.; Liu, D. Ground multi-target positioning algorithm for airborne optoelectronic system. J. Appl. Opt. 2012, 33, 717–720.
7. Han, K.; de Souza, G.N. Multiple Targets Geo-Location Using SIFT and Stereo Vision on Airborne Video Sequences. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 5327–5332.
8. Barber, D.B.; Redding, J.; McLain, T.W.; Beard, R.W.; Taylor, C. Vision-based target geo-location using a fixed-wing miniature air vehicle. J. Intell. Robot. Syst. 2006, 47, 361–382.
9. Subong, S.; Bhoram, L.; Jihoon, K. Vision-based real-time target localization for single-antenna GPS-guided UAV. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 1391–1401.
10. Dobrokhodov, V.N.; Kaminer, I.I.; Jones, K.D.; Ghabcheloo, R. Vision-Based Tracking and Motion Estimation for Moving Targets Using Small UAVs. In Proceedings of the 2006 American Control Conference, Minneapolis, MN, USA, 14–16 June 2006.
11. Yap, K.C. Incorporating Target Mensuration System for Target Motion Estimation Along a Road Using Asynchronous Filter. Master's Thesis, Naval Postgraduate School, Monterey, CA, USA, December 2006.
12. Kumar, R.; Sawhney, H.; Asmuth, J.; Pope, A.; Hsu, S. Registration of Video to Geo-Referenced Imagery. In Proceedings of the Fourteenth International Conference on Pattern Recognition, Queensland, Australia, 20 August 1998; pp. 1393–1400.
13. Huang, D.; Wu, Z. The Application of TMS320C64x DSP Assembly Language in Correlation Tracking Algorithms. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; Volume 4, pp. 1529–1532.
14. Zhang, Y.; Tong, X.; Yang, T.; Ma, W. Multi-model estimation based moving object detection for aerial video. Sensors 2015, 15, 8214–8231.
15. Wang, Z.; Xu, J.; Huang, Z.; Zhang, X.; Xia, X.-G.; Long, T.; Bao, Q. Road-aided ground slowly moving target 2D motion estimation for single-channel synthetic aperture radar. Sensors 2016, 16.
16. Danescu, R.; Oniga, F.; Turcu, V.; Cristea, O. Long baseline stereovision for automatic detection and ranging of moving objects in the night sky. Sensors 2012, 12, 12940–12963.
17. Steen, K.A.; Villa-Henriksen, A.; Therkildsen, O.R.; Green, O. Automatic detection of animals in mowing operations using thermal cameras. Sensors 2012, 12, 7587–7597.
18. Rodriguez-Gomez, R.; Fernandez-Sanchez, E.J.; Diaz, J.; Ros, E. FPGA implementation for real-time background subtraction based on Horprasert model. Sensors 2012, 12, 585–611.
19. Bowring, B.R. Transformation from spatial to geographical coordinates. Surv. Rev. 1976, 23, 323–327.
20. Li, Y.; Yun, J.; Jun, W. Building of rigorous geometric processing model based on line-of-sight vector of ZY-3 imagery. Geomat. Inf. Sci. Wuhan Univ. 2013, 38, 1451–1455.
21. Hughes, P.C. Spacecraft Attitude Dynamics; Courier Corporation: North Chelmsford, MA, USA, 2004.
22. Fu, C.; Duan, R.; Kircali, D.; Kayacan, E. Onboard robust visual tracking for UAVs using a reliable global-local object model. Sensors 2016, 16.
23. Liu, Z.; Wang, Z.; Xu, M. Cubature information SMC-PHD for multi-target tracking. Sensors 2016, 16.
24. Perlovsky, L.I.; Deming, R.W. Maximum likelihood joint tracking and association in strong clutter. Int. J. Adv. Robot. Syst. 2013, 10, 1–9.
25. Tian, W.; Wang, Y. Analytic performance prediction of track-to-track association with biased data in multi-sensor multi-target tracking scenarios. Sensors 2013, 13, 12244–12265.
26. Ghirmai, T. Distributed particle filter for target tracking: With reduced sensor communications. Sensors 2016, 16.
27. Kyristsis, S.; Antonopoulos, A.; Chanialakis, T.; Stefanakis, E.; Linardos, C.; Tripolitsiotis, A.; Partsinevelos, P. Towards autonomous modular UAV missions: The detection, geo-location and landing paradigm. Sensors 2016, 16.
28. Enayet, A.; Razzaque, M.A.; Hassan, M.M.; Almogren, A.; Alamri, A. Moving target tracking through distributed clustering in directional sensor networks. Sensors 2014, 14, 24381–24407.
29. Boudriga, N.; Hamdi, M.; Iyengar, S. Coverage assessment and target tracking in 3D domains. Sensors 2011, 11, 9904–9927.
30. Dellen, B.; Erdal Aksoy, E.; Wörgötter, F. Segment tracking via a spatiotemporal linking process including feedback stabilization in an n-D lattice model. Sensors 2009, 9, 9355–9379.
31. Sun, B.; Jiang, C.; Li, M. Fuzzy neural network-based interacting multiple model for multi-node target tracking algorithm. Sensors 2016, 16.
32. Drap, P.; Lefèvre, J. An exact formula for calculating inverse radial lens distortions. Sensors 2016, 16.
33. Wackrow, R.; Ferreira, E.; Chandler, J.; Shiono, K. Camera calibration for water-biota research: The projected area of vegetation. Sensors 2015, 15, 30261–30269.
34. Sanz-Ablanedo, E.; Rodríguez-Pérez, J.R.; Armesto, J.; Taboada, M.F.Á. Geometric stability and lens decentering in compact digital cameras. Sensors 2010, 10, 1553–1572.
35. Bosch, J.; Gracias, N.; Ridao, P.; Ribas, D. Omnidirectional underwater camera design and calibration. Sensors 2015, 15, 6033–6065.
36. Miklavcic, S.; Cai, J. Automatic curve selection for lens distortion correction using Hough transform energy. IEEE Workshop Appl. Comput. Vis. 2013, 26, 455–460.
37. Goljan, M.; Fridrich, J. Estimation of lens distortion correction from single images. Proc. SPIE 2014, 9028.
38. Lee, T.Y.; Chang, T.S.; Wei, C.H.; Lai, S.H.; Liu, K.C.; Wu, H.S. Automatic distortion correction of endoscopic images captured with wide-angle zoom lens. IEEE Trans. Biomed. Eng. 2013, 60, 2603–2613.
39. Liu, J.; Sun, H.; Zhang, B.; Dai, M.; Jia, P.; Shen, H.; Zhang, L. Target self-determination orientation based on aerial photoelectric imaging platform. Opt. Precis. Eng. 2007, 15, 1305–1309.
40. Sheng, W.; Long, Y.; Zhou, Y. Analysis of target location accuracy in space-based optical-sensor network. Acta Opt. Sin. 2011, 31, 0228001.
41. Le, Z.; Wuzhou, L.; Yangfeng, J. Localization accuracy evaluation method based on CEP. Command Control Simul. 2013, 35, 111–114.
42. Kuo, D.; Gordon, D. Real-time orthorectification by FPGA-based hardware acceleration. Proc. SPIE 2010, 7830.
43. Kalal, Z.; Mikolajczyk, K.; Matas, J. Tracking-Learning-Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1409–1422.
Figure 1. (a) Multi-target geo-location and tracking circuit board; (b) Electro-optical stabilized imaging system. The arrows in Figure 1b represent the installation locations of the main sensors in an electro-optical stabilized imaging system.
Figure 2. UAV system architecture.
Figure 3. The overall framework of the multi-target geo-location method.
Figure 4. The coordinate frames relation: Azimuth-elevation rotation sequence between camera and body frames. (a) camera frame; (b) body frame.
Figure 5. The coordinate frames relation: roll-pitch-yaw rotation sequence between body and vehicle frames. (a) body frame; (b) intermediate frame; (c) vehicle frame.
Figure 6. The coordinate frames relation: Vehicle, ECEF and geodetic frames.
Figure 7. Coordinate transformation process of multi-target geo-location system.
Figure 8. The location of any target in image model.
Figure 9. Flowchart of the RLS algorithm.
Figure 10. (a) A plane containing a chessboard pattern; (b) The zoom lens camera (not yet mounted in the electro-optical stabilized imaging system).
Figure 11. (a) The relationship between the distortion coefficient k1 and the focal length f; (b) The relationship between the distortion center (u0, v0) and the focal length f.
Figure 12. Sample distribution of target location error before and after distortion correction.
Figure 13. Localization results before and after RLS filtering.
Figure 14. Plane localization errors after RLS filtering.
Figure 15. Sample distribution of target location before and after RLS.
Figure 16. Multiple moving targets in aerial video. (a–f): frame 1, frame 3, frame 4, frame 6, frame 7, frame 9.
Figure 17. (a) The spatial localization of each target; (b) The target motion trajectory on the ground.
Figure 18. Ortho image of the Changji highway in aerial imagery.
Table 1. Localization and attitude data of UAV electro-optical stabilized imaging system.
Designation | Symbol | Unit | Nominal Value | Error
UAV longitude | L0 | ° | 112.680649 | 2 × 10−4
UAV latitude | M0 | ° | 35.125225 | 1.5 × 10−4
UAV altitude | H0 | m | 1140 | 15
UAV pitch angle | ε | ° | 2.1 | 0.4
UAV roll angle | γ | ° | 0.0 | 0.4
UAV yaw angle | β | ° | 290.5 | 1.5
UAV speed | V | m/s | 39 | 0.5
Electro-optical stabilized imaging system's azimuth angle | Θ | ° | 89.9 | 0.2
Electro-optical stabilized imaging system's elevation angle | Ψ | ° | −112.9 | 0.2
Laser range finder | λ1 | m | 965 | 5
Coordinates of sub-targets | (u,v) | pixel | (386,304), (352,379), (511,277), (379,524), (854,463), (756,584), (685,706) | 10
Camera's focal length | f | mm | 50.0 | 0.2
Table 2. Errors of multi-target expected location.
Target | Longitude RMSE/(°) | Latitude RMSE/(°) | Altitude RMSE/m
Main target | 0.000307 | 0.000166 | 25.33
Sub-target 1 | 0.000231 | 0.000245 | 25.33
Sub-target 2 | 0.000340 | 0.000199 | 26.12
Sub-target 3 | 0.000292 | 0.000265 | 25.33
Sub-target 4 | 0.000235 | 0.000338 | 25.33
Sub-target 5 | 0.000311 | 0.000345 | 24.56
Sub-target 6 | 0.000532 | 0.000166 | 26.13
Sub-target 7 | 0.000237 | 0.000383 | 25.34
Table 3. Calculated values of the geodetic coordinates of each target.
Target | Longitude/(°) | Latitude/(°) | Altitude/m
Main target | 112.679882 | 35.125060 | 171.81
Sub-target 1 | 112.679058 | 35.124549 | 171.82
Sub-target 2 | 112.680080 | 35.123993 | 171.81
Sub-target 3 | 112.679133 | 35.125134 | 171.81
Sub-target 4 | 112.678723 | 35.124990 | 171.82
Sub-target 5 | 112.680868 | 35.123948 | 171.81
Sub-target 6 | 112.679510 | 35.123628 | 171.82
Sub-target 7 | 112.678275 | 35.124593 | 171.82
Table 4. Geo-location error of each target in a single image.
Target | Longitude Error/(°) | Latitude Error/(°) | Altitude Error/m
Main target | 0.000217 | 0.000027 | 17.81
Sub-target 1 | −0.000081 | 0.000180 | 17.82
Sub-target 2 | −0.000261 | −0.000111 | 18.81
Sub-target 3 | 0.000194 | 0.000207 | 17.81
Sub-target 4 | 0.000090 | 0.000294 | 17.82
Sub-target 5 | −0.000221 | −0.000303 | 16.81
Sub-target 6 | −0.000485 | −0.000007 | 18.82
Sub-target 7 | −0.000095 | 0.000344 | 17.82
Table 5. Geo-location errors of multi-target after distortion correction.
Target | Longitude Error/(°) | Latitude Error/(°) | Altitude Error/m | Pixel Coordinates/pixel
Main target | 0.000217 | 0.000027 | 17.81 | (512.00, 383.99)
Sub-target 1 | −0.000110 | 0.000163 | 17.82 | (391.06, 306.08)
Sub-target 2 | −0.000259 | −0.000067 | 18.81 | (360.11, 378.32)
Sub-target 3 | 0.000173 | 0.000207 | 17.81 | (511.86, 280.58)
Sub-target 4 | 0.000016 | 0.000285 | 17.82 | (386.18, 516.48)
Sub-target 5 | −0.000256 | −0.000243 | 16.81 | (838.99, 458.38)
Sub-target 6 | −0.000439 | 0.000103 | 18.82 | (746.46, 574.63)
Sub-target 7 | −0.000117 | 0.000306 | 17.82 | (678.45, 691.36)
Table 6. Geo-location accuracy comparison between the proposed algorithm and the algorithms in references [7,8,9].
Algorithm | Error Mean/m | Error Standard/m | Flight Altitude/m
Reference [7] | 26.00 ¹ | 8.00 ¹ | over 1000 ¹
Reference [8] | 20.00 ² | - | 100–200 ²
Reference [9] | 39.1 ³ | - | 300 ³
Proposed (before correction) | 28.98 | 6.08 | 1140
Proposed (after correction) | 26.91 | 5.31 | 1140
¹ Data from Reference [7]; ² Data from Reference [8]; ³ Data from Reference [9].
Table 7. Errors of multi-target location after RLS filtering.
Target | Longitude Error/(°) | Latitude Error/(°) | Altitude Error/m
Main target | 0.000167 | 0.000016 | 15.33
Sub-target 1 | −0.000005 | 0.000131 | 15.33
Sub-target 2 | −0.000193 | −0.000086 | 16.08
Sub-target 3 | 0.000150 | 0.000151 | 15.33
Sub-target 4 | 0.000073 | 0.000217 | 15.33
Sub-target 5 | −0.000164 | −0.000230 | 14.58
Sub-target 6 | −0.000361 | −0.000007 | 16.08
Sub-target 7 | −0.000065 | 0.000255 | 15.33
Table 8. Location and attitude data of the electro-optical stabilized imaging system.
Designation | Symbol | Unit | Value
UAV longitude | L0 | ° | 120.906624
UAV latitude | M0 | ° | 42.608521
UAV altitude | H0 | m | 2505
UAV pitch angle | ε | ° | 2.0
UAV roll angle | γ | ° | −1.8
UAV yaw angle | β | ° | 350.2
Electro-optical stabilized imaging system's azimuth angle | Θ | ° | −117.8
Electro-optical stabilized imaging system's elevation angle | Ψ | ° | −46.7
Laser range finder | λ1 | m | 3240
Coordinates of sub-target 1 | (u,v) | pixel | (453, 342)
Coordinates of sub-target 2 | (u,v) | pixel | (476, 251)
Coordinates of sub-target 3 | (u,v) | pixel | (504, 213)
Camera's focal length | f | mm | 73.6
