Sensors (Basel). 2020 Feb 10;20(3):949. doi: 10.3390/s20030949.

Person Re-Identification Using Deep Modeling of Temporally Correlated Inertial Motion Patterns


Imad Gohar et al. Sensors (Basel). 2020.

Abstract

Person re-identification (re-ID) is an essential component of automated surveillance environments. The problem is mostly tackled using appearance-based features acquired from vision sensors, which depend strongly on visual cues such as color and texture and consequently limit precise re-identification of an individual. To overcome this strong dependence on visual features, many researchers have tackled the re-identification problem using human gait, which is believed to be unique and to provide a distinctive biometric signature particularly suitable for re-ID in uncontrolled environments. However, image-based gait analysis often fails to extract quality measurements of an individual's motion patterns owing to variations in viewpoint, illumination (daylight), clothing, worn accessories, etc. To this end, rather than relying on image-based motion measurement, this paper demonstrates the potential to re-identify an individual using inertial measurement units (IMUs) based on two common sensors, namely the gyroscope and the accelerometer. The experiment was carried out on data acquired using smartphones and wearable IMUs from 86 randomly selected individuals, comprising 49 males and 37 females between 17 and 72 years of age. The data signals were first segmented into single steps and strides, which were separately fed to train a sequential deep recurrent neural network that captures implicit, arbitrarily long-term temporal dependencies. The experimental setup was devised such that the network was trained on all subjects using only half of the step and stride sequences, while inference for re-identification was performed on the remaining half. The obtained results demonstrate the potential to reliably and accurately re-identify an individual based on their inertial sensor data.
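As a concrete illustration of the pipeline the abstract describes, the following is a minimal sketch of a recurrent sequence classifier over fixed-length 6D inertial windows. It assumes Keras (consistent with the GRU/LSTM model names in Figure 18), an 86-way softmax over subjects, and the hyperparameters reported in Figure 19 (512 recurrent units, dropout 0.5, learning rate 0.001, 30 epochs); variable names and shapes are illustrative, not the authors' released code.

    # Minimal sketch (assumptions, not the authors' code): 100-timestep step
    # windows (200 for strides), 6 channels (3D acceleration + 3D angular
    # velocity), and one softmax output per subject.
    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_SUBJECTS = 86   # 49 males + 37 females
    TIMESTEPS = 100     # one step; strides would use 200
    CHANNELS = 6        # ax, ay, az, gx, gy, gz

    model = keras.Sequential([
        layers.Input(shape=(TIMESTEPS, CHANNELS)),
        layers.GRU(512),      # 512 recurrent units (Figure 19)
        layers.Dropout(0.5),  # dropout of 0.5 (Figure 19)
        layers.Dense(NUM_SUBJECTS, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=1e-3),  # lr 0.001
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    # Train on half of each subject's sequences and infer on the rest,
    # mirroring the split described in the abstract:
    # model.fit(x_train, y_train, epochs=30, validation_data=(x_test, y_test))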

Keywords: deep learning; gait-based person re-ID; human re-identification; human-gait analysis; inertial sensors; inertial-based person re-identification.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Plot depicting the increasing trend of re-ID publications in top computer vision conferences including CVPR, ICCV and ECCV.
Figure 2
The sensors were attached tightly on the chest of the subject using elastic bands: (a) smartphone (MPU-6500) sensor; and (b) APDM Opal IMU.
Figure 3
Raw low-level 6D inertial signals captured during a gait trial: (a) 3D accelerations; and (b) 3D angular velocities.
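The abstract notes that such raw recordings are first segmented into single steps and strides. Below is a hedged sketch of the simplest fixed-length windowing, assuming roughly 100 samples per step and 200 per stride as suggested by Figure 17; the paper segments on actual gait events, which this simplification does not attempt.

    # Sketch: slice a continuous (T, 6) recording into fixed-length windows.
    # Assumed lengths: ~100 samples per step, ~200 per stride (Figure 17).
    import numpy as np

    def window_signal(signal: np.ndarray, length: int) -> np.ndarray:
        """Split a (T, 6) recording into non-overlapping (N, length, 6) windows."""
        n = signal.shape[0] // length
        return signal[: n * length].reshape(n, length, signal.shape[1])

    trial = np.random.randn(3000, 6)     # placeholder for one gait trial
    steps = window_signal(trial, 100)    # shape (30, 100, 6)
    strides = window_signal(trial, 200)  # shape (15, 200, 6)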
Figure 4
GRU with update and reset gates; the update gate controls how much of the previous hidden state is carried forward to the new state, while the reset gate controls how much of the previous state is used when forming the candidate state.
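For reference, the gate structure in Figure 4 corresponds to the standard GRU equations below (textbook formulation; conventions differ on whether z_t weights the old or the candidate state, and this is not notation taken from the paper):

    \begin{aligned}
    z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) && \text{(update gate)}\\
    r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) && \text{(reset gate)}\\
    \tilde{h}_t &= \tanh\!\big(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\big) && \text{(candidate state)}\\
    h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(new hidden state)}
    \end{aligned}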
Figure 5
An overview of the proposed method. The gait pattern is captured using IMUs, and the raw signals from the 6D components are used to train a deep learning model.
Figure 6
The matching rates computed on the step and stride data are shown as cumulative match characteristic (CMC) curves. Only slight variations in the matching rate are observable.
Figure 7
Confusion matrices computed from the steps using the hybrid data: (left) train/test split; and (right) 10-fold cross validation. The axis notation is (gender, age, experimental setup); e.g., (F, 22, A) denotes a 22-year-old female whose data were captured under experimental Setup A. The person re-identification accuracies remained high for most of the subjects.
Figure 8
Confusion matrices computed from the strides using the hybrid data: (left) train/test split; and (right) 10-fold cross validation. The axis notation is (gender, age, experimental setup); e.g., (M, 27, A) denotes a 27-year-old male whose data were captured under experimental Setup A. The person re-identification accuracies remained high for most of the subjects.
Figure 9
The graph shows that, for both steps and strides, more than 86% of the subjects were correctly re-identified at Rank-1, whereas more than 98% were correctly re-identified at Rank-5.
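A hedged sketch of how such rank-k matching rates (the CMC curve in Figures 6 and 9) can be computed from per-sample classifier scores; the arrays here are hypothetical placeholders, not the authors' evaluation code.

    # Sketch: rank-k matching rate (CMC) from per-sample class scores.
    import numpy as np

    def cmc_curve(scores: np.ndarray, labels: np.ndarray, max_rank: int = 5):
        # Position of the true subject in each sample's score ranking.
        order = np.argsort(-scores, axis=1)                  # best match first
        ranks = np.argmax(order == labels[:, None], axis=1)  # 0 = top match
        return [(ranks < k).mean() for k in range(1, max_rank + 1)]

    scores = np.random.rand(100, 86)             # placeholder softmax scores
    labels = np.random.randint(0, 86, size=100)  # placeholder ground truth
    print(cmc_curve(scores, labels))             # [Rank-1, ..., Rank-5] rates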
Figure 10
Confusion matrices computed from the steps using the smartphone's IMU data: (left) train/test split; and (right) 10-fold cross validation. The axis notation is (gender, age, experimental setup); e.g., (F, 22, A) denotes a 22-year-old female whose data were captured under experimental Setup A. The person re-identification accuracies remained high for most of the subjects.
Figure 11
Confusion matrices computed from the strides using the smartphone's IMU data: (left) train/test split; and (right) 10-fold cross validation. The axis notation is (gender, age, experimental setup); e.g., (F, 22, A) denotes a 22-year-old female whose data were captured under experimental Setup A. The person re-identification accuracies remained high for most of the subjects.
Figure 12
The combined curves for steps and strides are shown for comparison using the wearable data.
Figure 13
Confusion matrices computed from the wearable IMU data for: steps (left); and strides (right). The dataset consists of 26 subjects and was collected under Setup C. The person re-identification accuracies remained high for most of the subjects.
Figure 14
Confusion matrices computed from the wearable IMU data with 10-fold cross validation for: steps (left); and strides (right). The dataset consists of 26 subjects and was collected under Setup C. The person re-identification accuracies remained high for most of the subjects.
Figure 15
Bar graphs showing the effect of age groups on: (a) step data; and (b) stride data. In general, higher classification accuracies were achieved with the stride data.
Figure 16
This bar graph shows the effect of gender on steps and strides for hybrid data.
Figure 17
(a) Scatter plot; and (b) radar graph show the performance of models trained and tested on our gait data. The results show higher accuracies for the stride dataset (200 timesteps) than the step dataset (100 timesteps).
Figure 18
Precision–recall graphs computed by four different deep models (CuDNNGRU, GRU, CuDNNLSTM, and LSTM) from: (a) step data; and (b) stride data. CuDNNGRU outperformed the rest of the models in all cases.
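A hedged sketch of a micro-averaged precision-recall computation of the kind plotted in Figure 18, using scikit-learn on hypothetical score arrays (the authors' exact evaluation protocol is not given in this excerpt).

    # Sketch: micro-averaged precision-recall over 86 subject classes.
    import numpy as np
    from sklearn.metrics import average_precision_score, precision_recall_curve
    from sklearn.preprocessing import label_binarize

    scores = np.random.rand(100, 86)             # placeholder class scores
    labels = np.random.randint(0, 86, size=100)  # placeholder ground truth
    y_true = label_binarize(labels, classes=np.arange(86))

    # Pool all (class, sample) decisions into one binary PR computation.
    precision, recall, _ = precision_recall_curve(y_true.ravel(), scores.ravel())
    print(average_precision_score(y_true, scores, average="micro"))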
Figure 19
The effects of different hyperparameters, including network size (number of neurons), dropout rate, learning rate, and number of epochs, are shown here. The optimal settings were 512 neurons trained for 30 epochs with a learning rate of 0.001 and a dropout value of 0.5.
Figure 20
The relationship between recognition performance and the number of training epochs for: (a) step data; and (b) stride data. The model was trained for 100 epochs.
