Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding
@article{Tonekaboni2021UnsupervisedRL,
  title   = {Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding},
  author  = {Sana Tonekaboni and Danny Eytan and Anna Goldenberg},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2106.00750},
  url     = {https://api.semanticscholar.org/CorpusID:235293778}
}
Temporal Neighborhood Coding (TNC) is a self-supervised framework for learning generalizable representations of non-stationary time series; it exploits the local smoothness of a signal's generative process to define neighborhoods in time with stationary properties.
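A minimal sketch of the TNC training signal, under stated assumptions: the window encoder `enc` (mapping a (batch, channels, length) window to a (batch, d) code), the window length `delta`, the neighborhood spread `eta`, and the debiasing weight `w` are all illustrative names, not the authors' code.

```python
import torch
import torch.nn as nn

def sample_windows(x, t, delta, eta):
    """x: (channels, T) signal; returns anchor, neighbor, and distant windows."""
    T = x.shape[-1]
    anchor = x[:, t:t + delta]
    # Neighbor: center drawn near t, reflecting local stationarity.
    t_n = int(torch.clamp(t + torch.randn(()) * eta, 0, T - delta))
    neighbor = x[:, t_n:t_n + delta]
    # Distant: drawn uniformly; treated as *unlabeled*, not a true negative.
    t_d = int(torch.randint(0, T - delta, ()))
    distant = x[:, t_d:t_d + delta]
    return anchor, neighbor, distant

class Discriminator(nn.Module):
    """Predicts whether two encodings come from the same neighborhood."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, z1, z2):
        return self.net(torch.cat([z1, z2], dim=-1))

def tnc_loss(enc, disc, x, t, delta=16, eta=3.0, w=0.05):
    a, n, dist = sample_windows(x, t, delta, eta)
    z_a, z_n, z_d = enc(a[None]), enc(n[None]), enc(dist[None])
    bce = nn.BCEWithLogitsLoss()
    ones, zeros = torch.ones(1, 1), torch.zeros(1, 1)
    # Debiased contrastive objective: a distant window may still share the
    # anchor's latent state, so with prior weight w it is scored as positive.
    return (bce(disc(z_a, z_n), ones)
            + w * bce(disc(z_a, z_d), ones)
            + (1 - w) * bce(disc(z_a, z_d), zeros))
```

The debiasing term is what distinguishes this objective from plain negative sampling: windows outside the neighborhood are treated as unlabeled rather than as guaranteed negatives.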
Topics
Temporal Neighborhood Coding, Debiased Contrastive Objective, Time-based Negative Sampling, Generative Process, Unsupervised Representation Learning, Clusters, Classification Task, Latent State
239 Citations
Multi-Task Self-Supervised Time-Series Representation Learning
- 2024
Computer Science, Mathematics
This work proposes a new time-series representation learning method by combining the advantages of self-supervised tasks related to contextual, temporal, and transformation consistency, and investigates an uncertainty weighting approach to enable effective multi-task learning.
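A hedged sketch of one plausible uncertainty weighting scheme for combining several self-supervised losses (in the style of homoscedastic-uncertainty weighting); the class name and the three example losses are placeholders, not this paper's implementation.

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Learns one log-variance per task; tasks whose losses are noisy get
    automatically down-weighted via L = sum_i exp(-s_i) * L_i + s_i."""
    def __init__(self, n_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, losses):
        losses = torch.stack(losses)
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()

# Usage with three scalar losses (e.g. contextual, temporal, and
# transformation consistency):
weighting = UncertaintyWeighting(3)
total = weighting([torch.tensor(0.7), torch.tensor(1.2), torch.tensor(0.3)])
```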
Unsupervised Representation Learning for Time Series: A Review
- 2023
Computer Science
This work conducts a comprehensive literature review of the rapidly evolving unsupervised representation learning approaches for time series and develops a unified, standardized library, named ULTS, to facilitate fast implementation and unified evaluation of various models.
Contrastive Learning for Time Series on Dynamic Graphs
- 2022
Computer Science
This paper proposes GraphTNC, a framework for unsupervised learning of joint representations of a graph and its time series using a contrastive learning strategy, and shows on real-world datasets that the learned representations benefit classification.
Dynamic Contrastive Learning for Time Series Representation
- 2024
Computer Science
DynaCL is proposed, an unsupervised contrastive representation learning framework for time series that uses temporally adjacent steps to define positive pairs; experiments demonstrate that DynaCL embeds instances from time series into semantically meaningful clusters, enabling superior performance on downstream tasks across a variety of public time series datasets.
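A minimal sketch of an InfoNCE loss with temporally adjacent steps as positives, as the summary describes; the function name, temperature, and shapes are assumptions for illustration, not DynaCL's actual API.

```python
import torch
import torch.nn.functional as F

def adjacent_step_infonce(z, tau=0.1):
    """z: (T, d) per-step embeddings of one series. The positive for step t
    is step t + 1; every other step in the series acts as a negative."""
    z = F.normalize(z, dim=-1)
    T = z.shape[0]
    sim = z @ z.T / tau                                    # (T, T) similarities
    sim = sim.masked_fill(torch.eye(T, dtype=torch.bool), float('-inf'))
    targets = torch.arange(1, T)                           # t -> t + 1
    return F.cross_entropy(sim[:-1], targets)

loss = adjacent_step_infonce(torch.randn(64, 16))
```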
Spatio-Temporal Consistency for Multivariate Time-Series Representation Learning
- 2024
Computer Science
A novel spatio-temporal contrastive representation learning method that learns robust representations by encouraging spatio-temporal consistency, comprehensively considering spatial information as well as temporal dependencies in multivariate time series (MTS).
Iterative Bilinear Temporal-Spectral Fusion for Unsupervised Time-Series Representation Learning
- 2022
Computer Science
This paper proposes a novel iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs, iteratively re-framing representations in a fusion-and-squeeze manner with Spectrum-to-Time (S2T) and Time-to-Spectrum (T2S) Aggregation modules.
Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion
- 2022
Computer Science
This paper proposes a unified framework, Bilinear Temporal-Spectral Fusion (BTSF), which first applies instance-level augmentation with a simple dropout on the entire time series to maximally capture long-term dependencies, and devises a novel iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs.
Phase-driven Domain Generalizable Learning for Nonstationary Time Series
- 2024
Computer Science, Engineering
It is demonstrated that PhASER consistently outperforms the best baselines by an average of 5% and up to 13% in some cases, and its principles can be applied broadly to boost the generalization ability of existing time series classification models.
Capturing Temporal Components for Time Series Classification
- 2024
Computer Science
This work introduces a compositional representation learning approach trained on statistically coherent components extracted from sequential data based on a multi-scale change space, and demonstrates its effectiveness through extensive experiments on publicly available time series classification benchmarks.
Cross Reconstruction Transformer for Self-Supervised Time Series Representation Learning
- 2022
Computer Science
This paper learns representations for time series from a new perspective, proposing the Cross Reconstruction Transformer (CRT) as a unified solution, and shows that CRT consistently outperforms existing methods by 2% to 9%.
45 References
Learning Representations for Time Series Clustering
- 2019
Computer Science
A novel unsupervised temporal representation learning model, Deep Temporal Clustering Representation (DTCR), integrates temporal reconstruction and a K-means objective into a seq2seq model, leading to improved cluster structures and cluster-specific temporal representations.
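A rough sketch of a relaxed K-means term that can be added to a reconstruction loss, in the spirit of DTCR; `reconstruction_loss`, `lam`, and the detach-based alternating update are illustrative choices, not the paper's exact procedure.

```python
import torch

def kmeans_regularizer(H, k):
    """H: (n, d) latent codes for n series. Relaxed K-means objective
    Tr(H H^T) - Tr(F^T H H^T F), with F the top-k left singular vectors
    of H; F is detached to mimic an alternating update."""
    G = H @ H.T
    F_mat = torch.linalg.svd(H, full_matrices=False)[0][:, :k].detach()
    return torch.trace(G) - torch.trace(F_mat.T @ G @ F_mat)

# Combined objective (hypothetical weighting):
# total = reconstruction_loss + lam * kmeans_regularizer(H, k=5)
```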
Unsupervised Scalable Representation Learning for Multivariate Time Series
- 2019
Computer Science, Mathematics
This paper combines an encoder based on causal dilated convolutions with a novel triplet loss employing time-based negative sampling, obtaining general-purpose representations for variable length and multivariate time series.
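A short sketch of the time-based triplet objective this summary describes: the positive is a subseries of the anchor's own context, and negatives are windows drawn from other times or other series. Shapes and the function name are assumptions.

```python
import torch
import torch.nn.functional as F

def time_based_triplet_loss(z_ref, z_pos, z_negs):
    """z_ref, z_pos: (d,) encodings of an anchor window and a subseries of
    it; z_negs: (K, d) encodings of windows from other times/series."""
    pos = F.logsigmoid(z_ref @ z_pos)             # pull the positive closer
    neg = F.logsigmoid(-(z_negs @ z_ref)).sum()   # push the negatives away
    return -(pos + neg)
```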
Representation Learning with Contrastive Predictive Coding
- 2018
Computer Science, Mathematics
This work proposes a universal unsupervised learning approach to extract useful representations from high-dimensional data, which it calls Contrastive Predictive Coding, and demonstrates that the approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
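A minimal InfoNCE sketch in the spirit of Contrastive Predictive Coding: a context vector scores the true future latent against in-batch negatives through a learned bilinear map. All names and shapes here are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(c_t, z_future, W):
    """c_t: (B, d_c) context vectors; z_future: (B, d_z) encodings of the
    true future steps; W: (d_c, d_z) learned bilinear map. For each row,
    the matching future is the positive; the rest of the batch serves as
    negatives."""
    logits = (c_t @ W) @ z_future.T           # (B, B) compatibility scores
    targets = torch.arange(c_t.shape[0])      # positives on the diagonal
    return F.cross_entropy(logits, targets)

B, d_c, d_z = 32, 128, 64
W = torch.randn(d_c, d_z, requires_grad=True)
loss = info_nce(torch.randn(B, d_c), torch.randn(B, d_z), W)
```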
TimeNet: Pre-trained deep recurrent neural network for time series classification
- 2017
Computer Science
TimeNet is a deep recurrent neural network trained on diverse time series in an unsupervised manner using sequence-to-sequence (seq2seq) models to extract features; by ingesting time series from several domains simultaneously, it attempts to learn representations that generalize across domains.
Similarity Preserving Representation Learning for Time Series Clustering
- 2019
Computer Science, Mathematics
An efficient representation learning framework that converts a set of time series of various lengths into an instance-feature matrix while guaranteeing that pairwise similarities between time series are well preserved, making the learned representation particularly suitable for time series clustering.
Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA
- 2016
Computer Science
This work proposes time-contrastive learning (TCL), a new intuitive principle of unsupervised deep learning from time series that uses the nonstationary structure of the data, and shows how TCL relates to a nonlinear ICA model when ICA is redefined to include temporal nonstationarities.
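A sketch of the TCL setup as the summary describes it: each timestep is labeled with the index of its temporal segment, and a feature extractor plus linear head is trained to predict that label. The segmentation scheme and model sizes are assumptions.

```python
import torch
import torch.nn as nn

def tcl_batch(x, n_segments):
    """x: (T, d) series. Labels each timestep with its segment index so a
    classifier can be trained to tell the segments apart."""
    T = x.shape[0]
    labels = torch.arange(T) * n_segments // T   # segment id per timestep
    return x, labels

x = torch.randn(1000, 8)
inputs, labels = tcl_batch(x, n_segments=10)
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 10))
loss = nn.functional.cross_entropy(model(inputs), labels)
```

Under the nonstationarity assumption, the hidden features that make segments distinguishable recover the independent sources of the nonlinear ICA model.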
Improving Clinical Predictions through Unsupervised Time Series Representation Learning
- 2018
Computer Science, Medicine
This work experiments with sequence-to-sequence (Seq2Seq) models used in two different ways, as an autoencoder and as a forecaster, and shows that the best performance is achieved by a forecasting Seq2Seq model with an integrated attention mechanism.
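An illustrative forecasting Seq2Seq in which the encoder's final hidden state doubles as the learned representation; the attention mechanism the paper found helpful is omitted for brevity, and all module names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class ForecastingSeq2Seq(nn.Module):
    def __init__(self, n_features, d_hidden):
        super().__init__()
        self.encoder = nn.GRU(n_features, d_hidden, batch_first=True)
        self.decoder = nn.GRU(n_features, d_hidden, batch_first=True)
        self.head = nn.Linear(d_hidden, n_features)

    def forward(self, past, horizon):
        _, h = self.encoder(past)              # h: (1, B, d) representation
        step = past[:, -1:, :]                 # seed decoding with last obs
        preds = []
        for _ in range(horizon):               # autoregressive rollout
            out, h = self.decoder(step, h)
            step = self.head(out)
            preds.append(step)
        return torch.cat(preds, dim=1), h.squeeze(0)

model = ForecastingSeq2Seq(n_features=4, d_hidden=64)
past = torch.randn(2, 48, 4)                   # (batch, time, features)
forecast, representation = model(past, horizon=12)
```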
Representation Learning: A Review and New Perspectives
- 2013
Computer Science, Mathematics
Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
A review of unsupervised feature learning and deep learning for time-series modeling
- 2014
Computer Science
Attentive State-Space Modeling of Disease Progression
- 2019
Medicine, Computer Science
The attentive state-space model, a deep probabilistic model that learns accurate and interpretable structured representations of disease trajectories, is developed; it demonstrates superior predictive accuracy and provides insights into the progression of chronic disease.