Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency

@article{Zhang2022SelfSupervisedCP,
  title={Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency},
  author={Xiang Zhang and Ziyuan Zhao and Theodoros Tsiligkaridis and Marinka Zitnik},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.08496},
  url={https://api.semanticscholar.org/CorpusID:249848167}
}
A decomposable pre-training model is defined, where the self-supervised signal is provided by the distance between time and frequency components, each individually trained by contrastive estimation.
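
The idea compressed into this TL;DR is that each series is embedded twice, once from its raw time-domain view and once from its frequency-domain view, with each branch trained contrastively and an extra term keeping the two embeddings close. Below is a minimal PyTorch sketch of that setup; the encoder shapes, jitter augmentation, NT-Xent-style losses, and the MSE consistency term are illustrative assumptions rather than the paper's exact architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.2):
    """Simple NT-Xent contrastive loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

class TFCSketch(nn.Module):
    """Hypothetical two-branch encoder: one for the raw series, one for its spectrum."""
    def __init__(self, seq_len=128, dim=64):
        super().__init__()
        self.time_enc = nn.Sequential(nn.Linear(seq_len, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.freq_enc = nn.Sequential(nn.Linear(seq_len // 2 + 1, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):                        # x: (B, seq_len)
        z_t = self.time_enc(x)                   # time-domain embedding
        spec = torch.fft.rfft(x, dim=-1).abs()   # magnitude spectrum as frequency view
        z_f = self.freq_enc(spec)                # frequency-domain embedding
        return z_t, z_f

def tfc_loss(model, x, x_aug, lam=0.5):
    """Contrastive loss per branch plus a time-frequency consistency distance."""
    zt, zf = model(x)
    zt_a, zf_a = model(x_aug)
    l_time = nt_xent(zt, zt_a)                   # time-branch contrastive term
    l_freq = nt_xent(zf, zf_a)                   # frequency-branch contrastive term
    l_cons = F.mse_loss(F.normalize(zt, dim=-1), F.normalize(zf, dim=-1))  # keep views close
    return l_time + l_freq + lam * l_cons

x = torch.randn(8, 128)                          # toy batch of univariate series
x_aug = x + 0.1 * torch.randn_like(x)            # jitter augmentation (an assumption)
loss = tfc_loss(TFCSketch(), x, x_aug)
```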

TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling

TimeSiam is proposed as a simple but effective self-supervised pre-training framework for time series based on Siamese networks; it consistently outperforms a wide range of advanced pre-training baselines, demonstrating superior forecasting and classification capabilities across 13 standard benchmarks in both intra- and cross-domain scenarios.

TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders

This work proposes TimeMAE, a novel self-supervised paradigm for learning transferable time series representations based on transformer networks, and designs a decoupled autoencoder architecture that learns the representations of visible (unmasked) positions and masked ones with two different encoder modules, respectively.
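
As a rough illustration of the decoupled-encoder idea described above, here is a toy PyTorch sketch in which visible patches go through one transformer encoder while learnable mask tokens are processed by a separate cross-attention module; the plain value-reconstruction target and all layer sizes are assumptions made for brevity (TimeMAE itself also uses codeword-classification targets).

```python
import torch
import torch.nn as nn

class DecoupledMAESketch(nn.Module):
    """Toy decoupled masked autoencoder: separate modules for visible and masked positions."""
    def __init__(self, patch_len=8, dim=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(patch_len, dim)
        self.visible_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, n_heads, batch_first=True), num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        # the "masked" module attends from mask tokens to visible representations
        self.masked_dec = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.predict = nn.Linear(dim, patch_len)   # reconstruct raw patch values

    def forward(self, patches, mask):              # patches: (B, N, patch_len), mask: (B, N) bool
        tok = self.embed(patches)
        B, N, D = tok.shape
        vis = tok[~mask].view(B, -1, D)            # assumes the same number of masked patches per sample
        h_vis = self.visible_enc(vis)              # encoder for visible positions only
        q = self.mask_token.expand(B, int(mask[0].sum()), D)
        h_mask, _ = self.masked_dec(q, h_vis, h_vis)   # decoupled module for masked positions
        return self.predict(h_mask), patches[mask].view(B, -1, patches.size(-1))

patches = torch.randn(4, 16, 8)                    # 4 series, 16 patches of length 8
mask = torch.zeros(4, 16, dtype=torch.bool)
mask[:, ::4] = True                                # mask every 4th patch (same count per sample)
pred, target = DecoupledMAESketch()(patches, mask)
loss = nn.functional.mse_loss(pred, target)
```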

Multi-Patch Prediction: Adapting LLMs for Time Series Representation Learning

An innovative framework that adapts Large Language Models for time-series representation learning, whose distinctive element is a patch-wise decoding layer that departs from previous methods reliant on sequence-level decoding.

Large Pre-trained time series models for cross-domain Time series analysis tasks

This work proposes Large Pre-trained Time-series Models (LPTM), a novel adaptive segmentation method that automatically identifies an optimal dataset-specific segmentation strategy during pre-training, enabling LPTM to perform similarly to or better than domain-specific state-of-the-art models when fine-tuned on different downstream time-series analysis tasks and under zero-shot settings.

Time and Frequency Synergy for Source-Free Time-Series Domain Adaptations

TFDA is developed with a dual-branch network structure that fully utilizes both time and frequency features in delivering final predictions, and it induces pseudo-labels based on a neighborhood concept in which the predictions of a group of samples are aggregated to generate reliable pseudo-labels.

UniCL: A Universal Contrastive Learning Framework for Large Time Series Models

UniCL is introduced, a universal and scalable contrastive learning framework for pre-training time-series foundation models across cross-domain datasets, together with a unified, trainable time-series augmentation operation that leverages spectral information to generate pattern-preserving, diverse, and low-bias time-series data.

SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling

SimMTM proposes to recover masked time points by the weighted aggregation of multiple neighbors outside the manifold, which eases the reconstruction task by assembling ruined but complementary temporal variations from multiple masked series.
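
A stripped-down sketch of the neighbor-aggregation idea above: several masked views of each series are encoded, series-level similarities produce softmax weights, and point-wise representations are aggregated across views before decoding back to the original series. The linear encoders, the masking scheme, and the plain MSE objective are assumptions; the full method also adds a manifold-level constraint on the series representations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimMTMSketch(nn.Module):
    """Toy reconstruction by similarity-weighted aggregation of masked neighbors."""
    def __init__(self, dim=32):
        super().__init__()
        self.point_enc = nn.Linear(1, dim)        # point-wise encoder (illustrative)
        self.series_proj = nn.Linear(dim, dim)    # series-level embedding for similarities
        self.decoder = nn.Linear(dim, 1)

    def forward(self, x, n_views=3, mask_ratio=0.5, temperature=0.1):
        # x: (B, T). Build several randomly masked views of every series.
        B, T = x.shape
        views = x.unsqueeze(1).repeat(1, n_views, 1)              # (B, V, T)
        keep = (torch.rand(B, n_views, T) > mask_ratio).float()
        views = views * keep                                      # zero out masked points
        h = self.point_enc(views.unsqueeze(-1))                   # (B, V, T, D) point-wise reps
        s = F.normalize(self.series_proj(h.mean(dim=2)), dim=-1)  # (B, V, D) series embeddings
        sim = (s @ s.transpose(1, 2)) / temperature               # (B, V, V) view similarities
        w = F.softmax(sim, dim=-1)                                # aggregation weights
        h_agg = torch.einsum('bvw,bwtd->bvtd', w, h)              # neighbor-weighted point reps
        recon = self.decoder(h_agg).squeeze(-1)                   # (B, V, T)
        return F.mse_loss(recon, x.unsqueeze(1).expand_as(recon)) # recover the unmasked series

loss = SimMTMSketch()(torch.randn(8, 64))
```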

Time-series representation learning via Time-Frequency Fusion Contrasting

Experimental results show that TF-FC achieves significantly higher recognition accuracy than other SOTA methods, and the proposed approach extracts more informative features from the data, enhancing the model's capacity to distinguish between different time series.

Towards Generalisable Time Series Understanding Across Domains

OTiS, an open model for general time series analysis specifically designed to handle multi-domain heterogeneity, is introduced; it includes a novel pre-training paradigm with a tokeniser carrying learnable domain-specific signatures, a dual masking strategy to capture temporal causality, and a normalised cross-correlation loss to model long-range dependencies.

United We Pretrain, Divided We Fail! Representation Learning for Time Series by Pretraining on 75 Datasets at Once

This work introduces a new self-supervised contrastive pretraining approach to learn one encoding from many unlabeled and diverse time series datasets, so that the single learned representation can then be reused in several target domains for, say, classification.
...

Self-Supervised Pre-training for Time Series Classification

Empirical results show that the time series model augmented with the proposed self-supervised pretext tasks achieves state-of-the-art or highly competitive results.

Time-Series Representation Learning via Temporal and Contextual Contrasting

An unsupervised Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC) is proposed to learn time-series representations from unlabeled data, with high efficiency in few-labeled-data and transfer-learning scenarios.

Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion

This paper proposes a unified framework, namely Bilinear Temporal-Spectral Fusion (BTSF), which first utilizes instance-level augmentation with a simple dropout on the entire time series to maximally capture long-term dependencies, and then devises a novel iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs.
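
A toy sketch of the two ingredients named above: instance-level dropout as the only augmentation, and a bilinear (outer-product) interaction between temporal and spectral features. The single fusion pass and the layer sizes are simplifications; the paper iterates spectrum-to-time and time-to-spectrum aggregation and trains the two stochastic views contrastively, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearFusionSketch(nn.Module):
    """Toy temporal-spectral bilinear fusion: outer product of time and frequency features."""
    def __init__(self, seq_len=128, dim=16):
        super().__init__()
        self.temporal = nn.Linear(seq_len, dim)
        self.spectral = nn.Linear(seq_len // 2 + 1, dim)
        self.project = nn.Linear(dim * dim, dim)    # flatten the bilinear interaction map

    def forward(self, x):                           # x: (B, seq_len)
        x = F.dropout(x, p=0.1, training=True)      # instance-level dropout augmentation
        ft = self.temporal(x)                       # temporal features (B, dim)
        fs = self.spectral(torch.fft.rfft(x, dim=-1).abs())   # spectral features (B, dim)
        bilinear = torch.einsum('bi,bj->bij', ft, fs)          # pairwise time-frequency affinities
        return self.project(bilinear.flatten(1))    # fused representation (B, dim)

x = torch.randn(4, 128)
model = BilinearFusionSketch()
z1, z2 = model(x), model(x)    # two stochastic views of the same batch (dropout differs)
```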

CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting

This work argues that a more promising paradigm for time series forecasting is to first learn disentangled feature representations, followed by a simple regression fine-tuning step, and proposes a new time series representation learning framework named CoST, which applies contrastive learning methods to learn disentangled seasonal-trend representations.
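
A minimal sketch of the seasonal-trend disentanglement: a moving average extracts the trend component, the residual is treated as the seasonal component and embedded from its spectrum, and the two embeddings would then be contrasted separately. The average-pooling trend extractor and linear encoders are assumptions standing in for CoST's causal convolutions and frequency-domain contrastive losses.

```python
import torch
import torch.nn as nn

class CoSTSketch(nn.Module):
    """Toy disentanglement: a trend branch (moving average) and a seasonal branch (spectrum)."""
    def __init__(self, seq_len=128, dim=32, kernel=25):
        super().__init__()
        self.smooth = nn.AvgPool1d(kernel, stride=1, padding=kernel // 2)  # trend extractor
        self.trend_enc = nn.Linear(seq_len, dim)
        self.season_enc = nn.Linear(seq_len // 2 + 1, dim)

    def forward(self, x):                              # x: (B, seq_len)
        trend = self.smooth(x.unsqueeze(1)).squeeze(1) # low-frequency trend component
        season = x - trend                             # residual seasonal component
        z_trend = self.trend_enc(trend)
        z_season = self.season_enc(torch.fft.rfft(season, dim=-1).abs())
        return z_trend, z_season                       # contrasted separately in the full method

z_t, z_s = CoSTSketch()(torch.randn(4, 128))
```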

Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift

This paper proposes a novel supervised domain adaptation (DA) method based on two steps: it searches for an optimal class-dependent transformation from the source to the target domain using a few samples, and it uses embedding-similarity techniques to select the corresponding transformation at inference.

Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding

A self-supervised framework for learning generalizable representations for non-stationary time series, called Temporal Neighborhood Coding (TNC), takes advantage of the local smoothness of a signal's generative process to define neighborhoods in time with stationary properties.
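
A toy rendering of the neighborhood idea: windows drawn close in time are treated as neighbors of an anchor, far-away windows as non-neighbors, and a discriminator on top of a shared encoder is trained to tell them apart. The window sampling, encoder, and plain binary cross-entropy are assumptions; TNC additionally handles possible positives among the distant samples with a PU-learning-style weighting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TNCSketch(nn.Module):
    """Toy Temporal Neighborhood Coding: classify whether two windows are temporal neighbors."""
    def __init__(self, window=32, dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.discriminator = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, anchor, other):               # both: (B, window)
        z = torch.cat([self.encoder(anchor), self.encoder(other)], dim=-1)
        return self.discriminator(z).squeeze(-1)    # logit: "other is in anchor's neighborhood"

def tnc_loss(model, series, window=32, neighborhood=16):
    B, T = series.shape
    t = torch.randint(neighborhood, T - 2 * window - neighborhood, (B,)).tolist()
    anchor   = torch.stack([series[i, t[i]:t[i] + window] for i in range(B)])
    neighbor = torch.stack([series[i, t[i] + neighborhood:t[i] + neighborhood + window] for i in range(B)])
    distant  = torch.stack([series[i, T - window:] for i in range(B)])       # far-away window
    pos = model(anchor, neighbor)                   # neighboring pair -> label 1
    neg = model(anchor, distant)                    # distant pair -> label 0
    return F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos)) + \
           F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg))

loss = tnc_loss(TNCSketch(), torch.randn(8, 256))
```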

Self-Supervised Pretraining of Transformers for Satellite Image Time Series Classification

A novel self-supervised pretraining scheme that initializes a transformer-based network with large-scale unlabeled data, exploiting the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics.

A Transformer-based Framework for Multivariate Time Series Representation Learning

A novel framework for multivariate time series representation learning based on the transformer encoder architecture, which can offer substantial performance benefits over fully supervised learning on downstream tasks, even without leveraging additional unlabeled data, i.e., by reusing the existing data samples.

Adversarial Spectral Kernel Matching for Unsupervised Time Series Domain Adaptation

An Adversarial Spectral Kernel Matching (AdvSKM) method is proposed, in which a hybrid spectral kernel network is specifically designed as the inner kernel to reform the Maximum Mean Discrepancy (MMD) metric for unsupervised time series domain adaptation (UTSDA).
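
For context on the objective being reformed, here is a plain MMD computation with a fixed RBF kernel between source- and target-domain features; AdvSKM's contribution is to replace this fixed kernel with an adversarially trained hybrid spectral kernel network, which is not shown here.

```python
import torch

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian kernel matrix between two batches of feature vectors."""
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-gamma * d2)

def mmd(source, target, gamma=1.0):
    """Squared Maximum Mean Discrepancy between source- and target-domain features."""
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2 * k_st

src = torch.randn(32, 64)          # e.g. encoder features of labelled source series
tgt = torch.randn(32, 64) + 0.5    # shifted target features
print(mmd(src, tgt))               # grows as the two feature distributions diverge
```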
...