Corpus ID: 59413908

Unsupervised Scalable Representation Learning for Multivariate Time Series

@article{Franceschi2019UnsupervisedSR,
  title={Unsupervised Scalable Representation Learning for Multivariate Time Series},
  author={Jean-Yves Franceschi and Aymeric Dieuleveut and Martin Jaggi},
  journal={ArXiv},
  year={2019},
  volume={abs/1901.10738},
  url={https://api.semanticscholar.org/CorpusID:59413908}
}
This paper combines an encoder based on causal dilated convolutions with a novel triplet loss employing time-based negative sampling, obtaining general-purpose representations for variable-length and multivariate time series.
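
The recipe is concrete enough to sketch. Below is a minimal PyTorch rendition under simplifying assumptions (the paper's encoder also uses weight normalization and residual connections, omitted here, and the hyperparameters are placeholders): a stack of causal dilated convolutions with exponentially increasing dilation, max pooling over time for variable-length inputs, and the triplet loss in which the positive is a subseries of the anchor's own series and each negative is a subseries of another series.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedEncoder(nn.Module):
    """Causal dilated convolution stack; dilation doubles at each layer,
    so the receptive field grows exponentially with depth."""
    def __init__(self, in_channels, hidden=40, depth=3, out_dim=16):
        super().__init__()
        layers, c = [], in_channels
        for i in range(depth):
            d = 2 ** i
            layers += [nn.ConstantPad1d((2 * d, 0), 0.0),  # left-pad only: causal
                       nn.Conv1d(c, hidden, kernel_size=3, dilation=d),
                       nn.LeakyReLU()]
            c = hidden
        self.net = nn.Sequential(*layers)
        self.linear = nn.Linear(hidden, out_dim)

    def forward(self, x):                # x: (batch, channels, time)
        h = self.net(x)
        h = torch.max(h, dim=2).values   # max pooling over time handles
        return self.linear(h)            # variable-length inputs

def triplet_loss(encoder, ref, pos, negs):
    """Time-based negative sampling: `pos` is a subseries of the same series
    as `ref`; each element of `negs` comes from a randomly chosen other series."""
    r, p = encoder(ref), encoder(pos)
    loss = -F.logsigmoid((r * p).sum(dim=1))
    for neg in negs:                     # K independent negative samples
        loss = loss - F.logsigmoid(-(r * encoder(neg)).sum(dim=1))
    return loss.mean()
```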

[Re] Unsupervised Scalable Representation Learning for Multivariate Time Series

An encoder architecture with dilated causal convolutional blocks is trained to minimize a triplet loss, and an SVM classifier is trained on the learned representations and their labels to evaluate the strength of those representations.
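
The evaluation step is a standard frozen-encoder probe: embed the train and test series, then fit a classifier on the embeddings. A sketch with scikit-learn, where `encoder` and the data shapes are assumptions carried over from the sketch above:

```python
import torch
from sklearn.svm import SVC

def svm_probe(encoder, X_train, y_train, X_test, y_test):
    """Fit an SVM on frozen representations; test accuracy measures how much
    class structure the unsupervised encoder captured."""
    encoder.eval()
    with torch.no_grad():
        z_tr = encoder(torch.as_tensor(X_train, dtype=torch.float32)).numpy()
        z_te = encoder(torch.as_tensor(X_test, dtype=torch.float32)).numpy()
    clf = SVC(kernel="rbf", gamma="scale").fit(z_tr, y_train)
    return clf.score(z_te, y_test)
```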

A Transformer-based Framework for Multivariate Time Series Representation Learning

A novel framework for multivariate time series representation learning based on the transformer encoder architecture, which can offer substantial performance benefits over fully supervised learning on downstream tasks, both with and even without leveraging additional unlabeled data, i.e., by reusing the existing data samples.
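
The pretraining objective in this framework is denoising-style masked reconstruction. A hedged sketch, assuming uniform per-timestep masking (the actual framework masks variable-length spans per input variable) and omitting positional encodings for brevity:

```python
import torch
import torch.nn as nn

class MaskedSeriesTransformer(nn.Module):
    """Project each timestep to d_model, zero out random positions, and train
    the transformer encoder to regress the original values at masked spots."""
    def __init__(self, n_features, d_model=64, nhead=4, layers=2):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(d_model, n_features)

    def forward(self, x, mask_ratio=0.15):       # x: (batch, time, features)
        mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
        x_in = x.masked_fill(mask.unsqueeze(-1), 0.0)
        recon = self.head(self.encoder(self.proj(x_in)))
        return ((recon - x)[mask] ** 2).mean()   # MSE on masked positions only
```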

Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding

A self-supervised framework for learning generalizable representations for non-stationary time series, called Temporal Neighborhood Coding (TNC), takes advantage of the local smoothness of a signal's generative process to define neighborhoods in time with stationary properties.
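
The sampling scheme is the heart of TNC, so a sketch may help. Assumptions flagged in the comments: a fixed neighborhood range (the paper chooses it per signal via stationarity tests) and a generic discriminator `disc`; the small weight `w` reflects TNC's treatment of distant windows as unlabeled rather than strictly negative.

```python
import random
import torch
import torch.nn.functional as F

def tnc_windows(x, window=50, neighborhood=100):
    """Draw (anchor, neighbor, distant) windows from one series x of shape
    (channels, time). Assumes the series is much longer than
    window + neighborhood, so rejection sampling terminates."""
    T = x.shape[1]
    t = random.randint(0, T - window)
    t_nb = min(max(t + random.randint(-neighborhood, neighborhood), 0), T - window)
    while True:                          # sample a start outside the neighborhood
        t_far = random.randint(0, T - window)
        if abs(t_far - t) > neighborhood:
            break
    return (x[:, t:t + window], x[:, t_nb:t_nb + window], x[:, t_far:t_far + window])

def tnc_loss(encoder, disc, anchor, neighbor, distant, w=0.05):
    """Discriminator loss: neighbors are positives; distant windows count as
    negatives with weight (1 - w) and as positives with weight w (PU-style)."""
    za = encoder(anchor.unsqueeze(0))    # add batch dimension
    zn = encoder(neighbor.unsqueeze(0))
    zd = encoder(distant.unsqueeze(0))
    p_n, p_d = disc(za, zn), disc(za, zd)
    return (F.binary_cross_entropy(p_n, torch.ones_like(p_n))
            + (1 - w) * F.binary_cross_entropy(p_d, torch.zeros_like(p_d))
            + w * F.binary_cross_entropy(p_d, torch.ones_like(p_d)))
```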

A Shapelet-based Framework for Unsupervised Multivariate Time Series Representation Learning

This work proposes a novel URL framework for multivariate time series by learning time-series-specific shapelet-based representations through a popular contrastive learning paradigm, and demonstrates the superiority of the method not only over URL competitors, but also over techniques specially designed for downstream tasks.

Multi-Task Self-Supervised Time-Series Representation Learning

This work proposes a new time-series representation learning method by combining the advantages of self-supervised tasks related to contextual, temporal, and transformation consistency, and investigates an uncertainty weighting approach to enable effective multi-task learning.
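
Uncertainty weighting for multi-task losses has a standard formulation (Kendall et al., 2018) that this summary plausibly refers to; whether the paper uses exactly this variant is an assumption. Each task loss is scaled by a learned precision, with a log-variance penalty so the weights cannot collapse to zero:

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Combine N task losses as sum_i exp(-s_i) * L_i + s_i, where s_i is a
    learned log-variance; tasks the model finds noisy get down-weighted."""
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = torch.zeros((), device=self.log_vars.device)
        for s, loss in zip(self.log_vars, task_losses):
            total = total + torch.exp(-s) * loss + s
        return total
```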

Large Scale Time-Series Representation Learning via Simultaneous Low and High Frequency Feature Bootstrapping

This work proposes a non-contrastive self-supervised learning (SSL) approach that efficiently captures low- and high-frequency features in a cost-effective manner and achieves state-of-the-art performance on all the considered datasets.

Improving Time Series Encoding with Noise-Aware Self-Supervised Learning and an Efficient Encoder

This work proposes an innovative training strategy that promotes consistent representation learning, accounting for the presence of noise-prone signals in natural time series, and proposes an encoder architecture that incorporates dilated convolution within the Inception block, resulting in a scalable and robust network with a wide receptive field.
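
"Dilated convolution within the Inception block" suggests parallel branches that share a small kernel but differ in dilation, widening the receptive field without extra parameters. A sketch of that idea only; the paper's exact branch layout is an assumption:

```python
import torch
import torch.nn as nn

class DilatedInceptionBlock(nn.Module):
    """Inception-style block: parallel 1D convolutions with increasing
    dilation plus a 1x1 bottleneck, concatenated along channels."""
    def __init__(self, in_ch, branch_ch=32, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, branch_ch, kernel_size=3,
                      dilation=d, padding=d)        # 'same'-length output
            for d in dilations
        ])
        self.bottleneck = nn.Conv1d(in_ch, branch_ch, kernel_size=1)

    def forward(self, x):                           # x: (batch, in_ch, time)
        outs = [b(x) for b in self.branches] + [self.bottleneck(x)]
        return torch.relu(torch.cat(outs, dim=1))
```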

Unsupervised Visual Time-Series Representation Learning and Clustering

A novel data transformation along with a novel unsupervised learning regime is used to transfer learning to time series from other domains, where extensive models have been heavily trained on very large labelled datasets.

Iterative Bilinear Temporal-Spectral Fusion for Unsupervised Time-Series Representation Learning

This paper proposes a novel iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs, and iteratively re-frames representations in a fusion-and-squeeze manner with Spectrum-to-Time (S2T) and Time-to-Spectrum (T2S) Aggregation modules.

Unsupervised Multi-modal Feature Alignment for Time Series Representation Learning

This study introduces an innovative approach that focuses on aligning and binding time series representations encoded from different modalities, inspired by spectral graph theory, thereby guiding the neural encoder to uncover latent pattern associations among these multi-modal features.
...

TimeNet: Pre-trained deep recurrent neural network for time series classification

TimeNet, a deep recurrent neural network trained on diverse time series in an unsupervised manner using sequence-to-sequence (seq2seq) models, extracts features from time series and attempts to generalize time-series representations across domains by ingesting time series from several domains simultaneously.
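
A minimal seq2seq autoencoder in TimeNet's spirit (the actual model uses multilayer GRUs and decodes the series in reverse order; this sketch simplifies both):

```python
import torch
import torch.nn as nn

class Seq2SeqAutoencoder(nn.Module):
    """Encoder GRU compresses a series into its final hidden state; the
    decoder GRU reconstructs the input from that state. The hidden state
    serves as the fixed-size representation for downstream tasks."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                 # x: (batch, time, features)
        _, h = self.encoder(x)            # h: (1, batch, hidden)
        dec, _ = self.decoder(x, h)       # condition decoder on the summary
        return self.out(dec), h.squeeze(0)  # reconstruction, representation
```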

Similarity Preserving Representation Learning for Time Series Analysis

This paper proposes an efficient representation learning framework that is able to convert a set of time series with equal or unequal lengths to a matrix format and guarantees that the pairwise similarities between time series are well preserved after the transformation.

SOM-VAE: Interpretable Discrete Representation Learning on Time Series

A new way to overcome the non-differentiability in discrete representation learning is introduced and a gradient-based version of the traditional self-organizing map algorithm is presented that is more performant than the original.

Representation Learning with Contrastive Predictive Coding

This work proposes a universal unsupervised learning approach to extract useful representations from high-dimensional data, which it calls Contrastive Predictive Coding, and demonstrates that the approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
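
At CPC's core is the InfoNCE objective: a bilinear score between a context summary and a future latent, classified against in-batch negatives. A one-step sketch (CPC learns a separate bilinear map per prediction horizon):

```python
import torch
import torch.nn.functional as F

def info_nce(z_context, z_future, W):
    """z_context: (batch, c_dim) from an autoregressive summary model;
    z_future: (batch, z_dim) encoder output k steps ahead; W: (c_dim, z_dim).
    True (context, future) pairs sit on the diagonal of the score matrix."""
    scores = z_context @ W @ z_future.t()         # (batch, batch) logits
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```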

Time series classification from scratch with deep neural networks: A strong baseline

The proposed Fully Convolutional Network (FCN) achieves performance on par with or better than other state-of-the-art approaches, and the exploration of very deep neural networks with the ResNet structure is also competitive.

Semi-supervised Triplet Loss Based Learning of Ambient Audio Embeddings

This paper combines unsupervised and supervised triplet loss based learning into a semi-supervised representation learning approach, whereby the positive samples for those triplets whose anchors are unlabeled are obtained either by applying a transformation to the anchor, or by selecting the nearest sample in the training set.
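
The two positive-selection strategies named here are easy to make concrete. A sketch under stated assumptions: `augment` is a hypothetical signal transformation (e.g. time shift or added noise), and nearest-neighbor selection happens in embedding space:

```python
import torch

def positive_for_unlabeled(anchor_raw, anchor_emb, bank_embs, augment, nearest=False):
    """Pick a positive for an unlabeled anchor: either (a) a transformed view
    of the anchor itself, or (b) its nearest neighbor among the training
    embeddings, returned as indices into the bank."""
    if nearest:
        d = torch.cdist(anchor_emb.unsqueeze(0), bank_embs.unsqueeze(0))[0]
        return d.argmin(dim=1)            # index of nearest bank sample per anchor
    return augment(anchor_raw)
```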

Multi-Scale Convolutional Neural Networks for Time Series Classification

A novel end-to-end neural network model, Multi-Scale Convolutional Neural Networks (MCNN), which incorporates feature extraction and classification in a single framework, leading to superior feature representation.

Deep learning for time series classification: a review

This article proposes the most exhaustive study of DNNs for TSC by training 8730 deep learning models on 97 time series datasets and provides an open source deep learning framework to the TSC community.

Learning to Linearize Under Uncertainty

This work suggests a new architecture and loss for training deep feature hierarchies that linearize the transformations observed in unlabeled natural video sequences by training a generative model to predict video frames.