
About this Research Topic

Abstract Submission Deadline 30 January 2024
Manuscript Submission Deadline 27 February 2024

Time series data, sequential observations measured at successive points in time, permeate a wide variety of disciplines, including finance, healthcare, and engineering. Traditional machine learning methods have largely depended on handcrafted features and domain-specific knowledge for time series analysis. However, as data grow in volume and complexity, there is an increasing demand for models that can autonomously learn intricate patterns without extensive human intervention. Self-supervised learning is a paradigm in which models learn by predicting some parts of the data from others, without relying on labeled examples. In recent years, self-supervised learning has demonstrated significant promise in computer vision and natural language processing. Its application to time series, though still nascent, holds immense potential for unlocking deeper insights and revolutionizing traditional methodologies.
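To make the "predict parts of the data from others" idea concrete, the sketch below trains a small network to reconstruct randomly masked timesteps of a synthetic series from the visible ones, one of the simplest self-supervised pretext tasks for time series. This is a minimal illustrative sketch in PyTorch; the synthetic sine-wave data, the MLP reconstructor, and the 25% masking ratio are our own assumptions, not a method prescribed by this Research Topic.

```python
# Minimal masked-reconstruction pretext task for time series (illustrative sketch).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic univariate series: noisy sine wave, cut into fixed-length windows.
t = torch.arange(0, 200, 0.1)
series = torch.sin(t) + 0.1 * torch.randn_like(t)
window = 50
windows = series.unfold(0, window, window)  # shape: (num_windows, 50)

class MaskedReconstructor(nn.Module):
    """Encode a partially masked window and reconstruct the full window."""
    def __init__(self, length: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(length, hidden), nn.ReLU(),
            nn.Linear(hidden, length),
        )

    def forward(self, x):
        return self.net(x)

model = MaskedReconstructor(window)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    mask = torch.rand_like(windows) < 0.25     # hide ~25% of timesteps (assumed ratio)
    inp = windows.masked_fill(mask, 0.0)       # masked input
    recon = model(inp)
    # Self-supervised objective: compute the loss only on masked positions,
    # so the model must infer them from the visible temporal context.
    loss = ((recon - windows)[mask] ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final masked-reconstruction MSE: {loss.item():.4f}")
```

No labels are used anywhere: the training signal comes entirely from the data itself, and the learned encoder could then be fine-tuned or probed on downstream tasks such as forecasting or classification.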

The primary goal of this Research Topic is to explore, illuminate, and advance the frontier of self-supervised learning as it pertains to time series data. While the success stories of self-supervised learning in other domains are inspiring, the unique structure, dependencies, and challenges associated with time series require specialized attention. We aim to address several critical questions: How can self-supervised models best capture temporal dependencies and intricacies? What novel self-supervised tasks and architectures are apt for time series data? How can such models improve robustness and generalizability across varying time horizons and domains?

This Research Topic focuses on the burgeoning realm of self-supervised learning techniques tailored to time series data. We welcome submissions addressing, but not limited to, the following themes:

1. Novel architectures and self-supervised tasks specifically designed for time series

2. Mechanisms to address temporal dependencies, seasonality, and long-term patterns

3. Transfer learning and domain adaptation using self-supervised models in time series contexts

4. Evaluative studies comparing self-supervised, supervised, and unsupervised approaches on time series datasets

Keywords: Self-supervised learning, time series analysis, pretrained models


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


About Frontiers Research Topics

With their unique mix of contributions, from Original Research to Review Articles, Research Topics bring together the most influential researchers, the latest key findings, and historical advances in an active research area. Find out more about how to host your own Frontiers Research Topic or contribute to one as an author.