Self-Supervised Representation Learning for Human Activity Recognition Using Inertial Sensor Data

Document Type

Conference Proceeding

Abstract

This paper proposes a self-supervised learning framework for Human Activity Recognition (HAR) using inertial sensor data, aiming to reduce dependence on large labeled datasets. Leveraging contrastive learning and a hybrid encoder consisting of 1D convolutional layers and a bidirectional LSTM, the model learns meaningful representations from unlabeled time-series data through a pretext task of distinguishing between augmented views of the same sequence. Extensive experiments on the UCI HAR Dataset demonstrate that our approach achieves a test accuracy of 94.72%, surpassing several supervised baselines such as CNN (91.46%), LSTM (92.35%), and Transformer (93.24%). The model also achieves a macro-averaged F1-score of 94.45% and an AUROC of 97.36%, indicating strong class separability and generalization. Furthermore, it maintains robust performance under noisy and incomplete sensor inputs, with only minor degradation observed under Gaussian noise (F1-score drops to 89.41%) and random dropout (87.93%). These results highlight the effectiveness and scalability of the proposed method for real-world HAR applications, especially in scenarios with limited annotated data. © 2025 IEEE.
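
Illustrative Code Sketch

The abstract outlines a contrastive pretext task over augmented views of the same inertial sequence, encoded by a hybrid 1D-CNN + bidirectional-LSTM network. The PyTorch sketch below shows one plausible instantiation of that pipeline; the layer sizes, augmentations (Gaussian jitter plus random dropout, mirroring the robustness tests), projection head, and temperature are assumptions for illustration, not details reported in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridEncoder(nn.Module):
    # 1D convolutional stack for local motion patterns, followed by a
    # bidirectional LSTM for temporal context, then a projection head
    # (the head and all dimensions are assumed, not taken from the paper).
    def __init__(self, in_channels=9, conv_dim=64, lstm_dim=128, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, conv_dim, kernel_size=5, padding=2),
            nn.BatchNorm1d(conv_dim), nn.ReLU(),
            nn.Conv1d(conv_dim, conv_dim, kernel_size=5, padding=2),
            nn.BatchNorm1d(conv_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_dim, lstm_dim, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Sequential(
            nn.Linear(2 * lstm_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)       # -> (batch, time, conv_dim)
        h, _ = self.lstm(h)                    # -> (batch, time, 2 * lstm_dim)
        return self.proj(h.mean(dim=1))        # mean-pool over time

def augment(x, noise_std=0.05, drop_p=0.1):
    # Two cheap stochastic views: Gaussian jitter plus random sample dropout
    # (assumed augmentations, echoing the perturbations in the robustness tests).
    noisy = x + noise_std * torch.randn_like(x)
    return noisy * (torch.rand_like(noisy) > drop_p).float()

def nt_xent(z1, z2, tau=0.5):
    # SimCLR-style NT-Xent loss: each view's positive is its counterpart;
    # every other embedding in the batch acts as a negative.
    b = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)            # (2b, dim)
    sim = (z @ z.t()) / tau
    sim = sim.masked_fill(torch.eye(2 * b, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(b)])
    return F.cross_entropy(sim, targets)

# One self-supervised pretraining step on an unlabeled batch:
# 32 windows, 9 inertial channels, 128 time steps (the UCI HAR window length).
encoder = HybridEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(32, 9, 128)                                # placeholder batch
loss = nt_xent(encoder(augment(x)), encoder(augment(x)))
opt.zero_grad(); loss.backward(); opt.step()

Each pretraining step draws two stochastic views of the same batch, embeds both with the shared encoder, and minimizes the NT-Xent loss so that views of the same window agree. A linear classifier or fine-tuning stage on the learned representations would then produce the supervised HAR metrics reported in the abstract.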

Publication Title

2025 IEEE International Conference on Quantum Photonics, Artificial Intelligence, and Networking, QPAIN 2025

Publication Date

2025

ISBN

9798331596934

DOI

10.1109/QPAIN66474.2025.11171865

Keywords

deep learning, human activity recognition, inertial sensors, representation learning, self-supervised learning, sensor data augmentation, time-series classification
