Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence

Publication Year: 2020 | Publication Type: JournalArticle


Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models due to privacy, bandwidth limitations, and the prohibitive cost of annotations. Federated learning provides a compelling framework for learning models from decentralized data, but conventionally, it assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user interaction. To address these issues, we propose a self-supervised approach termed scalogram-signal correspondence learning based on the wavelet transform to learn useful representations from unlabeled sensor inputs, such as electroencephalography, blood volume pulse, accelerometer, and WiFi channel state information. Our auxiliary task requires a deep temporal neural network to determine whether a given pair of a signal and its complementary viewpoint (i.e., a scalogram generated with a wavelet transform) align with each other, by optimizing a contrastive objective. We extensively assess the quality of the features learned with our multi-view strategy on diverse public datasets, achieving strong performance in all domains. We demonstrate the effectiveness of representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, and through evaluations in the low-data regime, transfer learning, and cross-validation. Our methodology achieves competitive performance with fully-supervised networks, and it outperforms pre-training with autoencoders in both central and federated contexts. Notably, it improves generalization in a semi-supervised setting, as leveraging self-supervised learning reduces the volume of labeled data required.
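The core of the auxiliary task described above is a contrastive objective that scores whether a raw signal and a scalogram view come from the same recording. The sketch below illustrates that scoring step only, with NumPy, using randomly generated stand-in embeddings; the encoder networks, variable names, and dimensions are hypothetical and not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical embeddings from two encoders: one for the raw signal and
# one for its scalogram view (batch size and dimension are illustrative).
batch, dim = 8, 16
z_signal = rng.standard_normal((batch, dim))
# Aligned pairs: the scalogram embedding is a slightly perturbed copy.
z_scalogram = z_signal + 0.05 * rng.standard_normal((batch, dim))

def contrastive_logits(a, b):
    """Cosine-similarity logits between every signal/scalogram pair."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T  # (batch, batch); the diagonal holds aligned pairs

logits = contrastive_logits(z_signal, z_scalogram)
# A useful contrastive objective drives aligned (diagonal) scores above
# mismatched (off-diagonal) ones.
aligned = np.diag(logits).mean()
mismatched = (logits.sum() - np.trace(logits)) / (batch * (batch - 1))
print(aligned > mismatched)  # True
```

In training, these logits would feed a contrastive loss so the network learns to pull each signal toward its own scalogram and push it away from mismatched ones.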


@article{DBLP:journals/corr/abs-2007-13018,
    archiveprefix = {arXiv},
    author = {Aaqib Saeed and Flora D. Salim and Tanir Özçelebi and Johan Lukkien},
    bibsource = {dblp computer science bibliography, https://dblp.org},
    biburl = {https://dblp.org/rec/journals/corr/abs-2007-13018.bib},
    eprint = {2007.13018},
    journal = {CoRR},
    timestamp = {Wed, 29 Jul 2020 01:00:00 +0200},
    title = {Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence},
    url = {https://arxiv.org/abs/2007.13018},
    volume = {abs/2007.13018},
    year = {2020}
}


Related Publications

RUP: Large Room Utilisation Prediction with carbon dioxide sensor
Type: JournalArticle

A Scalable Room Occupancy Prediction with Transferable Time Series Decomposition of CO2 Sensor Data
Type: JournalArticle

Topical Event Detection on Twitter
Type: ConferenceProceeding

© 2021 Flora Salim - CRUISE Research Group.