TY - GEN
T1 - Investigating Self-supervised Learning for Predicting Stress and Stressors from Passive Sensing
AU - Haresamudram, Harish
AU - Suh, Jina
AU - Hernandez, Javier
AU - Butler, Jenna
AU - Chaudhry, Ahad
AU - Yang, Longqi
AU - Saha, Koustuv
AU - Czerwinski, Mary
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The application of machine learning (ML) techniques for well-being tasks has grown in popularity due to the abundance of passively-sensed data generated by devices. However, the performance of ML models is often limited by the cost associated with obtaining ground truth labels and the variability of well-being annotations. Self-supervised representations learned from large-scale unlabeled datasets have been shown to accelerate the training process, with subsequent fine-tuning to specific downstream tasks requiring only a relatively small set of annotations. In this paper, we investigate the potential and effectiveness of self-supervised pre-training for well-being tasks, specifically predicting both daily workplace stress and the most impactful stressors. Through a series of experiments, we find that self-supervised methods are effective when predicting on unseen users, relative to supervised baselines. When scaling both data size and encoder depth, we observe superior performance from self-supervised methods, further showcasing their utility for well-being applications. In addition, we present future research directions and insights for applying self-supervised representation learning to well-being tasks.
AB - The application of machine learning (ML) techniques for well-being tasks has grown in popularity due to the abundance of passively-sensed data generated by devices. However, the performance of ML models is often limited by the cost associated with obtaining ground truth labels and the variability of well-being annotations. Self-supervised representations learned from large-scale unlabeled datasets have been shown to accelerate the training process, with subsequent fine-tuning to specific downstream tasks requiring only a relatively small set of annotations. In this paper, we investigate the potential and effectiveness of self-supervised pre-training for well-being tasks, specifically predicting both daily workplace stress and the most impactful stressors. Through a series of experiments, we find that self-supervised methods are effective when predicting on unseen users, relative to supervised baselines. When scaling both data size and encoder depth, we observe superior performance from self-supervised methods, further showcasing their utility for well-being applications. In addition, we present future research directions and insights for applying self-supervised representation learning to well-being tasks.
KW - self-supervised learning
KW - well-being
KW - workplace stress prediction
UR - http://www.scopus.com/inward/record.url?scp=85184827689&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85184827689&partnerID=8YFLogxK
U2 - 10.1109/ACIIW59127.2023.10388191
DO - 10.1109/ACIIW59127.2023.10388191
M3 - Conference contribution
AN - SCOPUS:85184827689
T3 - 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW 2023
BT - 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW 2023
Y2 - 10 September 2023 through 13 September 2023
ER -