Project Description:
Tracking sleep stages can give valuable insight into a person's physical health, cognitive performance, and overall well-being. However, the clinical gold standard, polysomnography, is expensive, uncomfortable, and largely confined to laboratory settings. Our project develops a machine learning system for sleep stage classification using non-invasive wearable data.

We use the DREAMT dataset, which includes photoplethysmography (PPG)-derived blood volume pulse (BVP), accelerometry (ACC), electrodermal activity (EDA), and skin temperature (TEMP) signals collected via wristbands during overnight sleep studies. We designed and evaluated a decision tree, a feedforward neural network, and a recurrent neural network (RNN) to predict sleep stages from the PPG-derived signals segmented into 30-second epochs. The feedforward neural network achieved the highest performance, but work is ongoing to address class imbalance and improve rare-stage detection through oversampling. Future directions include testing more advanced architectures such as CNNs, LSTMs, and hybrid deep learning models.
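The two preprocessing steps named above, segmenting a continuous signal into 30-second epochs and oversampling rare sleep stages, can be sketched as follows. This is a minimal illustration, not our actual pipeline: the 64 Hz sampling rate, the synthetic BVP array, and the `segment_epochs`/`oversample` helper names are assumptions for the example.

```python
import numpy as np

def segment_epochs(signal, fs=64, epoch_s=30):
    """Split a 1-D signal into non-overlapping fixed-length epochs.

    Assumes fs samples per second; trailing samples that do not
    fill a whole epoch are dropped.
    """
    samples = fs * epoch_s
    n = len(signal) // samples
    return signal[: n * samples].reshape(n, samples)

def oversample(X, y, rng=None):
    """Randomly duplicate minority-class epochs until every class
    matches the size of the largest class (naive random oversampling)."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        c_idx = np.flatnonzero(y == c)
        extra = rng.choice(c_idx, size=target - len(c_idx), replace=True)
        idx.extend(c_idx)
        idx.extend(extra)
    idx = np.asarray(idx)
    return X[idx], y[idx]

# 8 hours of synthetic "BVP" at 64 Hz -> 960 epochs of 1920 samples each
bvp = np.random.randn(8 * 3600 * 64)
epochs = segment_epochs(bvp)
print(epochs.shape)  # (960, 1920)
```

Each row of `epochs` would then be labeled with its scored sleep stage and fed (raw or as extracted features) to the classifiers; oversampling is applied only to the training split to avoid leaking duplicated epochs into evaluation.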