Data fusion for improved camera-based detection of respiration in neonates
Jorge J., Villarroel M., Chaichulee S., McCormick K., Tarassenko L.
Monitoring respiration during neonatal sleep is notoriously difficult due to the nonstationary nature of the signals and the presence of spurious noise. Current approaches rely on adhesive sensors, which can damage the fragile skin of premature infants. Recently, non-contact methods using low-cost RGB cameras have been proposed to acquire this vital sign from (a) motion or (b) photoplethysmographic signals extracted from the video recordings. Recent developments in deep learning have yielded robust methods for subject detection in video data. In the analysis described here, we present a novel technique for combining respiratory information from high-level visual descriptors provided by a multi-task convolutional neural network. Using blind source separation, we find the combination of signals which best suppresses pulse and motion distortions, and subsequently use this to extract a respiratory signal. Evaluation results were obtained from recordings of 5 neonatal patients nursed in the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital, Oxford, UK. We compared respiratory rates derived from this fused breathing signal against gold-standard measurements provided by the attending clinical staff. We show that respiratory rate (RR) can be accurately estimated over the entire range of respiratory frequencies.
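To illustrate the fusion step, the following is a minimal sketch of blind source separation applied to candidate signals, here using FastICA as one possible unmixing method. The function name, the frame rate, the respiratory band limits, and the band-power selection criterion are all assumptions for illustration, not details taken from the paper:

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.signal import welch

FS = 20.0                # assumed video frame rate in Hz
RESP_BAND = (0.3, 1.5)   # assumed neonatal respiratory band (~18-90 breaths/min)

def fuse_respiratory_signal(region_signals: np.ndarray) -> np.ndarray:
    """Sketch: unmix candidate signals (one per region/descriptor) with ICA
    and keep the component whose spectral power is most concentrated in the
    respiratory band, rejecting pulse- and motion-dominated components."""
    ica = FastICA(n_components=region_signals.shape[0], random_state=0)
    # FastICA expects (n_samples, n_features); sources come back per component
    sources = ica.fit_transform(region_signals.T).T

    best, best_ratio = None, -np.inf
    for s in sources:
        f, pxx = welch(s, fs=FS, nperseg=min(len(s), 512))
        in_band = (f >= RESP_BAND[0]) & (f <= RESP_BAND[1])
        ratio = pxx[in_band].sum() / pxx.sum()  # fraction of power in band
        if ratio > best_ratio:
            best, best_ratio = s, ratio
    return best

# Usage with synthetic data: three random mixtures of a 0.8 Hz "breathing"
# source, a 2.5 Hz "pulse" source, and broadband motion noise.
t = np.arange(0, 60, 1 / FS)
srcs = np.vstack([np.sin(2 * np.pi * 0.8 * t),
                  np.sin(2 * np.pi * 2.5 * t),
                  0.5 * np.random.randn(len(t))])
mixed = np.random.rand(3, 3) @ srcs
resp = fuse_respiratory_signal(mixed)
```

On synthetic mixtures like these, the selected component is dominated by the 0.8 Hz source, from which RR can then be read off as the spectral peak frequency times 60.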