© COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only.

Monitoring respiration during neonatal sleep is notoriously difficult due to the nonstationary nature of the signals and the presence of spurious noise. Current approaches rely on the use of adhesive sensors, which can damage the fragile skin of premature infants. Recently, non-contact methods using low-cost RGB cameras have been proposed to acquire this vital sign from (a) motion or (b) photoplethysmographic signals extracted from the video recordings. Recent developments in deep learning have yielded robust methods for subject detection in video data. In the analysis described here, we present a novel technique for combining respiratory information from high-level visual descriptors provided by a multi-task convolutional neural network. Using blind source separation, we find the combination of signals that best suppresses pulse and motion distortions and subsequently use this to extract a respiratory signal. Evaluation results were obtained from recordings of 5 neonatal patients nursed in the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital, Oxford, UK. We compared respiratory rates derived from this fused breathing signal against those measured using the gold standard provided by the attending clinical staff. We show that respiratory rate (RR) can be accurately estimated over the entire range of respiratory frequencies.
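The abstract does not name a specific blind source separation algorithm, frequency band, or frame rate. The sketch below is a minimal illustration of the general pipeline under stated assumptions: FastICA (scikit-learn) stands in for the BSS step, the respiratory band and sampling rate are hypothetical, and synthetic mixtures replace the CNN-derived descriptor signals.

```python
import numpy as np
from scipy.signal import welch, detrend
from sklearn.decomposition import FastICA

# Hypothetical parameters: a neonatal respiratory band of roughly
# 0.5-1.5 Hz (about 30-90 breaths/min) and a 20 fps camera.
FS = 20.0
RESP_BAND = (0.5, 1.5)

def band_power_ratio(x, fs, band):
    """Fraction of spectral power inside the respiratory band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 512))
    in_band = (f >= band[0]) & (f <= band[1])
    return pxx[in_band].sum() / (pxx.sum() + 1e-12)

def extract_respiratory_signal(descriptors, fs=FS, band=RESP_BAND):
    """Separate candidate signals with ICA (one possible BSS method) and
    keep the component whose power is most concentrated in the band.

    descriptors: array of shape (n_samples, n_signals), e.g. time series
    derived from the network's high-level visual descriptors (assumed).
    """
    x = detrend(np.asarray(descriptors, dtype=float), axis=0)
    ica = FastICA(n_components=x.shape[1], random_state=0)
    sources = ica.fit_transform(x)          # shape (n_samples, n_components)
    scores = [band_power_ratio(s, fs, band) for s in sources.T]
    return sources[:, int(np.argmax(scores))]

def respiratory_rate_bpm(resp, fs=FS, band=RESP_BAND):
    """Estimate respiratory rate as the dominant in-band spectral peak."""
    f, pxx = welch(resp, fs=fs, nperseg=min(len(resp), 512))
    in_band = (f >= band[0]) & (f <= band[1])
    return 60.0 * f[in_band][np.argmax(pxx[in_band])]

if __name__ == "__main__":
    # Synthetic stand-in data: three mixtures of a 0.8 Hz "breathing"
    # component, a 2.2 Hz "pulse" component, and additive noise.
    t = np.arange(0, 60, 1.0 / FS)
    resp, pulse = np.sin(2 * np.pi * 0.8 * t), np.sin(2 * np.pi * 2.2 * t)
    mix = np.column_stack([
        0.7 * resp + 0.3 * pulse,
        0.4 * resp + 0.8 * pulse,
        0.5 * resp + 0.2 * pulse,
    ]) + 0.05 * np.random.randn(len(t), 3)
    est = extract_respiratory_signal(mix)
    print(f"Estimated RR: {respiratory_rate_bpm(est):.1f} breaths/min")  # ~48
```

In this toy example the component selection step plays the role described in the abstract, choosing the source that suppresses the pulse distortion; the actual signal sources, band limits, and BSS method used in the paper may differ.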

Original publication

DOI

10.1117/12.2290139

Type

Conference paper

Publication Date

01/01/2018

Volume

10501