Patient detection and skin segmentation are important steps in non-contact vital sign monitoring, as skin regions contain the pulsatile information required to estimate vital signs such as heart rate, respiratory rate and peripheral oxygen saturation (SpO2). Previous methods based on face detection or colour-based image segmentation are less reliable in a hospital setting. In this paper, we develop a multi-task convolutional neural network (CNN) for detecting the presence of a patient and segmenting the patient's skin regions. The multi-task model has a shared core network with two branches: a segmentation branch implemented as a fully convolutional network, and a classification branch implemented using global average pooling. The whole network was trained on images from a clinical study conducted in the neonatal intensive care unit (NICU) of the John Radcliffe Hospital, Oxford, UK. Our model produces accurate results and is robust to different skin tones, pose variations, lighting variations, and routine interaction of clinical staff.
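The two-branch layout described above can be sketched numerically: a shared feature map feeds a segmentation head (a 1x1 convolution, i.e. a per-pixel linear map) and a classification head (global average pooling followed by a linear layer). This is a minimal illustration, not the authors' architecture; the feature map, weights, and dimensions below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared feature map from the core CNN: 8x8 spatial grid, 16 channels.
features = rng.standard_normal((8, 8, 16))

# Segmentation branch: a 1x1 convolution reduces to one channel, and a sigmoid
# gives a per-pixel skin probability at the feature-map resolution.
w_seg = rng.standard_normal((16, 1))
seg_logits = features @ w_seg                        # shape (8, 8, 1)
skin_mask = 1.0 / (1.0 + np.exp(-seg_logits))        # per-pixel probabilities

# Classification branch: global average pooling collapses the spatial dimensions,
# then a linear layer and sigmoid give the patient-presence probability.
pooled = features.mean(axis=(0, 1))                  # shape (16,)
w_cls = rng.standard_normal(16)
presence = 1.0 / (1.0 + np.exp(-(pooled @ w_cls)))   # scalar in (0, 1)

print(skin_mask.shape, float(presence))
```

Because both heads read the same shared features, training the two tasks jointly lets the classification signal (is a patient present?) regularise the segmentation task, and vice versa.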

Type

Conference paper