Unsupervised domain adaptation (UDA) generally learns a mapping that aligns the distributions of the source and target domains. The learned mapping can boost the performance of a model on the target data, whose labels are unavailable for model training. Previous UDA methods mainly focus on domain-invariant features (DIFs) without considering the domain-specific features (DSFs), which could serve as complementary information to constrain the model. In this work, we propose a new UDA framework for cross-modality image segmentation. The framework first disentangles each domain into DIFs and DSFs. To enhance the representation of the DIFs, self-attention modules, which allow attention-driven, long-range dependency modeling for image generation tasks, are used in the encoder. Furthermore, a zero loss is minimized to force the information of the target (source) DSFs contained in the source (target) images to be as close to zero as possible. These features are then iteratively decoded and encoded twice to maintain the consistency of the anatomical structure. To improve the quality of the generated images and the segmentation results, several discriminators are introduced for adversarial learning. Finally, using the source data and their DIFs, we train a segmentation network that is applicable to target images. We validated the proposed framework on cross-modality cardiac segmentation using two public datasets; the results show that our method delivers promising performance and compares favorably with state-of-the-art approaches in terms of segmentation accuracy. The source code of this work will be released via https://zmiclab.github.io/projects.html once this manuscript is accepted for publication.
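
To give a concrete picture of the zero loss mentioned in the abstract, below is a minimal, hypothetical PyTorch-style sketch; the module and variable names are illustrative assumptions rather than the paper's released implementation. The idea, as described above, is to penalize the magnitude of the "wrong-domain" DSFs so that a source image carries as little target-specific information as possible (and vice versa), while the DIFs remain free to encode shared anatomy.

```python
import torch
import torch.nn as nn


class ZeroLoss(nn.Module):
    """Hypothetical sketch of the zero loss: drive the domain-specific
    features extracted from the *other* domain toward zero."""

    def forward(self, dsf_wrong_domain: torch.Tensor) -> torch.Tensor:
        # Mean absolute value of the wrong-domain DSFs; minimizing this
        # pushes the features toward an all-zero representation.
        return dsf_wrong_domain.abs().mean()


# Illustrative usage (encoder and feature names are assumptions):
#   dif_s, dsf_t_from_s = encoder(source_image)   # DIFs + target-DSF branch
#   loss_zero = ZeroLoss()(dsf_t_from_s)          # added to the total objective
```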

Original publication

DOI

10.1016/j.media.2021.102078

Type

Journal article

Journal

Med Image Anal

Publication Date

07/2021

Volume

71

Keywords

Cardiac segmentation, Disentangle, Domain adaptation, Zero loss, Heart, Humans