BACKGROUND: Kaizen is a Japanese term for continuous improvement (kai ~ change; zen ~ good). In a kaizen task, a respondent makes sequential choices to improve an object's profile, revealing a preference path. Including kaizen tasks in a discrete choice experiment therefore collects greater preference evidence than pick-one tasks, such as paired comparisons.

OBJECTIVE AND METHODS: To date, three online discrete choice experiments have included kaizen tasks: the 2020 US COVID-19 vaccination (CVP) study, the 2021 UK Children's Surgery Outcome Reporting (CSOR) study, and the 2023 US EQ-5D-Y-3L valuation (Y-3L) study. In this evidence synthesis, we describe the performance of the kaizen tasks in each survey in terms of response behaviors, conditional logit and Zermelo-Bradley-Terry (ZBT) estimates, and their standard errors.

RESULTS: A comparison of the CVP and Y-3L suggests that including hold-outs (i.e., attributes shared by all alternatives) roughly halves positional behavior. The CVP tasks excluded multi-level improvements, so logit main effects could not be estimated directly. In the CSOR, only 12 of the 21 logit estimates are significantly positive (p < 0.05), possibly because of the fixed attribute order. All Y-3L estimates are significantly positive, and their predictions are highly correlated with (Pearson: logit 0.802, ZBT 0.882) and strongly agree with (Lin: logit 0.744, ZBT 0.852) the paired-comparison probabilities.

CONCLUSIONS: These discrete choice experiments offer important lessons for future studies: (1) include warm-up tasks, hold-outs, and multi-level improvements; (2) randomize the attribute order (i.e., up-down) at the respondent level; and (3) recruit smaller samples of respondents than in traditional discrete choice experiments with only pick-one tasks.
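The results compare ZBT predictions with observed paired-comparison probabilities using Pearson correlation and Lin's concordance. As an illustration only, not the authors' code, the sketch below fits Bradley-Terry strengths with Zermelo's classic fixed-point iteration and computes both agreement statistics; the function names, the `wins` matrix, and the synthetic counts are hypothetical.

```python
import numpy as np

def fit_bradley_terry_zermelo(wins, n_iter=1000, tol=1e-10):
    """Fit Bradley-Terry strengths via Zermelo's fixed-point iteration.

    wins[i, j] = number of times alternative i was chosen over j.
    Returns a strength vector p normalized to sum to 1.
    """
    n = wins.shape[0]
    p = np.ones(n) / n
    for _ in range(n_iter):
        p_new = np.empty(n)
        for i in range(n):
            w_i = wins[i].sum()  # total wins of alternative i
            denom = 0.0
            for j in range(n):
                if j == i:
                    continue
                n_ij = wins[i, j] + wins[j, i]  # comparisons of i vs j
                if n_ij > 0:
                    denom += n_ij / (p[i] + p[j])
            p_new[i] = w_i / denom if denom > 0 else p[i]
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Synthetic win counts for three alternatives (hypothetical data).
wins = np.array([[0, 12, 15],
                 [8,  0, 11],
                 [5,  9,  0]])
p = fit_bradley_terry_zermelo(wins)

pairs = [(0, 1), (0, 2), (1, 2)]
obs = np.array([wins[i, j] / (wins[i, j] + wins[j, i]) for i, j in pairs])
pred = np.array([p[i] / (p[i] + p[j]) for i, j in pairs])
print(f"Pearson r = {np.corrcoef(obs, pred)[0, 1]:.3f}, "
      f"Lin CCC = {lin_ccc(obs, pred):.3f}")
```

Pearson measures linear association only, whereas Lin's coefficient also penalizes departures from the 45-degree identity line, which is why the abstract reports both: predictions can correlate highly yet still be systematically biased.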

Original publication

DOI: 10.1007/s40271-024-00708-4
Type: Journal article
Journal: Patient
Publication Date: 20/07/2024