**3.3 Model observations**

The intention of model testing was to assess (A) the ease of use of the model for its users and (B) the consistency of answers between different users.

To answer the first question, the users were asked directly about their experience. The model was rated as very easy to use (1) or easy (2), with two raters giving a neutral rating.

Interrater consistency varied with both the attribute and the insurance option. For most questions, raters showed a clear preference for one score. The highest response variability was observed for option 1 on the attributes "financial impact and funding for insurance," "primary care utilization," "ease of transition," and "acceptability to stakeholders." The responses for option 2 showed less variability.

To reduce interrater variability, two improvements were suggested: (1) provide the raters with a more detailed description of the meaning of each attribute and score, and (2) provide a more detailed description of the two insurance models. An additional deliberation step, in which the raters discuss the rationale behind their individual ratings, could allow consensus to develop in the process. The latter approach would help to improve cross-rater understanding as part of the implementation and training process and enhance the consistent use of the model over time.

#### **Table 4.**

*Scoring of option 1 and option 2 in the MCDA model for the comparison of insurance models in China as rated by six test persons (four Chinese, two international).*

Two raters proposed reducing the number of criteria for some of the options, because it may be difficult to select the exact response when the gradations between the possible answers become too small.

We propose that, when applying this model to a decision problem, interrater variability should be monitored throughout the introduction phase. The results of these evaluations will help to improve the model over time.
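One simple way such monitoring could be operationalized is to track the spread of scores per attribute across raters and flag attributes whose dispersion exceeds a threshold for review. The sketch below is a minimal illustration only; the attribute names, scores, and the choice of sample standard deviation as the dispersion measure are assumptions, not data or methods from this study.

```python
from statistics import mean, stdev

# Hypothetical ratings: six raters scoring each attribute on a 1-5 scale.
# These values are illustrative and do not reproduce the study's data.
ratings = {
    "financial impact and funding": [1, 3, 5, 2, 4, 2],
    "primary care utilization":     [2, 2, 3, 2, 2, 3],
    "ease of transition":           [4, 4, 4, 4, 5, 4],
}

def variability_report(ratings, threshold=1.0):
    """Flag attributes whose rating spread (sample std dev) exceeds a threshold."""
    report = {}
    for attribute, scores in ratings.items():
        sd = stdev(scores)
        report[attribute] = {
            "mean": round(mean(scores), 2),
            "stdev": round(sd, 2),
            "flag": sd > threshold,  # candidate for clarification or deliberation
        }
    return report

for attr, stats in variability_report(ratings).items():
    marker = "REVIEW" if stats["flag"] else "ok"
    print(f"{attr}: mean={stats['mean']} sd={stats['stdev']} [{marker}]")
```

Flagged attributes could then be targeted with the clarifications or deliberation steps described above, and the flag rate could be tracked over successive rating rounds to verify that consistency improves.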
