238 A Multidimensional Approach to Post-Traumatic Stress Disorder - from Theory to Practice

by Bisson and Andrew [23]. In one of the conducted meta-analyses, out of six studies that compared CBT and EMDR, three studies reported moderate to large superiority of CBT on clinician-rated PTSD scores, while the remaining three studies reported the exact opposite effect, namely moderate to large superiority of EMDR over CBT. Overall, the meta-analysis indicated no difference between the effects of the two treatments, but a large amount of between-study heterogeneity (ES = 0.03, *p* = 0.92, *τ*<sup>2</sup> = 0.28). Without further exploration of the observed heterogeneity, no valid conclusions may be drawn from such data [35].

Several meta-analyses have aimed to explain such heterogeneity between individual study estimates in PTSD RCTs. Two meta-analyses [14, 34] included different types of PTSD treatments but found no evidence that the type of psychotherapeutic PTSD treatment explains between-study heterogeneity. Rather, Gerger et al. [14] found evidence for the presence of publication bias with respect to the trauma-focused PTSD treatments: a meta-analysis restricted to large-scale studies demonstrated considerably reduced treatment effects compared to the effects found in the overall analysis or in an analysis restricted to small-scale studies. The between-study heterogeneity, which was very large in the initial analysis (*τ*<sup>2</sup> = 0.29), was considerably reduced in the analysis restricted to large-scale trials (*τ*<sup>2</sup> = 0.08).

One possible explanation for the striking differences in the direction of effects between two treatments, as in the EMDR-CBT comparison by Bisson and Andrew [23], is the presence of researchers' preferences for one treatment over the other, the so-called researcher allegiance [50]. Accordingly, the intriguing pattern of results in the EMDR-CBT meta-analysis by Bisson and Andrew [23] could simply be explained by the fact that in one half of the studies researchers preferred CBT and in the other half researchers preferred EMDR. While, by chance, the distribution of researcher allegiance appears balanced across the six included studies in this particular case, an unbalanced preference for one particular treatment may be more problematic. In fact, a meta-analysis on trauma-focused PTSD treatments found researcher allegiance to correlate significantly with effect-size differences between the trauma-focused PTSD treatments (*r* = 0.35) and to explain between-study heterogeneity [15]. Further, Munder and colleagues presented evidence for the assumption that the association between researcher allegiance and outcome was due to bias [16] and against the assumption that true differences in the effectiveness of different types of PTSD treatments explained the association between researcher allegiance and outcome [15].

Thus, meta-analyses of comparative PTSD RCTs failed to demonstrate the superiority of particular *characteristic* components but demonstrated the relevance of researcher allegiance, a factor that is *incidental* to the treatment, in explaining differences between individual study results. In the case of PTSD outcome studies, comparative RCTs therefore run a considerable risk of providing biased estimates of the contribution of *characteristic* treatment components to the entire treatment effect. Furthermore, while at first sight meta-analyses of placebo-controlled PTSD RCTs appeared to support the claim that focusing on the trauma is necessary for successful PTSD treatment, a closer examination of potential moderators of treatment effects indicated that a trauma focus might be necessary for some but not all patient samples. A thorough implementation of the assumed psychological placebo might further enhance its effectiveness and, hence, reduce the superiority of trauma-focused treatments over placebo.

**6. Conclusion**

Our analysis of PTSD outcome research demonstrates the presence of considerable conceptual problems in PTSD RCTs, which limit the validity of the conclusions that may be drawn from these studies when trying to identify the most beneficial treatment components. In placebo-controlled RCTs, an inappropriate implementation of the placebo control led to an overestimation of the superiority of the PTSD treatments over the placebo control, and in comparative RCTs, the presence of unbalanced researcher allegiance led to biased estimates of treatment effect differences. Besides such conceptual issues, which hamper valid conclusions from PTSD RCTs, the moderating role of patient characteristics confirms the recent conclusion that 'one size does not fit all' in PTSD treatment [49]. Thus, moderators of treatment effects in PTSD RCTs may reflect genuine diversity, which contributes to the *apples-and-oranges problem* and indicates a need for differential treatments, but they may also reflect factors that contribute to bias, that is, the *garbage-in, garbage-out problem* as well as the *file-drawer problem*.
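The between-study heterogeneity that recurs throughout these analyses is quantified by *τ*<sup>2</sup>, the variance of the true effects across studies. As a hedged illustration, the following Python sketch estimates *τ*<sup>2</sup> with the common DerSimonian-Laird method; all six effect sizes and variances are invented (three favouring CBT, three favouring EMDR, mimicking the pattern described above) and are not the data from Bisson and Andrew [23].

```python
# Hedged sketch: random-effects meta-analysis with the DerSimonian-Laird
# estimator of between-study heterogeneity (tau^2). All numbers are
# invented for illustration; they are NOT data from any cited study.

def dersimonian_laird(effects, variances):
    """Return (random-effects pooled estimate, tau^2)."""
    w = [1.0 / v for v in variances]                    # fixed-effect weights
    sum_w = sum(w)
    pooled_fe = sum(wi * y for wi, y in zip(w, effects)) / sum_w
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (y - pooled_fe) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    c = sum_w - sum(wi ** 2 for wi in w) / sum_w
    tau2 = max(0.0, (q - df) / c)                       # DL estimate, floored at 0
    # re-weight by total (within + between) variance for the pooled estimate
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled_re = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    return pooled_re, tau2

# Three studies favouring CBT (positive) and three favouring EMDR (negative)
effects = [0.6, 0.7, 0.5, -0.6, -0.7, -0.5]   # hypothetical standardized mean differences
variances = [0.04] * 6                         # hypothetical sampling variances
pooled, tau2 = dersimonian_laird(effects, variances)
print(f"pooled ES = {pooled:.2f}, tau^2 = {tau2:.2f}")  # pooled ES near 0, tau^2 near 0.40
```

With opposite-direction effects of similar size, the pooled estimate lands near zero while *τ*<sup>2</sup> is large: exactly the 'no difference, much heterogeneity' pattern that makes an unexplored pooled effect uninterpretable.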

Future attempts to identify the most beneficial components of PTSD treatments should therefore consider not only the theory-driven *characteristic* components but must also further investigate how the assumed *incidental* factors may impact outcome, in order to warrant the validity of conclusions from PTSD outcome research. The underlying etiological theories may need revision if moderators indicate genuine diversity, and study methodology may need to be adapted in order to ensure the validity of psychotherapy outcome studies. Neglecting extra-therapeutic moderators may threaten the validity of RCTs and meta-analyses and may result in misleading recommendations for researchers, practitioners and policymakers, who base their treatment decisions on empirical findings. On the other hand, the possibility of exploring sources of genuine diversity between RCTs when conducting a meta-analysis (i.e. conducting moderator analyses in order to explain between-study heterogeneity) may be seen as an important step towards personalized psychotherapy [51].
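A moderator (subgroup) analysis of the kind discussed here partitions Cochran's Q into within- and between-subgroup components. The sketch below uses six invented studies and a hypothetical allegiance coding; it merely illustrates how a moderator aligned with the direction of effects can absorb nearly all observed heterogeneity, and is not a reanalysis of [15] or [23].

```python
# Hedged sketch: subgroup ("moderator") partition of Cochran's Q.
# The studies, effects and allegiance coding are all invented.

def cochran_q(effects, variances):
    """Cochran's Q: weighted squared deviations from the fixed-effect mean."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    return sum(wi * (y - pooled) ** 2 for wi, y in zip(w, effects))

# (effect size, sampling variance, hypothetical researcher allegiance)
studies = [
    (0.6, 0.04, "CBT"), (0.7, 0.04, "CBT"), (0.5, 0.04, "CBT"),
    (-0.6, 0.04, "EMDR"), (-0.7, 0.04, "EMDR"), (-0.5, 0.04, "EMDR"),
]

q_total = cochran_q([e for e, v, a in studies], [v for e, v, a in studies])
q_within = sum(
    cochran_q([e for e, v, a in studies if a == g],
              [v for e, v, a in studies if a == g])
    for g in ("CBT", "EMDR")
)
q_between = q_total - q_within   # heterogeneity accounted for by the moderator
print(f"Q_total = {q_total:.1f}, Q_within = {q_within:.1f}, Q_between = {q_between:.1f}")
# prints: Q_total = 55.0, Q_within = 1.0, Q_between = 54.0
```

Almost all of the total Q sits between the subgroups, so the hypothetical moderator 'explains' the heterogeneity; as the text stresses, such a result remains observational and hypothesis generating, because allegiance was not randomly assigned to studies.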

It is important to note, however, that moderator analyses in meta-analyses, even when they include only high-quality RCTs, should always be considered retrospective and observational in nature, because the studies were not randomly assigned to their characteristics (e.g. studies have neither been randomly assigned to being of high vs. low quality nor to having included patients with complex vs. non-complex problems). Thus, the results of moderator analyses in meta-analyses should be regarded as hypothesis generating and should, where possible, be confirmed by high-quality experimental research.
