**3. Estimating the relevance of characteristic treatment components in meta-analyses of RCTs**

RCTs and meta-analyses of RCTs are considered the highest level of evidence for the efficacy of treatments [24], and different types of RCTs are employed in psychotherapy outcome research [8].

First, psychotherapeutic treatments are compared with an untreated control group, such as a no-treatment or waiting-list (WL) condition. The inclusion of an untreated control group in an RCT minimizes most threats to internal validity (e.g. it controls for spontaneous remission and regression to the mean). Therefore, such a design may be used to show that a psychotherapeutic treatment is efficacious. With this study design, a number of meta-analyses demonstrated large effect sizes (ESs) for eye-movement desensitization and reprocessing (EMDR), cognitive treatments, exposure-based treatments, and the combination of the latter two into cognitive-behavioral treatments (CBT) (e.g. [13, 14, 23]), even though treatment effects may be overestimated in studies with small sample sizes [14]. However, with respect to the research questions highlighted in the present review, such a design does not provide an answer. First, a larger effect size in a hypothetical study comparing treatment A vs. WL, as compared to a second study comparing treatment B vs. WL, may not be interpreted as superiority of A over B if A has not been shown to be superior to B in a comparative RCT. A number of study characteristics that may differ between the two hypothetical studies, such as different patient samples, therapists, methodology, and design, might explain a larger effect in one comparison than in the other. Second, such a design also does not reveal which *characteristic* treatment components are critical for symptom improvement, because the share of the total treatment effect that is due to the *characteristic* vs. *incidental* components cannot be disentangled. Thus, such a design cannot answer the question of whether a trauma focus is necessary for successful PTSD treatment.
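Effect sizes in such WL-controlled trials are usually reported as standardized mean differences. As a minimal sketch with entirely hypothetical outcome values, the following computes Hedges' g for two separate treatment-vs-WL trials; the point is that a larger g in the first comparison cannot, by itself, establish the superiority of treatment A over treatment B, since the two trials may differ in samples, therapists, and methodology:

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Hedges' g) between a treatment
    and an untreated control group, with small-sample correction."""
    # Pooled standard deviation across both groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    # Approximate small-sample correction factor J
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    return d * j

# Hypothetical symptom-reduction scores: treatment A vs. WL in one trial...
g_a = hedges_g(12.0, 4.0, 8.0, 8.0, 30, 30)
# ...and treatment B vs. WL in a different trial (different sample, therapists)
g_b = hedges_g(10.0, 4.0, 8.0, 8.0, 25, 25)
print(round(g_a, 2), round(g_b, 2))
```

Even though g_a exceeds g_b here, nothing in this arithmetic licenses the conclusion that A outperforms B; only a head-to-head comparative RCT could.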

In order to control for the *incidental* effects and to evaluate the impact of *characteristic* treatment components, in a second type of RCT, psychotherapeutic treatments are compared with psychological placebos. Superiority of the psychotherapeutic treatment over the psychological placebo can then be specifically attributed to the *characteristic* treatment components, which are lacking in the placebo control. Thus, by manipulating the presence of a particular component, the incremental value of this component can be estimated. For example, the impact of prolonged exposure on PTSD symptoms was compared to present-centered therapy, which was designed as a placebo control: it explicitly excluded exposure to the trauma and thus did not focus on the trauma experience [25]. In this particular study, the superiority of prolonged exposure over present-centered therapy was small to moderate, and meta-analyses revealed mixed findings, with a small, moderate, or large superiority of specific, trauma-focused PTSD treatments over placebo control treatments [7, 13, 14]. While the placebo-controlled RCT in the best case allows estimating the share of the treatment effect that may be attributed to the *characteristic* vs. *incidental* components, it still provides no information on which of several rival treatment packages contains the most relevant components, that is, which treatment should be considered the treatment of choice for a particular problem or disorder.
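The logic of the placebo-controlled design can be sketched as simple arithmetic, under the strong (and debatable) assumption that *characteristic* and *incidental* effects add linearly. All effect-size values below are hypothetical:

```python
# Decomposing a total treatment effect (vs. WL) into incidental and
# characteristic shares, assuming components combine additively.
d_treatment_vs_wl = 1.10   # hypothetical ES: trauma-focused treatment vs. WL
d_placebo_vs_wl = 0.60     # hypothetical ES: psychological placebo vs. WL

d_incidental = d_placebo_vs_wl
d_characteristic = d_treatment_vs_wl - d_placebo_vs_wl

share_characteristic = d_characteristic / d_treatment_vs_wl
print(f"characteristic share: {share_characteristic:.0%}")  # ~45% under these assumed values
```

Under these assumed numbers, a little under half of the total effect would be attributed to the *characteristic* components; the remainder reflects *incidental* factors common to both arms.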

Therefore, a third type of control condition in RCTs comprises treatments with established efficacy (i.e. comparing treatment A vs. treatment B). Such comparative designs are typically used to demonstrate the superiority of a novel treatment over an established one. If a novel treatment consists of an amendment to an established treatment, a dismantling or add-on study design may be applied in order to demonstrate the incremental benefit of adding a particular treatment component to, or removing it from, a complex treatment package. Superiority of the novel treatment over an established one would then be attributed to the superior efficacy of the unique component(s) of the novel treatment. If, however, such a study demonstrated equivalent treatment effects of the two compared treatments, symptom improvements in both treatments would most likely be mediated by common or shared mechanisms [26, 27]. In PTSD outcome research, for example, prolonged exposure plus cognitive restructuring was compared with exposure alone in one RCT, in order to estimate the incremental effect of adding cognitive restructuring to the established exposure treatment [28]. This particular RCT failed to demonstrate the superiority of adding cognitive restructuring to an exposure treatment, and meta-analyses that summarized comparative RCTs of individual PTSD treatments likewise found no statistically significant differences between the effects of two types of PTSD treatments (e.g. [13, 14, 23, 29–34]). The equivalent effects of the diverse PTSD treatments have been explained by the presence of a shared mechanism in all of the successful treatments, namely that all of them focus on the trauma experience [23].
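A meta-analysis of such head-to-head trials typically pools the A-minus-B effect differences under a random-effects model. The sketch below uses the standard DerSimonian-Laird estimator with invented effect sizes and variances; a pooled estimate whose confidence interval spans zero corresponds to the "no statistically significant difference" pattern described above:

```python
import math

def dl_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and its standard error."""
    w = [1 / v for v in variances]                       # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se

# Hypothetical A-minus-B effect differences from five comparative trials
effects = [0.10, -0.05, 0.08, -0.12, 0.02]
variances = [0.04, 0.05, 0.03, 0.06, 0.04]
pooled, se = dl_pool(effects, variances)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% CI; here it spans zero
```

With these assumed inputs, the pooled difference is close to zero and its interval includes zero, i.e. no demonstrable superiority of either treatment package.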


232 A Multidimensional Approach to Post-Traumatic Stress Disorder - from Theory to Practice

Thus, regarding the first research question, whether a particular treatment package outperforms the others, meta-analyses of comparative RCTs mostly indicated rather similar effects of different treatment packages and thus no superiority of particular *characteristic* components over others. Regarding the second research question, the results of placebo-controlled PTSD RCTs may at first sight be taken to confirm the assumption that successful PTSD treatment requires the *characteristic* component of focusing on the trauma experience. However, upon closer examination, a substantial amount of unexplained between-study heterogeneity indicates the presence of moderators in several of the above-mentioned meta-analyses [7, 13, 14, 23, 30, 32, 34], which complicates or even precludes drawing valid conclusions [35].
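Between-study heterogeneity of this kind is conventionally quantified with Cochran's Q and Higgins' I² statistic, the proportion of total variability across studies attributable to true between-study differences rather than sampling error. A minimal sketch with made-up trial data:

```python
def i_squared(effects, variances):
    """Higgins' I^2: share of total variation across studies due to
    between-study heterogeneity rather than chance (floored at 0)."""
    w = [1 / v for v in variances]                       # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q)

# Hypothetical treatment-vs-control ESs from five PTSD trials
i2 = i_squared([0.2, 0.9, 0.5, 1.1, 0.3], [0.04] * 5)
print(f"I^2 = {i2:.0%}")
```

A large I², as in this invented example, is exactly the situation described above: unexplained heterogeneity pointing to unmodeled moderators, which weakens any pooled conclusion.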
