Handbook of Consultation-Liaison Psychiatry

4. Evaluating the Evidence Base of Consultation-Liaison Psychiatry

Tom Sensky


4.1 The Evidence Base

4.2 Development of Pharmacologic Interventions

4.3 Development of Complex Interventions

4.4 Problems in Defining Appropriate and Robust Outcome Measures

4.5 Comparison Groups in Randomized Controlled Trials

4.6 Problems with Concealment of Randomization

4.7 Patient Preference Trials

4.8 Intervention Integrity

4.9 Dealing with Missing Data

4.10 The Role of Randomized Controlled Trials in Consultation-Liaison Psychiatry

This chapter draws on the evidence base of consultation-liaison (CL) psychiatry to ask two questions: Is research in CL psychiatry more challenging or more complex than research in general psychiatry? If so, what factors make it more challenging? The focus here is exclusively on research involving adults, although there is a growing evidence base involving children and adolescents. In addition, the chapter addresses mainly intervention studies.

4.1 The Evidence Base

An approximate overall profile of the evidence base can be gained by examining the design of studies published in the specialist CL psychiatry and psychosomatic medicine journals compared with those in the leading general psychiatry journals (Fig. 4.1). This assessment reveals that papers published in specialist CL psychiatry and psychosomatic medicine journals in the past decade focus predominantly on diagnosis and etiology. The overall focus of published papers is very similar in the specialist journals to that in the general psychiatry ones. If the focus of published work can be assumed to reflect the state of knowledge, this suggests that the knowledge base of CL psychiatry does not lag substantially behind that in general psychiatry. Note, however, that these results must be interpreted cautiously, because they are derived from Medline data aggregated by journal; not all papers in the specialist CL journals are in fact about CL psychiatry, and some papers published in the general psychiatry journals are about CL psychiatry.

Fig. 4.1 Focus of papers in general vs. specialist psychiatry journals. Medline classifications of research design for papers published in five CL psychiatry and psychosomatic medicine journals (Psychotherapy and Psychosomatics, Journal of Psychosomatic Research, General Hospital Psychiatry, Psychosomatic Medicine, and Psychosomatics) compared with papers published in Archives of General Psychiatry, American Journal of Psychiatry, and British Journal of Psychiatry. Note that the categories add up to greater than 100% because some papers had more than one focus.

A more focused and rigorous analysis was carried out by Ruddy and House (2005). They examined systematic reviews on the psychological effects of physical illness or treatment, somatoform disorders, and self-harming behavior, searching in particular for meta-analyses, which could be taken as an indication of the availability of good-quality primary research papers. In fact, only 14 of 64 systematic reviews identified included meta-analyses, and the authors concluded that many areas of CL psychiatry lacked robust research evidence. The meta-analyses identified focused particularly on interventions (psychological as well as biological). The study failed to find any meta-analyses concerning assessment or service development.

The conclusions of the review above are supported by an examination of published randomized controlled trials in each of the five specialist CL psychiatry journals (Fig. 4.2). For this, a basic appraisal (Guyatt et al., 1993) was conducted of 40 randomized controlled trials, comprising the eight most recently published randomized controlled trials in each of the five specialist CL psychiatry journals up to 2005. The results indicate that these published studies tended to lack information regarding predicted sample size, randomization, and treatment evaluation. In addition, only a minority employed intention-to-treat analysis, creating potential problems in applying their findings to routine clinical practice (see below).

4.2 Development of Pharmacologic Interventions

In the development of therapeutic medications, research usually begins with basic science and then continues with animal work. The next stage is usually to evaluate the efficacy of the new medication, aiming to answer the question whether it provides benefit under strictly controlled conditions. If efficacy can be demonstrated, the next step is to evaluate the effectiveness: Does the new medication provide benefit under conditions as close as possible to routine clinical practice? Once sufficient evidence has been accumulated, a product license is sought. The conditions of use of the new medication are explicitly defined by the product license. Departing from the conditions of the product license (for example, using the new medication for an indication for which it has not been licensed, or in a different patient group) is usually possible only under strictly defined circumstances.

These principles apply to medication trials in CL psychiatry just as in the rest of medicine. In this context, it is important to note that standard methods of critical appraisal of interventions have been developed principally to evaluate pharmacologic interventions. Many interventions in CL psychiatry are substantially more complex in their design than randomized controlled trials of medications and, as will be argued, possibly require more complex appraisal.

4.3 Development of Complex Interventions

Complex interventions, such as psychotherapeutic ones, often derive from clinical experience, and tend to emerge first in the published literature as clinical case studies and open case series (Sensky, 2005). Research usually then progresses to effectiveness studies, and only when effectiveness has been demonstrated do researchers focus on examining how and why the intervention works. There is no equivalent for complex interventions of the product license, and no obligation on clinicians or researchers to attempt to use the intervention in a strictly controlled way.

Most nonpharmacologic interventions in CL psychiatry probably qualify as complex interventions. The development of psychotherapeutic interventions in CL psychiatry, as in other settings, often follows the sequence just described, culminating in randomized controlled trials. All psychotherapeutic interventions can be conceptualized in terms of two key elements: professional service and personal attachment (Guthrie and Sensky, 2007). In evaluating published work, both elements need to be operationalized and measured. This is a considerable challenge, as the rest of this chapter illustrates.

Fig. 4.2 Quality of papers reporting interventions in CL psychiatry and psychosomatic medicine. Note that of the 40 studies evaluated, 17 were of pharmacologic interventions, for which use of treatment manuals and evaluation of the quality of the interventions was not applicable.

Some psychotherapists have criticized the emphasis placed on evidence from randomized controlled trials. For example, it has been argued that in the clinic, psychotherapeutic interventions are seldom of fixed duration, as they are in research studies, and that psychotherapy is self-correcting rather than being tied to manualized techniques (Seligman, 1996). Others have argued that randomized controlled trials tend to exclude those patients most likely to be seen in everyday clinical practice (Persons and Silberschatz, 1998). However, some of these arguments are based on a failure to distinguish between trials assessing efficacy and those assessing effectiveness (Sensky, 2005). Nevertheless, there are difficulties particular to complex interventions that make their evaluation using randomized controlled trials more complex than that of most psychopharmacologic interventions. Some of these difficulties are considered below.

More generally, randomized controlled trials have been criticized for depriving patients and clinicians of the flexibility to choose the most appropriate intervention for the individual patient's circumstances (Taylor et al., 1984). A basic principle underlying use of randomized controlled trials is that of equipoise, the assumption that at the time of randomization, it is genuinely impossible to favor one treatment arm over the other (Weijer et al., 2000). Clearly, this does not apply in many instances. Nevertheless, a recent meta-analysis comparing the outcomes of participants in randomized controlled trials with those of eligible nonparticipants found that participating in a randomized controlled trial did not lead to worse outcomes than being given the intervention of choice (Gross et al., 2006). Another creative proposal to manage this potential problem is to use equipoise-stratified randomization (Lavori et al., 2001).

4.4 Problems in Defining Appropriate and Robust Outcome Measures

In CL psychiatry as elsewhere, complex interventions are likely to have complex effects. Thus, for example, a cognitive therapy intervention for people recruited only because they had recent-onset rheumatoid arthritis initially led to a greater reduction in depressive symptoms than in the waiting list control group, but after longer follow-up, the intervention led to a significant reduction in pain and disability (Sharpe et al., 2003b). Similarly, there is substantial evidence that cognitive therapy improves symptoms of depression and anxiety in people with physical illness, as it does in those who are physically healthy. However, such interventions can have a wide range of additional benefits, including improvement in functioning, physical symptoms, and even biological markers of disease (Sensky, 2004).

One way to attempt to identify the effects of complex interventions is to include multiple outcome measures in randomized controlled trials. These are more likely to capture the complexity of changes attributable to the intervention. However, this makes matching study groups at baseline much more difficult, particularly because, with complex interventions, funding and other resource constraints tend to limit recruitment to relatively modest sample sizes. Also, using multiple outcome measures goes against one of the key requirements of effectiveness studies, namely to have one or a small number of simple and clinically relevant outcome measures (Hotopf et al., 1999).

4.5 Comparison Groups in Randomized Controlled Trials

In pharmacologic trials, the new medication can be compared with a placebo. With complex interventions, it is extremely difficult to design an effective placebo. This has important consequences particularly for concealment of randomization and for patient expectations.

In essence, studies take one of three approaches, each of which has advantages and disadvantages. Comparing the experimental intervention with another model-based intervention might help to control for patient expectations, but it gives no information about how the new intervention compares with routine clinical care. Another option, suitable for psychological interventions, is to design the control intervention to have the same amount of therapy time (or, more particularly, therapist contact) as the experimental intervention. However, such interventions are very difficult to design, particularly given that, in getting proper informed consent to participate in such studies, patients are clearly aware of having been randomized to the control group. There is a considerable risk of increased patient attrition from the control group. This is probably more of a problem with this type of study design than with the third type of control group: routine clinical care. Again, with this design, patients know which group they are in, and thus it is not possible to control for patient expectations. In addition, comparisons with routine clinical care cannot alone yield results supporting the specific, model-based, effects of the intervention; if the experimental intervention is significantly better than the control, this could be attributable (in part or entirely) to nonspecific factors, such as therapist contact or patient expectations.

4.6 Problems with Concealment of Randomization

As already noted, it is very difficult to design studies of complex interventions, such as most of those in CL psychiatry, where the patient is unaware of the group to which he or she has been allocated. Theoretically, at least, this may be a fatal flaw in the design of some studies. In a study of 320 trials derived from meta-analyses from the Cochrane database, Schulz and colleagues (1995) found evidence that reported outcomes were significantly more favorable in those studies where concealment of randomization was inadequate, compared with those where randomization was robust. On the other hand, in a more recent meta-analysis of interventions in cardiology, no significant differences were found in outcomes between observational studies and randomized controlled trials (Benson and Hartz, 2000). This indicates that the need for concealment of randomization is perhaps not quite as crucial as some have argued, although this particular study generated a critical editorial that suggested that selection bias might have operated in the papers included in the analysis (Pocock and Elbourne, 2000).

Patient expectations are generally considered important and are likely to be very different in the experimental and control arms of randomized studies where concealment is impossible. In a well-known study of patient expectations (Tarrier and Main, 1986), a randomized controlled trial was conducted of applied relaxation for generalized anxiety. Half the patients in each treatment group were told that they should expect continuing benefit if they practiced relaxation regularly; the other half were told that benefits would be slow to become apparent. At outcome, the second group was significantly more anxious than the first. Expectations apply to those conducting the interventions as well as to patients. The relationship between patient and therapist has long been recognized as very influential in determining intervention outcomes (Luborsky et al., 1985), and this is very likely related to patient and therapist expectations, as well as a multitude of other factors.

Even if, in complex interventions in CL psychiatry, it is impossible to conceal group allocation from either patient or therapist, there is usually the possibility of keeping the treatment group concealed from the clinicians involved in the patient's care, and from those assessing the patients. This is usually tested by comparing the actual group to which patients have been allocated with the group allocation "guessed" by the clinician or researcher. Although such details are commonly reported in published studies, they are probably meaningless. In a study of a psychological intervention in rheumatoid arthritis, the assessor (from whom group allocation had been concealed) estimated the groups to which individual patients had been allocated no better than by chance (Sharpe et al., 2003a). However, those patients whom the assessor "guessed" to be in the active intervention group were significantly more likely than those allocated to the control group to show improvements in the particular outcomes investigated in the study. This is hardly surprising; even if assessors are unaware of the group allocation, they are almost always familiar with the study hypotheses, and are therefore likely to infer that those patients who perform better on the study outcomes must have been in the experimental group.

4.7 Patient Preference Trials

One way to overcome some of these problems is to design intervention studies that incorporate patient preferences. In essence, patients expressing a preference for one or another of the treatment arms are allowed to have that intervention, and only patients who expressed no preference are randomized. A major limitation in applying patient preference trials in CL psychiatry is the fact that substantially larger numbers of patients are required than in conventional randomized controlled trials. Although one would expect that patients assigned to an intervention of their choice would have better outcomes than those assigned by randomization, a recent elegant meta-analysis by King and colleagues (2005) reported contrary results. Not only were there no significant differences in effect sizes of the outcome measures in the two types of study design, but patient attrition was also very similar when comparing randomized and patient-preference arms of studies.
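The allocation logic just described can be sketched schematically. The arm labels, cohort, and 50:50 randomization below are hypothetical, for illustration only:

```python
import random

def allocate(preference, rng):
    """Arm a patient enters: stated preference if any, else randomized 50:50."""
    if preference is not None:
        return preference             # preference arm: not randomized
    return rng.choice(["A", "B"])     # no preference: randomized

rng = random.Random(0)                # fixed seed for reproducibility
cohort = ["A", None, "B", None, "A", None]   # None = no stated preference
arms = [allocate(p, rng) for p in cohort]

# Only the no-preference patients contribute to the randomized comparison,
# which is why patient preference trials need substantially larger samples.
randomized = sum(p is None for p in cohort)
```

Because only a subset of recruits enters the randomized comparison, that subset alone must reach the sample size a conventional trial would require, which is the source of the larger overall recruitment requirement noted above.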

4.8 Intervention Integrity

In randomized controlled trials of medications, it is relatively easy to standardize the interventions. Treatment integrity in more complex interventions is frequently modeled on that applicable to medication trials, but this is unlikely to be appropriate. For example, there is no equivalent in medication trials to therapist skill in more complex interventions. In addition, skill is not the same as competence. Complex interventions should be operationalized (that is, broken down into discrete, definable, and measurable steps) and the actual interventions audited against these. However, this is hardly ever done. Where possible, interventions should be based on a treatment manual. While this might contribute to demonstrating that all patients treated have received similar (and, one hopes, equivalent) interventions, it does not necessarily mean that the interventions have been optimal in every case. It is likely that experienced and skilled therapists depart from the strict limits imposed by a treatment manual. One possible solution, which has hardly ever been applied, is to allocate patients to specific therapists stratified by clinical experience.

In model-based psychotherapies, changes in the outcome measures are expected to occur in parallel with, or following, changes in mediating variables predicted by the model. For example, cognitive therapy is expected to lead to changes in cognitions. However, while this effect is helpful when it can be demonstrated, the absence of such a link between outcomes and predicted mechanisms does not imply that treatment integrity is questionable. In an early study comparing cognitive therapy and antidepressants in the treatment of depression, changes in cognitions in the antidepressant group paralleled those in the group treated with cognitive therapy (Simons et al., 1984).

In clinical practice, one of the hallmarks of CL psychiatry is the complex context in which it is practiced. It is not only the relationship between patient and psychiatrist that is important, but also the relationships that each has with the other clinicians treating the patient. All these relationships are also important in research, but are difficult if not impossible either to control adequately (except in some tightly controlled efficacy studies) or to evaluate in a way that can be incorporated into study results.

4.9 Dealing with Missing Data

Results of efficacy studies can legitimately be based only on samples of patients who completed the intervention. However, because the results of effectiveness studies are intended to reflect what happens in clinical practice, they must take account of all the patients recruited into the study. Intention-to-treat analysis is sometimes taken to mean simply that patient results are analyzed in the group to which each patient was initially assigned. As this almost always happens in complex interventions, this becomes a trivial requirement. The other requirement of intention-to-treat analysis is that results include data from all patients, even those who left the study prematurely. In other words, missing data must somehow be modeled. It is not uncommon for studies claiming to have used intention-to-treat analysis to have neglected this requirement.

In internal medicine, particularly in trials of medication, a common method of modeling missing data is to use the last observation carried forward (LOCF). For example, if a particular patient has outcome data for 12 weeks after the start of a study, but data are missing for the next time point at 16 weeks, the missing data are simply replaced by data from the last available time point (in this case, 12 weeks). While this might sometimes be appropriate in pharmacologic interventions, it is very seldom appropriate for complex interventions, where it may be grossly inaccurate and can distort the results. For example, consider an intervention aiming to reduce chronic pain being tested against a control intervention in which the patient meets with someone to talk about the pain (to control for nonspecific therapist contact). As noted above, proper informed consent means that patients will always be aware of whether they are in the intervention group or the control group. It would not be surprising if patients in the control group were more likely than those in the intervention group to withdraw from the study prematurely. In reality, pain is unlikely to remain constant; within any particular sample, it will probably diminish over time, and such regression to the mean contributes to changes in both groups. If sample attrition is greater in the control group than in the intervention group, however, using LOCF exaggerates the difference between the two groups, making the effects of the intervention appear more substantial than they are. The rate and pattern of change also vary among patients, and this must be taken into account when modeling missing data. Sophisticated data modeling techniques are available to cope with this, but appear to be relatively seldom used. Streiner (2002) has demonstrated the differing results of several methods of dealing with missing data.
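The distortion described above can be made concrete with a small sketch (all scores below are invented for illustration): pain declines in both arms through regression to the mean, but LOCF freezes control-group dropouts at their last, higher scores.

```python
def locf_impute(series):
    """Replace missing values (None) with the last observed value."""
    filled, last = [], None
    for value in series:
        if value is not None:
            last = value
        filled.append(last)
    return filled

# Hypothetical pain scores at weeks 0, 12, and 16; scores fall in BOTH arms,
# but two control patients drop out after week 12 (None = missing).
intervention = [[60, 40, 30], [55, 38, 28]]
control = [[60, 45, None],    # had the dropouts stayed, their week-16
           [58, 44, None],    # scores would also have fallen
           [57, 43, 33]]

def mean_final(group):
    """Mean week-16 score after LOCF imputation."""
    finals = [locf_impute(patient)[-1] for patient in group]
    return sum(finals) / len(finals)

# LOCF carries the dropouts' week-12 scores forward, keeping the control
# mean artificially high and inflating the apparent treatment effect.
print(mean_final(intervention))   # 29.0
print(mean_final(control))        # ~40.7
```

Under this pattern of differential attrition, any imputation that ignores the downward trend in the dropouts' trajectories will overstate the between-group difference, which is why trend-aware modeling methods are preferred.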

4.10 The Role of Randomized Controlled Trials in Consultation-Liaison Psychiatry

The methodology of randomized controlled trials has been particularly well developed to manage pharmacologic studies. This chapter has highlighted a number of ways in which the complex interventions characteristic of CL psychiatry create difficulties in generating high-quality evidence from randomized controlled trials. It is worth noting that many of these difficulties are by no means specific to CL psychiatry; most are equally applicable to developing the evidence base in surgery (McCulloch et al., 2002). That conducting and interpreting randomized controlled trials in CL psychiatry is challenging is not a reason to abandon their pursuit. On the contrary, randomized controlled trials will continue to make an important contribution to developing the evidence base in CL psychiatry, and their results are still regarded as the main currency in justifying new service developments. However, as Salkovskis (2002) has argued, it is also important not to focus on evidence from randomized controlled trials to the exclusion of other types of evidence. As noted above, when a medication is granted a product license, its indications and usage become strictly circumscribed; this is not the case for complex interventions. Although randomized controlled trials probably mark important milestones in the development of particular complex interventions, there is little expectation that such interventions will remain static. Indeed, by the time an intervention is "mature" enough to justify one or more randomized controlled trials, it is very likely to have developed further by the time those trials are published.


Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med 2000;342:1878-1886.

Gross CP, Krumholz HM, Van Wye G, Emanuel EJ, Wendler D. Does random treatment assignment cause harm to research participants? PLoS Med 2006;3:e188.

Guthrie E, Sensky T. Psychological interventions in patients with physical symptoms. In: Guthrie E, Lloyd G, eds. Textbook of Liaison Psychiatry. Cambridge: Cambridge University Press, 2007.

Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature. II. How to use an article about therapy or prevention. A. Are the results of the study valid? JAMA 1993;270:2598-2601.

Hotopf M, Churchill R, Lewis G. Pragmatic randomised controlled trials in psychiatry. Br J Psychiatry 1999;175:217-223.

King M, Nazareth I, Lampe F, et al. Impact of participant and physician intervention preferences on randomized trials: a systematic review. JAMA 2005;293:1089-1099.

Lavori PW, Rush AJ, Wisniewski SR, et al. Strengthening clinical effectiveness trials: equipoise-stratified randomization. Biol Psychiatry 2001;50:792-801.

Luborsky L, McLellan AT, Woody GE, O'Brien CP, Auerbach A. Therapist success and its determinants. Arch Gen Psychiatry 1985;42:602-611.

McCulloch P, Taylor I, Sasako M, Lovett B, Griffin D. Randomised trials in surgery: problems and possible solutions. Br Med J 2002;324:1448-1451.

Persons JB, Silberschatz G. Are results of randomized controlled trials useful to psychotherapists? J Consult Clin Psychol 1998;66:126-135.

Pocock SJ, Elbourne DR. Randomized trials or observational tribulations? N Engl J Med 2000;342:1907-1909.

Ruddy R, House A. Meta-review of high-quality systematic reviews of interventions in key areas of liaison psychiatry. Br J Psychiatry 2005;187:109-120.

Salkovskis PM. Empirically grounded clinical interventions: cognitive-behavioural therapy progresses through a multi-dimensional approach to clinical science. Behav Cogn Psychother 2002;30:3-9.

Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408-412.

Seligman ME. Science as an ally of practice. Am Psychol 1996;51:1072-1079.

Sensky T. Cognitive therapy with medical patients. In: Wright J, ed. American Psychiatric Association Press Review of Psychiatry, vol 23(3). Washington, DC: American Psychiatric Publishing, 2004:83-121.

Sensky T. The effectiveness of cognitive therapy for schizophrenia: what can we learn from the meta-analyses? Psychother Psychosom 2005;74:131-135.

Sharpe L, Ryan B, Allard S, Sensky T. Testing for the integrity of blinding in clinical trials: how valid are forced choice paradigms? Psychother Psychosom 2003a;72:128-131.

Sharpe L, Sensky T, Timberlake N, Ryan B, Allard S. Long-term efficacy of a cognitive behavioural treatment from a randomized controlled trial for patients recently diagnosed with rheumatoid arthritis. Rheumatology 2003b;42:435-441.

Simons AD, Garfield SL, Murphy GE. The process of change in cognitive therapy and pharmacotherapy for depression. Arch Gen Psychiatry 1984;41:45-51.

Streiner DL. The case of the missing data: methods of dealing with dropouts and other research vagaries. Can J Psychiatry 2002;47:68-75.

Tarrier N, Main CJ. Applied relaxation training for generalised anxiety and panic attacks: the efficacy of a learnt coping strategy on subjective reports. Br J Psychiatry 1986;149:330-336.

Taylor KM, Margolese RG, Soskolne CL. Physicians' reasons for not entering eligible patients in a randomized clinical trial of surgery for breast cancer. N Engl J Med 1984;310:1363-1367.

Weijer C, Shapiro SH, Glass KC. For and against: clinical equipoise and not the uncertainty principle is the moral underpinning of the randomised controlled trial. Br Med J 2000;321:756-758.
