Practical Transfusion Medicine 4th Ed.

46. Getting the most out of the evidence for transfusion medicine

Simon J. Stanworth1, Susan J. Brunskill2, Carolyn Doree2, Sally Hopewell3 & Donald M. Arnold4

1University of Oxford and NHS Blood and Transplant and Department of Haematology, John Radcliffe Hospital, Oxford, UK

2NHS Blood and Transplant, Systematic Review Initiative, Oxford, UK

3UK Cochrane Centre, Oxford, UK

4Department of Medicine, McMaster University and Canadian Blood Services, Hamilton, Ontario, Canada

What is meant by evidence-based medicine?

Evidence-based medicine (EBM) has been described by Sackett as ‘the integration of best research evidence with clinical expertise and patient values’ [1]. Proponents of EBM have particularly highlighted the nature of the evidence that is used to make clinical decisions, i.e. where is it from, how believable is it, how relevant is it to my patient and can it be supported by other data? However, evidence is only one of the factors driving clinical decision making, and clinicians will also need to consider the available resources and opportunities, individual patients' values and needs (physical, psychological and social), local clinical expertise and cost. In some situations, clinical judgement will determine that the available evidence for a specific problem is not applicable.

EBM is not just about obtaining and evaluating clinical research evidence; it is also a means by which effective strategies for self-learning can be applied, aimed at continuously improving clinical performance. The focus of this chapter will be to discuss core elements of EBM with particular reference to clinical research in transfusion medicine and to provide a practical approach to critical appraisal and study design.

Hierarchies of clinical evidence

Health research studies are ultimately designed to provide evidence of causality. While causality is extremely difficult (or impossible) to prove, successive levels in the hierarchy of evidence provide increasing support for such an association. Optimal evidence is simply the best evidence available to answer a given question. Data derived from randomized controlled trials (RCTs) have generally been regarded as the strongest evidence of efficacy or effectiveness.

In 1948, the first modern RCT in medicine was published comparing streptomycin and bedrest for patients with pulmonary tuberculosis [2]. The authors chose to perform a controlled trial because ‘the natural course of pulmonary tuberculosis is in fact so variable and unpredictable that evidence of improvement or cure following the use of a new drug in a few cases cannot be accepted as proof of the effect of that drug’. In that trial, assignment of patients to streptomycin or bedrest was done by ‘reference to a statistical series based on random sampling numbers drawn up for each sex at each centre’. There were fewer deaths in the patients assigned to streptomycin (4 out of 55 patients) compared to bedrest alone (14 out of 52 patients) [2]. If the process of randomization is done correctly, differences in outcome(s) between groups should be attributable to the intervention and not to other confounding factors related to patient demographics, study setting or quality of care.
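
For illustration, the treatment effect reported in this trial can be expressed as a relative risk (an effect measure discussed further later in this chapter), using only the published event counts:

\[
\mathrm{RR} = \frac{4/55}{14/52} = \frac{0.073}{0.269} \approx 0.27,
\]

i.e. the observed risk of death with streptomycin was roughly one-quarter of that with bedrest alone.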

The most common (and simple) design for an RCT is a parallel design, in which participants are randomly allocated to one of two groups. However, the RCT design comes with inherent challenges:

·        RCTs are costly, and logistical problems can arise when studies are conducted at multiple centres (as is necessary for large trials).

·        Small RCTs may overestimate the effect of the intervention and may place too much emphasis on those outcomes with more striking results.

·        Small RCTs may be designed to detect unreasonably large treatment effects (which they will never be able to show because of their small size).

·        RCTs with nonsignificant results may never be fully reported or only found in abstract form – a phenomenon known as publication bias.

·        Effects of interventions may be overgeneralized and inappropriately applied to different patient populations.

·        RCTs are not suited to investigating rare (low-frequency) adverse effects, prevalence rates or diagnostic criteria.

In contrast to RCTs, observational studies, such as cohort or case–control studies, whether prospective or retrospective, may demonstrate an association between intervention and outcome; however, it is often difficult to be sure that this association does not reflect the effects of unknown confounding factors. The influence of confounding factors and biased participant selection can dramatically distort the accuracy of the study findings in observational studies. This does not mean that findings from well-designed observational studies should be disregarded; such study designs can be very effective in establishing or confirming effects of large size. Interpretation is more difficult when the observed effects are small. Clinical questions addressing possible aetiology or monitoring adverse effects may be more suited to observational studies.

The above points have highlighted some of the limitations with both RCTs and observational studies. In order to identify any limitations in a study and understand the possible impact of such limitations, it is important for readers, and investigators gearing up to design their own studies, to know how to appraise the methodological quality of the research. Critical appraisal and evaluation will be discussed next.

Appraisal of primary research evidence for its validity and usefulness

One component of EBM is the critical appraisal of evidence generated from a study. Published RCTs should report sufficient detail pertaining to the study design, population, condition, intervention and outcome to allow the reader to make an independent assessment of the trial's strengths and weaknesses. Guidelines and checklists have been designed to help with the reporting and assessment of RCTs, such as those based on the CONSORT statement [3,4]. As shown in Table 46.1, key components of the critical appraisal process for clinical trials relate to the methodology of the study (the participants, interventions and comparators, the outcomes, the sample size, the methods used for randomization, and whether research staff were blinded to treatment allocation) and the reporting of the results (the numbers randomized and the numbers analysed/evaluated, the numbers not available for analysis with reasons, and the role of chance, i.e. confidence intervals). Inadequate methodology, and poor reporting of the study methods and findings, do not give readers the reassurance that patient selection, study group assignment and outcome detection were free from bias, and inaccurate inferences may therefore be drawn from the data. Critical appraisal guidelines are also useful for authors of primary research because they define the information that should be included in published reports.

Table 46.1 Key components of the critical appraisal process for clinical trials. Reproduced from The Critical Appraisal Skills Programme worksheets, Milton Keynes Primary Care Trust, 2002.

Did the study ask a focused question?

Was the allocation of participants to the study arms appropriate?

Were the study staff and participants unaware of (blinded to) the treatment allocation?

Were all the participants who entered the study accounted for within the results?

Were all the participants followed up and data collected in the same way?

Was the study sample size big enough to minimize any play of chance that may occur?

How are the results presented and what is the main result?

How precise are the results?

Were all the important outcomes for this patient population considered?

Can the results be applied to practice/different populations?

One aspect of trial appraisal concerns the understanding of chance variation and sample size calculation. One needs to distinguish between ‘no evidence of effect’ and ‘evidence of no effect’: the former may simply reflect an underpowered study or a nonsignificant result, whereas the latter implies a sample size sufficient to demonstrate superiority, equivalence or noninferiority. Information about the sample size calculation should therefore be provided in the published report of a clinical trial.
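
To make this concrete, a commonly used approximation for the sample size needed to compare two proportions p1 and p2 (two-sided significance level α, power 1 − β) is sketched below; the proportions in the worked example are hypothetical and are not taken from any trial discussed in this chapter:

\[
n \approx \frac{(z_{1-\alpha/2} + z_{1-\beta})^2\,\bigl[p_1(1-p_1) + p_2(1-p_2)\bigr]}{(p_1 - p_2)^2} \quad \text{per arm.}
\]

For example, to detect a fall in event rate from p1 = 0.20 to p2 = 0.10 with α = 0.05 (z = 1.96) and 80% power (z = 0.84):

\[
n \approx \frac{(1.96 + 0.84)^2\,(0.16 + 0.09)}{(0.10)^2} = \frac{7.84 \times 0.25}{0.01} \approx 196 \ \text{patients per arm},
\]

which illustrates why trials powered to detect modest, clinically realistic treatment effects need to be large.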

Comparable standards can be applied to the critique of observational studies using a framework called STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) [5,6].

Reviews: narrative and systematic

Reviews have long been used to provide summary statements of the evidence for clinical practice. Reviews can be narrative or systematic. Often written by experts in the field, narrative reviews provide an overview of the relevant findings and can be educational and informative. However, narrative reviews summarize the evidence based on what the authors feel is important. Systematic reviews, on the other hand, gather the totality of the evidence on a subject and summarize it in an objective way, using prespecified methods for study identification, selection, quality assessment and analysis that limit bias.

Systematic reviews aim to be more explicit and less biased in their approach to reviewing a subject than traditional (narrative) literature reviews and they can provide a synthesis of results of primary studies, making them more accessible to clinicians and policy makers. Systematic reviews also form the background for clinical trial design by establishing what is currently known, what methods were used to achieve that knowledge and what gaps remain. Systematic reviews are not substitutes for adequately powered clinical trials, but should be considered as complementary methods of clinical research.

There are generally accepted ‘rules’ about how to undertake a systematic review, which include:

·        developing a focused review question;

·        comprehensively searching for all material relevant to this question (Table 46.2 provides some practical suggestions when developing a more comprehensive search strategy);

·        using explicit criteria to assess eligibility and methodological quality of identified studies;

·        reporting and explaining why studies were excluded; and

·        using explicit methods for combining data from primary studies including, where appropriate, meta-analysis of the study data.

Table 46.2 List of selected sources that can be searched to identify reports of trials and clinical evidence.


Meta-analysis, strictly speaking, refers to the mathematical pooling of data from primary studies. Within a systematic review, pooling is acceptable only when the primary studies are sufficiently homogeneous in design and quality for a combined estimate of the difference in treatment effect between the two groups to be meaningful.
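
As a minimal sketch of what such pooling involves, the code below performs a fixed-effect, inverse-variance meta-analysis of log relative risks; the three trials and their 2 × 2 tables are entirely hypothetical, and this is an illustration of the general technique rather than the method of any particular review cited here.

import math

# Hypothetical trials: (events, total) in the treatment arm, then (events, total) in the control arm
trials = [(12, 150, 24, 148), (8, 90, 15, 92), (20, 210, 28, 205)]

weights = []            # inverse-variance weight for each trial
weighted_log_rrs = []   # weight multiplied by log(relative risk) for each trial
for a, n1, c, n2 in trials:
    rr = (a / n1) / (c / n2)                  # relative risk in this trial
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)   # standard error of log(RR)
    w = 1 / se ** 2
    weights.append(w)
    weighted_log_rrs.append(w * math.log(rr))

pooled_log_rr = sum(weighted_log_rrs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Pooled estimate and 95% confidence interval, as would appear at the foot of a forest plot
print(f"Pooled RR = {math.exp(pooled_log_rr):.2f} "
      f"(95% CI {math.exp(pooled_log_rr - 1.96 * pooled_se):.2f} "
      f"to {math.exp(pooled_log_rr + 1.96 * pooled_se):.2f})")

A random-effects model would add an estimate of between-study variance to each trial's variance before weighting, but the underlying principle of giving more weight to the more informative studies is the same.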

Results from each study within a systematic review are typically presented in a graphical display called a ‘forest plot’. A hypothetical example is shown in Figure 46.1. The point estimate of the outcome in each trial is represented by a square, together with a horizontal line corresponding to the 95% confidence interval (CI). For binary or dichotomous data, effect measures are typically summarized as either a relative risk or an odds ratio (for definitions, see Figure 46.1 and the sketch after the list below). The 95% CI is a very useful measure of precision, in that, were the study repeated again and again, intervals constructed in this way would contain the true size of the treatment effect 95% of the time. The solid vertical line corresponds to no effect of treatment (a relative risk of 1.0 for the analysis of dichotomous data, see Figure 46.1). Forest plots are therefore a visual representation of the size of treatment effects across different trials and allow the reader to assess:

·        the effect of treatment by examining whether the bounds of the confidence interval exceed or overlap the minimal clinically important benefit;

·        the consistency of the direction of the treatment effects across multiple studies; and

·        outlying results from some studies relative to others.
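
For reference, the conventional definitions of these effect measures, together with the usual large-sample 95% CI for the relative risk, are sketched below for a 2 × 2 table in which a and b are the numbers with and without the event in the treatment group, and c and d the corresponding numbers in the control group:

\[
\mathrm{RR} = \frac{a/(a+b)}{c/(c+d)}, \qquad
\mathrm{OR} = \frac{ad}{bc}, \qquad
95\%\ \mathrm{CI}_{\mathrm{RR}} = \exp\!\left(\ln \mathrm{RR} \pm 1.96\,\sqrt{\frac{1}{a} - \frac{1}{a+b} + \frac{1}{c} - \frac{1}{c+d}}\right).
\]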

Fig 46.1 A hypothetical forest plot.


Figure 46.2 provides an overall guide for assessing the validity of evidence for treatment decisions for the different types of studies, trials and reviews mentioned in this section. Although sometimes criticized for an overemphasis on methodology at the expense of clinical relevance, and for the inappropriate use of meta-analysis, systematic reviews have an important place in clinical practice as a means of transparently summarizing evidence from multiple sources. As with RCTs, guidelines for the reporting of systematic reviews have been developed, including PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for systematic reviews of RCTs and MOOSE (Meta-analysis Of Observational Studies in Epidemiology) for systematic reviews of observational studies [7,8]. Quality assessment tools have also been developed for the critical appraisal of systematic reviews, such as that of the Critical Appraisal Skills Programme (Table 46.3).

Evaluating systematic reviews and guidelines

The GRADE (Grading of Recommendations Assessment, Development and Evaluation) evaluation tool has been devised as a system for rating the quality of evidence in systematic reviews and grading the strength of recommendations in guidelines. The system is designed for reviews and guidelines that examine alternative management strategies or interventions, which may include no intervention or current best management. An example relevant to transfusion medicine is the recent guidelines on immune thrombocytopenia from the American Society of Hematology, which utilized GRADE methodology to evaluate the strength of recommendations [9,10].

Table 46.3 Key components of the critical appraisal process for systematic reviews. Reproduced from The Critical Appraisal Skills Programme worksheets, Milton Keynes Primary Care Trust, 2002 and Systematic Review Initiative NBS in-house worksheets.

Did the review ask a clearly focused question?

Did the reviewers try to identify all relevant studies?

Were the eligibility criteria of the included studies detailed in the review?

Did the reviewers assess the quality of the included studies?

Have the results of the studies been combined and was it reasonable to do so?

How many studies were included in the review?

What is the main result for each outcome?

How precise are the results?

Were all the important outcomes for the review question considered?

How applicable are these results to clinical practice?

Fig 46.2 A guide for judging the validity of evidence for treatment decisions from different types of studies and reviews.


Comparative effectiveness research

Comparative effectiveness research (CER) is gaining support from both researchers and funding agencies, particularly in the USA and Canada. CER is defined as the conduct and synthesis of systematic research comparing different interventions and strategies to prevent, diagnose, treat and monitor health conditions. While experimental study designs like RCTs are highly valued methods of CER, they are costly, resource-intensive and their results may not be easily generalizable to nonstudy patients. Nonexperimental approaches using observational data are also useful tools for CER; however, they are inherently limited by heterogeneous methodologies, diverse designs and susceptibility to bias. As methods of observational studies continue to be refined, the data they derive may become more widely applicable, e.g. advances in the design of clinical registries and the use of encounter-generated data from sources such as electronic medical records.

The Informing Fresh versus Old Red Cell Management (INFORM) pilot trial is an example of CER in transfusion medicine [11]. The design was pragmatic: patients were randomized to receive one of two treatments that are already routinely used, thus obviating the need for individual informed consent; data were collected in real time from existing electronic databases, thereby reducing costs; and study procedures were streamlined, enabling randomization of more than 900 patients from a single centre in six months at very low cost. A larger pragmatic RCT with a similar design is planned to answer the question of the risk of mortality with fresh versus older blood. These data will inform policy decisions around the maximum storage threshold that would optimize the balance between adequate supply and acceptable risk.

Evidence base for transfusion medicine

So, how good is the evidence base for transfusion medicine? As a first step, identification of all relevant RCTs in transfusion medicine would be essential. The Cochrane Collaboration's database of RCTs, the Cochrane Central Register of Controlled Trials (CENTRAL), which is updated quarterly, is a good starting point. This database uses sensitive literature search filters that aim to identify all RCTs catalogued on MEDLINE from 1966 and on the European medical bibliographic database, EMBASE, from 1980. High-level evidence can be derived not only from methodologically sound RCTs but also from systematic reviews of RCTs. An excellent database of systematic reviews pertaining to transfusion medicine is the NHS Blood and Transplant Systematic Review Initiative's Transfusion Evidence Library (www.transfusionevidencelibrary.com), a comprehensive online library of systematic reviews that is updated monthly. The Transfusion Evidence Library will also, by 2013, include all RCTs relevant to transfusion medicine.

In addition, other databases of reviews for clinical evidence exist for clinicians, e.g. Bandolier (a print and Internet journal about healthcare using EBM techniques) and DARE (www.crd.york.ac.uk/crdweb). Table 46.2 presents a list of sources that can be searched to identify relevant reports of clinical trials and reviews.

The total number of published systematic reviews relevant to the broad theme of transfusion medicine was approximately 650 as of November 2011. These reviews cover topics ranging from the effective use of blood components (including red cell transfusion thresholds [12] and fractionated blood components), through alternatives to blood components and methods to minimize the need for blood in the surgical setting, to blood safety. It should be noted that the search filters for this exercise included stem cell and tissue transplantation, and that the boundaries between transfusion medicine and other areas of medicine overlap (e.g. a systematic review of resuscitation fluids is relevant to transfusion medicine, critical care and anaesthesia). The search strategy also identified a number of areas of transfusion practice where few published systematic reviews exist, especially donation screening and blood donor selection. In paediatric transfusion practice there is a paucity of evidence from RCTs or systematic reviews on which to base clinical decisions; notable recent examples of trials include a study of liberal versus restrictive red cell transfusion thresholds and a trial of intravenous immune globulin for the treatment of neonatal sepsis [13–15]. For other clinical settings, even where systematic reviews were identified, many were only able to draw upon information from a very limited number of relevant randomized trials.

Evidence base for transfusion medicine: individual examples

Frozen plasma

Two recent systematic reviews have attempted to address evidence relevant to the clinical use of fresh frozen plasma (FFP). The first asked whether abnormalities in coagulation tests predict an increased risk of clinical bleeding, since such abnormalities are important drivers of decisions to transfuse FFP. All relevant publications describing bleeding outcomes in patients with abnormal coagulation tests prior to invasive procedures were assessed. Overall, the published studies did not provide evidence of a predictive value of the prothrombin time/international normalized ratio (PT/INR) for bleeding [16].

The second systematic review was undertaken to identify and analyse all RCTs examining the clinical effectiveness of FFP (the first review, published in 2005, has now been updated) [17,18]. Comprehensive searching of MEDLINE (1966–2002), EMBASE (1980–2002) and the Cochrane Library (2002, Issue 4), combined with detailed eligibility criteria, identified 80 RCTs as relevant for inclusion and analysis.

The analysis focused on:

·        Studies of interventions comparing FFP with no FFP/plasma. These studies would be expected to provide the clearest evidence for a direct effect of FFP.

·        Studies of interventions comparing FFP with a nonblood component (e.g. solutions of colloids and/or crystalloids).

·        Studies of interventions comparing FFP with a different blood component or different formulations of FFP, e.g. solvent–detergent and methylene-blue treated.

Few of the identified studies included details of the study methodology (method of randomization, blinding of participants and study personnel). The sample size of many included studies was small (means ranging from 8 to 78 patients per arm). Few studies took adequate account of the extent to which adverse events might negate the clinical benefits of treatment with FFP. Many of the identified trials, in settings such as cardiac surgery, neonatal care and other clinical conditions, evaluated a prophylactic transfusion strategy. When these trials of prophylactic usage were assessed more closely as a group in the systematic review, irrespective of clinical setting, there appeared to be evidence (including from larger trials) of a lack of effect of prophylactic FFP. The overall finding of the review was that, for most clinical situations, RCTs examining the clinical use of FFP are limited.

Platelets

A number of different systematic reviews have been published that have more critically evaluated the evidence underpinning the following questions [19–21]:

·        What is the appropriate threshold platelet count to trigger prophylactic platelet transfusions?

·        What is the optimal dose for platelet transfusions?

·        What is the evidence that a strategy of prophylactic platelet transfusions is superior to the use of platelet transfusions only in the event of bleeding (therapeutic use)?

The results from the four identified trials in the updated Cochrane Systematic Review evaluating different thresholds do not provide assurance that a 10 × 10⁹/L threshold is as safe and effective as 20 × 10⁹/L for all clinical outcomes, and indeed raise the critical question as to whether the combined studies have sufficient power to demonstrate equivalence in terms of the safety of the lower prophylactic threshold. This is because the combined results of the quantitative meta-analysis have confidence intervals that include both detrimental and beneficial effects. In contrast, the forest plots summarizing the effect sizes in identified trials evaluating different platelet doses for transfusion clearly demonstrate no effect by dose, with much narrower confidence intervals.

The more fundamental question of whether a prophylactic platelet transfusion policy is any better than a therapeutic policy, based on the aggressive use of platelet transfusions to treat the onset of clinical bleeding, remains unresolved. The age of the published randomized studies addressing this question raises questions about their applicability to current clinical practice, since the trials were conducted at a time when product specifications and quality control were very different, when supportive care for chemotherapy patients was less advanced and when antipyretics with antiplatelet activity, such as aspirin, were in common use. In addition to the small numbers of patients randomized in the trials, there was also considerable heterogeneity across the study populations in the indications for platelet transfusion, the definition of clinically significant bleeding, the threshold in the prophylaxis arm and the dose of platelets given. Taken together, these analyses cast doubt on the validity of the data from published trials aimed at evaluating the effectiveness of prophylactic platelet transfusions.

Alternatives to transfusion

Many patients without haemophilia have now been treated, off-licence, with activated recombinant factor VII (rFVIIa). The patient settings are very diverse, including, for example, surgery (especially cardiac), gastrointestinal bleeding, liver dysfunction, intracranial haemorrhage and trauma. Data from 25 RCTs enrolling around 3500 patients have now evaluated the use of rFVIIa either prophylactically to prevent bleeding (14 trials) or therapeutically to treat major bleeding (11 trials) in patients without haemophilia [22]. This literature provides the most robust means of assessing the effectiveness and safety of rFVIIa, and formed the basis of a recently updated Cochrane Review. When combined in meta-analysis, the trials showed modest reductions in total blood loss or red cell transfusion requirements (equivalent to less than one unit of red cells). However, these reductions were likely to be overestimated because of the limitations of the data. For other endpoints, including clinically relevant outcomes, there were no consistent indications of benefit, and almost all of the findings both for and against the effectiveness of rFVIIa could be due to chance. The one, and important, exception was thromboembolic events. In both groups of trials there was an overall trend towards increased thromboembolic events in patients receiving rFVIIa. The forest plots for total arterial thromboembolic events are shown in Figure 46.3; this increase reaches statistical significance.

Fig 46.3 Forest plot.


Common practices of transfusion and interventions to improve transfusion practice

Systematic reviews may also be applied to important questions about the evidence base for common or well-established practices in transfusion [23]. For example, some recent reviews based on observational, nonrandomized studies have addressed:

·        What is the maximum time that one unit of RBCs can be out of the fridge before it becomes unsafe?

·        How often should blood administration sets be changed while a patient is being transfused?

·        Which blood transfusion administration method – one-person or two-person checks – is safest?

It is surprising and salutary to realize that some of these common recommendations appear to have little firm evidence base, yet are commonly reproduced in guidelines and protocols.

Are there limitations to evidence-based practice?

It is important to acknowledge some of the limitations of EBM that have been discussed by critics and supporters alike. EBM alone cannot provide a clinical decision; instead the findings generated from EBM are one strand of input driving decision-making in clinical practice. Each clinician will also need to consider the available resources and opportunities, the values and needs (physical, psychological and social) of the patient, the local clinical expertise and the costs of the intervention. Patients enrolled in clinical trials are not always the same as the individual patients requiring treatment, and generalizing to different clinical settings may not be appropriate. It has also been said that within EBM there is an overemphasis on methodology at the expense of clinical relevance, with the risks of generating conclusions that are either overly pessimistic or inappropriate for the clinical question. Perhaps we need to get away from the mentality that ‘there is no good RCT evidence available to answer this clinical question’ to thinking more about why this should be so, what can be learned from those studies that have already been completed, and what design of trial would answer the main area of uncertainty in this transfusion setting.

This chapter has attempted to explain why it is essential to assess the quality of primary clinical research and consider the risks of evidence being misleading, e.g. in the case of few trials or a failure to identify appropriate clinical research questions. Systematic reviews and the statistical method of meta-analysis are useful tools to achieve this, but, like trials themselves, can become outdated and must be carefully scrutinized to ensure unbiased results. Transfusion medicine is no different from many other branches of medicine, and the evidence base that informs much of the practice has not developed to the point that it can be universally applied with confidence. There is a need to recognize these uncertainties and to identify those transfusion issues that require high priority for clinical research.

Finally, appraising the evidence base for transfusion medicine is one part of improving practice; another is the effective dissemination of the evidence to clinicians. For example, clinicians may not have the time to search and evaluate the evidence themselves, given the ever-increasing number of publications and journals. As many of the sources are web-based, access is easier than ever, but the skills of appraisal still need to be regularly maintained. Chapter 48 discusses aspects of changing practice in more detail.

Summary

There has been growing recognition that research, especially empirical research (based on observing what has happened), has been underutilized in making healthcare decisions at all levels. This appears to be as true for transfusion medicine as for other clinical areas. EBM is an approach to developing and improving the skills needed to identify and apply research evidence to clinical decisions. Even the most ardent proponents of EBM have never claimed it is a panacea, and there is recognition that it should amplify rather than replace clinical skills and knowledge, and be a driver for keeping healthcare practices up to date.

Systematic reviews can help bring together relevant literature on a particular problem and assess its strengths, weaknesses and overall meaning. Such reviews can be used in different ways including improving the precision of estimates of effect, generating hypotheses, providing background to new primary research or informing policy. Progress is being made to ensure that most areas in transfusion medicine are being systematically reviewed and some of these have encouraged plans for new RCTs.

Key points

1. The process of EBM consists of question formulation, searching for literature, critically appraising studies (identifying strengths and weaknesses) and decisions around applicability to one's patients.

2. It is essential to assess the quality of primary clinical research and consider the risks of evidence being misleading, e.g. in the case of few trials or a failure to identify appropriate clinical research questions.

3. Systematic reviews of RCTs combine the evidence most likely to provide valid (truthful) answers to particular questions of effectiveness, and form an important component of the evaluation of evidence-based practice in transfusion medicine.

4. There is a common perception that much of transfusion medicine practice is based on limited evidence, but this is changing and systematic reviews are an important tool to collate, analyse and update the evidence base.

Acknowledgement

D. Arnold is funded by a New Investigator Award from the Canadian Institutes of Health Research in partnership with Hoffmann-LaRoche.

References

1. Sackett DL, Strauss SE, Richardson WS, Rosenberg W & Haynes RB. Evidence Based Medicine: How to Practice and Teach EBM, 2nd edn. Edinburgh: Churchill Livingstone; 2000.

2. Streptomycin treatment of pulmonary tuberculosis [no authors listed]. Br Med J 1948, 30 October; 2(4582): 769–782. PMID: 18890300.

3. Moher DF, Schulz KF & Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Clin Oral Investig 2003; 7(1): 2–7.

4. Schulz KF, Altman DG & Moher D, for the CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. PLoS Med 2010; 7(3): e1000251. DOI: 10.1371/journal.pmed.1000251. Also published in Ann Intern Med 2010, 1 June; 152(11): 726–732.

5. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC & Vandenbroucke JP, for the STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med 2007; 147(8): 573–577.

6. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC & Vandenbroucke JP, for the STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 2007, 20 October; 370(9596): 1453–1457. PMID: 18064739.

7. Moher D, Liberati A, Tetzlaff J & Altman DG, for the PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6(7): e1000097. DOI: 10.1371/journal.pmed.1000097.

8. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, Moher D, Becker BJ, Sipe TA & Thacker SB, for the MOOSE (Meta-analysis of Observational Studies in Epidemiology) Group. Meta-analysis of observational studies in epidemiology: a proposal for reporting. J Am Med Assoc 2000, 19 April; 283(15): 2008–2012.

9. Guyatt GH, Oxman AD, Schünemann HJ, Tugwell P & Knottnerus A. GRADE guidelines: a new series of articles in the Journal of Clinical Epidemiology. J Clin Epidemiol 2010, 23 December. Epub ahead of print.

10. Neunert C, Lim W, Crowther M, Cohen A, Solberg Jr L & Crowther MA, American Society of Hematology. The American Society of Hematology 2011 evidence-based practice guideline for immune thrombocytopenia. Blood 2011, 21 April; 117(16): 4190–4207.

11. Heddle NM, Cook RJ, Arnold DM, Crowther MA, Warkentin TE, Webert KE, Hirsh J, Barty RL, Liu Y, Lester C & Eikelboom JW. The effect of blood storage duration on in-hospital mortality: a randomized controlled pilot feasibility trial. Transfusion 2012, 18 January. DOI: 10.1111/j.1537-2995.2011.03521.x. Epub ahead of print.

12. Doree C, Stanworth S, Brunskill SJ, Hopewell S, Hyde CJ & Murphy MF. Where are the systematic reviews in transfusion medicine? A study of the transfusion evidence base. Transfus Med Rev 2010, 24(4): 286–294.

13. Carson JL, Hill S, Carless P, Hébert P & Henry D. Transfusion triggers: a systematic review of the literature. Transfus Med Rev 2002; 16(3): 187–199.

14. Lacroix J, Hébert PC, Hutchison JS, Hume HA, Tucci M, Ducruet T, Gauvin F, Collet JP, Toledano BJ, Robillard P, Joffe A, Biarent D, Meert K & Peters MJ; TRIPICU Investigators, Canadian Critical Care Trials Group, Pediatric Acute Lung Injury and Sepsis Investigators Network. Transfusion strategies for patients in pediatric intensive care units. N Engl J Med 2007, 19 April; 356(16): 1609–1619.

15. INIS Collaborative Group; Brocklehurst P, Farrell B, King A, Juszczak E, Darlow B, Haque K, Salt A, Stenson B & Tarnow-Mordi W. Treatment of neonatal sepsis with intravenous immune globulin. N Engl J Med 2011, 29 September; 365(13): 1201–1211. PMID: 21962214.

16. Segal JB & Dzik WH. Paucity of studies to support that abnormal coagulation test results predict bleeding in the setting of invasive procedures: an evidence-based review. Transfusion 2005; 45: 1413–1425.

17. Stanworth SJ, Brunskill SJ, Hyde CJ, McClelland DBL & Murphy MF. Is fresh frozen plasma clinically effective? A systematic review of randomized controlled trials. Br J Haematol 2004; 126: 139–152.

18. Yang L, Stanworth SJ, Hopewell S, Doree C & Murphy M. Is fresh frozen plasma clinically effective? An update of a systematic review of randomized controlled trials. Transfusion 2011 (in press).

19. Cid J & Lozano M. Lower or higher doses for prophylactic platelet transfusions: results of a meta-analysis of randomized controlled trials. Transfusion 2007; 47(3): 464–470.

20. Estcourt L, Stanworth SJ, Hopewell S, Heddle N, Tinmouth A & Murphy MF. Prophylactic platelet transfusion for haemorrhage after chemotherapy and stem cell transplantation. Cochrane Database Syst Rev 2004/2011 (update), Issue 4. DOI: 10.1002/14651858.CD004269.pub2.

21. Tinmouth AT & Freedman J. Prophylactic platelet transfusions: which dose is the best dose? A review of the literature. Transfus Med Rev 2003; 17(3): 181–193.

22. Simpson E, Lin Y, Stanworth S, Birchall J, Doree C & Hyde C. Recombinant factor VIIa for the prevention and treatment of bleeding in patients without haemophilia. Cochrane Database Syst Rev 2011, Issue 2, Article No.: CD005011. DOI: 10.1002/14651858.CD005011.pub3.

23. Watson D, Murdock J, Doree C et al. Blood transfusion administration – 1 or 2 person checks, which is the safest method? Transfusion 2008; 48(4): 783–789.

Further reading

Centre for Reviews and Dissemination. Systematic Reviews: CRD's Guidance for Undertaking Reviews in Health Care. CRD, University of York; 2009.

Egger M, Davey Smith G & Altman DG. Systematic Reviews in Health Care. Meta-analysis in Context, 2nd edn. London: BMJ Publishing Group; 2001.

Guyatt GH & Rennie D. Users' Guide to the Medical Literature: Essentials of Evidence-Based Clinical Practice. Chicago, IL: American Medical Association; 2002.

Heddle NM. Evidence-based clinical reporting: a need for improvement. Transfusion 2002; 42: 1106–1110.

Higgins JPT & Green S (eds). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available from: www.cochrane-handbook.org.

Hyde CJ, Stanworth SJ & Murphy MF. Can you see the wood for the trees! Making sense of the forest plot. 1. Presentation of the data from the included studies. Transfusion 2008; 48(2): 218–220.

Hyde CJ, Stanworth SJ & Murphy MF. Can you see the wood for the trees! Making sense of the forest plot. 2. Analysis of the combined results from the included studies. Transfusion 2008; 48(4): 580–583.

The Equator Network: Enhancing the Quality and Transparency of Health Research. Available at: http://www.equator-network.org/.