Core Topics in General and Emergency Surgery

Evidence-based practice in surgery

Kathryn A. Rigby and Jonathan A. Michaels

Introduction

 

Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best external clinical evidence from systematic research.1

The concept of evidence-based medicine (EBM) was introduced in the 19th century but has only flourished in the last few decades. Historically, its application to surgical practice can be traced back to the likes of John Hunter and the American Ernest Amory Codman, who both recognised the need for research into surgical outcomes in an attempt to improve patient care.

In mid-19th-century Paris, Pierre-Charles-Alexandre Louis used statistics to measure the effectiveness of bloodletting, the results of which helped put an end to the practice of leeching. Ernest A. Codman began work as a surgeon in 1895 in Massachusetts. His main area of interest was the shoulder and he became a leading expert in this topic, as well as being instrumental in the founding of the American College of Surgeons. He developed his ‘End Result Idea’, a notion that every hospital should follow up every patient it treats ‘long enough to determine whether or not its treatment is successful and if not, why not?’ in order to prevent similar failures in the future.2 Codman also developed the first registry of bone sarcomas.

In the UK, one of the most important advocates of EBM was Archie Cochrane. His experiences in prisoner-of-war camps, where he conducted trials of yeast supplements to treat nutritional oedema, shaped his belief in reliable, scientifically proven medical treatment. In 1972 he published his book Effectiveness and Efficiency. Cochrane advocated the use of the randomised controlled trial (RCT) as the gold standard in research into all medical treatment and, where possible, systematic reviews of these trials. One of the first systematic reviews of RCTs concerned the use of corticosteroid therapy to improve lung function in threatened premature birth. Although RCTs had been conducted in this area, the message of the results was not clear from the individual studies until the review overwhelmingly showed that corticosteroids reduced both neonatal morbidity and mortality. Had a systematic review been conducted earlier, the lives of many babies could have been saved, as the review clearly showed that this inexpensive treatment reduced the chance of these babies dying from complications of immaturity by 30–50%.3 In 1992, the first Cochrane Centre was opened in Oxford as part of the UK National Health Service (NHS) Research and Development (R&D) Programme, and the international Cochrane Collaboration was founded the following year.

Subsequently, in 1995, the first centre for EBM in the UK was established at the Nuffield Department of Clinical Medicine, University of Oxford. The driving force behind this was the American David Sackett, who had moved to a new Chair in Clinical Epidemiology in 1994 from McMaster University in Canada, where he had pioneered self-directed teaching for medical students.

From these roots, interest in EBM has exploded. The Cochrane Collaboration is rapidly expanding, with review groups in many fields of medicine and surgery. EBM is not limited only to hospital-based medicine but is increasingly seen in nursing, general practice and dentistry, and there are many new evidence-based journals appearing.

While clinical experience is invaluable, the rapidly changing world of medicine means that clinicians must keep abreast of new advances and, where appropriate, integrate research findings into everyday clinical practice. Neither research nor clinical experience alone is enough to ensure high-quality patient care; the two must complement each other. Sackett et al. identified five steps that should become part of day-to-day practice and in which a competent practitioner should be proficient:4

  1. to convert information needs into answerable questions;
  2. to be able to track down efficiently the best evidence with which to answer them (be it evidence from clinical examination, the diagnostic laboratory, research evidence or other sources);
  3. to be able to appraise that evidence critically for its validity and usefulness;
  4. to apply the results of this appraisal in clinical practice;
  5. to evaluate performance.

This chapter discusses the steps that are necessary to identify, critically appraise and combine evidence, to incorporate the findings into clinical guidance, and to implement and audit any necessary changes in order to move towards EBM in surgery. Many of the organisations and information sources that are relevant to EBM are specific to a particular setting. Therefore, the emphasis in this chapter is on the health services within the UK, although there are comparable arrangements and bodies in many other countries. Links to a number of these are given in the Internet resources described at the end of the chapter.

The need for evidence-based medicine

In 1991, there was still a widely held belief that only a small proportion of medical interventions were supported by solid scientific evidence.5 Jonathan Ellis and colleagues, on behalf of the Nuffield Department of Clinical Medicine, conducted a review of treatments given to 109 patients on a medical ward.6 The treatments were then examined to assess the degree of evidence supporting their use. The authors concluded that 82% of these treatments were in fact evidence based. However, they did suggest that similar studies should be conducted in other specialities. The importance of evidence-based health care in the NHS was formally acknowledged in two government papers, The new NHS7 and A first class service.8 These led to the development of the National Service Frameworks and the National Institute for Clinical Excellence (NICE).

In surgery there is a limited body of evidence from high-quality RCTs. For an RCT to be ethical there needs to be clinical equipoise: a sufficient level of uncertainty about an intervention before a trial can be considered. For example, it would be unethical to conduct an RCT of burr holes for extradural haematomas, because the observational evidence of effectiveness is so overwhelming that no patient could justifiably be denied a burr hole simply to prove the point.

Many surgeons feel unhappy about having to explain to a patient that there is clinical uncertainty about a treatment, as patients have historically put their trust in surgeons' hands. This reluctance to perform RCTs and the belief that they would be difficult to carry out have led to practices that are poorly supported by high-quality evidence. For example, there is widespread use of radical prostatectomy to treat localised prostatic carcinoma in the USA, despite a distinct lack of evidence to support this procedure.9

New technologies in surgery may be driven into widespread use by market forces, patients' expectations and clinicians' desire to improve treatment options. For example, with laparoscopic surgery, many assumed that it must be ‘better’ because it made smaller holes, there was less pain involved and therefore patients left hospital sooner. It was only after many hospitals had instituted its use that concerns were raised about its real benefits and the adequacy of training in the new technology. In 1996, a group of surgeons from Sheffield published a randomised, prospective, single-blind study that compared small-incision open cholecystectomy with laparoscopic cholecystectomy.10 They demonstrated that in their hands the laparoscopic technique offered no real benefit over a mini-cholecystectomy in terms of the postoperative recovery period, hospital stay and time off work, but it took longer to perform and was more expensive.10 There were, however, other factors that may have influenced the results from this study, including surgeon experience, and mini-cholecystectomy has not been widely adopted.

The MRC Laparoscopic Groin Hernia Trial Group undertook a large multicentre randomised comparison between laparoscopic and open repair of groin hernias.11 The results demonstrated that the laparoscopic procedure was associated with an earlier return to activities and less groin pain 1 year after surgery, but also with more serious surgical complications, an increased recurrence rate and a higher cost to the health service. The group suggested that laparoscopic hernia surgery should be confined to specialist surgical centres. NICE has since published guidance recommending laparoscopic surgery as one of the treatment options for the repair of inguinal hernias.

Some would argue that surgery, unlike drug trials, is operator dependent and that operating experience and skill can affect the outcome of an RCT, and cite this as a reason for not undertaking surgical trials. Although operator factors can introduce bias into a trial, the North American Symptomatic Carotid Endarterectomy Trial has shown that such problems can largely be overcome through appropriate trial design.12 Only surgeons who had been fully trained in the procedure, and who already had a proven low complication rate, were accepted as participants in the trial.

These examples illustrate a clear need for high-quality research to be undertaken into any new technology to assess both its efficacy and its cost-effectiveness before it is introduced into the healthcare system.

However, concerns have been raised about EBM. Sceptics have suggested that it may undermine clinical experience and instinct and replace it with ‘cookbook medicine’ or that it may ignore the elements of basic medical training such as history-taking, physical examination, laboratory investigations and a sound grounding in pathophysiology. Another fear is that purchasers and managers will use it as a means to cut costs and manage budgets.

Nevertheless, EBM can formalise our everyday procedures and highlight problems. It can provide answers by ensuring that the best use is made of existing evidence or it can identify areas in which new research is needed. Although it has a role in assessing the cost-effectiveness of an intervention, it is not a substitute for rationing and often results in practice that, despite being more cost-effective, has greater overall cost.13

The process of evidence-based medicine

EBM requires a structured approach to ensure that clinical interventions are based upon best available evidence. The first stage is always to pose a clinically relevant question for which an answer is required. Such a question should be clear, specific, important and answerable. One way of formulating questions is to think of them as having four key elements (PICO):

  • the population to whom the question applies;
  • the intervention of interest;
  • the comparison (the main alternative with which the intervention is compared);
  • the outcome of interest.

Therefore, the question ‘What is the best treatment for cholecystitis?’ needs to be much more clearly formulated if an adequate, evidence-based approach is to be used. A much better question would be ‘For adult patients admitted to hospital with acute cholecystitis (the population), does early open cholecystectomy, laparoscopic cholecystectomy (the interventions) or best medical management (the comparison) produce the lowest mortality, morbidity and total length of stay in hospital (the outcomes)?’ Even this may require more refinement to define further the exact interventions and outcomes of interest.
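By way of illustration, the PICO elements map naturally onto a small data structure from which a boolean search string can be composed. The following is a minimal sketch in Python; the class, field names and query format are our own illustrative inventions, not part of any standard tool:

```python
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    """A structured clinical question; the PICO field names are illustrative."""
    population: list[str]
    intervention: list[str]
    comparison: list[str]
    outcome: list[str]

    def to_query(self) -> str:
        """Compose a boolean search string: OR within each element, AND between elements."""
        groups = [self.population, self.intervention + self.comparison, self.outcome]
        return " AND ".join(
            "(" + " OR ".join(f'"{term}"' for term in terms) + ")"
            for terms in groups if terms
        )

# The cholecystitis question from the text, expressed as PICO elements
question = PicoQuestion(
    population=["acute cholecystitis"],
    intervention=["early open cholecystectomy", "laparoscopic cholecystectomy"],
    comparison=["medical management"],
    outcome=["mortality", "morbidity", "length of stay"],
)
print(question.to_query())
```

The interventions and the comparator are combined into a single OR group here, since each is an alternative treatment for which studies are being sought.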

Once such a question has been clearly defined, a number of further stages of the process can follow:

  1. Relevant sources of information must be searched to identify all available literature that will help in answering the question.
  2. Published trials must be critically appraised to assess whether they possess internal and external validity in answering the question posed (internal validity is where the effects within the study are free from bias and confounding; external validity is where the effects within the study apply outside the study and the results are therefore generalisable to the population in question).
  3. Where relevant, a systematic review and meta-analysis may be required to provide a clear answer from a number of disparate sources.
  4. The answers to the question need to be incorporated into clinical practice through the use of guidelines or through other methods of implementation.
  5. Adherence to ‘best practice’ needs to be monitored through audit, and the process needs to be kept under review in order to take account of new evidence or clinical developments.

Sources of evidence

Once a question has been formulated, the next step in undertaking EBM is the identification of all the relevant evidence. The first line for most practitioners is the use of journals. Many clinicians will subscribe to specific journals in their own specialist area and have access to many others through local libraries. However, the vast increase in the number of such publications makes it impossible for an individual to access or read all the relevant papers, even in a highly specialist area.

There has been a huge expansion in the resources that are available for identifying relevant material from other publications, including indexing and abstracting services such as MEDLINE (computerised database compiled by the US National Library of Medicine) and EMBASE. There is also a rapidly expanding set of journals and other services that provide access to selected, appraised and combined results from primary information sources.

As a result, the information sources that provide the evidence to support EBM are vast and include the following:

  • Media – journals, online databases, CD-ROMs and the Internet.
  • Independent organisations – research bodies and the pharmaceutical industry.
  • Health services – purchasers and providers at local, regional and national levels.
  • Academic units.

Some of these are described in more detail below and the Appendix to this chapter provides a list of contact details for further information.

Journals

The following are a selection of journals that act as secondary sources, identifying and reviewing other research that is felt to be of key importance to evidence-based practice.

Evidence-based Medicine

This was first launched in October 1995, by the British Medical Journal (BMJ) Publishing Group. It systematically searches high-quality international journals and provides summaries of the most clinically relevant research articles. The validity of the research is critically appraised by experts and assessed for its clinical applicability. This consequently allows the reader to keep up with the latest advances in clinical practice. It also publishes articles relating to the study and practice of EBM.

Evidence-based Nursing

This follows similar lines to Evidence-based Medicine, but contains articles more relevant to the nursing field.

Evidence-based Mental Health

This is produced by the BMJ Publishing Group in collaboration with the Royal College of Psychiatrists and the British Psychological Society.

Internet Resources

The Internet is becoming an increasingly useful source of medical information and evidence. Details of Internet addresses for many of the sources referred to below are given in the Appendix to this chapter, although this is a rapidly progressing and changing area. There are many journals and databases that are available either free or through subscription, and dedicated search engines such as Google Scholar. This medium also provides a number of advantages over printed material, including ease of searching, hyperlinks to other sources, access to additional supporting materials or raw data and the provision of discussion groups. There are, however, potential problems with the Internet in that there is no quality control and much of the available material is of dubious quality, or published by those with particular commercial or other interests.

NHS Evidence

This is a new service that provides online access to evidence-based information. It is managed by NICE and is free to use. It provides access to NICE pathways, journals and databases, ebooks and the Cochrane Library.

BMJ Evidence Centre

This provides information, resources and tools that aid evidence-based practice. It has access to sites that target EBM in relation to patient care, research and patient information, and has updates on current evidence and treatment options.

Academic Units

Cochrane Collaboration

As described above, the British epidemiologist who inspired this collaboration realised that, in order to make informed decisions about healthcare, reliable evidence must be accessible and kept up to date as new evidence emerges. Failure to achieve this might result in important developments in healthcare being overlooked, and this was seen as a key aspect of providing the best possible healthcare for patients. It was also hoped that making the results of interventions clear would prevent the duplication of research effort.

The Cochrane Library is the electronic publication of the Cochrane Collaboration and includes six databases:

  • The Cochrane Database of Systematic Reviews contains systematic reviews and protocols of reviews in preparation. These are regularly updated and there are facilities for comments and criticisms along with authors' responses.
  • The Cochrane Central Register of Controlled Trials is the largest database of RCTs. Information about trials is obtained from several sources including searches of other databases and hand searching of medical journals. It includes many RCTs not currently listed in databases such as MEDLINE or EMBASE.
  • The Database of Abstracts of Reviews of Effectiveness (DARE) contains abstracts of reviews that have been critically appraised by peer reviewers. These reviews evaluate the effects of healthcare interventions and the delivery and organisation of health services.
  • The Cochrane Review Methodology Register is a bibliography of articles on the science of research synthesis.
  • Health Technology Assessment Database (HTA) – see below.
  • NHS Economic Evaluation Database (NHS EED) – see below.

The Reviewers' Handbook includes information on the science of reviewing research and details of the review groups. It is also available in hard copy.14

The Cochrane Library is regularly updated and amended as new evidence is acquired. It is distributed on disk, CD-ROM and the Internet.3 In order to allow the results of the reviews to be widely used, no one contributor has exclusive copyright of the review.

Centre for Evidence-based Medicine

The Centre for Evidence-based Medicine was established in Oxford. Its chief remit is to promote EBM and evidence-based practice (EBP) in healthcare. It runs workshops and courses in both the practice and teaching of EBM, and conducts research and development on improving EBP. Its website also has many free EBM resources and tools.

Review Body for Interventional Procedures (ReBIP)

This is a joint venture between the Health Services Research Unit at Sheffield University and Aberdeen University. It works under the auspices of NICE's Interventional Procedures Programme (IPP). Where there is doubt about the safety or efficacy of a procedure, ReBIP may be commissioned to undertake a systematic review or to gather additional data.

NHS Agencies

Centre for Reviews and Dissemination (CRD)

The CRD was established in January 1994 at the University of York and is now also part of the National Institute for Health Research (NIHR). It is funded by the NIHR in England; the Department of Health; the Public Health Agency, Northern Ireland; and the National Institute for Social Care and Health Research, Welsh Assembly Government. The CRD concentrates specifically on areas of priority to the NHS. It is designed to raise the standards of reviews within the NHS and to encourage research by working with healthcare professionals. It undertakes and disseminates systematic reviews and maintains three databases:

  • NHS EED contains mainly abstracts of economic evaluations of healthcare interventions and assesses the quality of the studies, stating any practical implications to the NHS.
  • DARE (see above).
  • The Health Technology Assessment (HTA) database details completed and ongoing HTAs from around the world. The contents of this database have not been critically appraised.

NIHR Health Technology Assessment Programme

The HTA programme is now part of the National Institute for Health Research (NIHR). It commissions independent research into high-priority areas. This includes many systematic reviews and primary research in key areas. The programme publishes details of ongoing HTA projects and monographs of completed research.

Critical appraisal

 

This is the process by which we assess the evidence presented to us in a paper. We need to be critical of it in terms of its validity and clinical applicability.

From reading the literature, it is evident that there may be many trials on the same subject, which may all draw different conclusions. Which one should be believed and allowed to influence clinical practice? We owe a duty to our patients to be able to assess accurately all the available information and judge each paper on its own merits before changing our clinical practice accordingly.

Randomised Controlled Trials

The RCT is a comparative evaluation in which the interventions being compared are allocated to the units being studied purely by chance. It is the ‘gold standard’ method of comparing the effectiveness of different interventions.15 Randomisation is the only way to allow valid inferences of cause and effect,16 and no other study design can potentially protect as well against bias.

Unfortunately, not all clinical trials are done well, and even fewer are well reported. Their results may therefore be confusing and misleading, and it is necessary to consider several elements of a trial's design, conduct and conclusions before accepting the results. The first requirement is that there must be sufficient detail available to make such an assessment.

It became clear that the reporting of clinical trials needed to be standardised, and the CONSORT (Consolidated Standards of Reporting Trials) statement was developed. The most recent version is CONSORT 2010 (Table 1.1).17,18

 

The CONSORT 2010 statement lists 25 items that should be included in any trial report, along with a flow chart.17,18

Table 1.1

CONSORT 2010 checklist of information to include when reporting a randomised trial*

*We strongly recommend reading this statement in conjunction with the CONSORT 2010 Explanation and Elaboration for important clarifications on all the items. If relevant, we also recommend reading CONSORT extensions for cluster randomised trials, non-inferiority and equivalence trials, non-pharmacological treatments, herbal interventions, and pragmatic trials. Additional extensions are forthcoming: for those and for up-to-date references relevant to this checklist, see www.consort-statement.org.

Reproduced from Schulz KF, Altman DG, Moher D et al. Br Med J 2010; 340:c332 and Moher D, Hopewell S, Schulz KF et al. Br Med J 2010;340:c869. With permission from the BMJ Publishing Group Ltd.

Many journals now encourage authors to submit a copy of the CONSORT statement relating to their paper. A similar checklist has been proposed for the reporting of observational studies (cohort, case–control and cross-sectional). This is called the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement.19 The QUOROM statement is a similar checklist that has been developed to improve the quality of reporting relating to systematic reviews of RCTs.20

 

The STROBE statement provides a recommended checklist for the reporting of observational studies19 and the QUOROM statement provides similar recommendations for systematic reviews of RCTs.20

The Critical Appraisal Skills Programme (CASP) is a UK-based project designed to develop skills in the critical appraisal of evidence on effectiveness. It provides half-day workshops and has developed appraisal frameworks based on 10 or 11 questions for RCTs, qualitative research and systematic reviews.

Assuming that the relevant information is available, critical appraisal is required to ensure that the methodology of the trial is such that it will minimise effects on outcome other than true treatment effects, i.e. those owing to chance, bias and confounding:

  • Chance – random variation, leading to imprecision.
  • Bias – systematic variation leading to inaccuracy.
  • Confounding – systematic variation resulting from the existence of extraneous factors that affect the outcome and have distributions that are not taken into account, leading to bias and invalid inferences.

All good study designs will reduce the effects of chance, eliminate bias and take confounding into account. This requires consideration of many aspects of trial design, including methods of randomisation, blinding and masking, analysis methods and sample size. It also requires the reviewer to consider aspects such as sponsorship and vested interests that may introduce sources of bias. Discussion of methodology for the critical appraisal of RCTs and other forms of study is readily available elsewhere.21
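As a concrete example of one such design element, allocation in random permuted blocks protects against selection bias while keeping the trial arms balanced throughout recruitment. The sketch below, in Python, is illustrative only; the block size and arm labels are arbitrary choices:

```python
import random

def permuted_block_allocation(n_patients, block_size=4, seed=None):
    """Allocate patients to arms 'A' and 'B' in random permuted blocks.

    Each block contains equal numbers of each arm in random order, so the
    arms can never differ in size by more than half a block.
    """
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_patients:
        block = ["A", "B"] * (block_size // 2)
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_patients]

# Example: an allocation list for 10 patients (seed fixed only for reproducibility)
print(permuted_block_allocation(10, block_size=4, seed=1))
```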

Systematic literature reviews

 

A systematic review is an overview of primary studies carried out to an exhaustive, defined and repeatable protocol.

There has been an explosion in the published medical literature, with over two million articles a year published in 20 000 journals. The task of keeping up with new advances in medical research has become quite overwhelming. We have also seen that the results of trials in the same subject may be contradictory, and that the underlying message can be masked. Systematic reviews are designed to search out meticulously all relevant studies on a subject, evaluate the quality of each study and assimilate the information to produce a balanced and unbiased conclusion.22

One advantage of a systematic review with a meta-analysis over a traditional subjective narrative review is that, by synthesising the results of many smaller studies, the lack of statistical power of each individual study may be overcome by cumulative size, and any treatment effect is more clearly demonstrated. This, in turn, can lead to a reduction in the delay between research advances and clinical implementation. For example, it has been demonstrated that if the original studies on the use of anticoagulants after myocardial infarction had been reviewed, their benefits would have been apparent much earlier.23,24 It is obviously essential that both the benefits and any harms of an intervention become apparent as soon as possible.
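The gain in power can be made concrete with a toy calculation: three small trials whose individual 95% confidence intervals all cross the line of no effect can, when pooled by inverse-variance weighting, yield a combined interval that excludes it. All figures below are invented for illustration:

```python
import math

# Invented log odds ratios and their variances for three small, underpowered trials
log_ors = [-0.35, -0.28, -0.40]
variances = [0.06, 0.05, 0.07]

def ci_95(log_or, var):
    """95% confidence interval for an odds ratio, from its log OR and variance."""
    half_width = 1.96 * math.sqrt(var)
    return math.exp(log_or - half_width), math.exp(log_or + half_width)

for i, (y, v) in enumerate(zip(log_ors, variances), 1):
    lo, hi = ci_95(y, v)
    print(f"trial {i}: 95% CI for OR = ({lo:.2f}, {hi:.2f})")   # each crosses 1.0

# Fixed-effect (inverse-variance) pooling shrinks the variance of the estimate...
weights = [1 / v for v in variances]
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
lo, hi = ci_95(pooled, 1 / sum(weights))
print(f"pooled:  95% CI for OR = ({lo:.2f}, {hi:.2f})")         # ...and excludes 1.0
```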

Unfortunately, as with reported trials, not all reviews are as rigorously researched and synthesised as one would hope, and they are open to pitfalls similar to those of RCTs. The Cochrane Collaboration has sought to rectify this and has worked on refining the methods used for systematic reviews. It has consequently produced some of the most reliable and useful reviews, and its methods have been widely adopted by other reviewers. The Cochrane Collaboration advises that each review must be based on an explicit protocol, which sets out the objectives and methods so that a second party could reproduce the review at a later date if required.

Because of the increasing importance of systematic reviews as a method of providing the evidence base for a variety of clinical activities, the methods are discussed in some detail below. There are several key elements in producing a systematic review.

1 Develop A Protocol For A Clearly Defined Question

Within a protocol:

  • the objectives of the review of the RCTs must be stated;
  • eligibility criteria must be included (e.g. relevant patient groups, types of intervention and trial design);
  • appropriate outcome measures should be defined.

In the Cochrane Collaboration, each systematic review is preceded by a published protocol that is subjected to a process of peer review. This helps to ensure high quality, avoids duplication of effort and is designed to reduce bias by setting standards for inclusion criteria before the results from identified studies have been assessed.

2 Literature Search

All published and unpublished material should be sought. This includes examining studies in non-English journals, grey literature, conference reports, company reports (drug companies can hold a lot of vital information from their own research) and any personal contacts, for personal studies or information. The details of the search methodology and search terms used should be specified in order to make the review reproducible and allow readers to repeat the search to identify further relevant information published after the review. The most frequently used initial source of information is MEDLINE but this does have limitations. It only indexes about one-third of all medical articles that exist in libraries (over 10 million in total),25 and an average search by a regular user would only yield about one-fifth of the trials that can be identified by more rigorous techniques for literature searching.26 It also has a bias towards articles published in English. Other electronic and indexed databases should also be searched, but often the only way to ensure that the maximum number of relevant trials are found, wherever published and in whatever language, is to hand search the journals. This is one of the tasks of the Cochrane Collaboration through a database maintained at the Baltimore Cochrane Centre.
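A search of MEDLINE can also be scripted, which helps make the search strategy explicit and repeatable. The sketch below uses PubMed's publicly documented E-utilities interface; the search term itself is purely illustrative:

```python
import json
import urllib.parse
import urllib.request

# NCBI E-utilities 'esearch' endpoint for PubMed
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(term, retmax=20):
    """Return PubMed IDs for a search term, recording the exact query used."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    )
    with urllib.request.urlopen(f"{ESEARCH_URL}?{params}") as response:
        result = json.load(response)["esearchresult"]
    print(f"query {term!r} matched {result['count']} records")
    return result["idlist"]

# Illustrative query restricted to randomised controlled trials by publication type
print(pubmed_search('laparoscopic cholecystectomy AND "randomized controlled trial"[pt]'))
```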

One must also be aware, however, that there is a potential for ‘publication bias’. Trials with a positive result are more likely to be published than those with a negative or no-effect result,27 and are also more likely to be cited in other articles.28

3 Evaluating The Studies

Each trial should be assessed to see if it meets the inclusion criteria set out in the protocol (eligibility). If it meets the required standards, then the trial is subjected to a critical appraisal, ideally by two independent reviewers, to ascertain its validity, relevance and reliability. Any exclusions should be reported and justified; if there is missing information from the published article, it may be necessary to attempt to contact the author of the primary research. Reviewers should also, if possible, be ‘blinded’ to the authors and journals of publication, etc. in order to minimise any personal bias.

The Cochrane reviewers are assisted in all these tasks by instructions in the Cochrane Handbook14 and through workshops at the Cochrane Centres.29

4 Synthesis Of The Results

Once the studies have been graded according to quality and relevance, their results may be combined in an interpretative or a statistical fashion. It must be decided if it is appropriate to combine some studies and which comparisons to make. Subgroup or sensitivity analyses may also be appropriate. The statistical analysis is called a meta-analysis and is discussed below.

5 Discussion

The review should be summarised. The aims, methods and reported results should be discussed and the following issues considered:

  • quality of the studies;
  • possible sources of heterogeneity (reasons for inconsistency between studies, e.g. patient selection, methods of randomisation, duration of follow-up or differences in statistical analysis);
  • bias;
  • chance;
  • applicability of the findings.

As with any study, a review can be done badly, and the reader must critically appraise a review to assess its quality. Systematic errors may be introduced by omitting some relevant studies, by selection bias (such as excluding foreign language journals) or by including inappropriate studies (such as those considering different patient groups or irrelevant outcomes). Despite all precautions, the findings of a systematic review may differ from those of a large-scale, high-quality RCT. This will be discussed below in relation to meta-analysis.

Meta-analysis

 

A meta-analysis is a specific statistical strategy for assembling the results of several studies into a single estimate, which may be incorporated into a systematic literature review.30

Here we must make the distinction that the term ‘meta-analysis’ refers to the statistical techniques used to combine the results of several studies; it is not, as sometimes assumed, synonymous with ‘systematic review’.

A common problem in clinical trials is that the results are not clear-cut, either because of size or because of the design of the trial. The systematic review is designed to eliminate some of these problems and give appropriate weightings to the best- and worst-quality studies, regardless of size. Meta-analysis is the statistical tool used to combine the results and give ‘power’ to the estimates of effect.

Meta-analyses use a variety of statistical techniques according to the type of data being analysed (dichotomous, continuous or individual patient data).14 There are two main models used to analyse the results: the fixed-effect model (logistic regression, Mantel–Haenszel test and Peto's method) and the random-effect model. The major concern with fixed-effect methods is that they assume no clinical heterogeneity between the individual trials, and this may be unrealistic.31 The random-effect method takes into consideration random variation and clinical heterogeneity between trials. In presenting a meta-analysis, a consistent scale should be chosen for measuring treatment effects; differences in proportions, risk ratios or odds ratios may be used.
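The difference between the two models can be shown in a short sketch: both pool log odds ratios by inverse-variance weighting, but the random-effects (DerSimonian–Laird) version first estimates a between-trial variance, tau², and adds it to each trial's own variance. The trial data below are invented for illustration:

```python
import math

# Invented 2x2 tables: (events_treated, n_treated, events_control, n_control)
trials = [(12, 100, 20, 100), (8, 60, 15, 60), (30, 250, 45, 250)]

def log_or_and_var(a, n1, c, n2):
    """Log odds ratio and its variance from a 2x2 table (Woolf's method)."""
    b, d = n1 - a, n2 - c
    return math.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

ys, vs = zip(*(log_or_and_var(*t) for t in trials))

# Fixed-effect model: weight each trial by the inverse of its within-trial variance
w = [1 / v for v in vs]
mu_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# DerSimonian-Laird estimate of the between-trial variance tau^2, via Cochran's Q
q = sum(wi * (yi - mu_fixed) ** 2 for wi, yi in zip(w, ys))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(trials) - 1)) / c)

# Random-effects model: add tau^2 to each trial's variance before weighting
w_re = [1 / (v + tau2) for v in vs]
mu_random = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)

print(f"fixed-effect pooled OR   = {math.exp(mu_fixed):.2f}")
print(f"random-effects pooled OR = {math.exp(mu_random):.2f} (tau^2 = {tau2:.3f})")
```

When tau² is estimated to be zero the two models coincide; the greater the heterogeneity, the more the random-effects weights even out the influence of large trials.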

Heterogeneity

Trials can have many different components21 and therefore a meta-analysis is only valid if the trials that it seeks to summarise are homogeneous: you cannot add apples and oranges.32 If trials are not comparable and any heterogeneity is ignored, the analysis can produce misleading results.

Figure 1.1 shows an example of this from a meta-analysis of 19 RCTs investigating the use of endoscopic sclerotherapy to reduce mortality from oesophageal varices in the primary treatment of cirrhotic patients.33 Each trial is represented by a ‘point estimate’ of the difference between the groups, and a horizontal line showing the 95% confidence interval (CI). If the CI does not cross the line of no effect, the difference between the groups is statistically significant at the 5% level. It can be seen that in this case the trials are not homogeneous, as the lower limits of some CIs lie above the upper limits of the CIs of other trials. Such a lack of homogeneity may have a variety of causes, relating to clinical heterogeneity (differences in patient mix, setting, etc.) or differences in methods. The degree of statistical heterogeneity can be measured to see if it is greater than is compatible with the play of chance.34 Such a statistical test may lack power; consequently, results that do not show significant heterogeneity do not necessarily mean that the trials are truly homogeneous, and one must look beyond the test result to assess the degree of heterogeneity.

 

‘Meta-analysis is on the strongest ground when the methods employed in the primary studies are sufficiently similar that any differences in their results are due to the play of chance.’30

FIGURE 1.1 An example of a meta-analysis of 19 randomised controlled trials investigating the use of endoscopic sclerotherapy to reduce mortality from oesophageal varices in the primary treatment of cirrhotic patients. Reproduced from Chalmers I, Altman DG. Systematic reviews. London: BMJ Publishing, 1995; p. 119. With permission from Blackwell Publishing Ltd.
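The measure referred to above is commonly Cochran's Q, compared against a chi-squared distribution with k − 1 degrees of freedom; the derived I² statistic expresses the proportion of the observed variation that exceeds what chance alone would produce. A minimal sketch with invented figures:

```python
import math

# Invented log odds ratios and variances from k = 4 trials
ys = [-0.51, -0.22, 0.10, -0.80]
vs = [0.12, 0.08, 0.20, 0.15]

w = [1 / v for v in vs]                                   # inverse-variance weights
mu = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)       # pooled estimate

# Cochran's Q: weighted squared deviations of each trial from the pooled estimate
q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, ys))
df = len(ys) - 1

# I^2: share of total variation attributable to heterogeneity rather than chance
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0

# NB: a non-significant Q does not prove homogeneity - the test is often underpowered
print(f"Q = {q:.2f} on {df} df, I^2 = {100 * i_squared:.0f}%")
```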

Views on the usefulness of meta-analyses are divided. On the one hand, they may provide conclusions that could not be reached from other trials because of the small numbers involved. However, on the other hand, they have some limitations and cannot produce a single simple answer to all complex clinical problems. They may give misleading results if used inappropriately where there is a biased body of literature or clinical or methodological heterogeneity. If used with caution, however, they may be a useful tool in providing information to help in decision-making.

Figure 1.2 shows a funnel plot of a meta-analysis relating to the use of magnesium following myocardial infarction.35 The result of each study in the analysis is represented by a circle plotting the odds ratio (with the vertical line being at 1, the ‘line of no effect’) against the trial size. The diamond represents the overall results of the meta-analysis with its pooled data from all the smaller studies shown. This study36 was published in 1993 and showed that it was beneficial and safe to give intravenous magnesium to patients with acute myocardial infarction. The majority of the studies involved show a positive effect of the treatment, as does the meta-analysis. However, the results from this study were contradicted in 1995 by ISIS-4, a very large RCT involving 58 050 patients.37 It had three arms, in one of which intravenous magnesium was given to patients with suspected acute myocardial infarction. The results are marked on the funnel plot and show that there is no clear benefit for this treatment, contrary to the results of the earlier meta-analysis.

FIGURE 1.2 A funnel plot of a meta-analysis relating to the use of magnesium following myocardial infarction. Points indicate values from small and medium-sized trials; the diamond is the combined odds ratio with 95% confidence interval from the meta-analysis of these trials and the square is that for a mega trial. Reproduced from Egger M, Smith GD. Misleading meta-analysis. Br Med J 1995; 310:752–4. With permission from the BMJ Publishing Group Ltd.

Some would say that this is one of the major problems with using statistical synthesis. An alternative viewpoint is that it is an example of the importance of ensuring that the material fed into a meta-analysis from a systematic review is researched and critically appraised to the highest possible standard. Explanations for the contradictory findings in this review have been given as:32,35

  • publication bias, since only trials with positive results were included (see funnel plot);
  • methodological weakness in the small trials;
  • clinical heterogeneity.
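To illustrate how a plot of this kind is constructed, the sketch below draws a funnel plot with matplotlib. All the small-trial results are invented; only the mega-trial size is taken from the ISIS-4 example above:

```python
import matplotlib.pyplot as plt

# Invented (odds ratio, trial size) pairs for small and medium-sized trials
small_trials = [(0.45, 50), (0.55, 80), (0.60, 120), (0.50, 150),
                (0.70, 200), (0.80, 300), (0.65, 400), (0.90, 600)]
mega_or, mega_n = 1.06, 58050   # size from the ISIS-4 example; OR invented

fig, ax = plt.subplots()
ax.scatter([o for o, _ in small_trials], [n for _, n in small_trials],
           facecolors="none", edgecolors="black", label="small/medium trials")
ax.scatter([mega_or], [mega_n], marker="s", color="black", label="mega trial")
ax.axvline(1.0, linestyle="--", color="grey")   # the 'line of no effect'
ax.set_xscale("log")                            # odds ratios conventionally on a log scale
ax.set_yscale("log")
ax.set_xlabel("Odds ratio")
ax.set_ylabel("Trial size (patients)")
ax.legend()
# A symmetric inverted funnel suggests little publication bias; a gap in one
# corner (e.g. missing small negative trials) suggests the opposite.
plt.show()
```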

Clinical guidelines

 

Clinical guidelines are systematically developed statements to assist practitioner and patient decisions about appropriate healthcare for specific clinical circumstances.38

EBM is increasingly advocated in healthcare, and evidence-based guidelines are being developed in many areas of primary healthcare such as asthma,39 stable angina40 and vascular disease.41 Over 2000 guidelines or protocols have been developed from audit programmes in the UK alone. An observational study in general practice has also shown that recommendations that are evidence based are more widely adopted than those that are not.42 The UK Department of Health has also endorsed the policy of using evidence-based guidelines.43

Guidelines may have a number of different purposes:

  • to provide an answer to a specific clinical question using evidence-based methods;
  • to aid a clinician in decision-making;
  • to standardise aspects of care throughout the country, providing improved equality of access to services and enabling easier comparisons to be made for audit and professional assessment (a reduction in medical practice variation);
  • to help to make the most cost-effective use of limited resources;
  • to facilitate education of patients and healthcare professionals.

For clinical policies to be evidence based and clinically useful, there must be a balance between the strengths and limitations of relevant research and the practical realities of the healthcare and clinical settings.44

There are, however, commonly expressed concerns about the use of guidelines:

  • There is a worry that the evidence used may be spurious or not relevant, especially in areas where there is a paucity of published evidence.
  • Guidelines may not be applicable to every patient and are therefore only useful in treating diseases and not patients.
  • Clinicians may feel that guidelines take away their autonomy in decision-making.
  • A standardised clinical approach may risk suffocating any clinical flair and innovation.
  • There may be geographic or demographic limitations to the applicability of guidelines; for instance, a policy developed for use in a city district may not be transferable to a rural area.

The effectiveness of a guideline depends on three areas, as identified by Grimshaw and Russell:45

  1. How and where the guidelines are produced (development strategy) – at a local, regional or national level or by a group internal or external to the area.
  2. How the guidelines have been disseminated, e.g. a specific education package, group work, publication in a journal or a mailed leaflet.
  3. How the guidelines are implemented (put into use).

In the UK, there are a number of bodies that produce guidelines and summaries of evidence-based advice.

The National Institute For Clinical Excellence (NICE)

NICE is a special health authority formed on 1 April 1999 by the UK government. The board comprises executive and non-executive members. It is designed to work with the NHS in appraising healthcare interventions and offering guidance on the best treatment methods for patients. It assesses all the evidence on the clinical benefit of an intervention, including quality of life, mortality and cost-effectiveness. It will then decide, using this information, whether the intervention should be recommended to the NHS.

It produces guidance in three main areas:

  • public health;
  • health technologies – including the newly developed Medical Technologies Evaluation Programme (MTEP) and Diagnostic Assessment Programme (DAP);
  • clinical practice.

Its role was further expanded in 2010 following the NHS White Paper Equity and Excellence: Liberating the NHS, when it was tasked with developing 150 quality standards in key areas in order to improve patient outcomes. It is now linked with NHS Evidence and manages the online search engine that allows easy access to an extensive evidence base and examples of best practice.

Scottish Intercollegiate Guidelines Network (SIGN)

SIGN was formed in 1993. Its objective is to improve the effectiveness and efficiency of clinical care for patients in Scotland by developing, publishing and disseminating guidelines that identify and promote good clinical practice. SIGN is a network of clinicians from all the medical specialities, nurses, other professionals allied to medicine, managers, social services and researchers. Patients and carers are also represented on the council. Since 2005, SIGN has been part of NHS Quality Improvement Scotland.

Effective Practice And Organisation Of Care (EPOC)

EPOC is a subgroup of the Cochrane Collaboration that reviews and summarises research about the use of guidelines.

Guidelines also need to be critically appraised and a framework has been developed for this46 that uses 37 questions to appraise three different areas of a clinical guideline:

  1. rigour of development;
  2. content and context;
  3. application.

Integrated care pathways (ICPs)

ICPs are known by a number of names, including integrated care plans, collaborative care plans, critical care pathways and clinical algorithms. ICPs are a development of clinical practice guidelines and have emerged over recent years as a strategy for delivering consistent high-quality care for a range of diagnostic groups or procedures. They are usually multidisciplinary, patient-focused pathways of care that provide a framework for the management of a clinical condition or procedure and are based upon best available evidence.

The advantage of ICPs over most conventional guidelines is that they provide a complete package of protocols relating to the likely events for all healthcare personnel involved with the patient during a single episode of care. By covering each possible contingency with advice based upon best evidence, they provide a means of both identifying and implementing optimum practice.

Grading the evidence

There is a traditional hierarchy of evidence, which lists the primary studies in order of perceived scientific merit. This allows one to give an appropriate level of significance to each type of study and is useful when weighing up the evidence in order to make a clinical decision. One version of the hierarchy is given in Box 1.1.21 It must be remembered, however, that this is only a rough guide and that one needs to assess each study on its own merits. Although a meta-analysis comes above an RCT in the hierarchy, a good-quality RCT is far better than a poorly performed meta-analysis. Similarly, a seriously flawed RCT may not merit the same degree of importance as a well-designed cohort study. Checklists have been published that may assist in assessing the methodological quality of each type of study.21

 

Box 1.1

Hierarchy of evidence

  1. Systematic reviews and meta-analyses
  2. Randomised controlled trials with definitive results (confidence intervals that do not overlap the threshold for a clinically significant effect)
  3. Randomised controlled trials with non-definitive results (a point estimate suggesting a clinically significant effect but with confidence intervals overlapping the threshold for that effect)
  4. Cohort studies
  5. Case–control studies
  6. Cross-sectional studies
  7. Case reports

From Greenhalgh T. How to read a paper: the basics of evidence based medicine. London: BMJ Publications, 1997; Vol. xvii, p. 196. With permission from the BMJ Publishing Group Ltd.

Similar checklists are available for systematic reviews.21,47,48 As already discussed, the preparation of a systematic review is a complex process involving a number of steps, each of which is open to bias and inaccuracies that can distort the results. Such lists can be used as a guide when preparing a review as well as in assessing one. One checklist used to assess the validity of a review does so by identifying potential sources of bias in each step (Table 1.2).49

Table 1.2

Checklist for assessing sources of bias and methods of protecting against bias

  • Problem formulation – Is the question clearly focused?
  • Study identification – Is the search for relevant studies thorough?
  • Study selection – Are the inclusion criteria appropriate?
  • Appraisal of the studies – Is the validity of the studies included adequately assessed?
  • Data collection – Is missing information obtained from investigators?
  • Data synthesis – How sensitive are the results to changes in the way the review is done?
  • Interpretation of results – Do the conclusions flow from the evidence that is reviewed? Are recommendations linked to the strength of the evidence? Are judgments about preferences (values) explicit? If there is ‘no evidence of effect’, is care taken not to interpret this as ‘evidence of no effect’? Are subgroup analyses interpreted cautiously?

From Oxman A. Checklists for review articles. Br Med J 1994; 309:648–51. With permission from the BMJ Publishing Group Ltd.

It is hoped that the results of a systematic review will be precise, valid and statistically powerful in order to provide the highest quality information on which to base clinical decisions or to produce clinical guidelines. The strength of the evidence provided by a study also needs to be assessed before making any clinical recommendations. A grading system is required to specify the levels of evidence, and several have previously been reported (e.g. those of the Antithrombotic Therapy Consensus Conference50 or that shown in Table 1.3).

Table 1.3

Agency for Health Care Policy and Research grading system for evidence and recommendations

From Hadorn DC, Baker D, Hodges JS et al. Rating the quality of evidence for clinical practice guidelines. J Clin Epidemiol 1996 49:749–54. With permission from Elsevier.

The grading of evidence and recommendations within textbooks, clinical guidelines or ICPs should allow users easily to identify those elements of evidence that may be subject to interpretation or modification in the light of new published data or local information. It should identify those aspects of recommendations that are less securely based upon evidence and therefore may appropriately be modified in the light of patient preferences or local circumstances. This raises different issues to the grading of evidence for critical appraisal and for systematic reviews.

In 1979, the Canadian Task Force on the Periodic Health Examination was one of the first groups to propose grading the strength of recommendations.51 Since then there have been several published systems for rating the quality of evidence, although most were not designed specifically to be translated into guideline development. The Agency for Health Care Policy and Research has published such a system, although this body considered that its level of classification may be too complex to allow clinical practice guideline development.52 Nevertheless, the Agency advocated evidence-linked guideline development, requiring the explicit linkage of recommendations to the quality of the supporting evidence. The Centre for Evidence-based Medicine has developed a more comprehensive grading system, which incorporates dimensions such as prognosis, diagnosis and economic analysis.53

These systems are complex; for textbooks, care pathways and guidelines, such grading systems need to be clear and easily understood by the relevant audience as well as taking into account all the different forms of evidence that may be appropriate to such documents.

Determining Strength Of Evidence

There are three main factors that need to be taken into account in determining the strength of evidence:

  • the type and quality of the reported study;
  • the robustness of the findings;
  • the applicability of the study to the population or subgroup to which the guidelines are directed.

Type and quality of study

Meta-analyses, systematic reviews and RCTs are generally considered to be the highest quality evidence that is available. However, in some situations these may not be appropriate or feasible. Recommendations may depend upon evidence from other kinds of study, such as observational studies of epidemiology or natural history, or synthesised evidence, such as decision analyses and cost-effectiveness modelling.

For each type of evidence, there are sets of criteria as to the methodological quality, and descriptions of techniques for critical appraisal are widely available.21 Inevitably, there is some degree of subjectivity in determining whether particular flaws or a lack of suitable information invalidates an individual study.

Robustness of findings

The strength of evidence from a published study depends not only upon the type and quality of a particular study but also upon the magnitude of any differences and the homogeneity of results. High-quality research may report findings with wide confidence intervals, conflicting results or contradictory findings for different outcome measures or patient subgroups. Conversely, sensitivity analysis within a cost-effectiveness or decision analysis may indicate that uncertainty regarding the exact value of a particular parameter does not detract from the strength of the conclusion.

Applicability

Strong evidence in a set of guidelines must be wholly applicable to the situation in which the guidelines are to be used. For example, a finding from high-quality research based upon a hospital population may provide good evidence for guidelines intended for a similar setting but a lower quality of evidence for guidelines intended for primary care.

Grading System For Evidence

The following is a simple pragmatic grading system for the strength of a statement of evidence, which will be used to grade the evidence in this book (and the other volumes in the Companion series). Details of the definitions are given in Table 1.4. For practical purposes, only the following three grades are required, which are analogous to the levels of proof required in a court of law:

Table 1.4

Grading of evidence and recommendations

I ‘Beyond reasonable doubt’. Analogous to the burden of proof required in a criminal court case and may be thought of as corresponding to the usual standard of ‘proof’ within the medical literature (i.e. P < 0.05).

II ‘On the balance of probabilities’. In many cases, a high-quality review of literature may fail to reach firm conclusions because of conflicting evidence or inconclusive results, trials of poor methodological quality or the lack of evidence in the population to which the guidelines apply. Where such strong evidence does not exist, it may still be possible to make a statement as to the ‘best’ treatment on the ‘balance of probabilities’. This is analogous to the decision in a civil court where all the available evidence will be weighed up and a verdict will depend upon the ‘balance of probabilities’.

III ‘Unproven’. Where the above levels of proof do not exist.

All evidence-based guidelines require regular review because of the constant stream of new information that becomes available. In some areas, there is more rapid development and the emergence of new evidence; in these instances, relevant reference will be made to ongoing trials or systematic reviews in progress.

Grading Of Recommendations

Although recommendations should be based upon the evidence presented, it is necessary to grade the strength of recommendation separately from the evidence. For example, the lack of evidence regarding an expensive new technology may lead to a strong recommendation that it should only be undertaken as part of an adequately regulated clinical trial. Conversely, strong evidence for the effectiveness of a treatment may not lead to a strong recommendation for use if the magnitude of the benefit is small and the treatment very costly.

The following grades of recommendations are suggested and details of the definition are given in Table 1.4:

A A strong recommendation, which should be followed.

B A recommendation using evidence of effectiveness, but where there may be other factors to take into account in the decision-making process.

C A recommendation where evidence as to the most effective practice is not adequate, but there may be reasons for making the recommendations in order to minimise cost or reduce the chance of error through a locally agreed protocol.

Implementation of evidence-based medicine

Healthcare professionals have always sought evidence on which to base their clinical practice. Unfortunately, the evidence has not always been available, reliable or explicit, and when it was available it has not been implemented immediately. James Lancaster in 1601 showed that lemon juice was effective in the treatment of scurvy, and in 1747 James Lind repeated the experiment. The British Navy did not utilise this information until 1795 and the Merchant Navy not until 1865. When implementation of research findings is delayed, ultimately the people who suffer are the patients.

A number of different groups of people may need to be committed to the changes before they can take place with any degree of success. These include:

  • healthcare professionals (doctors, nurses, etc.);
  • healthcare providers and purchasers;
  • researchers;
  • patients and the public;
  • government (local, regional and national).

Each of these groups has a different set of priorities. To ensure that their own requirements are met by a proposal, negotiation is required, which takes time. There are many potential barriers to the implementation of recommendations, and clinicians may become so embroiled in tradition and dogma that they are resistant to change. They may lack knowledge of new developments or the time and resources to keep up to date with the published literature. Lack of training in a new technology, such as laparoscopic surgery or interventional radiology, may thwart its adoption, even when it has been shown to be effective. Researchers may become detached from the practicalities of clinical practice and the needs of the health service, and concentrate on inappropriate questions or produce impractical guidelines. Managers are subject to changes in the political climate and can easily be driven by policies and budgets. The resources available to them may be limited and may not allow for the purchase of new technology, and even potentially cost-saving developments may not be introduced because of the difficulties in releasing the savings from elsewhere in the service.

Patients and the general public can also influence the development of the healthcare offered. They are susceptible to persuasion by the mass media and may demand the implementation of ‘miracle cures’ or of fashionable investigations or treatments, even when these are impractical or of no proven benefit. Patients can also determine the success or failure of a particular treatment: it may be physically or morally unacceptable to them, or compliance may be poor, especially with preventative measures such as diets, smoking cessation or exercise. All of these factors can delay the implementation of research findings.

Potential ways of improving this situation include the following:

  • Provision of easy and convenient access to summaries of the best evidence, electronic databases, systematic reviews and journals in a clinical setting.
  • Development of better disease management systems through mechanisms such as clinical guidelines, ICPs and electronic reminders.
  • Implementation of computerised decision-support systems.
  • Improvement of educational programmes – practitioners must be regularly and actively apprised of new evidence rather than being left to seek it out; passive dissemination of evidence is ineffective.
  • More effective systems to encourage patients to adhere to treatment and general healthcare advice; the information must be clear, concise, correct and actively distributed.

There is a gap between research and practice, and there is a need for evidence about the effectiveness of different methods of implementing changes in clinical practice. The NHS Central R&D Committee set up an advisory group to look into this problem, which identified 20 priorities for evaluation, as shown in Box 1.2.54

 

Box 1.2   Priority areas for evaluation in the methods of implementation of the findings of research: recommendations of the advisory group to the NHS Central Research and Development Committee

  1. Influence of source and presentation of evidence on its uptake by healthcare professionals and others
  2. Principal sources of information on healthcare effectiveness used by clinicians
  3. Management of uncertainty and communications of risk by clinicians
  4. Roles for health service users in implementing research findings
  5. Why some clinicians but not others change their practice in response to research findings
  6. Role of commissioning in securing change in clinical practice
  7. Professional, managerial, organisational and commercial factors associated with securing change in clinical practice, with particular focus on trusts and primary care providers
  8. Interventions directed at clinical and medical directors and directors of nursing in trusts to promote evidence-based care
  9. Local research implementation and development projects
  10. Effectiveness and cost-effectiveness of audit and feedback to promote implementation of research findings
  11. Educational strategies for continuing professional development to promote the implementation of research findings
  12. Effectiveness and cost-effectiveness of teaching critical appraisal skills to clinicians, patients/users, purchasers and providers to promote uptake of research findings
  13. Role of undergraduate training in promoting the uptake of research findings
  14. Impact of clinical practice guidelines in disciplines other than medicine
  15. Effectiveness and cost-effectiveness of reminder and decision support systems to implement research findings
  16. Role of the media in promoting uptake of research findings
  17. Impact of professional and managerial change agents (including educational outreach visits and local opinion leaders) in implementing research findings
  18. Effect of evidence-based practice on general health policy measures
  19. Impact of national guidelines to promote clinical effectiveness
  20. Use of research-based evidence by policymakers

From NHS Central Research and Development Committee. Methods to promote the implementation of research findings in the NHS: priorities for evaluation: report to the NHS Central Research and Development Committee. London: Department of Health, 1995. © Crown copyright 2008. Reproduced under the terms of the Click-Use Licence.

An EPOC review has examined the different methods of implementing evidence-based healthcare and classified them into three broad groups:55

  • consistently effective – educational visits and interactive meetings;
  • sometimes effective – audit and feedback;
  • little or no effect – printed guideline distribution.

Several programmes have examined the implementation of evidence-based practice in specific areas, such as the use of grommets in glue ear and of steroids in preterm delivery:

  • PACE (Promoting Action on Clinical Effectiveness);56
  • PLIP (Purchaser Led Implementation Projects);57
  • GriPP (Getting Research into Practice and Purchasing).58

Successful implementation of research findings in practice appears to require a multipronged approach, and the UK National Association of Health Authorities and Trusts (NAHAT) has produced an action checklist to facilitate this process.59

It must be remembered, however, that EBM is not the sole preserve of experts or clinicians. The research, dissemination and implementation of clinical and economic evaluations have wide-reaching repercussions for the health service. Managers are under increasing pressure to demonstrate both clinical effectiveness and cost-effectiveness, and are accountable at local, regional and national levels; they need to be actively involved in the process and to understand it. As with all interactions between elements of the health service, there must be collaboration, the ultimate goal being an improvement in patient care.

Audit

 

Audit is the systematic critical analysis of the quality of medical care, including the procedures used for diagnosis and treatment, the use of resources, and the resulting outcome and quality of life for the patient.60

The Department of Health has published policy documents that outline the development and role of audit in today's healthcare system.60,61 Everyone involved in the healthcare process has a responsibility to conduct audit and to assess the quality of the care that they provide. In 1966, Donabedian categorised three important elements in the delivery of healthcare:62

  • Structure – this relates to the physical resources available, e.g. the number of theatres, hospital beds and nurses, etc.
  • Process – this refers to the management of the patient, e.g. procedures carried out, drugs used, care delivered, etc.
  • Outcome – this refers to the result of the intervention, e.g. the amount of time off work, incidence of complications, morbidity and length of stay.

Audit is a dynamic, cyclical process (an audit loop) in which standards are defined and data are collected against those standards (Fig. 1.3). The results are then analysed and, if there are variances, proposals for change are developed to address them. These changes are implemented and the quality of care is reassessed; this closes the audit loop and the procedure begins again (a schematic sketch of the loop follows Fig. 1.3). The key to effective audit is that the loop must begin with the development of evidence-based standards, since success in changing care to meet proposed standards is unlikely to produce more effective clinical care if those standards are set arbitrarily. The Royal College of Surgeons of England has published its own guidelines on clinical audit in surgical practice.63

FIGURE 1.3 The audit loop.
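
The loop in Fig. 1.3 can be summarised in a short, hypothetical Python sketch; the standard, the observed figure and the change process below are illustrative placeholders only.

    # Hypothetical sketch of one pass around the audit loop: an
    # evidence-based standard is defined, data are collected against it,
    # variances are analysed and changes are implemented before re-audit.

    STANDARD = 0.90  # illustrative target, e.g. proportion of cases meeting the standard

    def collect_data() -> float:
        """Stand-in for data collection against the agreed standard."""
        return 0.82  # illustrative observed compliance

    def implement_change() -> None:
        print("Variance found: develop, agree and implement proposals for change.")

    observed = collect_data()      # data collected against the standard
    if observed < STANDARD:        # results analysed for variance
        implement_change()         # changes implemented
    # ...care is then reassessed, closing the loop and restarting the cycle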

One result of the drive to implement audit in the UK was the development in 1993 of the National Confidential Enquiry into Perioperative Deaths (NCEPOD). This ongoing national audit has produced a series of reports and recommendations based upon a peer-review process. Participation rates are high, and the reports and their recommendations have led to a number of changes in clinical practice. For example, there has been a dramatic reduction in out-of-hours operating following recommendations suggesting that much of it was unsafe and unnecessary.

 

Key points

  • The formulation of clear and answerable questions is a key element of EBM in surgery.
  • Identification of all the relevant evidence is required.
  • Critical appraisal of the evidence is necessary.
  • Synthesis of information from multiple sources needs to provide a clear message through systematic review and/or meta-analysis.
  • Findings should be implemented through mechanisms such as the use of clinical guidelines and ICPs.
  • Monitoring through audit is necessary to ensure continuing adherence to best practice.
  • Regular review to incorporate new evidence or take account of clinical developments should be carried out.
  • Throughout this book and the other volumes in this series, an attempt will be made to take an evidence-based approach. This will include the identification of high-quality evidence from RCTs and systematic reviews, underlining key statements and recommendations relating to clinical practice. All clinicians have a duty to ensure that they act in accordance with current ‘best evidence’ in order to give patients the highest chance of favourable outcomes and make the best use of limited healthcare resources.

References

  1. Sackett, D., Rosenberg, W.M.C., Muir Gray, J.A., et al. Evidence based medicine: what it is and what it isn't. Br Med J. 1996;312:71–72.
  2. Kaska, S., Weinstein, J.N. Historical perspective. Ernest Amory Codman, 1869–1940: a pioneer of evidence-based medicine: the end result idea. Spine. 1998;23:629–633.
  3. Cochrane Collaboration. The Cochrane Collaboration: preparing, maintaining and disseminating systematic reviews of the effects of health care. Oxford: Cochrane Centre, 1999.
  4. Sackett, D., Richardson, W.S., Rosenberg, W. Evidence-based medicine: how to practice and teach EBM. New York: Churchill Livingstone; 1997.
  5. Smith, R. Where is the wisdom? Br Med J. 1991;303:798–799.
  6. Ellis, J., Mulligan, I., Rowe, J., et al. Inpatient general medicine is evidence based. Lancet. 1995;346:407–410.
  7. Department of Health. The new NHS: modern, dependable. London: Department of Health, 1997.
  8. Department of Health. A first class service. London: Department of Health, 1998.
  9. Wilt, T., Brawer, M.K. The Prostate Cancer Intervention Versus Observation Trial: a randomized trial comparing radical prostatectomy versus expectant management for the treatment of clinically localized prostate cancer. J Urol. 1994;152:1910–1914.
  10. Majeed, A., Troy, G., Nicholl, J.P., et al. Randomised, prospective, single-blind comparison of laparoscopic versus small-incision cholecystectomy. Lancet. 1996;347:989–994.
  11. Anon. Laparoscopic versus open repair of groin hernia: a randomised comparison. The MRC Laparoscopic Groin Hernia Trial Group. Lancet. 1999;354:185–190.
  12. Anon. Beneficial effect of carotid endarterectomy in symptomatic patients with high-grade carotid stenosis. North American Symptomatic Carotid Endarterectomy Trial Collaborators. N Engl J Med. 1991;325:445–453.
  13. Hunter, D. Rationing and evidence-based medicine. J Eval Clin Pract. 1996;2:5–8.
  14. Clarke M., Oxman A.D., eds. Cochrane reviewers handbook 4.1.1. Oxford: Cochrane Library, 2000. [(updated December 2000), Issue 4].
  15. Guyatt, G., Sackett, D.L., Sinclair, J.C., et al., for the Evidence-Based Medicine Working Group. Users' guides to the medical literature. IX. A method for grading health care recommendations. JAMA. 1995;274:1800–1804.
  16. Altman, D. Randomisation. Br Med J. 1991;302:1481–1482.
  17. Schulz, K.F., Altman, D.G., Moher, D., for the CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. Br Med J. 2010;340:c332.
  18. Moher, D., Hopewell, S., Schulz, K.F., for the CONSORT Group. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Br Med J. 2010;340:c869.
  19. von Elm, E., Altman, D.G., Egger, M., et al. Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Br Med J. 2007;335:806–808.
  20. Moher, D., Cook, D.J., Eastwood, S., et al., for the QUOROM Group. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Lancet. 1999;354:1896–1900.
  21. Greenhalgh, T. How to read a paper: the basics of evidence based medicine. London: BMJ Publications; 1997.
  22. Mulrow, C. Rationale for systematic reviews. Br Med J. 1994;309:597–599.
  23. Lau, J., Schmid, C.H., Chalmers, T.C. Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care. J Clin Epidemiol. 1995;48:45–60.
  24. Antman, E., Lau, J., Kupelnick, B., et al. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA. 1992;268:240–248.
  25. Greenhalgh, T. How to read a paper. The MEDLINE database. Br Med J. 1997;315:180–183.
  26. Adams, C., Power, A., Fredrick, K., et al. An investigation of the adequacy of MEDLINE searches for randomized controlled trials (RCTs) of the effects of mental health care. Psychol Med. 1994;24:741–748.
  27. Easterbrook, P., Berlin, J.A., Gopalan, R., et al. Publication bias in clinical research. Lancet. 1991;337:867–872.
  28. Gotzsche, P. Reference bias in reports of drug trials. Br Med J (Clin Res). 1987;295:654–656.
  29. Fullerton-Smith, I. How members of the Cochrane Collaboration prepare and maintain systematic reviews of the effects of health care. Evidence-Based Med. 1995;1:7–8.
  30. Sackett, D. Clinical epidemiology: a basic science for clinical medicine. 2nd ed. Boston: Little, Brown; 1991.
  31. Thompson, S., Pocock, S. Can meta-analysis be trusted? Lancet. 1991;338:1127–1130.
  32. Anon. Magnesium, myocardial infarction, meta-analysis and mega-trials. Drug Ther Bull. 1995;33:25–27.
  33. Chalmers, I., Altman, D.G. Systematic reviews. London: BMJ Publications; 1995.
  34. DerSimonian, R., Laird, N. Meta-analysis in clinical trials. Controlled Clin Trials. 1986;7:177–188.
  35. Egger, M., Smith, G.D. Misleading meta-analysis. Br Med J. 1995;310:752–754.
  36. Yusuf, S., Teo, K., Woods, K. Intravenous magnesium in acute myocardial infarction. An effective, safe, simple, and inexpensive intervention. Circulation. 1993;87:2043–2046.
  37. Anon. ISIS-4: a randomised factorial trial assessing early oral captopril, oral mononitrate, and intravenous magnesium sulphate in 58 050 patients with suspected acute myocardial infarction. ISIS-4 (Fourth International Study of Infarct Survival) Collaborative Group. Lancet. 1995;345:669–685.
  38. Field, M.J., Lohr, K.N. Clinical practice guidelines: directions for a new program. Washington, DC: National Academy Press; 1990.
  39. Anon. North of England evidence based guidelines development project: summary version of evidence based guideline for the primary care management of asthma in adults. North of England Asthma Guideline Development Group. Br Med J. 1996;312:762–766.
  40. Anon. North of England evidence based guidelines development project: summary version of evidence based guideline for the primary care management of stable angina. North of England Stable Angina Guideline Development Group. Br Med J. 1996;312:827–832.
  41. Eccles, M., Freemantle, N., Mason, J. North of England evidence based guideline development project: guideline on the use of aspirin as secondary prophylaxis for vascular disease in primary care. North of England Aspirin Guideline Development Group. Br Med J. 1998;316:1303–1309.
  42. Grol, R., Dalhuijsen, J., Thomas, S. Attributes of clinical guidelines that influence use of guidelines in general practice: observational study. Br Med J. 1998;317:858–861.
  43. NHS Executive. Clinical guidelines: using clinical guidelines to improve patient care within the NHS. Leeds: NHS Executive, 1996.
  44. Gray, J., Haynes, R.D., Sackett, D.L., et al. Transferring evidence from health care research into medical practice. 3. Developing evidence-based clinical policy. Evid Based Med. 1997;2:36–38.
  45. Grimshaw, J., Russell, I.T. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet. 1993;342:1317–1322.
  46. Cluzeau, F., Littlejohns, P., Grimshaw, J., et al. Appraisal instrument for clinical guidelines. London: St George's Hospital Medical School; 1997.
  47. Oxman, A., Guyatt, G.H. Guidelines for reading literature reviews. Can Med Assoc J. 1988;138:697–703.
  48. Oxman, A., Cook, D.J., Guyatt, G.H., for the Evidence-Based Medicine Working Group. Users' guides to the medical literature. VI. How to use an overview. JAMA. 1994;272:1367–1371.
  49. Oxman, A. Checklists for review articles. Br Med J. 1994;309:648–651.
  50. Cook, D., Guyatt, G.H., Laupacis, A., et al. Rules of evidence and clinical recommendations on the use of antithrombotic agents. Chest. 1992;102(Suppl.):305S–311S.
  51. Canadian Task Force on the Periodic Health Examination. The periodic health examination. Can Med Assoc J. 1979;121:1193–1254.
  52. Hadorn, D.C., Baker, D., Hodges, J.S., et al. Rating the quality of evidence for clinical practice guidelines. J Clin Epidemiol. 1996;49:749–754.
  53. Centre for Evidence-based Medicine. Levels of evidence and grades of recommendations. Oxford: Centre for Evidence-based Medicine; 1999. www.cebm.net/index.aspx?o=1025
  54. NHS Central Research and Development Committee. Methods to promote the implementation of research findings in the NHS: priorities for evaluation: report to the NHS Central Research and Development Committee. London: Department of Health, 1995.
  55. Bero, L., Grilli, R., Grimshaw, J., et al. Closing the gap between research and practice. In: Haines A., Donald A., eds. Getting research findings into practice. London: BMJ Publications; 1998:27–35.
  56. Dunning, M., Abi-aad, G., Gilbert, G., et al. Turning evidence into everyday practice. London: King's Fund; 1999.
  57. Evans, D., Haines, A. Implementing evidence based changes in healthcare. Oxford: Radcliffe Medical Press; 2000.
  58. Dunning, M., McQuay, H., Milne, R. Getting a GRiPP. Health Serv J. 1994;104:18–20.
  59. Appleby, J.W.K., Ham, C. Acting on the evidence: a review of clinical effectiveness: sources of information, dissemination, and implementation. Birmingham: National Association of Health Authorities and Trusts; 1995.
  60. Department of Health. Clinical audit: meeting and improving standards in health care. London: Department of Health, 1998. [p. 14].
  61. Department of Health. The evolution of clinical audit. Heywood: Health Publications Unit, 1994.
  62. Donabedian, A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44(3):166–203.
  63. Royal College of Surgeons of England. Clinical audit in surgical practice. London: Royal College of Surgeons of England, 1995.

Appendix

Possible Sources Of Further Information, Useful Internet Sites And Contact Addresses

The details below provide references to a number of sources of information, particularly those accessible through the Internet. It must be remembered that there are rapid changes in the material available online and Internet addresses are liable to change. Several of these sources provide extensive links to other sites.

Organisations Specialising In Evidence-Based Practice, Systematic Reviews, Etc

Aggressive Research Intelligence Facility (ARIF)

http://www.arif.bham.ac.uk

BMJ Evidence Centre

http://group.bmj.com/products/evidence-centre

CASP (Critical Appraisal Skills Programme)

http://www.casp-uk.net/

Centre for Evidence-based Child Health

http://www.ucl.ac.uk/ich/research-ich/mrc-cech/training/evidence-based-child-health

Centre for Evidence-based Medicine, established in Oxford.

http://www.cebm.net

Centre for Evidence-based Mental Health

http://www.cebmh.com

Centre for Health Evidence, University of Alberta.

http://www.cche.net/

Evidence Network – an initiative of the ESRC UK Centre for Evidence-Based Policy and Practice

http://www.kcl.ac.uk/schools/sspp/interdisciplinary/evidence

JAMA Evidence

http://www.jamaevidence.com/

McMaster University Health Information Research Unit

http://hiru.mcmaster.ca/hiru/

NIHR Health Technology Assessment Programme

http://www.hta.ac.uk/

NHS Centre for Reviews and Dissemination, University of York.

http://www.york.ac.uk/inst/crd/

National Institute for Clinical Excellence (NICE)

http://www.nice.org.uk/

Intute: Health and Life Sciences – closed in July 2011, although the website will remain accessible for a further 3 years without being updated.

http://www.intute.ac.uk/medicine/

National Institute for Health Research

http://www.nihr.ac.uk/Pages/default.aspx

NHS Evidence – web-based portal managed by NICE and linked with the National Electronic Library for Health (NeLH). Includes access to My Evidence.

https://www.evidence.nhs.uk/

UK Cochrane Centre

Summertown Pavilion, Middle Way, Oxford OX2 7LG

Tel. 01865 516300

general@cochrane.ac.uk

home page – http://ukcc.cochrane.org/

The Cochrane Collaboration

http://www.cochrane.org/

Internet access to the Cochrane library and databases:

http://www.thecochranelibrary.com/view/0/index.html

Cochrane Central Register of Controlled Trials

http://onlinelibrary.wiley.com/o/cochrane/cochrane_clcentral_articles_fs.html

EPOC – Effective Practice and Organisation of Care

http://epoc.cochrane.org/about-epoc

University of Alberta Evidence Based Practice Centre

http://www.ualberta.ca/ARCHE/epc.htm

Sources Of Reviews And Abstracts Relating To Evidence-Based Practice

ACP Journal Club

http://acpjc.acponline.org/

Bandolier (now an electronic version, independently written by Oxford scientists)

http://www.medicine.ox.ac.uk/bandolier/

BMJ Clinical Evidence – a compendium of evidence for effective health care

http://clinicalevidence.bmj.com/ceweb/index.jsp

Centre for Evidence Based Purchasing

http://nhscep.useconnect.co.uk/Default.aspx

Cochrane Systematic Reviews (abstracts only)

http://www.cochrane.org/cochrane-reviews

Effective Health Care Bulletins

http://www.york.ac.uk/inst/crd/ehcb_em.htm

Evidence Based Nursing Practice

http://www.ebnp.co.uk/index.htm

Evidence Based On-call

http://www.eboncall.org/

PROSPERO – worldwide prospective register of systematic reviews

http://www.crd.york.ac.uk/prospero/

Journals Available On The Internet

eBMJ (electronic version of the British Medical Journal)

http://www.bmj.com

Journal of the American Medical Association (JAMA)

http://jama.ama-assn.org/

Canadian Medical Association Journal (CMAJ)

http://www.cmaj.ca/

Evidence-based Medicine

http://ebm.bmj.com/

Evidence-based Mental Health

http://ebmh.bmj.com/

Evidence-based Nursing

http://ebn.bmj.com/

Databases, Bibliographies And Catalogues

PUBMED (the free version of MEDLINE)

http://www.ncbi.nlm.nih.gov/pubmed/

BestBets – best evidence topics

http://www.bestbets.org

Trip database – turning research into practice

http://www.tripdatabase.com/index.html

BMJ Best Health

http://besthealth.bmj.com/x/index.html

DUETs – the Database of Uncertainties about the Effects of Treatments, which publishes uncertainties that cannot currently be answered by referring to reliable, up-to-date systematic reviews of existing research evidence

http://www.library.nhs.uk/DUETs/Default.aspx

Google Scholar

http://scholar.google.co.uk

National Research Register Archive – a searchable copy of the archives held by the National Research Register (NRR) Projects Database, up to September 2007

http://www.nihr.ac.uk/Pages/NRRArchive.aspx

National Institute for Health Research (NIHR) Clinical Research Network Portfolio – a database of the clinical research studies that the NIHR supports, undertaken in the NHS

http://public.ukcrn.org.uk/search/

Sources Of Guidelines And Integrated Care Pathways

AHRQ (Agency for Healthcare Research and Quality) – provides practical healthcare information, research findings and data to help consumers

http://www.ahcpr.gov/

Evidence Based Practice Centres – developed in conjunction with the AHRQ

http://www.ahcpr.gov/clinic/epc/

Cedars-Sinai Medical Center, Health Services Research

Home page: http://www.csmc.edu/

National Guideline Clearinghouse

http://www.guideline.gov/

NICE Pathways

http://pathways.nice.org.uk/

Scottish Intercollegiate Guidelines Network (SIGN)

http://www.sign.ac.uk

Scottish Pathways Association

http://www.icpus.org.uk/

Towards Optimised Practice (TOP) Clinical Guidelines

http://www.topalbertadoctors.org/cpgs.php

Useful Texts

Cochrane Collaboration Handbook

http://www.cochrane-handbook.org/

