CC BY-NC-ND 4.0 · Indian Journal of Neurosurgery 2019; 08(01): 001-005
DOI: 10.1055/s-0039-1688729
Editorial
Neurological Surgeons' Society of India

Evidence-Based Medicine in Neurosurgery—Other Side of the Coin?

Abrar Ahad Wani¹, Arif Hussain Sarmast¹
¹ Department of Neurosurgery, Sher-i-Kashmir Institute of Medical Sciences, Soura, Srinagar, J&K, India

Address for correspondence

Abrar Ahad Wani, MCh
Department of Neurosurgery, Sher-i-Kashmir Institute of Medical Sciences
Soura, Srinagar, 190011, J&K
India   

Publication History

Publication Date:
30 April 2019 (online)

 

The concept of evidence-based medicine (EBM), although long in vogue, has been gradually finding its place in neurosurgical practice over the past four decades. Its proponents present it as a new paradigm of health care on which every treatment modality must be based; at the other extreme, there is skepticism that EBM can ever take a significant role in the management of neurosurgical ailments. Positions in this debate range from dismissing EBM as a deception to embracing it as the final truth. The concept will nevertheless continue to evolve in the years to come, but it needs to be understood in all its aspects, pros and cons alike, and improved in many respects. In this article, we focus on the particular problems of applying this concept universally in neurosurgery.

Evidence-based medicine is commonly defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.”[1] The term is used loosely and can refer to anything from conducting statistical meta-analyses of accumulated research and promoting randomized clinical trials (RCTs) to supporting uniform reporting styles for research and cultivating a personal orientation toward critical self-evaluation. EBM was initially defined in opposition to clinical experience, but later definitions have emphasized its complementary character and have aimed to improve clinical experience with better evidence. In the late 1970s, a group of researchers at Canada's McMaster University authored a series of manuscripts on how to critically appraise scientific information, and the term “evidence-based medicine” made its first appearance in 1990 at the same university. The term subsequently appeared in print in the American College of Physicians (ACP) Journal Club in 1991.[2] In contrast to EBM, comparative effectiveness research (CER) is defined as “the generation and synthesis of evidence that compares the benefits and harms of alternate methods to prevent, diagnose, treat and monitor a clinical condition or to improve the delivery of care.”[3] All definitions of EBM involve three overlapping processes: systematic review of the available scientific studies, integration of such data with clinical experience, and patient participation in decision making.[4] [5] One common implementation of EBM involves the use of clinical practice guidelines during medical decision making to encourage effective care. The Institute of Medicine (IOM) defines clinical guidelines as “systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances.”[3]

It is difficult to exaggerate the resonance of EBM in contemporary health care. Many observers have elevated EBM to a new international health care “paradigm.”[6] So many parties have entered the field, and so many clinical practice guidelines are being framed by individuals, professional organizations, insurers, and others, that the benefits of uniformity may disappear amid overlapping, conflicting, and poorly constructed guidelines. With more than 1,000 guidelines created annually, calls for “guidelines for clinical guidelines” have been issued.[7] [8] This is perhaps what Arthur Conan Doyle meant when he wrote, “There is nothing more deceptive than an obvious fact.” The work on EBM was motivated, in part, by the accusations made by Archibald Cochrane in his book Effectiveness and Efficiency, which Hill describes as “a biting scientific critique of medical practice.” In it, Cochrane argued that many of the treatments, interventions, tests, and procedures used in medicine had no evidence to demonstrate their effectiveness and might, in fact, be doing more harm than good.[9] Cochrane promoted the use of RCTs as the best means of demonstrating the efficacy of a therapy or an intervention, as well as the concept of “efficient health care,” that is, using the available health care resources to “maximize the delivery of effective interventions.”[10] A large group of researchers based in Canada and the United States formed the first international EBM working group and published the “Users' Guides to the Medical Literature” in JAMA between 1993 and 2000 as a 25-part series that still resonates today. These papers were later turned into a textbook on EBM.[2] [10] Alongside these developments in EBM, a need was felt to apply the concept to neurological surgery.[11]

Evidence-based medicine is informed by hierarchical evidence, and this hierarchy informs clinical decision making. The descending order of evidentiary weight is (1) systematic reviews of multiple high-quality randomized trials, (2) a single high-quality randomized trial, (3) systematic reviews of observational studies addressing patient-important outcomes, (4) single observational studies addressing patient-important outcomes, (5) physiologic studies, and (6) unsystematic clinical observations. It is important to recognize that if treatment effects are sufficiently large and consistent, observational studies may provide more compelling evidence than RCTs, particularly in situations where RCTs are not feasible.[10]
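For illustration only, this hierarchy amounts to a simple ordered ranking, for example when tagging citations in a departmental evidence review. The sketch below is a hypothetical encoding, not part of any EBM standard; the level names paraphrase the list above and the example citations are invented.

```python
from enum import IntEnum

class EvidenceLevel(IntEnum):
    """Descending evidentiary weight, as listed above (1 = strongest)."""
    SYSTEMATIC_REVIEW_OF_RCTS = 1
    SINGLE_HIGH_QUALITY_RCT = 2
    SYSTEMATIC_REVIEW_OF_OBSERVATIONAL = 3
    SINGLE_OBSERVATIONAL_STUDY = 4
    PHYSIOLOGIC_STUDY = 5
    UNSYSTEMATIC_CLINICAL_OBSERVATION = 6

# Hypothetical tagged citations, sorted so the strongest evidence comes first.
citations = [
    ("case series on aneurysm clipping", EvidenceLevel.UNSYSTEMATIC_CLINICAL_OBSERVATION),
    ("single RCT of decompressive craniectomy", EvidenceLevel.SINGLE_HIGH_QUALITY_RCT),
    ("cohort study of glioma outcomes", EvidenceLevel.SINGLE_OBSERVATIONAL_STUDY),
]
for title, level in sorted(citations, key=lambda c: c[1]):
    print(f"level {level.value}: {title}")
```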

Rise of Conflict over Evidence-Based Medicine

Since its inception, over a hundred books and thousands of articles have been published applying, evaluating, debating, criticizing, and supporting EBM. The polarization over EBM is the most recent manifestation of a classic debate over the “soul” of medicine: is medicine a science or an art?[12] Supporters claim that the promised benefits of EBM are self-evident: it ties clinical practices to scientific standards of evidence, thereby providing a means of measuring the efficacy of those practices. Instead of relying solely on accumulated personal experience to determine which clinical techniques are most effective, individual clinicians using EBM can draw upon the objective experience of many researchers working with accepted scientific standards of evidence and relate this evidence to an assessment of the patient's circumstances and the practitioner's clinical experience. Improved efficacy should also promote greater efficiency by allowing physicians and hospitals to channel scarce resources away from ineffective clinical practices and toward practices whose effectiveness has been conclusively shown.

Evidence-based medicine should also promote greater uniformity by limiting idiosyncrasies in particular clinical procedures or in the rates at which procedures are performed. In addition, EBM promises to create better-informed patients and clinicians by offering collectively agreed-upon and publicly available information about treatment options. Guidelines also provide a strategic advantage by empowering clinicians to counter managerial decisions to alter their practices in ways that may not be in patients’ best interest. More likely, however, managed care companies may regard clinical practice guidelines as tools to evaluate care and implement cost-cutting measures. Finally, EBM should provide a scientific basis for the construction of public policy. Instead of relying on the opinions of interested parties, policymakers and insurers will be able to supplement these perspectives with objective evidence.[13] Perhaps Leonardo da Vinci anticipated something like this when he observed, “The greatest deception men suffer is from their own opinions.”



Criticisms and Limitations of Evidence-Based Medicine vis-à-vis Neurosurgery

Critics contend that evidence-based medicine is grounded in empiricism, misunderstands or misrepresents the philosophy of science, and is a poor philosophical basis for medicine.[14] [15] [16] Originally, its supporters declared it “a new paradigm, in which evidence from health care research is the best basis for decisions for individual patients and health systems.”[17] EBM elevates experimental evidence to primary importance over other forms of evidence, and this is intended to serve as the new basis for clinical thinking. In traditional medical teaching, by contrast, an understanding of the basic pathophysiologic mechanisms of disease coupled with clinical experience is of primary importance. The primary criticism is rooted in the idea that EBM is an approach founded on evidence provided by experimental studies designed to minimize bias, rather than on physiologic theory.[18] The belief that scientific observations can be made independent of the biases of the observer is one aspect of the philosophy of science known as empiricism; the empirical view holds that medical observations can be made independent of pathophysiological theory. In contrast, one of the basic principles of qualitative research assumes that all observers are biased, and it therefore requires that the viewpoint and biases of the observer be made explicit.[19] For these reasons, some critics have called EBM both unscientific and antiscientific.[15] [20] Criticisms of EBM center on three main points: (1) the reliability of RCTs and meta-analyses compared with other good research methods, (2) the limited range of questions that EBM can answer, and (3) the failure to integrate other, nonstatistical forms of medical information, such as professional experience and patient-specific factors. Studies have failed to show that RCTs and meta-analyses are consistently better than good-quality research using other methods for determining clinical effectiveness. This has been demonstrated in several ways; for example, similarly designed RCTs researching the same question frequently disagree with each other.[1] Furthermore, good-quality cohort studies more often than not agree with the findings of RCTs, demonstrating that high confidence can be placed in study designs besides the RCT.[1] [21] Upshur et al describe a taxonomy that includes four types of evidence: qualitative-personal, qualitative-general, quantitative-personal, and quantitative-general.[22] Of these four categories, EBM specifically deals only with the quantitative-general form of evidence[15]; thus, the criticism arises that “evidence,” as currently defined by EBM, can only answer the questions for which it is suited.[23]

As a general principle, a powerful RCT is our best standard for evaluating the inherent bias in, and the weight to be applied to, a given piece of evidence,[24] but it does not follow that EBM requires that only RCTs justify clinical practice. Certain clinical problems cannot easily be investigated in RCTs, such as those that require extended time intervals for diagnosis and treatment (e.g., the best treatment for low-grade glioma) and those that would result in the unethical treatment of patients.[25] Surgical RCTs are inherently difficult to perform because of ethical and funding considerations, difficulties in using sham controls, problems with patient accrual (particularly where the sample size is small), and preferences and variability in surgical proficiency and techniques.[26] [27] Some surgeons may wish to avoid RCTs so as not to risk having their innovative procedures deemed ineffective.[28] Such an attitude clearly represents a conflict of interest between the patient and the surgeon.

Evidence-based medicine is not itself evidence based; that is, it does not meet its own empirical tests for efficacy.[17] [29] [30] There is no convincing evidence that physicians practicing EBM provide better health care than those who do not.[17] EBM advocates might argue that because EBM is not a test, a therapy, or an intervention, it does not require the same level of evidence for support. This argument is misleading in that it ignores the tremendous resources required to support and practice EBM.[15] According to the principles of EBM, compelling evidence should be provided before such resources are expended. Instead, EBM demands and consumes health care resources with no evidence to support the expenditure.[15]

The usefulness of applying EBM to individual patients is limited.[23] [31] [32] Outcome assessments are probabilistic; they do not guarantee what will be efficacious in an individual case. For each patient, a neurosurgeon should therefore exercise his/her clinical judgment, fully inform the patient and family of all the treatment options, and honor the particular patient's values. Neurosurgeons might thus feel frustrated at the thought of being forced to apply generalized research findings to individual patients when their clinical expertise tells them they should be doing otherwise. Many neurosurgical guidelines are treatment options rather than standards. The clinician should scrutinize guidelines and be confident with his/her procedural design and expected outcomes before applying them to any patient. The clinician should compare the individual patient with the class of patients considered in the studies being cited as supporting evidence for a clinical decision or a particular guideline. The clinician should also view the evidence in the context of a locally appropriate holistic model of health care, which takes into account cultural, religious, geographical, social/resource/economy-related, and medicolegal factors in determining the applicability of EBM in a particular instance.[6]

Owing to small patient populations, uncommon diseases are hard to study with EBM methods.[31] EBM also threatens the autonomy of the physician-patient relationship,[9] [15] [31] [33] which may limit the patient's right to choose what is best in his/her individual circumstances. Sackett addresses the “fear that Evidence-Based Medicine will be hijacked by purchasers and managers to cut the costs of health care” by simply stating that this fear reflects a fundamental misunderstanding of the financial consequences of EBM, and that physicians practicing EBM “may raise rather than lower the cost” of their patient care.[1] The strongest EBM opponents think that EBM is particularly susceptible to hijacking by organizational cost containers.[33] Charlton and Miles state that “EBM involves a takeover of the clinical consultation by an alliance of managers and their statistical technocrats … easily regulated by politicians, bureaucrats and their statistical technicians.”[15] Logically, EBM could be used to limit the application of health care resources to situations in which there is “high-quality evidence” of efficacy. As has already been shown, there are many patients and many situations for which such evidence will not be available in the foreseeable future. The lack of evidence may then be used as a cost-cutting tool to deny patients treatment for conditions where nothing has been “proven” effective, even though accepting an unproven treatment may be what the patient decides is the most attractive option.

The attitudes of clinicians toward EBM require more research,[17] [34] [35] as do methods to overcome their skepticism.[34] [36] [37] [38] [39] In considering the relevance of a particular source of evidence, it is imperative that neurosurgery trainees be given guidance on adopting a sound methodology that helps avoid errors caused by deliberate and nondeliberate distortions of fact. These include unscrupulous financial stakeholders who attempt to seduce clinicians into believing that an unproven and ineffective treatment is efficacious; politically biased or financially motivated publishing companies; and simple, unintentional clinician bias that has found its way into the results of a study through poor design.[40] In one RCT comparing cervical arthroplasty with fusion, the authors claimed an earlier return to work for the arthroplasty group, and the difference reached statistical significance. When asked, the presenter explained the difference: patients who underwent fusion were prescribed a collar, and with a collar they did not, or were not allowed to, work.[41] This may be summarized by a famous quote of M. K. Gandhi: “An error does not become a truth by reason of multiplied propagation, nor does truth become an error because nobody sees it.” Recently, a growing awareness of the poor quality of reporting in the medical research literature has emerged.[42] [43] [44] Selective reporting of data, incomplete listing of interventions, problematic conclusions, and unclear methodologies have plagued many papers. In neurosurgery, these deficiencies are particularly profound. Despite the well-known preeminence of RCTs,[22] they are scarce in the neurosurgical literature even when compared with general surgery or other surgical subspecialties.[35] [36] [37] Moreover, under close examination, neurosurgical RCTs as a group show many flaws. In a survey of 108 RCTs of neurosurgical procedures over a 36-year span, underpowered trials and inadequate design reporting were found to be widespread.[38]
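To give a sense of what “underpowered” means in practice, the sketch below applies the standard normal-approximation sample-size formula for a two-arm comparison; the effect sizes are arbitrary conventional values chosen for illustration, not figures taken from the surveyed trials.

```python
# Rough sketch of sample-size requirements for a two-arm trial using the
# usual normal-approximation formula. Effect sizes are illustrative only.
from math import ceil
from scipy.stats import norm

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients per arm needed to detect a standardized effect size."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

for d in (0.8, 0.5, 0.2):  # conventionally "large", "medium", and "small" effects
    print(f"standardized effect {d}: ~{n_per_arm(d)} patients per arm")
# Small effects demand enrolments far beyond what many single-centre
# neurosurgical trials can accrue, which is one root of underpowering.
```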

It is estimated that less than 1% of papers published in leading neurosurgical journals are RCTs. Although there are many barriers to performing high-quality RCTs in surgery, one of the most common, and most difficult to overcome, is lack of equipoise. The term means “genuine uncertainty within the expert medical community” about the optimal approach for a given medical condition. RCTs are ethical and feasible only when there is clinical equipoise between the treatment arms of a trial. Lack of clinical equipoise affected the National Institutes of Health–sponsored SPORT (Spine Patient Outcomes Research Trial) study, which contained an RCT comparing surgery with conservative management for symptomatic lumbar disc herniation. The high crossover rate (30% of patients crossed from the nonoperative cohort to the operative cohort within 3 months) suggested that clinicians, patients, or both felt that surgery offered a higher chance of clinical benefit after 6 weeks of failed conservative management. Conversely, almost as many patients randomized to receive surgery did not undergo an operation, indicating that patients had strong opinions favoring conservative treatment when symptoms were mild or improving. In retrospect, the lack of clinical equipoise limited the ability of this study to detect better outcomes from surgery.
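The analytic consequence of such crossover can be illustrated with a toy simulation. The sketch below is hypothetical: the outcome scale, true benefit, and noise are invented for illustration and are not SPORT data; it simply shows how a 30% crossover from the nonoperative to the operative arm attenuates the apparent effect under an intention-to-treat analysis.

```python
# Hypothetical sketch of intention-to-treat (ITT) dilution under crossover.
# All numbers are illustrative; they are not SPORT results.
import random

random.seed(0)

TRUE_SURGICAL_BENEFIT = 10.0   # assumed improvement on some outcome score
NOISE_SD = 15.0                # assumed between-patient variability
N_PER_ARM = 250
CROSSOVER_RATE = 0.30          # nonoperative patients who actually had surgery

def outcome(received_surgery: bool) -> float:
    return random.gauss(0.0, NOISE_SD) + (TRUE_SURGICAL_BENEFIT if received_surgery else 0.0)

# ITT analysis: compare by randomized arm, regardless of treatment actually received.
surgery_arm = [outcome(received_surgery=True) for _ in range(N_PER_ARM)]
nonop_arm = [outcome(received_surgery=random.random() < CROSSOVER_RATE)
             for _ in range(N_PER_ARM)]

itt_effect = sum(surgery_arm) / N_PER_ARM - sum(nonop_arm) / N_PER_ARM
print(f"true benefit of surgery received: {TRUE_SURGICAL_BENEFIT:.1f}")
print(f"apparent ITT effect with 30% one-way crossover: {itt_effect:.1f}")
# Expected ITT estimate ~ true benefit * (1 - crossover rate); crossover in the
# surgical arm as well (as occurred in SPORT) dilutes the comparison further.
```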

Another major challenge in performing RCTs in neurosurgery is the heterogeneity of the disease entity under scrutiny: the condition is not the same in every patient. An anterior communicating artery aneurysm, for example, may differ substantially between two patients randomized to the two arms of a trial, so the comparison may not be valid.

A further challenge in studying novel neurosurgical procedures is the learning curve, both of individual practitioners and, in a broader sense, of new technologies. The rapid changes in endovascular techniques are a good example of the latter.[45] [46] [47] [48] One of the criticisms of the International Subarachnoid Aneurysm Trial (ISAT) was that the learning curve of surgeons was not taken into consideration, whereas it was considered for interventionists. Another major concern is misinterpretation in many trials. Multiple examples can be given of studies and trials in neurosurgery that arrived with great hype but, on close examination, proved misleading or at least of limited use. One such example is the National Acute Spinal Cord Injury Study (NASCIS), which examined the potential benefit of methylprednisolone (MP) administration in spinal cord injury. NASCIS II was designed as a randomized, controlled, double-blind clinical study to generate class I medical evidence on the efficacy of MP and naloxone in the treatment of acute spinal cord injury. When analyzed critically, however, the strength of the medical evidence generated is weakened by the omission of data from publication, the arbitrary assignment of an 8-hour therapeutic window, the inconsistency of the reported benefit, and the absence of functional outcome measures. The primary positive finding of a 5-point improvement in motor score associated with MP administration compared with placebo control was discovered only in a post hoc analysis of a partial dataset, constituting a retrospective analysis. Accordingly, the beneficial results of NASCIS II are downgraded to class III medical evidence. NASCIS II allows for 78 potential discrete post hoc subgroup analyses based on time of administration; by chance, 1 in 20 of these would be expected to be statistically significant at a p value of 0.05. Furthermore, the NASCIS II statistical analysis includes more than 60 t-tests comparing neurologic outcomes. There were no corrections for multiple comparisons, and no analysis of variance or multivariate statistical techniques were used. Additionally, much of the data are thought to be nonparametric, and hence the t-test is not appropriate. Some statisticians opine that it is unlikely that any statistical significance would have been observed had correct statistical methodology been used.[49] [50] [51]
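The multiple-comparisons concern can be made concrete with simple arithmetic. The sketch below is only an illustration, taking the figures of 78 subgroup analyses and more than 60 t-tests quoted above and assuming, for simplicity, independent tests; it shows how many chance-positive results would be expected and how stringent a Bonferroni-corrected threshold would need to be.

```python
# Illustrative multiple-comparisons arithmetic for the test counts cited above.
# The independence assumption is a simplification, not a claim about NASCIS II data.
alpha = 0.05

for n_tests in (60, 78):
    expected_false_positives = n_tests * alpha          # false positives expected by chance
    family_wise_error = 1 - (1 - alpha) ** n_tests      # P(at least one false positive)
    bonferroni_alpha = alpha / n_tests                  # corrected per-test threshold
    print(f"{n_tests} uncorrected tests at alpha={alpha}: "
          f"~{expected_false_positives:.1f} chance positives expected, "
          f"P(>=1 false positive) ~ {family_wise_error:.2f}, "
          f"Bonferroni threshold ~ {bonferroni_alpha:.4f}")
```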

This paper does not seek to undermine the importance of EBM, which has a pivotal role in developing health care guidelines. EBM is a good concept, but it is not so sacrosanct that it cannot be debated or improved. We have to understand that the primary aim of a physician is to help the patient achieve health through the best possible management, and that this can be achieved only when we are aware of the inherent weaknesses in trials and their interpretation. At the research level, we need better-designed RCTs and clinical trial registries to improve the quality of research data. Neurosurgical training should include a sound knowledge of the principles of EBM, and organizations must develop policies and guidelines after analyzing the available evidence and its applicability to the local social, cultural, and economic surroundings. This is especially important in developing countries, as most guidelines come from developed regions and may, in many cases, be impossible to adhere to in our circumstances.

Financial Support and Sponsorship

None.



Conflicts of Interest

There are no conflicts of interest.

Acknowledgment

None.

  • References

  • 1 Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996; 312 (7023) 71-72
  • 2 Sackett DL, Straus SE, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd ed. New York, NY: Churchill Livingstone; 2000
  • 3 Manchikanti L, Falco FJ, Singh V. et al. An update of comprehensive evidence-based guidelines for interventional techniques in chronic spinal pain. Part I: introduction and general considerations. Pain Physician 2013; 16 (02) Suppl S1-S48
  • 4 Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E. Committee on Standards for Developing Trustworthy Clinical Practice Guidelines; Institute of Medicine. Clinical Practice Guidelines We Can Trust. Washington, DC: The National Academies Press; 2011
  • 5 Field MJ, Lohr KN. Committee to Advise the Public Health Service on Clinical Practice Guidelines, Institute of Medicine: Clinical practice guidelines: directions for a new program. Washington, DC: National Academies Press; 1990
  • 6 Bandopadhayay P, Goldschlager T, Rosenfeld JV. The role of evidence-based medicine in neurosurgery. J Clin Neurosci 2008; 15 (04) 373-378
  • 7 Jackson R, Feder G. Guidelines for clinical guidelines. BMJ 1998; 317 (7156) 427-428
  • 8 Rosser WW, Davis D, Gilbart E. Guideline Advisory Committee. Assessing guidelines for use in family practice. J Fam Pract 2001; 50 (11) 969-973
  • 9 Hill GB. Archie Cochrane and his legacy. An internal challenge to physicians’ autonomy?. J Clin Epidemiol 2000; 53 (12) 1189-1192
  • 10 Guyatt G, Sinclair J, Cook D, Jaeschke R, Schünemann H, Pauker S. Moving from evidence to action. In: Guyatt G, Rennie D. eds. Users’ Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. Chicago, IL: American Medical Association; 2002: 599-608
  • 11 Haines SJ. Evidence-based neurosurgery. Neurosurgery 2003; 52 (01) 36-47 discussion 47
  • 12 Berg M. Rationalizing Medical Work: Decision-Support Techniques and Medical Practices. Cambridge, MA: The MIT Press; 1997
  • 13 Timmermans S, Mauck A. The promises and pitfalls of evidence-based medicine. Health Aff (Millwood) 2005; 24 (01) 18-28
  • 14 Cohen AM, Stavri PZ, Hersh WR. A categorization and analysis of the criticisms of evidence-based medicine. Int J Med Inform 2004; 73 (01) 35-43
  • 15 Charlton BG, Miles A. The rise and fall of EBM. QJM 1998; 91 (05) 371-374
  • 16 Robaina-Padrón FJ. [Controversies about instrumented surgery and pain relief in degenerative lumbar spine pain. Results of scientific evidence]. Neurocirugia (Astur) 2007; 18 (05) 406-413
  • 17 Haynes RB. What kind of evidence is it that Evidence-Based Medicine advocates want health care providers and consumers to pay attention to?. BMC Health Serv Res 2002; 2 (01) 3
  • 18 Harari E. Whose evidence?. Lessons from the philosophy of science and the epistemology of medicine. Aust N Z J Psychiatry 2001; 35 (06) 724-730
  • 19 Berg BL. Qualitative Research Methods for the Social Sciences. 4th ed.. Boston, MA: Allyn and Bacon; 2001: 6-11
  • 20 Miles A, Bentley P, Polychronis A, Grey J, Melchiorri C. Recent developments in the evidence-based healthcare debate. J Eval Clin Pract 2001; 7 (02) 85-89
  • 21 Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. Am J Ophthalmol 2000; 130 (05) 688
  • 22 Upshur RE, VanDenKerkhof EG, Goel V. Meaning and measurement: an inclusive model of evidence in health care. J Eval Clin Pract 2001; 7 (02) 91-96
  • 23 Jones GW, Sagar SM. Evidence based medicine. No guidance is provided for situations for which evidence is lacking. BMJ 1995; 311 (6999) 258 author reply 259
  • 24 Last J, Spasoff R, Harris S. A Dictionary of Epidemiology. 4th ed.. New York, NY: Oxford University Press; 1995
  • 25 Haines SJ, Walters BC. Evidence Based Neurosurgery. New York, NY: Thieme Medical Publishers; 2006
  • 26 Abraham NS. Will the dilemma of evidence-based surgery ever be resolved?. ANZ J Surg 2006; 76 (09) 855-860
  • 27 Byer A. The practical and ethical defects of surgical randomised prospective trials. J Med Ethics 1983; 9 (02) 90-93
  • 28 Michel LA, Johnson P. Is surgical mystique a myth and double standard the reality?. Med Humanit 2002; 28 (02) 66-70
  • 29 Singer PA. Resource allocation: beyond evidence-based medicine and cost-effectiveness analysis. ACP J Club 1997; 127 (03) A16-A18 [editorial]
  • 30 Sehon SR, Stanley DE. A philosophical analysis of the evidence-based medicine debate. BMC Health Serv Res 2003; 3 (01) 14
  • 31 Kenny NP. Does good science make good medicine?. Incorporating evidence into practice is complicated by the fact that clinical practice is as much art as science. CMAJ 1997; 157 (01) 33-36
  • 32 Naylor CD. Grey zones of clinical practice: some limits to evidence-based medicine. Lancet 1995; 345 (8953) 840-842
  • 33 Grahame-Smith D. Evidence based medicine: Socratic dissent. BMJ 1995; 310 (6987) 1126-1127
  • 34 Goodman K. Ethics and Evidence-Based Medicine: Fallibility and Responsibility in Clinical Science. Cambridge, United Kingdom: Cambridge University Press; 2003
  • 35 Lomas J, Anderson GM, Domnick-Pierre K, Vayda E, Enkin MW, Hannah WJ. Do practice guidelines guide practice?. The effect of a consensus statement on the practice of physicians. N Engl J Med 1989; 321 (19) 1306-1311
  • 36 Adams AS, Soumerai SB, Lomas J, Ross-Degnan D. Evidence of self-report bias in assessing adherence to guidelines. Int J Qual Health Care 1999; 11 (03) 187-192
  • 37 Freemantle N, Eccles M, Wood J. et al. A randomized trial of Evidence-Based OutReach (EBOR): rationale and design. Control Clin Trials 1999; 20 (05) 479-492
  • 38 Gifford DR, Holloway RG, Frankel MR. et al. Improving adherence to dementia guidelines through education and opinion leaders. A randomized, controlled trial. Ann Intern Med 1999; 131 (04) 237-246
  • 39 Stross JK. Guidelines have their limits. Ann Intern Med 1999; 131 (04) 304-306
  • 40 Bhandari M, Montori V, Devereaux PJ, Dosanjh S, Sprague S, Guyatt GH. Challenges to the practice of evidence-based medicine during residents’ surgical training: a qualitative study using grounded theory. Acad Med 2003; 78 (11) 1183-1190
  • 41 Fessler RG, Papadopoulos S, Anderson P, Heller J, Sasso R. Comparison of Bryan cervical disc arthroplasty with anterior cervical decompression and fusion: Clinical and radiographic results of a randomized controlled clinical trial. Presented at 24th Annual Meeting of the AANS/CNS Section on Disorders of the Spine and Peripheral Nerves. Orlando: February 27–March 1 2008
  • 42 Albin RL. Sham surgery controls: intracerebral grafting of fetal tissue for Parkinson's disease and proposed criteria for use of sham surgery controls. J Med Ethics 2002; 28 (05) 322-325
  • 43 Albin RL. Sham surgery controls are mitigated trolleys. J Med Ethics 2005; 31 (03) 149-152
  • 44 Dekkers W, Boer G. Sham neurosurgery in patients with Parkinson's disease: is it morally acceptable?. J Med Ethics 2001; 27 (03) 151-156
  • 45 Gnanalingham KK, Tysome J, Martinez-Canca J, Barazi SA. Quality of clinical studies in neurosurgical journals: signs of improvement over three decades. J Neurosurg 2005; 103 (03) 439-443
  • 46 Freedman B. Equipoise and the ethics of clinical research. N Engl J Med 1987; 317 (03) 141-145
  • 47 Weinstein JN, Lurie JD, Tosteson TD. et al. Surgical vs nonoperative treatment for lumbar disk herniation: the Spine Patient Outcomes Research Trial (SPORT) observational cohort. JAMA 2006; 296 (20) 2451-2459
  • 48 Ghogawala Z, Krishnaney AA, Steinmetz MP, Batjer HH, Benzel EC. The Evidence of Neurosurgery. New Delhi, India: Jaypee Brothers; 2013: 1-10
  • 49 Bracken MB, Shepard MJ, Hellenbrand KG. et al. Methylprednisolone and neurological function 1 year after spinal cord injury. Results of the National Acute Spinal Cord Injury Study. J Neurosurg 1985; 63 (05) 704-713
  • 50 Petitjean ME, Pointillart V, Dixmerias F. et al. [Medical treatment of spinal cord injury in the acute stage]. Ann Fr Anesth Reanim 1998; 17 (02) 114-122
  • 51 Pointillart V, Petitjean ME, Wiart L. et al. Pharmacological therapy of spinal cord injury during the acute phase. Spinal Cord 2000; 38 (02) 71-76
