
Barnaby C Reeves


DOI: 10.1161/circulationaha.107.698977
2007
Cited 1,208 times
Increased Mortality, Postoperative Morbidity, and Cost After Red Blood Cell Transfusion in Patients Having Cardiac Surgery
Background— Red blood cell transfusion can both benefit and harm. To inform decisions about transfusion, we aimed to quantify associations of transfusion with clinical outcomes and cost in patients having cardiac surgery. Methods and Results— Clinical, hematology, and blood transfusion databases were linked with the UK population register. Additional hematocrit information was obtained from intensive care unit charts. Composite infection (respiratory or wound infection or septicemia) and ischemic outcomes (myocardial infarction, stroke, renal impairment, or failure) were prespecified as coprimary end points. Secondary outcomes were resource use, cost, and survival. Associations were estimated by regression modeling with adjustment for potential confounding. All adult patients having cardiac surgery between April 1, 1996, and December 31, 2003, with key exposure and outcome data were included (98%). Adjusted odds ratios for composite infection (737 of 8516) and ischemic outcomes (832 of 8518) for transfused versus nontransfused patients were 3.38 (95% confidence interval [CI], 2.60 to 4.40) and 3.35 (95% CI, 2.68 to 4.35), respectively. Transfusion was associated with increased relative cost of admission (any transfusion, 1.42 times [95% CI, 1.37 to 1.46], varying from 1.11 for 1 U to 3.35 for >9 U). At any time after their operations, transfused patients were less likely to have been discharged from hospital (hazard ratio [HR], 0.63; 95% CI, 0.60 to 0.67) and were more likely to have died (0 to 30 days: HR, 6.69; 95% CI, 3.66 to 15.1; 31 days to 1 year: HR, 2.59; 95% CI, 1.68 to 4.17; >1 year: HR, 1.32; 95% CI, 1.08 to 1.64). Conclusions— Red blood cell transfusion in patients having cardiac surgery is strongly associated with both infection and ischemic postoperative morbidity, hospital stay, increased early and late mortality, and hospital costs.
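The adjusted odds ratios above come from regression modeling with adjustment for potential confounding. As a minimal illustration of that kind of analysis (not the study's own code; the file name and variable names are hypothetical, and transfusion is assumed coded 0/1), a logistic regression of a binary outcome on transfusion exposure plus confounders yields an adjusted odds ratio with its confidence interval:

```python
# Minimal sketch of an adjusted odds ratio via logistic regression.
# Dataset and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cardiac_surgery_cohort.csv")  # hypothetical analysis dataset

# infection (0/1) ~ transfused (0/1), adjusted for a few illustrative confounders
model = smf.logit("infection ~ transfused + age + diabetes + ejection_fraction",
                  data=df).fit()

or_point = np.exp(model.params["transfused"])
or_ci = np.exp(model.conf_int().loc["transfused"])
print(f"adjusted OR for transfusion: {or_point:.2f} "
      f"(95% CI {or_ci[0]:.2f} to {or_ci[1]:.2f})")
```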
DOI: 10.1136/jech-2011-200375
2012
Cited 736 times
Using natural experiments to evaluate population health interventions: new Medical Research Council guidance
Natural experimental studies are often recommended as a way of understanding the health impact of policies and other large scale interventions. Although they have certain advantages over planned experiments, and may be the only option when it is impossible to manipulate exposure to the intervention, natural experimental studies are more susceptible to bias. This paper introduces new guidance from the Medical Research Council to help researchers and users, funders and publishers of research evidence make the best use of natural experimental approaches to evaluating population health interventions. The guidance emphasises that natural experiments can provide convincing evidence of impact even when effects are small or take time to appear. However, a good understanding is needed of the process determining exposure to the intervention, and careful choice and combination of methods, testing of assumptions and transparent reporting is vital. More could be learnt from natural experiments in future as experience of promising but lesser used methods accumulates.
DOI: 10.1016/j.ophtha.2012.04.015
2012
Cited 710 times
Ranibizumab versus Bevacizumab to Treat Neovascular Age-related Macular Degeneration
To compare the efficacy and safety of ranibizumab and bevacizumab intravitreal injections to treat neovascular age-related macular degeneration (nAMD). Multicenter, noninferiority factorial trial with equal allocation to groups. The noninferiority limit was 3.5 letters. This trial is registered (ISRCTN92166560). People >50 years of age with untreated nAMD in the study eye who read ≥ 25 letters on the Early Treatment Diabetic Retinopathy Study chart. We randomized participants to 4 groups: ranibizumab or bevacizumab, given either every month (continuous) or as needed (discontinuous), with monthly review. The primary outcome is at 2 years; this paper reports a prespecified interim analysis at 1 year. The primary efficacy and safety outcome measures are distance visual acuity and arteriothrombotic events or heart failure. Other outcome measures are health-related quality of life, contrast sensitivity, near visual acuity, reading index, lesion morphology, serum vascular endothelial growth factor (VEGF) levels, and costs. Between March 27, 2008 and October 15, 2010, we randomized and treated 610 participants. One year after randomization, the comparison between bevacizumab and ranibizumab was inconclusive (bevacizumab minus ranibizumab -1.99 letters, 95% confidence interval [CI], -4.04 to 0.06). Discontinuous treatment was equivalent to continuous treatment (discontinuous minus continuous -0.35 letters; 95% CI, -2.40 to 1.70). Foveal total thickness did not differ by drug, but was 9% less with continuous treatment (geometric mean ratio [GMR], 0.91; 95% CI, 0.86 to 0.97; P = 0.005). Fewer participants receiving bevacizumab had an arteriothrombotic event or heart failure (odds ratio [OR], 0.23; 95% CI, 0.05 to 1.07; P = 0.03). There was no difference between drugs in the proportion experiencing a serious systemic adverse event (OR, 1.35; 95% CI, 0.80 to 2.27; P = 0.25). Serum VEGF was lower with bevacizumab (GMR, 0.47; 95% CI, 0.41 to 0.54; P<0.0001) and higher with discontinuous treatment (GMR, 1.23; 95% CI, 1.07 to 1.42; P = 0.004). Continuous and discontinuous treatment costs were £9656 and £6398 per patient per year for ranibizumab and £1654 and £1509 for bevacizumab; bevacizumab was less costly for both treatment regimens (P<0.0001). The comparison of visual acuity at 1 year between bevacizumab and ranibizumab was inconclusive. Visual acuities with continuous and discontinuous treatment were equivalent. Other outcomes are consistent with the drugs and treatment regimens having similar efficacy and safety.
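The drug comparison above hinges on the prespecified noninferiority margin of 3.5 letters. Below is a minimal sketch of that decision rule (a generic illustration, not the trial's analysis code), applied to the interim confidence intervals quoted in the abstract:

```python
# Minimal sketch of a noninferiority decision rule against a -3.5 letter margin.
# The new treatment is non-inferior only if the lower CI bound of the
# difference (new minus comparator, in letters) lies above the margin.
def noninferiority_verdict(diff_ci_low, diff_ci_high, margin=-3.5):
    if diff_ci_low > margin:
        return "non-inferior"
    if diff_ci_high < margin:
        return "inferior"
    return "inconclusive"

print(noninferiority_verdict(-4.04, 0.06))   # bevacizumab vs ranibizumab at 1 year -> inconclusive
print(noninferiority_verdict(-2.40, 1.70))   # discontinuous vs continuous -> non-inferior
```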
DOI: 10.1016/s0140-6736(02)08216-8
2002
Cited 561 times
Early and midterm outcome after off-pump and on-pump surgery in Beating Heart Against Cardioplegic Arrest Studies (BHACAS 1 and 2): a pooled analysis of two randomised controlled trials
Although no randomised controlled trial has assessed the midterm effects of coronary-artery bypass surgery on the beating heart, this technique is being used in more and more patients. We did two randomised trials to compare the short-term morbidity associated with off-pump and on-pump myocardial revascularisation. Our aim was to pool the results to assess midterm outcomes. From March, 1997, to November, 1999, we randomly allocated 200 patients to off-pump and 201 to on-pump coronary surgery. In Beating Heart Against Cardioplegic Arrest Study (BHACAS) 1, we excluded patients who had had myocardial infarction in the past month or who required grafting of the circumflex artery distal to the first obtuse marginal branch. In BHACAS 2, we included such patients. Primary outcomes were all-cause mortality and cardiac-related events at midterm follow-up (1-3 years). Analysis was by intention to treat. Analyses of combined data from both trials showed the following risk differences with off-pump compared with on-pump surgery: atrial fibrillation -25% (95% CI -33% to -16%); chest infection -12% (-19% to -5%); inotropic requirement -18% (-25% to -10%); transfusion of red blood cells -31% (-41% to -21%); and hospital stay longer than 7 days -13% (-21% to -5%). Mean follow-up was 25.0 months (SD 9.1) for BHACAS 1 and 13.7 months (5.5) for BHACAS 2. Four (2%) of 200 patients in the off-pump group died from any cause, compared with seven (3%) of 201 in the on-pump group (hazard ratio 0.57, 95% CI 0.17-1.96). 33 (17%) of 200 patients in the off-pump group died or had a cardiac-related event, compared with 42 (21%) of 201 in the on-pump group (0.78, 0.49-1.22). Off-pump coronary surgery significantly lowers in-hospital morbidity without compromising outcome in the first 1-3 years after surgery compared with conventional on-pump coronary surgery.
DOI: 10.1016/s0140-6736(09)61086-2
2009
Cited 540 times
Challenges in evaluating surgical innovation
Research on surgical interventions is associated with several methodological and practical challenges of which few, if any, apply only to surgery. However, surgical evaluation is especially demanding because many of these challenges coincide. In this report, the second of three on surgical innovation and evaluation, we discuss obstacles related to the study design of randomised controlled trials and non-randomised studies assessing surgical interventions. We also describe the issues related to the nature of surgical procedures—for example, their complexity, surgeon-related factors, and the range of outcomes. Although difficult, surgical evaluation is achievable and necessary. Solutions tailored to surgical research and a framework for generating evidence on which to base surgical practice are essential.
DOI: 10.1002/9781119536604.ch25
2019
Cited 130 times
Assessing risk of bias in a non‐randomized study
Cochrane Reviews often include non-randomized studies of interventions (NRSI). The Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) tool is recommended for assessing risk of bias in an NRSI. This chapter summarizes the biases that can affect NRSI and describes the main features of the ROBINS-I tool. There is greater potential for bias in NRSI than in randomized trials. A key concern is the possibility of confounding. NRSI may also be affected by biases that are referred to in the epidemiological literature as selection bias and information bias. Furthermore, the authors are at least as concerned about reporting biases as they are when including randomized trials. In some studies measurements of the outcome variable are made both before and after an intervention takes place. The chapter considers uncontrolled studies in which all units contributing to the analysis received the (same) intervention, and controlled versions of these studies.
DOI: 10.1016/j.jclinepi.2021.11.026
2022
Cited 53 times
GRADE guidance 24: optimizing the integration of randomized and non-randomized studies of interventions in evidence syntheses and health guidelines
Background and Objective: This is the 24th in the ongoing series of articles describing the GRADE approach for assessing the certainty of a body of evidence in systematic reviews and health technology assessments and how to move from evidence to recommendations in guidelines. Methods: Guideline developers and authors of systematic reviews and other evidence syntheses use randomized controlled studies (RCTs) and non-randomized studies of interventions (NRSI) as sources of evidence for questions about health interventions. RCTs with low risk of bias are the most trustworthy source of evidence for estimating relative effects of interventions because of protection against confounding and other biases. However, in several instances, NRSI can still provide valuable information as complementary, sequential, or replacement evidence for RCTs. Results: In this article we offer guidance on the decision regarding when to search for and include either or both types of studies in systematic reviews to inform health recommendations. Conclusion: This work aims to help methodologists in review teams, technology assessors, guideline panelists, and anyone conducting evidence syntheses using GRADE.
DOI: 10.1016/s0735-1097(03)00777-0
2003
Cited 239 times
Effect of body mass index on early outcomes in patients undergoing coronary artery bypass surgery
This study sought to quantify the effect of body mass index (BMI) on early clinical outcomes following coronary artery bypass grafting (CABG). Obesity is considered a risk factor for postoperative morbidity and mortality after cardiac surgery, although existing evidence is contradictory. A concurrent cohort study of consecutive patients undergoing CABG from April 1996 to September 2001 was carried out. Main outcomes were early death; perioperative myocardial infarction; infective, respiratory, renal, and neurological complications; transfusion; duration of ventilation, intensive care unit, and hospital stay. Multivariable analyses compared the risk of outcomes between five different BMI groups after adjusting for case-mix. Out of 4,372 patients, 3.0% were underweight (BMI <20 kg/m2), 26.7% had a normal weight (BMI ≥20 and <25 kg/m2), 49.7% were overweight (BMI ≥25 and <30 kg/m2), 17.1% obese (BMI ≥30 and <35 kg/m2) and 3.6% severely obese (BMI ≥35 kg/m2). Compared with the normal weight group, the overweight and obese groups included more women, diabetics, and hypertensives, but fewer patients with severe ischemic heart disease and poor ventricular function. Underweight patients were more likely than normal weight patients to die in hospital (odds ratio [OR] = 4.0, 95% CI 1.4 to 11.1), have a renal complication (OR = 1.9, 95% confidence interval [CI] 1.0 to 3.7), or stay in hospital longer (>7 days) (OR = 1.7, 95% CI 1.1 to 2.5). Overweight, obese, and severely obese patients were not at higher risk of adverse outcomes than normal weight patients, and were less likely than normal weight patients to require transfusion (ORs from 0.42 to 0.86). Underweight patients undergoing CABG have a higher risk of death or complications than normal weight patients. Obesity does not affect the risk of perioperative death and other adverse outcomes compared to normal weight, yet obese patients appear less likely to be selected for surgery than normal weight patients.
DOI: 10.1007/bf02236639
2000
Cited 215 times
Linear discriminant analysis of symptoms in patients with chronic constipation
The aim of this study was to devise a symptom scoring system to assist in diagnosing constipation and in discriminating among pathophysiologic subgroups. A structured symptom scoring questionnaire (11 questions) was completed by 71 chronically constipated patients and by 20 asymptomatic controls. The symptom score was correlated with a previously validated constipation score (Cleveland Clinic Score). All patients underwent colonic transit studies, standard anorectal physiology testing, and evacuation proctography. On the basis of these investigations alone, an observer blinded to the questionnaire results allocated patients to one of three pathophysiologic subgroups: slow-transit constipation, rectal evacuatory disorder, or mixed (slow-transit constipation and rectal evacuatory disorder). Linear discriminant analysis was used to assess the ability of different questionnaire symptoms to discriminate among these subgroups. Total symptom score was strongly correlated with the Cleveland Clinic Score (r = 0.9). The median total score in constipated patients was 20 (range, 11-35) compared with a median of 2 in controls (range, 0-6). Discriminant analysis using cross validation estimated that pathophysiology could be predicted correctly for 55 percent (95 percent confidence interval = 43-67 percent) of patients using just five symptoms. The discriminant function rarely misclassified patients with rectal evacuatory disorder as slow-transit constipation and vice versa, but could not effectively discriminate between patients with single and mixed pathologies. This new scoring system is a valid technique to assist in the diagnosis of constipation, and this is the first study using appropriate statistical methodology to demonstrate a discriminatory ability of multiple symptoms in constipation. At present, symptom analysis does not adequately differentiate major pathophysiologic subgroups for use in clinical practice.
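As a rough illustration of the approach described above (discriminant analysis with cross-validated classification accuracy), the sketch below uses scikit-learn with placeholder data; it is not the paper's analysis, and the symptom variables and group coding are hypothetical:

```python
# Minimal sketch: linear discriminant analysis of symptom scores with
# leave-one-out cross-validation. Data are simulated placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(71, 5))      # 5 symptom scores per patient (placeholder data)
y = rng.integers(0, 3, size=71)   # 0 = slow transit, 1 = evacuatory disorder, 2 = mixed

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"cross-validated proportion correctly classified: {acc.mean():.2f}")
```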
DOI: 10.1136/bmj.38232.646227.de
2004
Cited 200 times
Surgical wound infection as a performance indicator: agreement of common definitions of wound infection in 4773 patients
To assess the level of agreement between common definitions of wound infection that might be used as performance indicators. Prospective observational study. London teaching hospital group receiving emergency cases as well as tertiary referrals. 4773 surgical patients staying in hospital at least two nights. Numbers of wound infections based on purulent discharge alone, on the Centers for Disease Control (CDC) definition of wound infection, on the nosocomial infection national surveillance scheme (NINSS) version of the CDC definition, and on the ASEPSIS scoring method. 5804 surgical wounds were assessed during 5028 separate hospital admissions. The mean percentage of wounds classified as infected differed substantially with different definitions: 19.2% with the CDC definition (95% confidence interval 18.1% to 20.4%), 14.6% (13.6% to 15.6%) with the NINSS version, 12.3% (11.4% to 13.2%) with pus alone, and 6.8% (6.1% to 7.5%) with an ASEPSIS score > 20. The agreement between definitions with respect to individual wounds was poor. Wounds with pus were automatically defined as infected with the CDC, NINSS, and pus alone definitions, but only 39% (283/714) of these had ASEPSIS scores > 20. Small changes made to the CDC definition or even in its interpretation, as with the NINSS version, caused major variation in estimated percentage of wound infection. Substantial numbers of wounds were differently classified across the grades of infection. A single definition used consistently can show changes in percentage wound infection over time at a single centre, but differences in interpretation prevent comparison between different centres.
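One common way to quantify the wound-by-wound agreement discussed above is a chance-corrected statistic such as Cohen's kappa. Below is a minimal sketch with illustrative (not study) data, assuming each definition yields a binary infected/not-infected call per wound:

```python
# Minimal sketch: chance-corrected agreement between two binary wound
# classifications (eg, CDC definition vs ASEPSIS score > 20). Toy data only.
from sklearn.metrics import cohen_kappa_score

cdc     = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]   # infected / not infected by CDC definition
asepsis = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0]   # ASEPSIS score > 20

print(f"kappa = {cohen_kappa_score(cdc, asepsis):.2f}")
```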
DOI: 10.1111/j.1749-4486.2006.01275.x
2006
Cited 186 times
The national comparative audit of surgery for nasal polyposis and chronic rhinosinusitis
This study summarises the results of a National Audit of sino-nasal surgery carried out in England and Wales. It describes patient and operative characteristics as well as patient outcomes up to 36 months after surgery. Prospective cohort study. NHS hospitals in England and Wales. Consecutive patients undergoing surgery for nasal polyposis and/or chronic rhinosinusitis. The total score derived from a 22-item version of the Sino-Nasal Outcome Test (SNOT-22); lower scores represent better health-related quality of life. A total of 3128 consecutive patients at 87 NHS hospitals were enrolled. There was a large improvement in SNOT-22 scores from the pre-operative period (mean = 42.0) to 3 months after surgery (mean = 25.5). The scores for patients undergoing nasal polypectomy improved from 41.0 before surgery to 23.1 at 3 months after surgery, while the scores for patients undergoing surgery for chronic rhinosinusitis alone improved from 44.2 to 31.2. The SNOT-22 scores reported at 12 and 36 months after surgery were similar to those reported at 3 months. Excessive bleeding occurred in 5% of patients during the operation and in 1% of patients after the operation. Intra-orbital complications were reported in 0.2%. Of those patients undergoing primary surgery for bilateral grade I or II polyposis, 18% had not received a pre-operative course of steroid treatment. At the 36-month follow-up, 11.4% of patients had undergone revision surgery. The audit confirms that sino-nasal surgery is generally safe and effective. There is some evidence that patient selection for surgery could be improved.
DOI: 10.1136/bmj.316.7137.1052
1998
Cited 185 times
Point of care testing: randomised controlled trial of clinical outcome
Objectives: To describe the proportion of patients attending an accident and emergency department for whom blood analysis at the point of care brought about a change in management; to measure the extent to which point of care testing resulted in differences in clinical outcome for these patients when compared with patients whose samples were tested by the hospital laboratory. Design: Open, single centre, randomised controlled trial. Blood samples were randomly allocated to point of care testing or testing by the hospital's central laboratory. Setting: The accident and emergency department of the Bristol Royal Infirmary, a large teaching hospital which cares for an inner city population. Subjects: Representative sample of patients who attended the department between April 1996 and April 1997 and who required blood tests. Data collection was structured in 8 hour blocks so that all hours of the day and all days of the week were equally represented. Main outcome measures: The proportion of patients for whom point of care testing brought about a change in treatment in which timing was considered to be critical to clinical outcome. Mortality, the length of stay in hospital, admission rate, the amount of time spent waiting for results of blood tests, the amount of time taken to decide on management plans, and the amount of time patients spent in the department were compared between patients whose samples were tested at the point of care and those whose samples were sent to the laboratory. Results: Samples were obtained from 1728 patients. Changes in management in which timing was considered to be critical occurred in 59 out of 859 (6.9%, 95% confidence interval 5.3% to 8.8%) patients in the point of care arm of the trial. Decisions were made 74 minutes earlier (68 min to 80 min, P<0.0001) when point of care testing was used for haematological tests as compared to central laboratory testing, 86 minutes earlier (80 min to 92 min, P<0.0001) for biochemical tests, and 21 minutes earlier (-3 min to 44 min, P=0.09) for analyses of arterial blood gases. There were no differences between the groups in the amount of time spent in the department, length of stay in hospital, admission rates, or mortality. Conclusion: Point of care testing reduced the time taken to make decisions on patient management that were dependent on the results of blood tests. It also brought about faster changes in treatment for which timing was considered to be critical in about 7% of patients. These changes did not affect clinical outcome or the amount of time patients spent in the department.
Key messages: Point of care testing reduced the amount of time doctors spent waiting for results of blood tests when compared to the time spent waiting for results from the hospital laboratory in an accident and emergency department. The time taken to decide on a management plan was also reduced as a result of the shorter time spent waiting for results of point of care tests. About 7% of patients who needed urgent blood testing had changes in treatment in which timing was considered to be critical when point of care testing was used. Patients did not spend less time in the accident and emergency department even when test results were available more quickly and patient management decisions were made more quickly; this suggests that the availability of test results is not the factor which slows down the arrangement of further care. Improvements in process, such as a reduction in the time doctors wait for test results and the ability to make clinical decisions more quickly, do not seem to improve clinical outcome in this sample of patients.
DOI: 10.1136/bmj.331.7512.331
2005
Cited 185 times
Challenges to implementing the national programme for information technology (NPfIT): a qualitative study
To describe the context for implementing the national programme for information technology (NPfIT) in England, actual and perceived barriers, and opportunities to facilitate implementation. Case studies and in depth interviews, with themes identified using a framework developed from grounded theory. Four acute NHS trusts in England. Senior trust managers and clinicians, including chief executives, directors of information technology, medical directors, and directors of nursing. The trusts varied in their circumstances, which may affect their ability to implement the NPfIT. The process of implementation has been suboptimal, leading to reports of low morale by the NHS staff responsible for implementation. The overall timetable is unrealistic, and trusts are uncertain about their implementation schedules. Short term benefits alone are unlikely to persuade NHS staff to adopt the national programme enthusiastically, and some may experience a loss of electronic functionality in the short term. The sociocultural challenges to implementing the NPfIT are as daunting as the technical and logistical ones. Senior NHS staff feel these have been neglected. We recommend that national programme managers prioritise strategies to improve communication with, and to gain the cooperation of, front line staff.
DOI: 10.1016/j.jtcvs.2008.09.046
2009
Cited 175 times
Effects of on- and off-pump coronary artery surgery on graft patency, survival, and health-related quality of life: Long-term follow-up of 2 randomized controlled trials
Objective: Off-pump coronary artery bypass grafting reduces postoperative morbidity and uses fewer resources than conventional surgical intervention with cardiopulmonary bypass. However, only 15% to 20% of coronary artery bypass grafting operations use off-pump coronary artery bypass. One reason for not using off-pump coronary artery bypass might be the surgeon's concern about the long-term patency of grafts performed with this technique. Therefore our objective was to compare long-term outcomes in patients randomized to off-pump coronary artery bypass or coronary artery bypass grafting with cardiopulmonary bypass. Methods: Participants in 2 randomized trials comparing off-pump coronary artery bypass and coronary artery bypass grafting with cardiopulmonary bypass were followed up for 6 to 8 years after surgical intervention to assess graft patency, major adverse cardiac-related events, and health-related quality of life. Patency was assessed by using multidetector computed tomographic coronary angiographic analysis with a 16-slice scanner. Two blinded observers classified proximal, body, and distal segments of each graft as occluded or not. Major adverse cardiac-related events and health-related quality of life were obtained from questionnaires given to participants and family practitioners. Results: Patency was studied in 199 and health-related quality of life was studied in 299 of 349 survivors. There was no evidence of attrition bias. The likelihood of graft occlusion was no different between off-pump coronary artery bypass (10.6%) and coronary artery bypass grafting with cardiopulmonary bypass (11.0%) groups (odds ratio, 1.00; 95% confidence interval, 0.55-1.81; P > .99). Graft occlusion was more likely at the distal than the proximal anastomosis (odds ratio, 1.11; 95% confidence interval, 1.02-1.20). There were also no differences between the off-pump coronary artery bypass and coronary artery bypass grafting with cardiopulmonary bypass groups in the hazard of death (hazard ratio, 1.24; 95% confidence interval, 0.72-2.15) or major adverse cardiac-related events or death (hazard ratio, 0.84; 95% confidence interval, 0.58-1.24), or mean health-related quality of life across a range of domains and instruments. Conclusions: Long-term health outcomes with off-pump coronary artery bypass are similar to those with coronary artery bypass grafting with cardiopulmonary bypass when both operations are performed by experienced surgeons.
DOI: 10.1016/j.jtcvs.2004.03.011
2004
Cited 168 times
Control chart methods for monitoring cardiac surgical performance and their interpretation
For more than a decade, there has been increasing interest in monitoring the quality of cardiac surgical performance, as demonstrated by public dissemination of surgeon-specific mortality for coronary artery bypass grafting (CABG) in The New York Times [1], introduction of clinical governance strategies into the United Kingdom National Health Service [2], mounting pressure for open scrutiny of results after publication of the Bristol Royal Infirmary Inquiry Panel report [3,4], and numerous applications of quality control methods in medicine, both to monitor individuals' results [5-15] and to compare the performance of individuals or institutions [16-18]. Quality is seen as important not only because of its potential to detect unacceptable surgical results, but also because of the need to ensure quality when training the next generation of surgeons in a high-risk specialty. All processes, including all aspects of medical care, are assumed to be subject to intrinsic random (common-cause) variation. The purpose of quality control charts is to distinguish between random variation and special-cause variation, which arises from factors extrinsic to the process. Reducing random variation for a process that is in control requires changing the process itself. Reducing special-cause variation requires identifying factors that cause the process to go out of control and taking appropriate corrective action. A quality control chart can take one of several forms, depending on the type of data (continuous, binary, or count data; eg, blood loss or length of hospital stay [continuous data], mortality [binary data], or complications [count data]), the quantity of interest (eg, average performance or variability in performance), and the primary objective of the monitoring procedure. Shewhart control charts, for example, were designed for monitoring batches of results [19]. In the surgical context, a batch might be a series of operations performed over a period of time. Although these charts have been applied in cardiac surgery [5,16], their value for ongoing monitoring of individual results is limited, particularly for low-volume procedures. Another type of control chart is the cumulative sum (CUSUM). It can be updated after each procedure, is applicable to outcomes for individual surgeons, and provides a method of real-time monitoring of performance. CUSUM charts are based on sequential monitoring of cumulative performance over time and are the focus of this article.
Initially developed by Page [20] in an industrial context, they have been shown to be most suited for detecting small, persistent process changes [21]. Williams and colleagues [22] first proposed their use in a medical context, and de Leval and associates [6] were the first to illustrate their ability to detect a cluster of deaths after the arterial switch repair for transposition of the great arteries. Although CUSUM charts are simple to construct, care is needed to avoid overinterpreting or misinterpreting them. The purposes of this article are to (1) describe different forms of CUSUM charts for monitoring performance over time when the outcome of interest is binary (eg, mortality or cardiac-related events) [6,22], (2) explain how the charts should be interpreted, (3) highlight frequent misunderstandings, and (4) recommend ways the charts should be used. We also consider extensions of the CUSUM chart that control for case mix: variable life-adjusted displays (VLAD [7], also called cumulative risk-adjusted mortality [CRAM] plots [8]), and the risk-adjusted sequential probability ratio test (SPRT) [9]. We describe the parameters needed to construct the charts, their control limits, and alternative graphical presentations of data. We focus on binary outcomes because they are used to monitor cardiac surgery performance. The methods are illustrated by using two example data sets: a single United Kingdom hospital database of cardiac operations and a national database of cardiothoracic transplantations in the United Kingdom. The Bristol Heart Institute has prospectively collected a standard set of data on all adult cardiac procedures since April 1996 [23,24]. Data used for illustration comprise 1372 elective and urgent CABG procedures performed between April 1996 and September 2002 [10]. All operations were performed by the lead academic consultant or one of four residents. The outcome chosen for performance monitoring was surgical failure, defined as the occurrence of one or more of 11 cardiac-related events [10]. Overall failures were 8.5% (95% confidence interval, 7%-10%). Multiple logistic regression, applied to the complete data set, was used to identify predictors of failure. The predicted risk of surgical failure for each of the 1372 patients was then estimated from the resulting model. Results are presented for a subset of off-pump CABG. A national clinical database of cardiothoracic transplantations and outcomes was established in April 1995, and all 8 centers in the United Kingdom that perform these procedures have contributed data since then. Data returns are in excess of 95%, and all data are subject to rigorous validation [25]. The data used for illustration comprise 1341 adult orthotopic heart transplantations performed between July 1995 and September 2002. The outcome chosen for monitoring was 30-day postoperative mortality, which was 12% (95% confidence interval, 10%-14%). Multiple logistic regression analysis was used to identify predictors of mortality for the July 1995 to March 2001 cohort (n = 1173), and the model was evaluated using subsequent transplantations (April 2001 to September 2002; n = 168). Details of the risk factors considered and model development are available on request. Choice of outcome, the event against which performance is being measured, varies depending on context. Throughout we shall simply refer to an unsuccessful outcome as a failure and a successful outcome as a success. We shall also focus mainly on detecting an increase in failures, although the methods are equally applicable for detecting their reduction. Although our emphasis will be on risk-adjusted control charts, before introducing them, we will discuss and illustrate non-risk-adjusted charts.
The simplest and most intuitive form of CUSUM chart is a graph of the cumulative (total) number of failures (on the vertical axis) against operation number (on the horizontal axis; stepped lines in Figure 1). As each operation is performed and outcome assessed, the cumulative number of failures either remains unchanged if a success occurs (and the graph continues horizontally) or is incremented by 1 if a failure occurs (and the graph rises). The graph has an immediate visual interpretation, because an increase in gradient (slope) indicates more frequent failures. However, it is of limited value without control boundaries to indicate whether an increase in gradient is consistent with a process going out of control (ie, a genuine increase in the failure rate) or with simple random variation. The control boundaries illustrated are derived from the SPRT [26] and are constructed to test the null hypothesis (H0) that the failure rate is p0, against the alternative (H1) that the failure rate has increased to p1. To construct control boundaries, 4 parameters must be specified: (1) risk of failure when the process is in control (acceptable failure rate; p0); (2) failure rate considered unacceptable (p1, where p1 > p0); (3) α, the probability of concluding that the failure rate has increased when, in fact, it has not (false-positive, or type I error); and (4) β, the probability of concluding that the failure rate has not increased when, in fact, it has (false-negative, or type II error). Choices of α and β depend on the application and relative costs of false-positive and false-negative conclusions; they are commonly set to .10 (10%), .05 (5%), or .01 (1%) [9]. Given values for p0, p1, α, and β, the upper and lower control limits (or boundary lines), l1 and l0, respectively, are constructed according to formulas given in Appendix 1 (dashed lines, Figure 1). It is a common misconception that a CUSUM graph that remains within control boundaries constitutes evidence that the process is in control. If the graph of cumulative failures crosses the upper boundary, l1, then we conclude that the failure rate has increased to the unacceptable rate, p1. If it crosses the lower boundary, l0, we conclude that the failure rate is less than or equal to the acceptable rate, p0. When a graph remains between these boundaries, the evidence remains inconclusive, and monitoring should continue (Figure 1). The natural progression of the graph for an individual or institution with acceptable performance is toward the lower boundary for this method of constructing control limits. Upper and lower boundary lines are always parallel. Their slope s (Appendix 1) is not directly interpretable. It depends on the values of p0 and p1; the closer p1 is to p0 (ie, the smaller the increase to be detected), the smaller s is and the shallower the slope. The points at which the lower and upper boundary lines intersect with the vertical CUSUM axis (h0 and h1; Appendix 1) are determined by α and β, which are typically set to the same value. The smaller the values for α and β, the higher the upper boundary and the lower the lower boundary. It is common for 2 sets of boundary lines to be included on the chart, corresponding to different choices for α and β. Lines for the higher values of α and β are often referred to as alert lines, with lower values of α and β defining the alarm or action lines. The distance between the boundary lines (h0 + h1) also depends on the odds ratio (or, equivalently, p0 and p1; Appendix 1); the smaller the odds ratio, the greater the distance between the boundaries (for a given choice of α and β) and the longer the sequence of operations needed before a conclusion is reached.
An alternative but equivalent presentation of the data involves graphing a modified CUSUM against the operation number (Figure 2). As with the cumulative failures graph, the sum starts at 0, but is then incremented by 1 − s for a failure and decremented by s for a success. The value of s is defined by p0 and p1 (Appendix 1). Boundary lines are horizontal, and their position on the chart is defined by h0 and h1. Interpretation of the graph in relation to the boundary lines is the same as for the cumulative failures chart. If performance is acceptable, the graph will tend downward toward the lower boundary; it will not follow the horizontal axis. It is possible to construct a chart so that acceptable performance gives rise to a graph that oscillates around a horizontal axis (Figure 3). This type of chart requires the expected value for the CUSUM to be 0 if the process is in control. The graph starts at 0, but is incremented by 1 − p0 for a failure and decremented by p0 for a success. This graph is more intuitive because it is easier to identify changes in the failure rate: the graph moves upward if the failure rate increases and downward if it decreases. To test the hypothesis that the failure rate has increased from p0 to p1, boundary lines would need to be drawn sloping upward with the gradient s − p0. Drawing horizontal boundary lines (h0 and h1) would represent a change of hypothesis being tested, ie, a change of p0 and p1, and the horizontal axis would no longer represent acceptable performance. If boundary lines are drawn on a cumulative observed minus expected failure chart, care needs to be taken to specify clearly the hypothesis being tested and to calculate appropriate boundary lines. These different formats of charts are equally valid; choice is largely a matter of personal preference. The chosen format needs to be specified, and if boundary lines are included, they must be accompanied by an explanation of their construction and the underlying hypothesis being tested. Cumulative observed minus expected failure graphs are intuitive because changes in gradient are more immediately apparent. However, plotting boundary lines to detect deviations from acceptable performance is more intuitive with cumulative failures or cumulative log-likelihood ratio charts. Therefore, we consider the two types of chart to be complementary. A line with a gradient corresponding to the acceptable (expected) failure rate could be added to cumulative failure charts, but it would not run parallel to the boundary lines. It is important to distinguish between interpretation of the graph relative to the boundary lines and interpretation in relation to the acceptable failure rate (Figure 1).
The methods just described have been extended to adjust or control for case mix in sequential monitoring of health outcomes. The concept is simple. Rather than assuming that the acceptable failure rate is the same for all patients, the predicted risk of failure is allowed to vary among individuals. Accepted statistical models (eg, those based on the Parsonnet score [8,9] or EuroSCORE [27]) or empirically derived models [6,10,17] are used to estimate the patient-specific predicted probability of failure. The risk-adjusted SPRT chart [9] is the risk-adjusted analog to the cumulative log-likelihood ratio chart. VLAD [7] or CRAM charts [8,27] are constructed on this principle and are analogous to the cumulative observed minus expected failure chart. Advantages and disadvantages of different forms of unadjusted charts apply equally to their risk-adjusted counterparts. The graph, which starts at 0, is incremented by 1 − p0i for a failure and is decremented by p0i for a success, where p0i denotes the predicted probability of failure for operation i, derived from the appropriate risk model (Figure 4). The graph has a natural interpretation: it moves upward if the failure rate increases above that predicted by the risk model, moves downward if the rate decreases, and oscillates around 0 if performance is consistent with predicted risks, ie, acceptable. Although changes in gradient are easy to see, constructing boundary lines is not straightforward. Methods for detecting changes have been proposed [7,8], but they do not equate to a hypothesis test in quite the same way as described for CUSUM charts. The risk-adjusted analog of the CUSUM chart, with boundary lines based on a SPRT, was first described in a medical context by Spiegelhalter and colleagues [9]. The risk-adjusted cumulative log-likelihood ratio statistic is used, and boundary lines are drawn horizontally (Figure 4). The graph starts at 0 and is incremented by 1 − si for a failure and decremented by si for a success. The value of si is defined by the predicted risk of failure for operation i (p0i) and the increase in risk that the chart is designed to detect. For the unadjusted chart, increase in risk is defined in terms of the unacceptable failure rate. However, when risk for each patient varies, it does not make sense to have a common unacceptable rate applied across all operations; it needs to vary according to the predicted risk of failure for the procedure. This variable unacceptable rate is achieved by defining the increase in terms of a relative risk (ie, odds ratio), rather than a specific rate. An odds ratio of 2, for example, would equate approximately to a doubling of patient-specific risk of failure, an odds ratio of 1.5 to a 50% increase in failure risk, and so on. The natural progression of the risk-adjusted graph for an individual or institution with acceptable performance is toward the lower boundary. The VLAD or CRAM chart and risk-adjusted SPRT chart are complementary and designed to account for case mix. The VLAD chart is intuitive, because the horizontal axis corresponds to expected outcome, and if performance is in line with expectations, the chart should oscillate around 0. A change in gradient, indicating a process that may be going out of control, is easily spotted. Its disadvantage is that boundary lines are not easily constructed. In contrast, the SPRT chart has no intuitive interpretation, but it has the advantage of providing a formal test of an explicit hypothesis. Although either chart is preferred to the unadjusted CUSUM for applications in which case-mix adjustment is appropriate, their usefulness is only as good as the ability of the risk model to accurately predict the outcome for different patient profiles. No risk model is perfect; none can completely adjust for all factors that influence outcome. If the graph (either risk adjusted or not) crosses the upper boundary line, then H0 is rejected, and performance is confirmed to have reached the predefined unacceptable level. In this situation, the individual or team should investigate the cause of the unacceptable performance, implement changes as necessary, and resume monitoring. If performance improves thereafter, the graph will start to decline and return to the "continue monitoring" zone. When the converse occurs and the graph crosses the lower acceptance boundary, we suggest that it be reset to 0 before monitoring is resumed, thereby increasing the sensitivity of the monitoring procedure by avoiding buildup of excessive "credit" [9].
CUSUM charts in their various forms are simple to construct and easy to interpret when key parameters are defined correctly. The two main forms, the cumulative sum and cumulative observed minus expected failure graph, are complementary. Risk-adjusted versions are available and should be used when case-mix adjustment is appropriate, that is, when the population is heterogeneous and diverse outcomes may be anticipated. A robust, validated, highly discriminating model of risk should be used; a model that is poorly calibrated or has poor discrimination will provide inadequate adjustment for case mix. However, no case-mix adjustment will remove all confounding effects. An alternative approach to case-mix adjustment is to restrict the analysis to a relatively homogeneous group of patients [5], but this could result in poor performance going undetected if a surgeon's results for the monitored subgroup are in line with what is expected, but are suboptimal for subgroups excluded from analysis. When adding alert or alarm lines to a chart, the user needs to be clear about the role of the different parameters used in constructing the lines (ie, the hypothesis under test), and values of these parameters must be specified clearly for the reader. There are literature examples that are misleading in this respect. Williams and colleagues [22], for example, suggest that the parameter s, termed the reference or target value, should be chosen to reflect the expected or acceptable failure rate, that p1 defines the un
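As a rough illustration of the charts described above, the sketch below computes the modified CUSUM with horizontal SPRT boundaries and a risk-adjusted observed-minus-expected (VLAD/CRAM-style) path. It uses the standard Wald SPRT boundary formulas; the article's Appendix 1 is not reproduced in this listing, so the exact constants, the default rates (p0 = 8.5%, with p1 set to roughly double that), and the simulated data are assumptions, not the authors' code:

```python
# Minimal sketch of the CUSUM/SPRT and VLAD-style charts discussed above.
# Outcomes are coded 1 = failure, 0 = success. Standard Wald SPRT formulas
# are assumed for the boundary lines; parameters and data are illustrative.
import numpy as np

def sprt_parameters(p0, p1, alpha, beta):
    """Per-operation target value s and boundary intercepts h0, h1."""
    llr_fail = np.log(p1 / p0)                  # log-likelihood ratio for a failure
    llr_success = np.log((1 - p1) / (1 - p0))   # log-likelihood ratio for a success
    denom = llr_fail - llr_success              # = log[p1(1-p0) / (p0(1-p1))]
    s = -llr_success / denom                    # slope of the boundary lines
    h0 = np.log(beta / (1 - alpha)) / denom     # lower (acceptance) intercept, negative
    h1 = np.log((1 - beta) / alpha) / denom     # upper (rejection) intercept, positive
    return s, h0, h1

def cusum_chart(outcomes, p0=0.085, p1=0.17, alpha=0.05, beta=0.05):
    """Modified CUSUM: +(1 - s) per failure, -s per success; horizontal boundaries."""
    s, h0, h1 = sprt_parameters(p0, p1, alpha, beta)
    increments = np.where(np.asarray(outcomes) == 1, 1 - s, -s)
    return np.cumsum(increments), h0, h1

def vlad_chart(outcomes, predicted_risks):
    """Risk-adjusted observed-minus-expected path: +(1 - p0i) per failure, -p0i per success."""
    o = np.asarray(outcomes, dtype=float)
    p = np.asarray(predicted_risks, dtype=float)
    return np.cumsum(o - p)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    risks = rng.uniform(0.02, 0.25, size=200)   # hypothetical patient-specific predicted risks
    outcomes = rng.binomial(1, risks)           # simulated failures
    path, h0, h1 = cusum_chart(outcomes)
    print(f"final CUSUM value {path[-1]:.2f}; boundaries h0={h0:.2f}, h1={h1:.2f}")
    print(f"final VLAD value {vlad_chart(outcomes, risks)[-1]:.2f}")
```

Crossing h1 would correspond to rejecting H0 (failure rate at the unacceptable level), crossing h0 to accepting H0, and anything in between to continued monitoring, mirroring the interpretation given in the text.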
DOI: 10.1002/9780470712184.ch13
2008
Cited 127 times
Including Non‐Randomized Studies
This chapter contains sections titled: Introduction; Developing criteria for including non-randomized studies; Searching for non-randomized studies; Selecting studies and collecting data; Assessing risk of bias in non-randomized studies; Synthesis of data from non-randomized studies; Interpretation and discussion; Chapter information; References.
DOI: 10.1002/9781119536604.ch24
2019
Cited 73 times
Including non‐randomized studies on intervention effects
This chapter aims to support review authors who are considering including non-randomized studies of interventions (NRSI) in a Cochrane Review. NRSI are defined here as any quantitative study estimating the effectiveness of an intervention (harm or benefit) that does not use randomization to allocate units (individuals or clusters of individuals) to intervention groups. The chapter has been prepared by the Cochrane Non-Randomized Studies of Interventions Methods Group. It aims to describe the particular challenges that arise if NRSI are included in a Cochrane Review. The chapter recommends that eligibility criteria, data collection and assessment of included studies place an emphasis on specific features of study design rather than ‘labels’ for study designs. Review authors should consider how potential confounders, and how the likelihood of increased heterogeneity resulting from residual confounding and from other biases that vary across studies, are addressed in meta-analyses of non-randomized studies.
DOI: 10.1136/bmj.322.7281.261
2001
Cited 167 times
Multicentre randomised controlled trial of nasal diamorphine for analgesia in children and teenagers with clinical fractures
To compare the effectiveness of nasal diamorphine spray with intramuscular morphine for analgesia in children and teenagers with acute pain due to a clinical fracture, and to describe the safety profile of the spray. Multicentre randomised controlled trial. Emergency departments in eight UK hospitals. Patients aged between 3 and 16 years presenting with a clinical fracture of an upper or lower limb. Patients' reported pain using the Wong Baker face pain scale, ratings of reaction to treatment of the patients and acceptability of treatment by staff and parents, and adverse events. 404 eligible patients completed the trial (204 patients given nasal diamorphine spray and 200 given intramuscular morphine). Onset of pain relief was faster in the spray group than in the intramuscular group, with lower pain scores in the spray group at 5, 10, and 20 minutes after treatment but no difference between the groups after 30 minutes. 80% of patients given the spray showed no obvious discomfort compared with 9% given intramuscular morphine (difference 71%, 95% confidence interval 65% to 78%). Treatment administration was judged acceptable by staff and parents, respectively, for 98% (199 of 203) and 97% (186 of 192) of patients in the spray group compared with 32% (64 of 199) and 72% (142 of 197) in the intramuscular group. No serious adverse events occurred in the spray group, and the frequencies of all adverse events were similar in both groups (spray 24.1% v intramuscular morphine 18.5%; difference 5.6%, -2.3% to 13.6%). Nasal diamorphine spray should be the preferred method of pain relief in children and teenagers presenting to emergency departments in acute pain with clinical fractures. The diamorphine spray should be used in place of intramuscular morphine.
DOI: 10.1046/j.1365-2923.2000.00574.x
2000
Cited 165 times
A systematic review of the effectiveness of critical appraisal skills training for clinicians
The aim of this paper is to undertake a descriptive systematic review of the effectiveness of critical appraisal skills training for clinicians. Of the 10 controlled studies which examined this issue and were found to meet the eligibility criteria of this review, all used a study population of either medical students or doctors in training. The studies used a variety of different intervention 'dosages' and reported a range of outcomes. These included participants' knowledge of epidemiology/biostatistics, their attitudes towards medical literature, their ability to appraise medical literature, and medical literature reading behaviour. An overall improvement in assessed outcomes of 68% was reported after critical appraisal skills training, particularly in knowledge relating to epidemiology and biostatistics. This review appears to provide some evidence of the benefit of teaching critical appraisal skills to clinicians, in terms of both knowledge of methodological/statistical issues in clinical research and attitudes to medical literature. However, these findings should be considered with caution as the methodological quality of studies was generally poor, with only one study employing a randomized controlled design. There is a need for educators within the field of evidence-based health to consider the implications of this review.
DOI: 10.1136/bmj.318.7194.1322
1999
Cited 137 times
Reporting of precision of estimates for diagnostic accuracy: a review
Diagnostic accuracy is usually characterised by the sensitivity and specificity of a test, and these indices are most commonly presented when evaluations of diagnostic tests are reported. It is important to emphasise that, as in other empirical studies, specific values of diagnostic accuracy are merely estimates. Therefore, when evaluations of diagnostic accuracy are reported the precision of the sensitivity and specificity estimates or likelihood ratios should be stated.1–3 If sensitivity and specificity estimates are reported without a measure of precision, clinicians cannot know the range within which the true values of the indices are likely to lie. Confidence intervals are widely used in medical literature, and journals usually require confidence intervals to be specified for other descriptive estimates and for epidemiological or experimental analytical comparisons. Journals seem less vigilant, however, for evaluations of diagnostic accuracy. For example, a recent review of compliance with methodological standards in diagnostic test research …
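To make the point about precision concrete, a confidence interval for an estimated sensitivity or specificity can be computed directly from the cell counts of a diagnostic accuracy study. The sketch below uses the Wilson score interval, which is one common choice for a binomial proportion; it is assumed here purely for illustration and is not necessarily the method discussed in the paper.

```python
import math
from statistics import NormalDist

def wilson_ci(k, n, confidence=0.95):
    """Wilson score interval for a binomial proportion k/n,
    e.g. sensitivity = true positives / all diseased patients."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half_width, centre + half_width

# a test that correctly identified 45 of 50 diseased patients:
low, high = wilson_ci(45, 50)
print(f"sensitivity = 0.90, 95% CI {low:.2f} to {high:.2f}")
```

Reporting the interval alongside the point estimate shows the reader how much the small denominators typical of diagnostic studies widen the plausible range of the true sensitivity or specificity.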
DOI: 10.1046/j.1365-2141.1999.01229.x
1999
Cited 126 times
Infections in adults undergoing unrelated donor bone marrow transplantation
This study retrospectively reviews infections over a 7‐year period in 60 consecutive adults (median age 25 years) undergoing their first unrelated donor bone marrow transplant (UD‐BMT). T‐cell depletion was employed in 93%. More than half the patients had one or more severe, potentially life‐threatening, infections. There was a high incidence of invasive fungal infections ( Aspergillus 17, Candida four), despite the use of itraconazole or amphotericin prophylaxis. Ten Aspergillus infections occurred beyond 100 d. Two patients (11%) with invasive aspergillosis survived. Clustering of infections was noted, with invasive fungal infections significantly associated with bacteraemias (OR 3.73, P = 0.06) and multiple viral infections (OR 4.25, P = 0.05). There were 21 severe viral infections in 16 patients, with CMV disease occurring in four patients only; viral pneumonitis was predominantly due to ‘community respiratory’ viruses. Most early bacteraemias (68%) were due to Gram‐positive organisms. The majority of episodes of Gram‐negative sepsis were caused by non‐fastidious non‐fermentative bacteria, such as Pseudomonas spp. and Acinetobacter spp., historically regarded as organisms of low pathogenicity. In patients with successful engraftment and minimal graft‐versus‐host disease, late infections suggestive of continued immune dysfunction (shingles, recurrent lower respiratory infections, Salmonella enteritis and extensive warts) were common.
DOI: 10.1097/01.mlg.0000230399.24306.50
2006
Cited 121 times
Complications of Surgery for Nasal Polyposis and Chronic Rhinosinusitis: The Results of a National Audit in England and Wales
The objective of this study was to determine the rate of complications of surgery for nasal polyposis and chronic rhinosinusitis as well as their risk factors. Study design, setting, participants, and outcome measures: The authors conducted a prospective study of 3,128 patients who underwent sinonasal surgery during 2000 and 2001 in 87 National Health Service hospitals in England and Wales. Patients completed a preoperative questionnaire that included the Sino-Nasal Outcome Test, a measure of sinonasal symptom severity and health-related quality of life. Surgeons provided information about polyp extent, opacity of the sinuses on computed tomography (Lund-Mackay score), comorbidity (American Society of Anesthesiologists score), and the occurrence of perioperative complications. Major complications (orbital or intracranial complications, bleeding requiring ligation or orbital decompression, or return to the operating room) occurred in 11 patients (0.4%). Minor complications (all other untoward events) occurred in 207 patients (6.6%). The most frequently reported minor complications were excessive perioperative hemorrhage (5.0%) and postoperative hemorrhage requiring treatment (0.8%). Multivariate analysis indicated that the complication rate was linked to the extent of disease measured in terms of symptom severity and health-related quality of life, the extent of polyposis, level of opacity of the sinuses on computed tomography, and the presence of comorbidity, but not surgical characteristics (extent of surgery, use of endoscope or microdebrider, grade of surgeon, and adjunctive turbinate surgery). The risk of complications depended on patient characteristics rather than on the surgical technique used. Measures of the extent of disease and comorbidity may help in identifying patients at high risk of complications.
DOI: 10.1136/bmj.39371.524271.55
2007
Cited 120 times
Implications of prognostic pessimism in patients with chronic obstructive pulmonary disease (COPD) or asthma admitted to intensive care in the UK within the COPD and asthma outcome study (CAOS): multicentre observational cohort study
To determine whether clinicians' prognoses in patients with severe acute exacerbations of obstructive lung disease admitted to intensive care match observed outcomes in terms of survival. Prospective cohort study. 92 intensive care units and three respiratory high dependency units in the United Kingdom. 832 patients aged 45 years and older with breathlessness, respiratory failure, or change in mental status because of an exacerbation of COPD, asthma, or a combination of the two. Outcome predicted by clinicians. Observed survival at 180 days. 517 patients (62%) survived to 180 days. Clinicians' prognoses were pessimistic, with a mean predicted survival of 49% at 180 days. For the fifth of patients with the poorest prognosis according to the clinician, the predicted survival rate was 10% and the actual rate was 40%. Information from a database covering 74% of intensive care units in the UK suggested no material difference between units that participated and those that did not. Patients recruited were similar to those not recruited in the same units. Because decisions on whether to admit patients with COPD or asthma to intensive care for intubation depend on clinicians' prognoses, some patients who might otherwise survive are probably being denied admission because of unwarranted prognostic pessimism.
DOI: 10.1097/hco.0b013e328310fc95
2008
Cited 109 times
Increased mortality, morbidity, and cost associated with red blood cell transfusion after cardiac surgery
Purpose of review Literature since 2006 was reviewed to identify the harms and costs of red blood cell (RBC) transfusion. Recent findings Several studies, in people having various cardiac surgery operations, found strong associations of RBC transfusion with mortality and postoperative morbidity. The effect on mortality was strongest close to the time of operation but extended to 5 years. Morbidity outcomes included serious wound and systemic infections, renal failure, prolonged ventilation, low cardiac index, myocardial infarction, and stroke. RBC transfusion was also strongly associated with increased cardiac intensive care unit and ward postoperative stay, and hence, increased cost of admission; available studies did not consider all resources used and the associated costs. Summary The harms of RBC transfusion have potentially serious and long-term consequences for patients and are costly for health services. This evidence should shift clinicians' equipoise towards more restrictive transfusion practice. The immediate aim should be to avoid transfusing a small number of RBC units for general malaise attributed to anaemia, a practice that appears to occur in about 50% of transfused patients. Randomized trials comparing restrictive and liberal transfusion triggers are urgently needed to directly compare the benefits and harms from RBC transfusion.
DOI: 10.1002/14651858.cd003091.pub2
2011
Cited 80 times
Dressings for the prevention of surgical site infection
Surgical wounds (incisions) heal by primary intention when the wound edges are brought together and secured - often with sutures, staples, clips or glue. Wound dressings, usually applied after wound closure, provide physical support, protection from bacterial contamination and absorb exudate. Surgical site infection (SSI) is a common complication of surgical wounds that may delay healing. To evaluate the effects of wound dressings for preventing SSI in people with surgical wounds healing by primary intention. We searched the Cochrane Wounds Group Specialised Register (searched 10 May 2011); The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2011 Issue 2); Ovid MEDLINE (1950 to April Week 4 2011); Ovid MEDLINE (In-Process & Other Non-Indexed Citations, May 9, 2011); Ovid EMBASE (1980 to 2011 Week 18); EBSCO CINAHL (1982 to 6 May 2011). There were no restrictions based on language or date of publication. Randomised controlled trials (RCTs) comparing alternative wound dressings or wound dressings with leaving wounds exposed for postoperative management of surgical wounds healing by primary intention. Two review authors performed study selection, risk of bias assessment and data extraction independently. Sixteen RCTs were included (2578 participants). All trials were at unclear or high risk of bias. Nine trials included people with wounds resulting from surgical procedures with a contamination classification of 'clean', two trials included people with wounds resulting from surgical procedures with a 'clean/contaminated' contamination classification and the remaining trials evaluated people with wounds resulting from various surgical procedures with different contamination classifications. Two trials compared wound dressings with leaving wounds exposed. The remaining 14 trials compared two alternative dressing types. No evidence was identified to suggest that any dressing significantly reduced the risk of developing an SSI compared with leaving wounds exposed or compared with alternative dressings in people who had surgical wounds healing by primary intention. At present, there is no evidence to suggest that covering surgical wounds healing by primary intention with wound dressings reduces the risk of SSI or that any particular wound dressing is more effective than others in reducing the rates of SSI, improving scarring, pain control, patient acceptability or ease of dressing removal. Most trials in this review were small and of poor quality at high or unclear risk of bias. However, based on the current evidence, we conclude that decisions on wound dressing should be based on dressing costs and the symptom management properties offered by each dressing type e.g. exudate management.
DOI: 10.1001/jamaophthalmol.2024.0918
2024
Home-Monitoring Vision Tests to Detect Active Neovascular Age-Related Macular Degeneration
Importance Most neovascular age-related macular degeneration (nAMD) treatments involve long-term follow-up of disease activity. Home-monitoring would reduce the burden on patients and their caregivers and release clinic capacity. Objective To evaluate 3 vision home-monitoring tests for patients to use to detect active nAMD compared with diagnosing active nAMD at hospital follow-up during the after-treatment monitoring phase. Design, Setting, and Participants This was a diagnostic test accuracy study wherein the reference standard was detection of active nAMD by an ophthalmologist at hospital follow-up. The 3 home-monitoring tests evaluated included the following: (1) the KeepSight Journal (KSJ [International Macular and Retinal Foundation]), which contains paper-based near-vision tests presented as word puzzles, (2) the MyVisionTrack (mVT [Genentech]) vision-monitoring mobile app, viewed on an Apple mobile operating system–based device, and (3) the MultiBit (MBT [Visumetrics]) app, viewed on an Apple mobile operating system–based device. Participants were asked to test weekly; mVT and MBT scores were transmitted automatically, and KSJ scores were returned to the research office every 6 months. Raw scores between hospital follow-ups were summarized as averages. Patients were recruited from 6 UK hospital eye clinics and were 50 years and older with at least 1 eye first treated for active nAMD for at least 6 months or longer to a maximum of 42 months before approach. Participants were stratified by time since starting treatment. Study data were analyzed from May to September 2021. Exposures The KSJ, mVT, and MBT were compared with the reference standard (in-hospital ophthalmologist examination). Main Outcomes and Measures Estimated area under receiver operating characteristic curve (AUROC). The study had 90% power to detect a difference of 0.06, or 80% power to detect a difference of 0.05, if the AUROC for 2 tests was 0.75. Results A total of 297 patients (mean [SD] age, 74.9 [6.6] years; 174 female [58.6%]) were included in the study. At least 1 hospital follow-up was available for 312 study eyes in 259 participants (1549 complete visits). Median (IQR) home-monitoring testing frequency was 3 (1-4) times per month. Estimated AUROC was less than 0.6 for all home-monitoring tests, and only the KSJ summary score was associated with lesion activity (odds ratio, 3.48; 95% CI, 1.09-11.13; P = .04). Conclusions and Relevance Results suggest that no home-monitoring vision test evaluated provided satisfactory diagnostic accuracy to identify active nAMD diagnosed in hospital eye service follow-up clinics. Implementing any of these evaluated tests, with ophthalmologists only reviewing test positives, would mean most active lesions were missed, risking unnecessary sight loss.
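For readers unfamiliar with how such home tests are scored against a clinical reference standard, the sketch below computes an AUROC from per-visit summary scores and the ophthalmologist's active/inactive judgement. The synthetic data, the variable names and the use of scikit-learn's roc_auc_score are assumptions for illustration only; the study's actual analysis also had to handle repeated visits per eye, which this sketch ignores.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_visits = 200

# hypothetical per-visit data: 1 = lesion judged active at hospital review
active = rng.integers(0, 2, n_visits)
# summarised home-test score for the interval before that visit;
# only weakly related to activity, so discrimination will be poor
score = rng.normal(loc=0.2 * active, scale=1.0)

auroc = roc_auc_score(active, score)
print(f"AUROC = {auroc:.2f}")  # values near 0.5 indicate little diagnostic value
```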
DOI: 10.1054/arth.2002.29389
2002
Cited 109 times
Mortality, morbidity, and 1-year outcomes of primary elective total hip arthroplasty
No representative data exist on the risks of adverse outcomes of total hip arthroplasty (THA) in the United Kingdom. A prospective observational study of unselected THA operations was carried out in 5 U.K. regions. Adverse outcomes were assessed from the hospital case notes and general practitioners of 1,100 randomly selected patients and from 7,151 patient-completed questionnaires 3 and 12 months after THA. Three-month mortality was 0.4% to 0.7%. Dislocation and thromboembolic complications were about 3% and 4%. Perioperative fracture, sciatic nerve palsy, aseptic loosening, and revision each had a risk of ≤1%. At 1 year, 2.6% of patients had undergone another operation on the same hip, 11% reported moderate or severe pain in the operated hip, 23% had severe walking restriction, and 11% were dissatisfied with the operation. Patients and surgeons in the United Kingdom should have access to this information when making a decision about THA.
DOI: 10.1258/1355819021927638
2002
Cited 96 times
Does waiting for total hip replacement matter? Prospective cohort study
To assess the impact on the outcome of total hip replacement of the length of time spent waiting for surgery. One hundred and forty-three orthopaedic and general hospitals provided information about aspects of surgical practice for each total hip replacement conducted between September 1996 and October 1997 for publicly and privately funded operations in five English health regions. These data were linked to patient information about hip-related pain and disability status (measured using the Oxford Hip Score) before operation and at 3 and 12 months after. Data were analysed using multiple regression analysis. Questionnaires were completed by surgeons for 10,410 (78%) patients treated during the recruitment period and by 7151 (54%) patients. Twelve months after total hip replacement, the majority of patients experienced substantial improvements in hip-related pain and disability (as measured by the Oxford Hip Score). Those patients who started with a worse Oxford Hip Score before the operation tended to remain worse after the operation. Worse pre-operative score was associated with an increased length of either outpatient or inpatient wait, and this trend remained after the operation. The relationship between waiting time and outcome remained after adjustment for possible confounding variables. A consistently worse score was observed in public compared with private patients at all three time-points. In addition, in both sectors, those patients who were socially disadvantaged had a worse score than more socially advantaged patients both before and after the operation. Waiting for surgery is associated with worse outcomes 12 months later. Longer-term outcome needs to be considered to see if this association persists.
DOI: 10.1186/1472-6920-4-30
2004
Cited 95 times
Critical appraisal skills training for health care professionals: a randomized controlled trial [ISRCTN46272378]
Critical appraisal skills are believed to play a central role in an evidence-based approach to health practice. The aim of this study was to evaluate the effectiveness and costs of a critical appraisal skills educational intervention aimed at health care professionals. This prospective controlled trial randomized 145 self-selected general practitioners, hospital physicians, professions allied to medicine, and healthcare managers/administrators from the South West of England to a half-day critical appraisal skills training workshop (based on the model of problem-based small group learning) or waiting list control. The following outcomes were assessed at 6 months' follow-up: knowledge of the principles necessary for appraising evidence; attitudes towards the use of evidence about healthcare; evidence seeking behaviour; perceived confidence in appraising evidence; and ability to critically appraise a systematic review article. At follow-up, overall knowledge score [mean difference: 2.6 (95% CI: 0.6 to 4.6)] and ability to appraise the results of a systematic review [mean difference: 1.2 (95% CI: 0.01 to 2.4)] were higher in the critical appraisal skills training group compared to control. No statistically significant differences in overall attitude towards evidence, evidence seeking behaviour, perceived confidence, and other areas of critical appraisal skills ability (methodology or generalizability) were observed between groups. Taking into account the workshop provision costs and the costs of participants' time and expenses, the average cost of providing the critical appraisal workshops was approximately £250 per person. The findings of this study challenge the policy of funding 'one-off' educational interventions aimed at enhancing the evidence-based practice of health care professionals. Future evaluations of evidence-based practice interventions need to take into account this trial's negative findings and methodological difficulties.
DOI: 10.1097/00006324-199109000-00010
1991
Cited 92 times
Vistech VCTS 6500 Charts: Within- and Between-Session Reliability
The aim of the study was to measure the reliability of the Vistech VCTS 6500 charts, in test score units, in order to allow clinicians to derive estimates of what constitutes a clinically meaningful change in performance over time. The reliability of a more familiar test, Bailey-Lovie high contrast visual acuity, was also measured to provide a comparison. Patients with normal vision and with early or subtle eye disease were recruited so that the results would be representative of the population likely to present for primary vision screening. Patients were tested on all three VCTS charts on two separate occasions at least 3 weeks apart to give estimates of within- and between-session reliability. Reliability was found to be low in all circumstances; between-session reliability could be improved by using the mean score for the three charts, but the 95% range of difference scores still encompassed at least one-half of the total performance range of the test. It was concluded that Vistech charts are unlikely to be of use for clinical measurements or for research studies.
DOI: 10.1136/bmj.39195.598461.551
2007
Cited 75 times
Implementing the NHS information technology programme: qualitative study of progress in acute trusts
To describe progress and perceived challenges in implementing the NHS information and technology (IT) programme in England. Case studies and in-depth interviews, with themes identified using a framework developed from grounded theory. We interviewed personnel who had been interviewed 18 months earlier, or new personnel in the same posts. Four NHS acute hospital trusts in England. Senior trust managers and clinicians, including chief executives, directors of IT, medical directors, and directors of nursing. Interviewees unreservedly supported the goals of the programme but had several serious concerns. As before, implementation is hampered by local financial deficits, delays in implementing patient administration systems that are compliant with the programme, and poor communication between Connecting for Health (the agency responsible for the programme) and local managers. New issues were raised. Local managers cannot prioritise implementing the programme because of competing financial priorities and uncertainties about the programme. They perceive a growing risk to patients' safety associated with delays and a loss of integration of components of the programme, and are discontented with Choose and Book (electronic booking for referrals from primary care). We recommend that the programme sets realistic timetables for individual trusts and advises managers about interim IT systems they have to purchase because of delays outside their control. Advice needs to be mindful of the need for trusts to ensure longer term compatibility with the programme and value for money. Trusts need assistance in prioritising modernisation of IT by, for example, including implementation of the programme in the performance management framework. Even with Connecting for Health adopting a different approach of setting central standards with local implementation, these issues will still need to be addressed. Lessons learnt in the NHS have wider relevance as healthcare systems, such as in France and Australia, look to realise the potential of large scale IT modernisation.
DOI: 10.1016/j.jtcvs.2012.04.020
2013
Cited 52 times
An open randomized controlled trial of median sternotomy versus anterolateral left thoracotomy on morbidity and health care resource use in patients having off-pump coronary artery bypass surgery: The Sternotomy Versus Thoracotomy (STET) trial
Objective: Our objective was to compare off-pump coronary artery bypass surgery carried out via a left anterolateral thoracotomy (ThoraCAB) or via a conventional median sternotomy (OPCAB). Background: Recent advances in minimally invasive cardiac surgery have extended the technique to allow complete surgical revascularization on the beating heart via thoracotomy. Methods: Patients undergoing nonemergency primary surgery were enrolled between February 2007 and September 2009 at 2 centers. The primary outcome was the time from surgery to fitness for hospital discharge as defined by objective criteria. Results: A total of 93 patients were randomized to off-pump coronary artery bypass surgery via a median sternotomy (OPCAB) and 91 to off-pump coronary artery bypass surgery via a left anterolateral thoracotomy (ThoraCAB). The surgery was longer for patients in the ThoraCAB group (median, 4.1 vs 3.3 hours) and there were fewer with more than 3 grafts (2% vs 17%). The median time from surgery to fitness for discharge was 6 days (interquartile range, 4-7) in the ThoraCAB group versus 5 days (interquartile range, 4-7) in the OPCAB group (P = .53). The intubation time was shorter, by on average 65 minutes, in the ThoraCAB group (P = .017), although the time in intensive care was similar (P = .91). Pain scores were similar (P = .97), but more analgesia was required in the ThoraCAB group (median duration, 38.8 vs 35.5 hours, P < .001; tramadol use, 66% vs 49%, P = .024). ThoraCAB was associated with significantly worse lung function at discharge (average difference, −0.25 L, P = .01) but quality of life scores at 3 and 12 months were similar (P = .52). The average total cost was 10% higher with ThoraCAB (P = .007). Conclusions: ThoraCAB resulted in no overall clinical benefit relative to OPCAB.
DOI: 10.1046/j.1365-2923.2001.00916.x
2001
Cited 87 times
Development and validation of a questionnaire to evaluate the effectiveness of evidence-based practice teaching
The aim of this study was to develop and validate a questionnaire to evaluate the effectiveness of evidence-based practice (EBP) teaching. The 152 questionnaires completed by health care professionals with a range of EBP experience were used in this study. Cronbach's alpha for the knowledge and attitude questions indicated a satisfactory level of internal consistency (i.e. >0.60). The discriminative validity was evidenced by a statistically significant difference in the knowledge and attitude scores of 'novices' (i.e. little or no prior EBP education) compared with 'experts' (i.e. health care professionals and academics currently teaching EBP). Moderate to good (≥ 0.4) sensitivity index scores were observed for both knowledge and attitude scores as the result of comparing individuals before and after an EBP intervention. The results of this validation study indicate that the developed questionnaire is a satisfactory tool with which to evaluate the effectiveness of EBP teaching interventions.
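Cronbach's alpha, used above as the index of internal consistency, can be computed directly from item-level responses as the sketch below shows. The toy data and the 0.60 threshold are illustrative only, echoing the criterion reported in the abstract rather than reproducing the study's dataset.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha: rows = respondents, columns = questionnaire items."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    sum_item_var = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)        # variance of the total score
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# toy data: 6 respondents answering 4 knowledge items on a 1-5 scale
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 4, 5, 4],
          [1, 2, 1, 2],
          [3, 3, 4, 3],
          [5, 4, 4, 5]]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # > 0.60 read as satisfactory here
```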
DOI: 10.1161/01.cir.94.7.1741
1996
Cited 84 times
Influence of External Stent Size on Early Medial and Neointimal Thickening in a Pig Model of Saphenous Vein Bypass Grafting
Background Late saphenous vein graft failure results from intimal and medial thickening due to migration and proliferation of vascular smooth muscle cells and superimposed atheroma. These changes may represent an adaptation by the vein to its insertion into the arterial system. Using a porcine model of arteriovenous bypass grafting, we recently demonstrated that supporting the graft with a nonrestrictive external Dacron velour stent significantly reduced intimal hyperplasia and total wall thickness. In the present study, we investigated the influence of different external stent sizes on graft wall dimensions and cell proliferation. Methods and Results Three stent sizes were tested: mildly restrictive, nonrestrictive, and oversized (5, 6, and 8 mm in diameter, respectively). Four weeks after grafting, total wall thickness was decreased 40% by 5-mm stents ( P =.02), 66% by 6-mm stents ( P =.0004), and 81% by 8-mm stents ( P =.02 versus unstented grafts). Neointimal thickness was reduced almost 62% by 6-mm and 72% by 8-mm stents (both P =.01) but not by 5-mm stents. As a result, the encroachment of the intima into the lumen was reduced ≈70% by 6- or 8-mm stents ( P =.02 and P =.01 versus unstented grafts, respectively). Both neointimal and medial cell proliferation were significantly reduced by all three stents compared with unstented grafts. Conclusions External stenting of saphenous vein bypass grafts reduces early intimal and medial hyperplasia. Oversized stents give equally profound suppression of intimal thickening, obviating the need for precise size matching with the graft and greatly simplifying the surgical procedure.
DOI: 10.1016/s0022-5223(02)73254-6
2003
Cited 81 times
Radial versus right internal thoracic artery as a second arterial conduit for coronary surgery: early and midterm outcomes
We sought to compare early and midterm clinical outcomes in patients receiving a right internal thoracic artery or a radial artery as the second arterial conduit for myocardial revascularization. Data prospectively collected for all patients who underwent coronary artery bypass surgery between April 1996 and May 2001 and who received both a left internal thoracic artery graft and either a right internal thoracic artery (n = 336) or a radial artery graft (n = 325) were analyzed. Patients in the radial artery group were older, with a greater body mass index, poorer ejection fraction, greater prevalence of diabetes, and higher New York Heart Association class than those in the right internal thoracic artery group. Odds ratios for perioperative myocardial infarction, atrial fibrillation, postoperative transfusion, and intensive care unit stay all showed a statistically significant benefit in the radial artery group compared with results in the right internal thoracic artery group (P ≤ .05). Survival estimates at 18 months for patients who received right internal thoracic artery and radial artery grafts were 98.4% and 99.7%, respectively (hazard ratio, 0.25; 95% confidence interval, 0.06-1.10; P = .07). Estimates for survival free from any cardiac-related event or death in the right internal thoracic artery and radial artery groups were 92.3% and 97.8%, respectively (hazard ratio, 0.37; 95% confidence interval, 0.16-0.84; P = .02). A multivariate Cox regression model showed a stronger protective effect of a radial artery graft (hazard ratio, 0.25; 95% confidence interval, 0.12-0.51; P < .0001). Early and midterm outcomes of myocardial revascularization with 2 arterial grafts are better if the radial artery is used for the second graft rather than the right internal thoracic artery, assuming that the left internal thoracic artery is used for the first arterial graft.
DOI: 10.1136/bjo.77.4.228
1993
Cited 75 times
Ocular and vision defects in preschool children.
Ocular and/or vision defects are one of the commonest reasons for the referral of young children to hospital. In a survey of a birth cohort in one health district, 7.1% of children were diagnosed as having such defects by their fifth birthday; 2.1% were detected before the age of 2 years, and 5.1% between 2 and 5 years. Up to the age of 2 years, low birthweight children and those who require postnatal special care had a higher risk of having an ocular or vision defect diagnosed and were more likely to have serious visual impairment than other children. In contrast, between the ages of 2 and 5 years of age these high risk children showed no continuing increased risk of having a defect diagnosed, nor did they show any differences in the severity or type of vision defects compared with other children. Averaged over the years studied, the incidence of defects presenting to specialist eye clinics among all 2-5 year olds was 1.7%, higher than the 1.1% found for 0-2 year olds. This increase consisted primarily of children with refractive errors only.
DOI: 10.1016/s0003-4975(02)03550-6
2002
Cited 73 times
Evaluation of the effectiveness of off-pump coronary artery bypass grafting in high-risk patients: an observational study
Coronary artery bypass grafting in high-risk patients carries substantial morbidity. We compared the effectiveness of off-pump revascularization with that of conventional coronary artery bypass grafting using cardiopulmonary bypass and cardioplegic arrest in consecutive high-risk patients. From April 1996 to December 2000, clinical data for consecutive patients undergoing coronary artery revascularization were prospectively entered into a database. Data were extracted for all patients considered to be high risk, defined as the presence of one or more of ten adverse prognostic factors. Hospital mortality and early morbidity were compared between two groups of patients, the on-pump and off-pump groups. The study group comprised 1,570 consecutive high-risk patients, 332 (21.1%) of whom underwent an off-pump operation. Patients in the on-pump group had fewer high-risk factors and lower Parsonnet scores and were less likely to be 75 years of age or older, to have peripheral vascular disease or hypercholesterolemia, or to have sustained a previous transient ischemic attack. However, they were more likely to be assigned to a higher Canadian Cardiovascular Society class and had more extensive coronary artery disease and were more likely to have unstable angina, to require urgent or emergency operations, and to receive more grafts than those undergoing off-pump procedures. Unadjusted odds ratios for intensive care unit or high-dependency unit stay, total length of stay, blood loss of more than 1,000 mL, postoperative hemoglobin and transfusion requirement all showed a highly significant benefit for the off-pump group (p ≤ 0.005; odds ratios, 0.33 to 0.65). After adjustment for prognostic variables, odds ratios remained essentially unaltered (adjusted odds ratio estimates 0.36 to p < 0.05) except for blood loss of more than 1,000 mL (adjusted odds ratio estimate, 0.82; p = 0.22). Sensitivity analyses confirmed the robustness of these findings. Off-pump coronary artery bypass grafting is safe, effective, and associated with reduced morbidity in high-risk patients compared with conventional coronary artery revascularization.
DOI: 10.3310/hta8160
2004
Cited 72 times
A multi-centre randomised controlled trial of minimally invasive direct coronary bypass grafting versus percutaneous transluminal coronary angioplasty with stenting for proximal stenosis of the left anterior descending coronary artery
To compare the clinical- and cost-effectiveness of minimally invasive direct coronary artery bypass grafting (MIDCAB) and percutaneous transluminal coronary angioplasty (PTCA) with or without stenting in patients with single-vessel disease of the left anterior descending coronary artery (LAD). Multi-centre randomised trial without blinding. The computer-generated sequence of randomised assignments was stratified by centre, allocated participants in blocks and was concealed using a centralised telephone facility. Four tertiary cardiothoracic surgery centres in England. Patients with ischaemic heart disease with at least 50% proximal stenosis of the LAD, suitable for either PTCA or MIDCAB, and with no significant disease in another vessel. Patients randomised to PTCA had local anaesthetic and underwent PTCA according to the method preferred by the operator carrying out the procedure. Patients randomised to MIDCAB had general anaesthetic. The chest was opened through an 8-10-cm left anterior thoracotomy. The ribs were retracted and the left internal thoracic artery (LITA) harvested. The pericardium was opened in the line of the LAD to confirm the feasibility of operation. The distal LITA was anastomosed end-to-side to an arteriotomy in the LAD. All operators were experienced in carrying out MIDCAB. The primary outcome measure was survival free from cardiac-related events. Relevant events were death, myocardial infarction, repeat coronary revascularisation and recurrence of symptomatic angina or clinical signs of ischaemia during an exercise tolerance test at annual follow-up. Secondary outcome measures were complications, functional outcome, disease-specific and generic quality of life, health and social services resource use and their costs. A total of 12,828 consecutive patients undergoing an angiogram were logged at participating centres from November 1999 to December 2001. Of the 1091 patients with proximal stenosis of the LAD, 127 were eligible and consented to take part; 100 were randomised and the remaining 27 consented to follow-up. All randomised participants were included in an intention-to-treat analysis of survival free from cardiac-related events, which found a non-significant benefit from MIDCAB. Cumulative hazard rates at 12 months were estimated to be 7.1 and 9.2% for MIDCAB and PTCA, respectively. There were no important differences between MIDCAB and PTCA with respect to angina symptoms or disease-specific or generic quality of life. The total NHS procedure costs were 1648 British pounds and 946 British pounds for MIDCAB and PTCA, respectively. The costs of resources used during 1 year of follow-up were 1033 British pounds and 843 British pounds, respectively. The study found no evidence that MIDCAB was more effective than PTCA. The procedure costs of MIDCAB were observed to be considerably higher than those of PTCA. Given these findings, it is unlikely that MIDCAB represents a cost-effective use of resources in the reference population. Recent advances in cardiac surgery mean that surgeons now tend to carry out off-pump bypass grafting via a sternotomy instead of MIDCAB. At the same time, cardiologists are treating more patients with multi-vessel disease by PTCA. Future primary research should focus on this comparison. Other small trials of PTCA versus MIDCAB have now finished and a more conclusive answer to the original objective could be provided by a systematic review.
DOI: 10.1111/j.1475-1313.1993.tb00421.x
1993
Cited 68 times
Reliability of high‐ and low‐contrast letter charts
The aim of this study was to measure the reliability, in test score units, of several clinical tests which use high- and low-contrast letters, and to provide an estimate of what constitutes a significant change in performance over time. Patients with normal vision and with early or subtle eye disease were recruited so that the results would be representative of the population likely to present for primary vision screening. Patients were tested on the Bailey-Lovie logMAR chart, the Regan low-contrast letter charts and the Pelli-Robson low-contrast letter chart on two occasions; the two test sessions were separated by at least four weeks to give an estimate of reliability appropriate for the conditions under which the tests are likely to be used. A 'significant change', i.e. one which would be observed in only about 5% of patients with stable visual performance, was about +/- 2 'steps' of the measurement scale, i.e. +/- 2 lines for the Bailey-Lovie and Regan charts and +/- 2 letter groups for the Pelli-Robson chart.
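Both chart-reliability studies express reliability as the spread of test-retest difference scores: a change larger than roughly 1.96 standard deviations of those differences would be seen in only about 5% of stable patients. The sketch below computes that criterion from paired scores; the toy data and the choice of 1.96 x SD as the cut-off are assumptions for illustration, not the studies' exact analysis.

```python
import math

def change_criterion(first, second, coverage=1.96):
    """Test-retest repeatability from paired scores measured on two occasions.

    Returns the mean difference and coverage * SD of the differences, i.e. the
    size of change that would be seen in only about 5% of stable patients."""
    diffs = [b - a for a, b in zip(first, second)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d, coverage * sd_d

# toy chart scores (letters read) for 8 stable patients on two visits
visit1 = [52, 48, 55, 50, 47, 53, 49, 51]
visit2 = [54, 47, 53, 52, 46, 55, 48, 50]
mean_d, limit = change_criterion(visit1, visit2)
print(f"mean difference = {mean_d:.2f}; change criterion = +/- {limit:.1f} letters")
```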
DOI: 10.1016/j.athoracsur.2005.06.038
2006
Cited 65 times
Splanchnic Organ Injury During Coronary Surgery With or Without Cardiopulmonary Bypass: A Randomized, Controlled Trial
We investigated the efficacy of coronary surgery with or without cardiopulmonary bypass in protecting the function of the small intestine, liver, and pancreas. Patients were randomized to off-pump coronary artery bypass grafting (OPCAB) or coronary artery bypass grafting with cardiopulmonary bypass (CABG-CPB). Small intestine function was assessed by differential four-sugar (O-methyl-D-glucose, D-xylose, L-rhamnose, and lactulose) permeability and absorption tests. Liver function was assessed by monoethylglycinexylidide/lidocaine ratios and by serial measurements of transaminases (aspartate transaminase and alanine-amino transferase), bilirubin, and alkaline phosphatase. Pancreatic function was assessed by serial measurements of insulin/glucagon ratio, amylase, and glucose. Forty patients were recruited (20 per group). Permeability and absorption were more impaired in the OPCAB group immediately after surgery, but returned to baseline levels in both groups by postoperative day 5 (interaction of surgery type and time; p = 0.05 and p = 0.02, respectively). Monoethylglycinexylidide/lidocaine ratios were not different in the two groups. Aspartate transaminase and alanine-amino transferase levels were higher in the CABG-CPB group for the first postoperative day, but levels converged by day 3 (interaction of surgery type and time; p < 0.0001 and p = 0.04, respectively). The bilirubin level for the OPCAB group overshot the CABG-CPB group at 36 hours before returning to a similar level 60 hours postoperatively. Amylase levels were higher in the CABG-CPB group than in the OPCAB group (1.17 times; p = 0.03); other markers of pancreatic function showed no differences between the groups. Early small intestine function is worse with OPCAB; all functions recover to similar levels in both groups by day 5. Conversely, pancreatic function is worse in the CABG-CPB group than in the OPCAB group. Hepatic metabolic function does not differ by type of surgery to the end of the operation. Postoperative hepatocellular injury was worse in the CABG-CPB group.
DOI: 10.1161/circulationaha.105.557462
2005
Cited 64 times
Retinal and Cerebral Microembolization During Coronary Artery Bypass Surgery
Background— We sought to compare the effects on ophthalmic function of coronary artery bypass grafting (CABG) with cardiopulmonary bypass (CPB) and off-pump (OPCAB) grafting and to investigate whether retinal microvascular damage is associated with markers of cerebral injury. Methods and Results— Retinal microvascular damage was assessed by fluorescein angiography and color fundus photography. Ophthalmic function was tested by the logarithm of the minimum angle of resolution visual acuity (VA), and cerebral injury, by transcranial Doppler ultrasound–detected emboli and S100 protein values. Twenty patients were randomized. Fluorescein angiography and postoperative VA could not be obtained for 1 CABG-CPB patient. Retinal microvascular damage was detected in 5 of 9 CABG-CPB but in none of 10 OPCAB patients (risk difference, 55%; 95% confidence interval [CI], 23% to 88%; P=0.01). Color fundus photography detected microvascular damage in 1 CABG-CPB patient but in no OPCAB patients; this lesion was associated with a field defect, which remained after 3 months of follow-up. There was no difference in postoperative VA. Doppler high-intensity transient signals (HITS) were 20.3 times more frequent in the CABG-CPB than in the OPCAB group (95% CI, 9.1 to 45; P<0.0001). Protein S100 levels were higher in the CABG-CPB than in the OPCAB group 1 hour after surgery (P<0.001). HITS were 14.7 times more frequent (95% CI, 3.5 to 62; P=0.001) and S100 level 2.1 times higher (95% CI, 1.3 to 3.5; P=0.005) when retinal microvascular damage was present. Conclusions— The relative frequency of retinal microvascular damage between groups shows the extent to which the risk of cerebral injury is reduced with OPCAB. Imaging of part of the cerebral circulation provides evidence to validate markers of cerebral injury.
DOI: 10.1161/strokeaha.108.517805
2009
Cited 60 times
Variation in Outcome After Subarachnoid Hemorrhage
Background and Purpose— The purpose of the study was to describe the characteristics, management, and outcomes of patients with confirmed aneurysmal subarachnoid hemorrhage and to compare outcomes across neurosurgical units (NSUs) in the UK and Ireland. Methods— A cohort of patients admitted to NSUs with subarachnoid hemorrhage between September 14, 2001 and September 13, 2002 was studied longitudinally. Information was collected to characterize clinical condition on admission and treatment. Death or severe disability, defined by the Glasgow Outcome Score–Extended, was ascertained at 6 months. Results— Data for 2397 patients with a confirmed aneurysm and no coexisting neurological pathology were collected by all 34 NSUs in the UK and Ireland. Aneurysm repair was attempted in 2198 (91.7%) patients (surgical clipping, 57.7%; endovascular coiling, 41.2%; other repair, 1.0%). Most patients (65.0%) were admitted to the NSU on the same day or the day after their hemorrhage; 32.0% of treated patients had the aneurysm repaired on the day of admission to the NSU (day 0), day 1 or day 2 and a further 39.3% by day 7. Glasgow Outcome Score–Extended at 6 months was obtained for 90.6% of patients (2172), of whom 38.5% had an unfavorable outcome. The median risk of an unfavorable outcome for all patients was 31% (5th and 95th percentiles, 12% and 83%), depending on prerepair prognostic factors. After adjustment for case-mix, the percentage of patients with an unfavorable outcome in each NSU did not differ significantly from the overall mean. Conclusions— In this study that collected representative data from the UK and Ireland, there was no evidence that the performance of any NSU differed from the average.
DOI: 10.1016/j.ejcts.2006.03.018
2006
Cited 60 times
Morbidity and mortality following acute conversion from off-pump to on-pump coronary surgery
Many studies have described reduced morbidity in hospital and equivalent midterm outcomes with off-pump coronary artery bypass (OPCAB) surgery compared to conventional CABG (CABG-CPB). However, OPCAB is sometimes converted acutely to CABG-CPB. We describe the risk of acute conversion and compare patients' outcomes for acutely converted OPCAB with unconverted OPCAB and CABG-CPB. Consecutive acute conversions, i.e. OPCAB patients in whom CPB was instituted urgently for hemodynamic or electrical instability, cardiac arrest or uncontrolled bleeding, were compared with propensity-matched unconverted OPCAB and CABG-CPB patients. Relative risks of death and complications in hospital, and subsequent survival, were estimated. The risk of acute conversion between 1996 and 2004 was 1.1% (27/2492): 5.1% in the first 2 years, 2.2% in the third year and 0.8% subsequently. Odds ratios for death in hospital compared to unconverted OPCAB and CABG-CPB were 4.4 (95% confidence interval (CI) 0.67-29.1) and 4.7 (95% CI 1.03-21.1), respectively, and ranged from 0 to 4.5 for serious complications. Converted patients had an increased hazard of death for 3 years after surgery compared to unconverted OPCAB (hazard ratio 3.21, 95% CI 1.20-8.59) and CABG-CPB patients (hazard ratio 3.23, 95% CI 1.41-7.39). Experienced OPCAB surgeons have a low risk of acute conversion. Acutely converted patients have a moderately increased risk of death and serious complications in hospital. These risks are difficult to quantify precisely because conversion is rare.
DOI: 10.1002/jrsm.1074
2013
Cited 41 times
Issues relating to selective reporting when including non‐randomized studies in systematic reviews on the effects of healthcare interventions
DOI: 10.1002/14651858.cd011230
2014
Cited 37 times
Systemic safety of bevacizumab versus ranibizumab for neovascular age-related macular degeneration
This is a protocol for a Cochrane Review (Intervention). The objectives are as follows: To assess the systemic safety of intravitreal bevacizumab compared with intravitreal ranibizumab in people with neovascular AMD.
DOI: 10.1016/j.ijcard.2016.09.021
2016
Cited 36 times
The healthcare costs of heart failure during the last five years of life: A retrospective cohort study
Background Evidence on the economic impact of heart failure (HF) is vital in order to predict the cost-effectiveness of novel interventions. We estimate the health system costs of HF during the last five years of life. Methods We used linked primary care and mortality data accessed through the Clinical Practice Research Datalink (CPRD) to identify 1555 adults in England who died with HF in 2012/13. We used CPRD and linked Hospital Episode Statistics to estimate the cost of medications, primary and hospital healthcare. Using GLS regression we estimated the relationship between costs, HF diagnosis, proximity to death and patient characteristics. Results In the last 3 months of life, healthcare costs were £8827 (95% CI £8357 to £9296) per patient, more than 90% of which were for inpatient or critical care. In the last 3 months, patients spent on average 17.8 (95% CI 16.8 to 18.8) days in hospital and had 8.8 (95% CI 8.4 to 9.1) primary care consultations. Most (931/1555; 59.9%) patients were in hospital on the day of death. Mean quarterly healthcare costs in quarters after HF diagnosis were higher (£1439; [95% CI £1260 to £1619]) than in quarters preceding diagnosis. Older patients and patients with lower comorbidity scores had lower costs. Conclusions Healthcare costs increase sharply at the end of life and are dominated by hospital care. There is potential to save money by implementation and evaluation of interventions that are known to reduce hospitalisations for HF, particularly at the end of life.
DOI: 10.1136/bmjopen-2017-018289
2017
Cited 34 times
A longitudinal study to assess the frequency and cost of antivascular endothelial therapy, and inequalities in access, in England between 2005 and 2015
Objectives High-cost antivascular endothelial growth factor (anti-VEGF) medicines for eye disorders challenge ophthalmologists and policymakers to provide fair access for patients while minimising costs. We describe the growth in the use and costs of these medicines and measure inequalities in access. Design Longitudinal study using Hospital Episode Statistics (2005/2006 to 2014/2015) and hospital prescribing cost reports (2008/2009 to 2015/2016). We used Poisson regression to estimate standardised rates and explore temporal and geographical variations. Setting National Health Service (NHS) care in England. Population Patients receiving anti-VEGF injections for age-related macular degeneration, diabetic macular oedema and other eye disorders. Interventions Higher-cost drugs (ranibizumab or aflibercept) recommended by the National Institute for Health and Care Excellence or lower-cost drug (bevacizumab) not licensed for eye disorders. Main outcome measures National procedure rates and variation between and within clinical commissioning groups (CCGs). Cost of ranibizumab and aflibercept prescribing. Results Injection procedures increased by 215% between 2010/2011 and 2014/2015. In 2014/2015 there were 388 031 procedures (714 per 100 000). There is no evidence that the dramatic growth in rates is slowing down. Since 2010/2011 the estimated cost of ranibizumab and aflibercept increased by 247% to £447 million in 2015/2016, equivalent to the entire annual budget of a CCG. There are large inequalities in access; in 2014/2015 procedure rates in a ‘high use’ CCG were 9.08 times higher than in a ‘low use’ CCG. In the South-West of England there was twofold variation in injections per patient per year (range 2.9 to 5.9). Conclusions The high and rising cost of anti-VEGF therapy affects the ability of the NHS to provide care for other patients. Current regulations encourage the increasing use of ranibizumab and aflibercept rather than bevacizumab, which evidence suggests is more cost-effective. NHS patients in England do not have equal access to the most cost-effective care.
DOI: 10.1186/s13063-021-05966-3
2022
Cited 11 times
Vitrectomy, subretinal Tissue plasminogen activator and Intravitreal Gas for submacular haemorrhage secondary to Exudative Age-Related macular degeneration (TIGER): study protocol for a phase 3, pan-European, two-group, non-commercial, active-control, observer-masked, superiority, randomised controlled surgical trial
Neovascular (wet) age-related macular degeneration (AMD) can be associated with large submacular haemorrhage (SMH). The natural history of SMH is very poor, with typically marked and permanent loss of central vision in the affected eye. Practice surveys indicate varied management approaches including observation, intravitreal anti-vascular endothelial growth factor therapy, intravitreal gas to pneumatically displace SMH, intravitreal alteplase (tissue plasminogen activator, TPA) to dissolve the clot, subretinal TPA via vitrectomy, and varying combinations thereof. No large, published, randomised controlled trials have compared these management options. TIGER is a phase 3, pan-European, two-group, active-control, observer-masked, superiority, randomised controlled surgical trial. Eligible participants have large, fovea-involving SMH of no more than 15 days' duration due to treatment-naïve or previously treated neovascular AMD, including idiopathic polypoidal choroidal vasculopathy and retinal angiomatous proliferation. A total of 210 participants are randomised in a 1:1 ratio to pars plana vitrectomy, off-label subretinal TPA up to 25 μg in 0.25 ml, intravitreal 20% sulfur hexafluoride gas and intravitreal aflibercept, or intravitreal aflibercept monotherapy. Aflibercept 2 mg is administered to both groups monthly for 3 doses, then 2-monthly to month 12. The primary efficacy outcome is the proportion of participants with best-corrected visual acuity (BCVA) gain of ≥ 10 Early Treatment Diabetic Retinopathy Study (ETDRS) letters in the study eye at month 12. Secondary efficacy outcomes (at 6 and 12 months unless noted otherwise) are proportion of participants with a BCVA gain of ≥ 10 ETDRS letters at 6 months, mean ETDRS BCVA, Radner maximum reading speed, National Eye Institute 25-item Visual Function Questionnaire composite score, EQ-5D-5L with vision bolt-on score, Short Warwick and Edinburgh Mental Wellbeing score, scotoma size on Humphrey field analyser, and presence/absence of subfoveal fibrosis and/or atrophy and area of fibrosis/atrophy using independent reading centre multimodal image analysis (12 months only). Key safety outcomes are adverse events, serious adverse events, and important medical events, coded using the Medical Dictionary for Regulatory Activities Preferred Terms. The best management of SMH is unknown. TIGER aims to establish if the benefits of SMH surgery outweigh the risks, relative to aflibercept monotherapy. ClinicalTrials.gov NCT04663750; EudraCT: 2020-004917-10.
DOI: 10.1016/j.ehj.2003.11.015
2004
Cited 64 times
Beating heart against cardioplegic arrest studies (BHACAS 1 and 2): quality of life at mid-term follow-up in two randomised controlled trials
Aims Off-pump coronary artery bypass grafting (OPCAB) has short-term benefits compared with conventional bypass grafting using the heart-lung machine (CABG-CPB) but may compromise longer-term outcome. We aimed to compare generic and disease-specific quality of life (QoL) two to four years after surgery in participants in two randomised controlled trials of OPCAB vs. CABG-CPB. Methods and results Trial participants were sent four questionnaires (SF-36, EuroQol/EQ-5D, Seattle Angina Questionnaire (SAQ) and Coronary Revascularisation Outcome Questionnaire (CROQ)) to assess generic and disease-specific QoL. Of 401 participants, 22 (5.5%) had died; of the 379 survivors, 328 responded (86.5%; 159 CABG-CPB and 169 OPCAB). Median duration of follow-up was three years. QoL scores for both groups were very similar and differences between groups were not significant (p > 0.05 for all questionnaires and dimensions). Summary SF-36 scores showed poorer than normal physical QoL but normal mental QoL. Among all responders, there was a tendency for CROQ scores (core total, physical and psychosocial functioning and satisfaction with treatment) to deteriorate with time after the operation (p ≤ 0.05). Conclusion Two to four years after surgery, patients randomised to OPCAB and CABG-CPB had similar symptoms and similar generic and disease-specific QoL.
DOI: 10.1016/s0003-4975(02)03727-x
2002
Cited 60 times
Predictors of stroke in the modern era of coronary artery bypass grafting: a case control study
Background. Stroke is a rare but devastating complication after coronary artery bypass grafting (CABG) and its prevention remains elusive. We used a case-control design to investigate the extent to which preoperative and perioperative factors were associated with occurrence of stroke in a cohort of consecutive patients undergoing myocardial revascularization. Methods. From April 1996 to March 2001, data from 4,077 patients undergoing CABG were prospectively entered into a database. The association of preoperative and perioperative factors with stroke was investigated by univariate analyses. Factors observed to be significantly associated with stroke in these analyses were further investigated using multiple logistic regression to estimate the strength of the associations with the occurrence of stroke, after taking account of the other factors. Results. During the study period, 4,077 patients underwent CABG and of these 923 (22.6%) had off-pump surgery. Forty-five patients suffered a perioperative stroke (1.1%). Overall there were 46 in-hospital deaths (1.1%), of whom 6 also suffered a stroke. Brain imaging of the stroke patients showed embolic lesions in 58%, watershed lesions in 28%, and mixed lesions in 14%. Multivariate regression analysis identified several preoperative factors as independent predictors of stroke: age, unstable angina, serum creatinine greater than 150 µmol/L, previous cerebrovascular accident (CVA), peripheral vascular disease (PVD), and salvage operation. When operative risk factors were added to the adjusted model, off-pump surgery was associated with a substantial, but not significant, protective effect against stroke (odds ratio = 0.56, 95% confidence interval 0.20 to 1.55). Survival for stroke patients was 93% and 78% at 1 and 5 years, respectively. Conclusions. The overall incidence of stroke was relatively low in our series. Age, unstable angina, previous CVA, PVD, serum creatinine greater than 150 µmol/L, and salvage operation are independent predictors of stroke. These factors should be taken into account when informing each individual patient of the possible risk of stroke and in the decision-making process about surgical strategy.
DOI: 10.1097/01.mlg.0000198338.05826.18
2006
Cited 58 times
Health‐Related Quality of Life after Polypectomy with and without Additional Surgery
The objective of this study was to compare the health-related quality of life of patients undergoing simple polypectomy with that of patients undergoing polypectomy with additional surgery. This was a prospective, multicenter cohort study of adults undergoing sinonasal surgery. Eight hundred forty-four patients received simple polypectomy and 1,004 patients received polypectomy with additional surgery. Health-related quality of life was compared at 12 and 36 months after surgery using the Sino-Nasal Outcome Test (SNOT-22). Total SNOT-22 scores may range from zero to 110, with lower scores representing better outcomes. We used linear regression to adjust postoperative SNOT-22 scores for baseline characteristics. When comparing the difference between the two surgical techniques, positive SNOT-22 scores represent a better outcome for those undergoing additional surgery. There were only small differences between the two groups at 12 months (difference in SNOT-22 -0.5; 95% confidence interval [CI] -2.3 to 1.3; P=.58) and 36 months after surgery (difference -2.1; 95% CI -4.4 to 0.2; P=.08). The additional surgery group had a slightly higher risk of excessive perioperative bleeding (8.6% vs. 6.0%; P=.04) but a slightly lower risk of revision surgery within 36 months (10.4% vs. 13.3%; P=.12). Nasal polypectomy with additional surgery seems to have no benefit over simple polypectomy in terms of health-related quality of life improvement for patients with nasal polyposis.
DOI: 10.1016/j.jtcvs.2004.02.031
2004
Cited 56 times
Monitoring the performance of residents during training in off-pump coronary surgery
Control charts (eg, cumulative sum charts) plot changes in performance with time and can alert a surgeon to suboptimal performance. They were used to compare performance of off-pump coronary artery bypass surgery between a consultant and four resident surgeons and to compare performance of off-pump coronary artery bypass surgery and conventional coronary artery bypass grafting within surgeons. Data were analyzed for consecutive patients undergoing coronary artery bypass grafting who were operated on by one consultant or one of four residents. Conversions were analyzed by intention to treat. Perioperative death or one or more of 10 adverse events constituted failure. Predicted risks of failure for individual patients were derived from the study population. Variable life-adjusted displays and risk-adjusted sequential probability ratio test charts were plotted. Data for 1372 patients were analyzed; 769 of the procedures were off-pump coronary artery bypass operations (56.0%). The consultant operated on 382 patients (293 off-pump, 76.7%), and the residents operated on 990 (474 off-pump, 47.9%). Patients operated on by residents tended to be older, more obese, more likely to require an urgent operation, and more likely to need a circumflex artery graft but less likely to have triple-vessel disease. There were 7 conversions (consultant 5, residents 2). The overall failure rate was 8.5% (9.2% for consultant's operations and 8.2% for residents' operations), including 10 deaths (0.7%). Predicted and observed risks of failure were similar for all five surgeons. After 100 off-pump coronary artery bypass operations, performance was the same or better for the residents as for the consultant. For all surgeons, performance was the same or better for off-pump as for conventional coronary artery bypass grafting. Off-pump coronary artery bypass surgery can be safely taught to cardiothoracic residents. Implementation of continuous performance monitoring for residents is practicable.
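As a rough illustration of the variable life-adjusted display (VLAD) mentioned above: for each consecutive operation the predicted risk of failure is added and 1 is subtracted if a failure actually occurred, so a downward drift flags worse-than-expected performance. The risks, outcomes and plotting details below are hypothetical and are not the paper's data or its risk model.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical predicted risks of failure and observed outcomes (1 = failure).
predicted_risk = np.array([0.05, 0.12, 0.08, 0.20, 0.06, 0.10, 0.15, 0.07])
failure        = np.array([0,    0,    1,    0,    0,    0,    1,    0])

# Running total of expected minus observed failures ("statistical lives saved").
vlad = np.cumsum(predicted_risk - failure)

plt.step(range(1, len(vlad) + 1), vlad, where="post")
plt.xlabel("Consecutive operations")
plt.ylabel("Cumulative expected minus observed failures")
plt.show()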
DOI: 10.1016/j.athoracsur.2005.05.077
2005
Cited 52 times
Incomplete Revascularization During OPCAB Surgery is Associated With Reduced Mid-Term Event-Free Survival
The aim of this study was to compare early and mid-term outcome in patients undergoing off-pump coronary artery bypass surgery who have had complete revascularization and those who have had incomplete revascularization (IR). Patient and operative data were collected prospectively for all patients who had off-pump coronary artery bypass surgery. Patients with multivessel disease were classified as having IR if the number of diseased coronary systems (left anterior descending coronary artery, circumflex and right coronary artery) exceeded the number of distal anastomoses. In-hospital outcomes, survival, and event-free survival were compared between patients with complete revascularization and IR using propensity scores to take account of differences in prognostic factors. There were 1,479 off-pump coronary artery bypass surgery patients between April 1996 and December 2002 (30% of all coronary artery bypass graft patients), and 16.0% (237 patients) had IR. Patients with IR tended to be older, were more likely to be female, and had more extensive disease, worse dyspnea, a higher Parsonnet score, poorer ejection fraction, congestive cardiac failure, asthma or chronic obstructive airways disease, and previous cardiac surgery. The adjusted hazard ratio for patient survival with IR versus complete revascularization was 1.56 (95% confidence interval, 1.19 to 2.06; p = 0.001). Analyses for multiple time periods confirmed that patients with IR had a significantly increased risk of death, but also that the risk disappeared after the first 4 to 6 months of follow-up (p < 0.0001). Compared with off-pump coronary artery bypass surgery patients with complete revascularization, those with IR have reduced survival, but only in the first 4 to 6 months after surgery. Patients' preoperative condition, rather than IR itself, may explain these findings, because IR should otherwise have mid-term as well as early effects.
DOI: 10.1097/aco.0b013e32830dd087
2008
Cited 48 times
Increased mortality, morbidity, and cost associated with red blood cell transfusion after cardiac surgery
Literature since 2006 was reviewed to identify the harms and costs of red blood cell (RBC) transfusion. Several studies, on people having various cardiac surgery operations, found strong associations of RBC transfusion with mortality and postoperative morbidity. The effect on mortality was strongest close to the time of operation but extended to 5 years. Morbidity outcomes included serious wound and systemic infections, renal failure, prolonged ventilation, low cardiac index, myocardial infarction, and stroke. RBC transfusion was also strongly associated with increased intensive care and ward postoperative stay, and hence increased cost of admission; available studies did not consider all resources used and the associated costs. The harms of RBC transfusion have potentially serious and long-term consequences for patients and are costly for health services. This evidence should shift clinicians' equipoise towards more restrictive transfusion practice. The immediate aim should be to avoid transfusing small numbers of RBC units for general malaise attributed to anaemia, a practice which appears to occur in about 50% of transfused patients. Randomized trials comparing restrictive and liberal transfusion triggers are urgently needed to compare directly the balance of benefits and harms from RBC transfusion.
DOI: 10.1093/qjmed/hcp036
2009
Cited 47 times
Predicting mortality for patients with exacerbations of COPD and Asthma in the COPD and Asthma Outcome Study (CAOS)
Decisions about the intensity of treatment for patients with acute exacerbations of chronic obstructive pulmonary disease (AECOPD) are influenced by predictions about survival and quality of life. Evidence suggests that these predictions are poorly calibrated and tend to be pessimistic. The aim of this study was to develop an outcome prediction model for COPD patients to support treatment decisions. A prospective multi-centre cohort study in Intensive Care Units (ICU) and Respiratory High Dependency Units (RHDU) in the UK recruited patients aged 45 years and older admitted with an exacerbation of obstructive lung disease. Data were collected on patients' characteristics prior to ICU admission, and on their survival and quality of life after 180 days. An outcome prediction model was developed using multivariate logistic regression and bootstrapping. Ninety-two ICUs (53% of those in the UK) and three RHDUs took part. A total of 832 patients were recruited. Cumulative 180-day mortality was 37.9%. Using data available at the time of admission to the units, a prognostic model was developed with an estimated area under the receiver operating characteristic curve ('c') of 74.7% after bootstrapping; the model was more discriminating than the clinicians (P = 0.033) and was well calibrated. This study has produced an outcome prediction model with slightly better discrimination and much better calibration than the participating clinicians. It has the potential to support risk adjustment and clinical decision making about admission to intensive care.
DOI: 10.1136/thx.2007.091249
2009
Cited 46 times
Survival and quality of life for patients with COPD or asthma admitted to intensive care in a UK multicentre cohort: the COPD and Asthma Outcome Study (CAOS)
Non-invasive ventilation is first-line treatment for patients with acutely decompensated chronic obstructive pulmonary disease (COPD), but endotracheal intubation, involving admission to an intensive care unit, may sometimes be required. Decisions to admit to an intensive care unit are commonly based on predicted survival and quality of life, but the information base for these decisions is limited and there is some evidence that clinicians tend to be pessimistic. This study examined the outcomes in patients with COPD admitted to the intensive care unit for decompensated type II respiratory failure. A prospective cohort study was carried out in 92 intensive care units and 3 respiratory high dependency units in the UK. Patients aged 45 years and older with breathlessness, respiratory failure or change in mental status due to an exacerbation of COPD, asthma or a combination of the two were recruited. Outcomes included survival and quality of life at 180 days. Of the 832 patients recruited, 517 (62%) survived to 180 days. Of the survivors, 421 (81%) responded to a questionnaire. Of the respondents, 73% considered their quality of life to be the same as or better than it had been in the stable period before they were admitted, and 96% would choose similar treatment again. Function during the stable pre-admission period was a reasonable indicator of function reported by those who survived 180 days. Most patients with COPD who survive to 180 days after treatment in an intensive care unit have a heavy burden of symptoms, but almost all of them, including those who have been intubated, would want similar intensive care again under similar circumstances.
DOI: 10.1136/bmj.a939
2008
Cited 45 times
Implementation of computerised physician order entry (CPOE) and picture archiving and communication systems (PACS) in the NHS: quantitative before and after study
Objective To assess the impact of components of the national programme for information technology (NPfIT) on measures of clinical and operational efficiency. Design Quasi-experimental controlled before and after study using routinely collected patient level data. Setting Four NHS acute hospital trusts in England. Data sources Inpatient admissions and outpatient appointments, 2000-5. Interventions A system for ordering pathology tests and browsing results (computerised physician order entry, CPOE) and a system for requesting radiological examinations and displaying images (picture archiving and communications system, PACS). Main outcome measures Requests per inpatient, outpatient, or day case patient for full blood count, urine culture and urea and electrolytes tests, and plain x ray film, computed tomography, and ultrasonography examinations. Results CPOE was associated with a reduction in the proportion of outpatient appointments at which full blood count (odds ratio 0.25, 95% confidence interval 0.16 to 0.40), urea and electrolytes (0.55, 0.39 to 0.77), and urine culture (0.30, 0.17 to 0.51) tests were ordered, and at which full blood count tests were repeated (0.73, 0.53 to 0.99). Conversely, the same system was associated with an almost fourfold increase in the use of urea and electrolytes tests among day case patients (3.63, 1.66 to 7.94). PACS was associated with a reduction in repeat plain x ray films at outpatient appointments (0.62, 0.44 to 0.88) and a reduction in inpatient computed tomography (0.83, 0.70 to 0.98). Conversely, it was associated with increases in computed tomography requested at outpatient appointments (1.89, 1.26 to 2.84) and computed tomography repeated within 48 hours during an inpatient stay (2.18, 1.52 to 3.14). Conclusions CPOE and PACS were associated with both increases and reductions in tests and examinations. The magnitude of the changes is potentially important with respect to the efficiency of provision of health care. Better information about the impact of modern IT is required to enable healthcare organisations to manage implementation optimally.
DOI: 10.1093/ije/26.5.1080
1997
Cited 54 times
A review of data-derived methods for assigning causes of death from verbal autopsy data
BACKGROUND: Verbal autopsy (VA) is an indirect method for estimating cause-specific mortality. In most previous studies, cause of death has been assigned from verbal autopsy data using expert algorithms or by physician review. Both of these methods may have poor validity. In addition, physician review is time consuming and has to be carried out by doctors. A range of methods exist for deriving classification rules from data. Such rules are quick and simple to apply and in many situations perform as well as experts. METHODS: This paper has two aims. First, it considers the advantages and disadvantages of the three main methods for deriving classification rules empirically; (a) linear and other discriminant techniques, (b) probability density estimation and (c) decision trees and rule-based methods. Second, it reviews the factors which need to be taken into account when choosing a classification method for assigning cause of death from VA data. RESULTS: Four main factors influence the choice of classification method: (a) the purpose for which a classifier is being developed, (b) the number of validated causes of death assigned to each case, (c) the characteristics of the VA data and (d) the need for a classifier to be comprehensible. When the objective is to estimate mortality from a single cause of death, logistic regression should be used. When the objective is to determine patterns of mortality, the choice of method will depend on the above factors in ways which are elaborated in the paper. CONCLUSION: Choice of classification method for assigning cause of death needs to be considered when designing a VA validation study. Comparison of the performance of classifiers derived using different methods requires a large VA dataset, which is not currently available.
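To make the single-cause recommendation above concrete, here is a minimal sketch of a data-derived classifier: a logistic regression fitted to hypothetical binary verbal autopsy symptom indicators against a validated single cause of death, with the cause-specific mortality fraction estimated by averaging predicted probabilities. All variables and values are simulated for illustration and are not from any VA study.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical binary symptom indicators (e.g. fever, cough, diarrhoea).
X = rng.integers(0, 2, size=(n, 3))

# Simulated validated label: death from the target cause (1) or another cause (0).
logit = -1.5 + 2.0 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

clf = LogisticRegression().fit(X, y)

# Cause-specific mortality fraction in a new set of VA records, estimated
# by averaging the predicted probabilities of the target cause.
X_new = rng.integers(0, 2, size=(200, 3))
print(clf.predict_proba(X_new)[:, 1].mean())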
DOI: 10.1016/s0003-4975(02)04025-0
2002
Cited 53 times
Effect of off-pump coronary surgery with right ventricular assist device on organ function and inflammatory response: a randomized controlled trial
Right ventricular assist devices (RVADs) have been proposed to improve exposure of the coronary arteries in off-pump surgery. In this study we investigated the impact of the A-Med RVAD on inflammatory response and organ function in patients undergoing coronary artery bypass grafting. Sixty patients were prospectively randomized to conventional surgery with cardiopulmonary bypass (CPB) and cardioplegic arrest, beating heart surgery (off-pump), or beating heart surgery with the RVAD. Serial blood samples were collected postoperatively for analysis of inflammatory markers, troponin I, protein S100, and free hemoglobin. Renal tubular function was assessed by measuring urine N-acetyl-glucosaminidase activity. No hospital deaths or major postoperative complications occurred in the study population. Interleukin-6, interleukin-8, C3a, and troponin I levels after surgery were significantly higher in the CPB group compared with the off-pump and RVAD groups. Free hemoglobin levels immediately after the operation, peak and total S100 levels, and N-acetyl-glucosaminidase activity were also significantly higher in the CPB group. Off-pump coronary revascularization, with or without RVAD, reduces inflammatory response, myocardial, neurologic, and renal injury, and decreases hemolysis when compared with conventional surgery with CPB and cardioplegic arrest.
DOI: 10.1053/jpsu.2003.50121
2003
Cited 46 times
Randomized controlled trials in pediatric surgery: Could we do better?
Randomized controlled trials (RCTs) are accepted as the gold standard for assessing the effectiveness of clinical interventions but are rarely reported in pediatric surgery. Have RCTs submitted to the British Association of Paediatric Surgeons (BAPS) Annual Congress during the last 5 years been adequately designed and large enough to produce a valid result? Abstracts accepted by the Annual BAPS Congress meetings between 1996 and 2000 were examined in collaboration with a senior health services researcher. The quality of the design, methodology, statistical analysis and conclusions, and the adequacy of the sample size were assessed for all identifiable clinical RCTs. From 760 accepted abstracts, there were only 9 RCTs (1%) of clinical interventions. In only 4 trials was the relevant primary end-point specified at the outset of the study, and none documented the method of randomization. Only one abstract mentioned blinding with respect to the intervention or outcome measure. Sample sizes were inadequate to detect even large clinical differences. To date, only one of these RCTs has been published in an English-language, peer-reviewed journal. Clear guidelines exist for the conduct of RCTs, yet compliance with these standards was rarely documented in abstracts of pediatric surgical RCTs presented at BAPS. Sample sizes were inadequate. RCTs in pediatric surgery are difficult to perform, but the specialty would benefit from well-designed, carefully conducted, multicentre, clinical RCTs to advance evidence-based practice.
DOI: 10.1016/j.athoracsur.2003.10.127
2004
Cited 45 times
Trainees operating on high-risk patients without cardiopulmonary bypass: a high-risk strategy?
The safety of teaching off-pump coronary artery bypass grafting to trainees is best tested in high-risk patients, who are more likely to experience significant morbidity after surgery. This study compared outcomes of off-pump coronary artery bypass grafting operations performed by consultants and trainees in high-risk patients. Data for consecutive patients undergoing off-pump coronary artery bypass grafting were collected prospectively. Patients satisfying at least one of the following criteria were classified as high-risk: age older than 75 years, ejection fraction less than 0.30, myocardial infarction in the previous month, current congestive heart failure, previous cerebrovascular accident, creatinine greater than 150 micromol/L, respiratory impairment, peripheral vascular disease, previous cardiac surgery, and left main stem stenosis greater than 50%. Early morbidity, 30-day mortality, and late survival were compared. From April 1996 to December 2002, 686 high-risk patients underwent off-pump coronary artery bypass grafting revascularization. Operations by five consultants (416; 61%) and four trainees (239; 35%) were the focus of subsequent analyses. Nine visiting or research fellows performed the other 31 operations. Prognostic factors were more favorable in trainee-led operations. On average, consultants and trainees grafted the same number of vessels. There were 18 (4.3%) and 5 (1.9%) deaths within 30 days, and 14 (3.4%) and 5 (1.9%) myocardial infarctions in the consultant and trainee groups, respectively. After adjusting for imbalances in prognostic factors, odds ratios for almost all adverse outcomes implied no increased risk with trainee operators, although patients operated on by trainees had longer postoperative stays and were more likely to have a red blood cell transfusion. Kaplan-Meier cumulative mortality estimates at 24-month follow-up were 10.5% (95% confidence interval, 7.7% to 14.2%) and 6.4% (95% confidence interval, 3.8% to 10.9%) in the consultant and trainee groups, respectively (hazard ratio = 0.60 [95% confidence interval, 0.37 to 0.99]; p = 0.05). Off-pump coronary artery bypass grafting surgery in high-risk patients can be safely performed by trainees.
DOI: 10.1111/j.1475-1313.1987.tb00776.x
1987
Cited 45 times
THE CLINICAL SIGNIFICANCE OF CHANGE*
Clinicians frequently make judgements about the clinical importance of a change in "score" on visual function tests obtained from patients on successive visits. Almost no normative data exist for assessing the significance of change in performance on routinely used clinical tests, and the importance of collecting such data is emphasized. A description of how normative data for the significance of change can be collected is given, using the Farnsworth-Munsell 100 hue test as an example. Although there are always limitations to estimates of error variance, the importance of determining the precision of clinical tests, for guiding rational decision making, is stressed.
DOI: 10.1136/bjo.69.12.897
1985
Cited 41 times
Detection of optic nerve damage in ocular hypertension.
Thirty patients with ocular hypertension were tested for contrast sensitivity loss. Seventeen were not on treatment, and thirteen were receiving some form of pressure reducing therapy. The contrast sensitivity results of 63% of ocular hypertensive eyes were abnormal (greater than 2 SDs from the age matched norm). Thus it appears that contrast sensitivity can detect early visual loss in patients who have normal visual fields and it is suggested that this test might be used as a criterion for therapy in ocular hypertension. There was no significant difference in the intraocular pressures between patients who gave abnormal contrast sensitivity results and those who did not in the untreated group (p greater than 0.05), suggesting that intraocular pressure level is a poor predictor of optic nerve fibre damage in patients with ocular hypertension.
DOI: 10.1701/2990.29928
2018
Cited 25 times
[ROBIS: a new tool to assess risk of bias in systematic reviews was developed.]
To develop ROBIS, a new tool for assessing the risk of bias in systematic reviews (rather than in primary studies). We used a four-stage approach to develop ROBIS: define the scope, review the evidence base, hold a face-to-face meeting, and refine the tool through piloting. ROBIS is currently aimed at four broad categories of reviews, mainly within health care settings: interventions, diagnosis, prognosis, and etiology. The target audience of ROBIS is primarily guideline developers, authors of overviews of systematic reviews ("reviews of reviews"), and review authors who might want to assess or avoid risk of bias in their reviews. The tool is completed in three phases: 1) assess relevance (optional), 2) identify concerns with the review process, and 3) judge risk of bias. Phase 2 covers four domains through which bias may be introduced into a systematic review: 1) study eligibility criteria; 2) identification and selection of studies; 3) data collection and study appraisal; and 4) synthesis and findings. Phase 3 assesses the overall risk of bias in the interpretation of review findings and whether this considered limitations identified in any of the phase 2 domains. Signaling questions are included to help judge concerns with the review process (phase 2) and the overall risk of bias in the review (phase 3); these questions flag aspects of review design related to the potential for bias and aim to help assessors judge risk of bias in the review process, results, and conclusions. ROBIS is the first rigorously developed tool designed specifically to assess the risk of bias in systematic reviews.
DOI: 10.3310/hta20600
2016
Cited 23 times
A multicentre randomised controlled trial of Transfusion Indication Threshold Reduction on transfusion rates, morbidity and health-care resource use following cardiac surgery (TITRe2)
Background Uncertainty about optimal red blood cell transfusion thresholds in cardiac surgery is reflected in widely varying transfusion rates between surgeons and cardiac centres. Objective To test the hypothesis that a restrictive compared with a liberal threshold for red blood cell transfusion after cardiac surgery reduces post-operative morbidity and health-care costs. Design Multicentre, parallel randomised controlled trial and within-trial cost–utility analysis from a UK NHS and Personal Social Services perspective. We could not blind health-care staff but tried to blind participants. Random allocations were generated by computer and minimised by centre and operation. Setting Seventeen specialist cardiac surgery centres in UK NHS hospitals. Participants Patients aged > 16 years undergoing non-emergency cardiac surgery with post-operative haemoglobin < 9 g/dl. Exclusion criteria were: unwilling to have transfusion owing to beliefs; platelet, red blood cell or clotting disorder; ongoing or recurrent sepsis; and critical limb ischaemia. Interventions Participants in the liberal group were eligible for transfusion immediately after randomisation (post-operative haemoglobin < 9 g/dl); participants in the restrictive group were eligible for transfusion if their post-operative haemoglobin fell to < 7.5 g/dl during the index hospital stay. Main outcome measures The primary outcome was a composite outcome of any serious infectious (sepsis or wound infection) or ischaemic event (permanent stroke, myocardial infarction, gut infarction or acute kidney injury) during the 3 months after randomisation. Events were verified or adjudicated by blinded personnel. Secondary outcomes included blood products transfused; infectious events; ischaemic events; quality of life (European Quality of Life-5 Dimensions); duration of intensive care or high-dependency unit stay; duration of hospital stay; significant pulmonary morbidity; all-cause mortality; resource use, costs and cost-effectiveness. Results We randomised 2007 participants between 15 July 2009 and 18 February 2013; four withdrew, leaving 1000 and 1003 in the restrictive and liberal groups, respectively. Transfusion rates after randomisation were 53.4% (534/1000) and 92.2% (925/1003). The primary outcome occurred in 35.1% (331/944) and 33.0% (317/962) of participants in the restrictive and liberal groups [odds ratio (OR) 1.11, 95% confidence interval (CI) 0.91 to 1.34; p = 0.30], respectively. There were no subgroup effects for the primary outcome, although some sensitivity analyses substantially altered the estimated OR. There were no differences for secondary clinical outcomes except for mortality, with more deaths in the restrictive group (4.2%, 42/1000 vs. 2.6%, 26/1003; hazard ratio 1.64, 95% CI 1.00 to 2.67; p = 0.045). Serious post-operative complications excluding primary outcome events occurred in 35.7% (354/991) and 34.2% (339/991) of participants in the restrictive and liberal groups, respectively. The total cost per participant from surgery to 3 months postoperatively differed little by group, just £182 less (standard error £488) in the restrictive group, largely owing to the difference in red blood cells cost. In the base-case cost-effectiveness results, the point estimate suggested that the restrictive threshold was cost-effective; however, this result was very uncertain partly owing to the negligible difference in quality-adjusted life-years gained.
Conclusions A restrictive transfusion threshold is not superior to a liberal threshold after cardiac surgery. This finding supports restrictive transfusion due to reduced consumption and costs of red blood cells. However, secondary findings create uncertainty about recommending restrictive transfusion and prompt a new hypothesis that liberal transfusion may be superior after cardiac surgery. Reanalyses of existing trial datasets, excluding all participants who did not breach the liberal threshold, followed by a meta-analysis of the reanalysed results are the most obvious research steps to address the new hypothesis about the possible harm of red blood cell transfusion. Trial registration Current Controlled Trials ISRCTN70923932. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment ; Vol. 20, No. 60. See the NIHR Journals Library website for further project information.
DOI: 10.1097/00006324-199908000-00022
1999
Cited 47 times
Glaucoma Screening: The Importance of Combining Test Data
The objective of this study was to evaluate the effectiveness of screening tests for primary open angle glaucoma, both singly and in combination, using a decision analysis approach. A range of screening tests was carried out on 145 nonglaucomatous patients and 67 cases of previously undiagnosed glaucoma. Receiver operating characteristic (ROC) curves were constructed for single test data to show the trade-off between sensitivity and specificity for varying cut-off criteria. The best discriminators of glaucoma were, in rank order: (1) a multiple stimulus static visual field screening test, (2) optic disc cupping, and (3) intraocular pressure. Decision curves were also constructed for various combinations of screening tests, where the inclusion of tests was based on discriminant analyses. Sensitivities and specificities of more than 0.90 were obtained when visual field screening, optic disc cupping, and intraocular pressure were combined. Data from other tests, when combined with these three variables, failed to provide a significant improvement in discrimination.
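A small sketch of the ROC approach described above, using simulated measurements rather than the study's data: ROC curves are computed for a single test (intraocular pressure) and for a combined score, with logistic regression standing in for the discriminant analysis used to select and weight tests.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
n = 300

# Simulated case status and three screening measurements.
glaucoma = rng.binomial(1, 0.3, size=n)
iop   = rng.normal(16 + 6 * glaucoma, 4)        # intraocular pressure (mmHg)
cup   = rng.normal(0.4 + 0.25 * glaucoma, 0.1)  # cup/disc ratio
field = rng.normal(2 + 5 * glaucoma, 2)         # visual field defect score

# Single-test ROC: sensitivity/specificity trade-off over cut-off criteria.
fpr, tpr, thresholds = roc_curve(glaucoma, iop)
print("AUC, intraocular pressure alone:", roc_auc_score(glaucoma, iop))

# Combined score from all three tests.
X = np.column_stack([iop, cup, field])
combined = LogisticRegression().fit(X, glaucoma).predict_proba(X)[:, 1]
print("AUC, combined score:", roc_auc_score(glaucoma, combined))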
DOI: 10.1136/bjo.78.7.529
1994
Cited 45 times
Acute angle closure glaucoma: relative failure of YAG iridotomy in affected eyes and factors influencing outcome.
The treatment of acute angle closure glaucoma has been influenced by the development of the YAG laser and its ability to perform iridotomies as an outpatient procedure. In this retrospective study the results of YAG iridotomy were compared with those of surgical peripheral iridectomy. Compared with patients who underwent surgical peripheral iridectomy, patients treated with YAG iridotomy were at greater risk of proceeding to further surgery, and this risk was significantly associated with increasing duration of the attack. The authors suggest that in selected cases surgical iridectomy should be given consideration as a primary procedure.
DOI: 10.1159/000080576
2004
Cited 44 times
Guidelines for Reporting Non-Randomised Studies
Non-randomised studies (NRSs) are useful because they allow interventions to be evaluated that are difficult to investigate by randomised controlled trials (RCTs). However, NRSs are more susceptible to bias. The Consolidated Standards of Reporting Trials (CONSORT) statement was established to ensure that researchers report features of RCTs that must be considered when appraising their quality. CONSORT has improved the reporting of key information, highlighting missing key information for users. Researchers have a responsibility to report essential information that allows users to assess the susceptibility of NRS to selection, performance, detection and attrition bias. This paper considers criteria for reporting cohort studies: the rationale behind the CONSORT criteria for reporting of RCTs will be applied to cohort studies. Many of the criteria need no modification but application of others raise difficult issues for cohort studies, e.g.: description and standardisation of control and intervention treatments; description of the method of allocation; choice of prognostic factors to be collected; distinguishing between intended and provided treatments; collection of data on adverse and longterm outcomes; establishing a priori plans for analysis.
DOI: 10.1007/s004170000209
2000
Cited 44 times
The sensitivity and specificity of direct ophthalmoscopic optic disc assessment in screening for glaucoma: a multivariate analysis
DOI: 10.1016/j.jacc.2003.11.056
2004
Cited 43 times
Predictors of new malignant ventricular arrhythmias after coronary surgery
We sought to investigate the relationship between perioperative factors and the occurrence of ventricular tachycardia (VT) and ventricular fibrillation (VF), as well as the impact of VT/VF on early and late mortality. Both VT and VF are rare but serious complications after coronary artery bypass graft surgery (CABG), and their etiology and implications remain uncertain. Data on 4,411 consecutive patients undergoing CABG (1,154 [25.8%] had off-pump surgery) between April 1996 and September 2001 were extracted from a prospective database and analyzed. Odds ratios (ORs) describing associations between possible risk factors and VT/VF were estimated separately. Factors observed to be significantly associated with VT/VF were further investigated using multivariate logistic regression. Sixty-nine patients suffered VT/VF (1.6%). There were 61 (1.4%) in-hospital/30-day deaths, 15 among patients who had postoperative VT/VF (21.7%). Patient factors independently associated with an increase in the odds of VT/VF included age <65 years, female gender, body mass index <25 kg/m², unstable angina, moderate or poor ejection fraction, and the need for inotropes and an intra-aortic balloon pump (OR 1.72 to 4.47, p < 0.05). After adjustment, off-pump surgery was associated with a substantial but nonsignificant protective effect against VT/VF (OR 0.53, 95% confidence interval [CI] 0.25 to 1.13; p = 0.10). Actuarial survival at two years was 98.2% among patients who had VT/VF and who survived to discharge/30 days, compared with 97.0% for the control group (adjusted hazard ratio 0.96; 95% CI 0.40 to 2.31; p = 0.92). The incidence of VT/VF is low in patients undergoing coronary surgery but is associated with high in-hospital mortality. The late survival of the discharged VT/VF patients compares favorably with that of controls.
DOI: 10.1161/01.cir.0000032259.35784.bf
2002
Cited 42 times
Effectiveness of Coronary Artery Bypass Grafting With or Without Cardiopulmonary Bypass in Overweight Patients
Off-pump coronary artery bypass surgery has been demonstrated to reduce morbidity in elective patients. However, high-risk patients might benefit the most from this surgical procedure. Our goal was to investigate the effectiveness of on-pump and off-pump coronary artery bypass surgery on early clinical outcome in a consecutive series of overweight patients. From April 1996 to April 2001, data on 4321 patients undergoing coronary surgery (mortality 1.4%) were prospectively entered into the Patient Analysis and Tracking System. Data were extracted for all patients with a body mass index ≥25 kg/m². A risk-adjusted analysis was performed to assess the effect of surgical technique in the whole overweight cohort. A total of 2844 patients were identified (2261 male, median age 63, interquartile range 56 to 68). Patients undergoing on-pump surgery (2170, 76.3%) were less likely than those undergoing off-pump surgery to have hypercholesterolemia or left main stem disease and were, on average, less obese. However, they were more likely to have unstable angina and to have had a previous myocardial infarction, and they had more extensive coronary disease and received more grafts (all P<0.05). Unadjusted analyses, taking account only of consultant team, showed significant benefits of off-pump surgery in terms of hospital deaths, arrhythmias, inotropic use, use of intra-aortic balloon pump, blood loss, transfusion requirement, postoperative hemoglobin, chest infections, neurological complications, and intensive care unit and hospital stay (all P<0.05). After adjustment for confounding prognostic factors, the benefits of off-pump surgery were still significant for death in hospital, transfusion requirement, postoperative hemoglobin, neurological complications, and intensive care unit and hospital stay (ORs 0.35 to 0.79, P<0.05). These results suggest that off-pump surgery is safe and effective and is associated with reduced in-hospital mortality and morbidity in overweight patients when compared with conventional coronary surgery with cardiopulmonary bypass and cardioplegic arrest.
DOI: 10.1111/j.1365-2753.2003.00448.x
2004
Cited 42 times
Equity and need when waiting for total hip replacement surgery
To explore sociodemographic and health status factors associated with waiting times both for a first outpatient appointment and for total hip replacement surgery (THR). A survey of THR in five former English regions was conducted between September 1996 and October 1997. Every patient listed for THR was asked to fill out a questionnaire preoperatively. This questionnaire included the 12-item Oxford Hip Score (OHS) questionnaire and two questions on the length of time patients waited for an outpatient appointment and subsequently for their operation. From multiple logistic regression analyses, region, private vs. public sector, housing tenure and preoperative OHS were all independently associated with a waiting time for an outpatient appointment of > 3 months. Region, housing tenure and gender were also independently associated with a wait of ≥ 6 months on the surgical waiting list. A large proportion of patients had long waiting times both for an outpatient appointment and while on a surgical waiting list. There were significant differences in waiting time according to social, geographical and health care system factors. Patients with worse pain and disability at surgery waited longer for an outpatient appointment. The longer patients waited, the worse their pain and disability, suggesting that patients were not prioritized by these criteria. The benefits of prioritizing should be tested.
DOI: 10.1023/b:qure.0000018489.25151.e1
2004
Cited 41 times
A comparison of Rasch with Likert scoring to discriminate between patients' evaluations of total hip replacement surgery
DOI: 10.1108/jhom-08-2015-0121
2015
Cited 22 times
Investigating healthcare IT innovations: a “conceptual blending” approach
Purpose – The purpose of this paper is to better understand how and why adoption and implementation of healthcare IT innovations occur. The authors examine two IT applications, computerised physician order entry (CPOE) and picture archiving and communication systems (PACS), at the meso and micro levels, within the context of the National Programme for IT in the English National Health Service (NHS). Design/methodology/approach – To analyse these multi-level dynamics, the authors blend Rogers' diffusion of innovations theory (DoIT) with Webster's sociological critique of technological innovation in medicine and healthcare systems to illuminate a wider range of interacting factors. Qualitative data were collected between 2004 and 2006 through semi-structured, in-depth interviews with 72 stakeholders across four English NHS hospital trusts. Findings – Overall, PACS was more successfully implemented (fully or partially in three out of four trusts) than CPOE (implemented in one trust only). Factors such as perceived benefit to users, attributes of the application (in particular speed, ease of use, reliability and flexibility) and levels of readiness were highly relevant, but their influence was modulated through interaction with complex structural and relational issues. Practical implications – Results reveal that combining contextual system-level theories with DoIT increases understanding of the real-life processes underpinning implementation of IT innovations within healthcare. They also highlight important drivers affecting success of implementation, including socio-political factors, the social body of practice and the degree of "co-construction" between designers and end-users. Originality/value – The originality of the study partly rests on its methodological innovativeness and its value on the critical insights afforded into understanding complex IT implementation programmes.
DOI: 10.1302/0301-620x.83b6.11659
2001
Cited 41 times
Radiological features predictive of aseptic loosening in cemented Charnley femoral stems
The radiological features of the cement mantle around total hip replacements (THRs) have been used to assess aseptic loosening. In this case-control study we investigated the risk of failure of THR as predictable by a range of such features using data from patients recruited to the Trent Regional Arthroplasty Study (TRAS). An independent radiological assessment was undertaken on Charnley THRs with aseptic loosening within five years of surgery and on a control group from the TRAS database. Chi-squared tests were used to test the probability of obtaining the observed data by chance, and odds ratios were calculated to estimate the strength of association for different features. Several features were associated with a clinically important increase (>twofold) in the risk of loosening, which was statistically significant for four features (p < 0.01). Inadequate cementation (Barrack C and D grades) was the most significant feature, with an estimated odds ratio of 9.5 (95% confidence interval 3.2 to 28.4, p < 0.0001) for failure.
DOI: 10.1016/s0003-4975(99)00717-1
1999
Cited 40 times
Patient–prosthesis mismatch is negligible with modern small-size aortic valve prostheses
Background. Concern has been raised about residual significant gradients when small aortic prostheses are used, particularly in patients with large body surface areas. We studied the performance of six types of small aortic prostheses using dobutamine stress echocardiography. Methods. Sixty-three patients (mean age, 67 ± 7 years) who had undergone aortic valve replacement 17 ± 6 months previously were studied. Two bileaflet mechanical prostheses (St. Jude Medical and CarboMedics: sizes, 19 mm and 21 mm) and two biological prostheses (Medtronic Intact and St. Jude BioImplant: size, 21 mm) were evaluated. A graded infusion of dobutamine was given and Doppler studies of valve performance were carried out. Results. All prostheses except one biological valve had acceptable hemodynamic performance under stress. Using regression modeling, gradient at rest was the only variable found to predict gradient under stress (p < 0.001). Moreover, the most important predictor of gradient at rest was valve design, which accounted for 72% of the variance (p < 0.001). This relationship was independent of valve size (19 mm or 21 mm) or material (ie, mechanical or biological). Body surface area accounted for only 4% of the variance in gradient. Conclusions. The main predictor of transprosthetic gradient is the inherent characteristics of each particular prosthesis, with a relatively insignificant contribution from variations in body surface area. Patient–prosthesis mismatch is not a problem of clinical significance when certain modern valve prostheses are used.
DOI: 10.1111/codi.16850
2024
Development and pilot testing of a patient‐reported outcome measure to assess symptoms of parastomal hernia
The aim was to develop and pilot a patient-reported outcome measure (PROM) to assess symptoms of parastomal hernia (PSH). Standard questionnaire development was undertaken (phases 1-3). An initial list of questionnaire domains was identified from validated colorectal cancer PROMs and from semi-structured interviews with patients with a PSH and health professionals (phase 1). Domains were operationalized into items in a provisional questionnaire, and 'think-aloud' patient interviews explored face validity and acceptability (phase 2). The updated questionnaire was piloted in patients with a stoma who had undergone colorectal surgery and had a computed tomography scan available for review. Patient-reported symptoms were examined in relation to PSH (phase 3). Three sources determined PSH presence: (i) data about PSH presence recorded in hospital notes, (ii) independent expert review of the computed tomography scan and (iii) patient report of being informed of a PSH by a health professional. For phase 1, 169 and 127 domains were identified from 70 PROMs and 29 interviews respectively. In phase 2, 14 domains specific to PSH were identified and operationalized into questionnaire items. Think-aloud interviews led to three minor modifications. In phase 3, 44 completed questionnaires were obtained. Missing data were few: 5/660 items. PSH symptom scores associated with PSH presence varied between different data sources. The scale with the most consistent differences between PSH presence and absence across all data sources was the stoma appearance scale. A PROM to examine the symptoms of PSH has been developed from the literature and the views of key informants. Although preliminary testing shows it to be understandable and acceptable, it is uncertain whether it is sensitive to PSH-specific symptoms and further psychometric testing is needed.
DOI: 10.1001/jamaophthalmol.2023.6717
2024
Uncertain Diagnostic Accuracy of Self-Monitoring Vision at Home
DOI: 10.1016/s0275-5408(99)00092-7
2000
Cited 34 times
Observer variability in optic disc assessment: implications for glaucoma shared care
Demonstrating that optometrists can make valid and reliable assessments of optic disc features is an important prerequisite for establishing schemes for shared care/co-management. Previous studies have estimated observer variability among experts in the assessment of optic disc cupping, but there has been a paucity of information on observer variability amongst optometrists. This paper describes a study to investigate intra- and inter-observer variability for a range of disc features, as graded by both ophthalmologists and optometrists. Five observers (three optometrists and two ophthalmologists) graded 48 stereo-pairs of optic disc photographs from 48 patients on two separate occasions. Each observer graded the following features: vertical and horizontal C/D ratios, narrowest rim width, the presence/absence of a disc haemorrhage, focal pallor of the neuroretinal rim, peri-papillary atrophy, the steepness of the cup edge and the presence/absence of the cribriform sign. The average intra- and inter-observer standard deviations (SD) of differences are, respectively, 0.11 and 0.19 for the vertical C/D ratio and 0.10 and 0.18 for the horizontal C/D ratio. For the vertical C/D ratio the average weighted kappa (κw) is 0.79 within observers and 0.46 between observers. Percentage agreements for the presence/absence of a disc haemorrhage range from 96 to 100% (average kappa = 0.92) within observers and from 90 to 98% (average kappa = 0.77) between observers. For other disc features, average κw values range from 0.67 to 0.71 within observers and from 0.23 to 0.46 between observers. Intra- and inter-observer agreement across all disc features is comparable for the optometrists and the ophthalmologists, demonstrating that optometrists can make valid assessments of disc features. The implications for shared care are discussed.
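The agreement statistics reported above can be reproduced on toy data as follows; the gradings are invented and the linear weighting is only one possible choice, not necessarily the scheme used in the study. Weighted kappa is shown for ordinal C/D-ratio grades (scored here as C/D ratio × 10) and unweighted kappa for a binary feature such as a disc haemorrhage.

from sklearn.metrics import cohen_kappa_score

# Hypothetical vertical C/D ratio grades (x 10) from two observers.
obs1 = [3, 4, 6, 7, 5, 8, 4, 6]
obs2 = [3, 5, 6, 6, 5, 7, 4, 7]
print(cohen_kappa_score(obs1, obs2, weights="linear"))  # weighted kappa

# Hypothetical presence/absence of a disc haemorrhage (1 = present).
h1 = [0, 0, 1, 0, 1, 0, 0, 0]
h2 = [0, 0, 1, 0, 0, 0, 0, 0]
print(cohen_kappa_score(h1, h2))                        # unweighted kappa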
1999
Cited 34 times
Compliance with methodological standards when evaluating ophthalmic diagnostic tests.
To draw attention to the importance of methodological standards when carrying out evaluations of ophthalmic diagnostic tests by reviewing the extent of compliance with these standards in reports of evaluations published within the ophthalmic literature. Twenty published evaluations of ophthalmic screening/diagnostic tests or technologies were independently assessed by two reviewers for compliance with the following methodological standards: specification of the spectrum composition for populations used in the evaluation, analysis of pertinent subgroups, avoidance of work-up (verification) bias, avoidance of review bias, presentation of precision of results for test accuracy, presentation of indeterminate test results, and presentation of test reproducibility. Compliance ranged from just 10% (95% CI, 1% to 32%) for presentation of test reproducibility data and avoidance of review bias to 70% (95% CI, 46% to 88%) for avoidance of work-up bias and presentation of indeterminate test results. Only 5 of the 20 evaluations complied with four or more of the methodological standards and none with more than five of the standards. The evaluations of ophthalmic diagnostic tests discussed in this article show limited compliance with accepted methodological standards but are no worse than previously described for evaluations published in general medical journals. Adherence to these standards by researchers can improve the study design and reporting of evaluations of new diagnostic techniques. Limited compliance, combined with a lack of awareness of the standards among users of research evidence, may lead to the inappropriate adoption of new diagnostic technologies, with a consequent waste of health care resources.
DOI: 10.1302/0301-620x.88b6.17334
2006
Cited 25 times
Continuous monitoring of the performance of hip prostheses
New brands of joint prosthesis are released for general implantation with limited evidence of their long-term performance in patients. The CUSUM continuous monitoring method is a statistical testing procedure which could be used to provide prospective evaluation of brands as soon as implantation in patients begins and give early warning of poor performance. We describe the CUSUM and illustrate the potential value of this monitoring tool by applying it retrospectively to the 3M Capital Hip experience. The results show that if the clinical data and methodology had been available, the CUSUM would have given an alert to the underperformance of this prosthesis almost four years before the issue of a Hazard Notice by the Medical Devices Agency. This indicates that the CUSUM can be a valuable tool in monitoring joint prostheses, subject to timely and complete collection of data. Regional or national joint registries provide an opportunity for future centralised, continuous monitoring of all hip and knee prostheses using these techniques.
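A minimal sketch of the kind of CUSUM described above: each implant contributes a log-likelihood ratio comparing an 'unacceptable' revision probability with an 'acceptable' one, the chart is reset at zero on the downside, and an alarm is raised when it crosses a decision limit. The probabilities, threshold and outcome sequence below are illustrative only and are not the parameters used in the 3M Capital Hip analysis.

import math

p0, p1, h = 0.02, 0.06, 4.0   # acceptable rate, unacceptable rate, alarm limit

def llr(failed):
    # Log-likelihood ratio for one implant: early revision vs survival.
    return math.log(p1 / p0) if failed else math.log((1 - p1) / (1 - p0))

s = 0.0
outcomes = [False] * 30 + [True, False, True, False, True, True]  # hypothetical
for i, failed in enumerate(outcomes, start=1):
    s = max(0.0, s + llr(failed))
    if s > h:
        print(f"Alarm after implant {i}: performance worse than acceptable")
        break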
DOI: 10.1111/j.1365-3156.2008.02191.x
2009
Cited 23 times
Community intervention to promote rational treatment of acute respiratory infection in rural Nepal
To evaluate a community education program about treatment of acute respiratory infection (ARI). First, community case definitions for severe and mild ARI were developed. The intervention was then evaluated using a controlled before-and-after design. Household surveys collected data about ARI treatment in 20 clusters, each based around a school and health facility. Treatment indicators included percentages of cases attending health facilities and receiving antibiotics. The intervention consisted of an education program in schools culminating in street theater performances, discussions with mothers after performances, and training for community leaders and drug retailers by paramedics. The intervention was conducted in mid-2003. Indicators were measured before the intervention in Nov/Dec 2002 and again in Dec 2003/Jan 2004. A total of 2719 households were surveyed and 3654 under-fives were identified, of whom 377 had severe ARI. After implementing the intervention, health post attendance rose by 13% in under-fives with severe ARI and fell by 9% in under-fives with mild ARI (test of interaction, P = 0.01). Use of prescribed antibiotics increased by 21% in under-fives with severe ARI but by only 1% in under-fives with mild ARI (test of interaction, P = 0.38). Irrespective of ARI severity, the use of non-prescribed antibiotics dropped by 5% (P = 0.002), and consultation with female community health volunteers (FCHVs) and use of safe home remedies increased by 6.7% (P not estimated) and 5.7% (P = 0.008), respectively. The intervention was implemented using local structures and in difficult circumstances, yet had a moderate impact. Thus it has the potential to effect large-scale changes in behaviour and merits replication elsewhere.
DOI: 10.1177/0267659120946731
2020
Cited 12 times
Conventional versus minimally invasive extracorporeal circulation in patients undergoing cardiac surgery: protocol for a randomised controlled trial (COMICS)
Despite low mortality, cardiac surgery patients may experience serious life-threatening post-operative complications, often due to extracorporeal circulation and reperfusion. Miniaturised cardiopulmonary bypass (minimally invasive extracorporeal circulation) has been developed aiming to reduce the risk of post-operative complications arising with conventional extracorporeal circulation. The COMICS trial is a multi-centre, international, two-group parallel randomised controlled trial testing whether type II, III or IV minimally invasive extracorporeal circulation is effective and cost-effective compared to conventional extracorporeal circulation in patients undergoing elective or urgent coronary artery bypass grafting, aortic valve replacement or coronary artery bypass grafting + aortic valve replacement. Randomisation (1:1 ratio) is concealed and stratified by centre and surgical procedure. The primary outcome is a composite of 12 serious complications, objectively defined or adjudicated, 30 days after surgery. Secondary outcomes (at 30 days) include other serious adverse events (primary safety outcome), use of blood products, length of intensive care and hospital stay and generic health status (also at 90 days). Two centres started recruiting on 08 May 2018; 10 are currently recruiting and 603 patients have been randomised (11 May 2020). The recruitment rate from 01 April 2019 to 31 March 2020 was 40-50 patients/month. About 80% have had coronary artery bypass grafting only. Adherence to allocation is good. The trial is feasible but criteria for progressing to a full trial were not met on time. The Trial Steering and Data Monitoring Committees have recommended that the trial should currently continue.
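To illustrate the allocation scheme described (1:1, concealed, stratified by centre and surgical procedure), the sketch below implements stratified permuted-block randomisation. The block size, arm labels and example stratum values are assumptions for illustration, not trial parameters.

```python
# Illustrative sketch of stratified permuted-block 1:1 randomisation of the kind
# described above. Block size and arm labels are assumptions, not trial parameters.
import random
from collections import defaultdict

class StratifiedBlockRandomiser:
    def __init__(self, block_size=4, arms=("MiECC", "CECC"), seed=None):
        assert block_size % len(arms) == 0
        self.block_size = block_size
        self.arms = arms
        self.rng = random.Random(seed)
        self.blocks = defaultdict(list)   # one open block per stratum

    def allocate(self, centre: str, procedure: str) -> str:
        stratum = (centre, procedure)
        if not self.blocks[stratum]:      # start a new shuffled block for this stratum
            block = list(self.arms) * (self.block_size // len(self.arms))
            self.rng.shuffle(block)
            self.blocks[stratum] = block
        return self.blocks[stratum].pop()

# Example (hypothetical centre/procedure labels):
# r = StratifiedBlockRandomiser(seed=1); r.allocate("Centre A", "CABG")
```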
DOI: 10.1016/s0140-6736(99)90413-0
1999
Cited 33 times
Health-technology assessment in surgery
The term health technology is intended to include “all methods used by health-care professionals to promote health, to prevent and treat disease, and to improve rehabilitation and long-term care. The broad definition of a health technology1NHS Management Executive Assessing the effects of health technologies. Department of Health, London1992Google Scholar means that a wide range of health-care measures can be considered to be surgical technologies. Health-technology assessment in surgery therefore includes comparisons of surgery with no surgery or best medical treatment, of alternative surgical procedures, and of alternative non-surgical adjuvant therapies (panel 1).Panel 1Comparisons in surgeryTabled 1Surgery vs best medical treatment or no treatmentSurgical discectomy vs chemonucleolysis vs placebo for lumbar-disc prolapseOvarian ablation in early breast cancer vs no ablationSurgery vs conservative treatment for meniscal injuries of the knee in adultsComparison of alternative surgical proceduresPartial meniscectomy vs total meniscectomy for meniscal injuries of the knee in adultsReplacement arthroplasty vs internal fixation for extracapsular hip fracturesAlternative surgical treatments for cervical intraepithelial neoplasiaAdjuvant surgical technologiesAdjuvant chemotherapy vsno chemotherapy for localised resectable soft-tissue sarcoma of adultsInterventions for preventing blood loss in the treatment of cervical intraepithelial neoplasiaPre-operative traction for fractures of the proximal femurReviews of examples cited are in Cochrane Database of Systematic Reviews7The Cochrane Library Oxford: Update Software.1999http://www.update-software.com/ccweb/cochrane/cdsr.htmGoogle Scholar Open table in a new tab The need for health-technology assessment is widely acknowledged. Describing the effects of both new and established health-care interventions is important because they have financial costs, and sometimes side-effects or complications, as well as benefits. The balance between benefits and costs influences whether or not an intervention is adopted widely. New techniques are sometimes widely implemented and only subsequently found to have no advantage, or even to be less effective, than those that they were intended to supplant.A general lack of high-quality evidence about the effectiveness of surgical techniques means that examples of ineffective or harmful ones that were adopted without evaluation are harder to identify than in other areas of health care. One example is the introduction of first-trimester amniocentesis instead of chorionic villus sampling (CVS) for fetal karyotyping. Equivocal evidence suggested that second-trimester amniocentesis resulted in better pregnancy outcomes than did first-trimester CVS, and first-trimester amniocentesis was introduced in the late 1980s. However, in 1994, a randomised controlled trial reported no difference in total deaths (3·6% fewer for CVS, 95% CI -0·8% to 8·0%), but significantly fewer spontaneous deaths with CVS (4·6% fewer for CVS, 95% CI 1·4 to 8·0).2Nicolaides K de Lourdes Brizot M Patel F Snijders R Comparison of chorionic villus sampling and amniocentesis for fetal karyotyping at 10–13 weeks' gestation.Lancet. 1994; 344: 435-439Abstract PubMed Scopus (175) Google ScholarA second example is the use of the gamma nail, which was introduced in the late 1980s for fixation of extracapsular hip fractures. 
This nail was thought to have theoretical advantages over the established fixation device, the sliding hip screw, but a systematic review of ten randomised controlled trials has shown the nail to be associated with an increased risk of operative fracture of the femur (odds ratio 4·48, 95% CI 2·12 to 9·48), and with later fracture of the femur and re-operation.3Parker MJ Handoll HHG Robinson CM Gamma nail versus sliding hip screw for extracapsular hip fractures. Cochrane Library, Oxford1999Google ScholarLarge variations in hospital-admission rates and surgical practice are a general indication of differences in opinion about the effects of an intervention. Although other factors, such as a tendency to practise defensively because of fear of litigation, may contribute to such variation, collective uncertainty about the effectiveness of an intervention is almost certainly a major factor. A commonly cited example of this point is the three-fold difference between and within countries in the frequency of caesarean section.4Lomas J Enkin M Variations in operative delivery rates.in: Chalmers I Enkin M MJNC Keirse Effective care in pregnancy and childbirth. Oxford University Press, Oxford1989: 1182-1195Google Scholar Although collective uncertainty can be taken as an indication of the need for an assessment of the effects of an intervention, surgeons rarely admit to uncertainty individually—a reluctance that can be a major obstacle to persuading surgeons to participate in randomised controlled trials (see below).Scope of health-technology assessmentHealth-technology assessment includes a wider range of activities than simply primary evaluations of defined techniques. First, it is necessary to prioritise technologies for assessment, since there are insufficient resources for the assessment of all unevaluated and novel technologies. Second, several primary evaluations of a technology may be required to provide a clear and comprehensive picture of its effects1NHS Management Executive Assessing the effects of health technologies. Department of Health, London1992Google Scholar. Individual studies commonly include too few patients to produce a definitive answer, and the effects of a technology are often smaller than anticipated, yet clinically important. It is risky to generalise from a single study, particularly in surgery. Individual studies are not always able to assess the full range of clinical and patient-related outcomes, economic outcomes, short-term and long-term effects, and possible harm as well as benefit (panel 2). Weighing up the importance of different outcomes, such as quality and length of life, is a continuing challenge.5Billingham LJ Abrams K Jones DR Quality of life assessment and survival data.in: Black N Brazier J Fitzpatrick R Reeves BC Health services research methods: a guide to best practice. 
BMJ Books, London1998: 163-172Google Scholar Assessment of a new health technology also often involves other issues, such as humanity, equity, and ethics, and, in some cases, legal considerations.Panel 2Effects of health technology to be assessedTabled 1Clinical outcomes: benefit,Short and long-term cure, absence/reduction in clinical signs, return of biochemical and physiological measures to normal valuesClinical outcomes: harmMortality, complications, and other adverse events attributable to the technology being assessedPatient outcomes: benefitShort-term and long-term absence/reduction in symptoms, increased ability to perform activities of daily living, increased quality and length of lifeEconomic outcomesUse of hospital, primary care, social services resources and costs, and resource use and costs that fall on patientsImpact on other services,Consequences of implementing new health technologies on other health services Open table in a new tab Health-technology assessment therefore needs to include systematic review and synthesis of a range of types of evidence of the effects of an intervention.6Chalmers I Hetherington J Elbourne D Keirse MJNC Enkin M Materials and methods used in synthesizing evidence to evaluate the effects of care during pregnancy and childbirth.in: Chalmers I Enkin M MJNC Keirse Effective care in pregnancy and childbirth. Oxford University Press, Oxford1989: 39-65Google Scholar If health-technology assessment is to improve health care for patients, there must also be institutions responsible for disseminating high-quality evidence to relevant target audiences, to promote the uptake of effective measures and discontinuation of ineffective or harmful ones. In the UK, the Centre for Reviews and Dissemination at the University of York, funded by the National Health Service Research and Development programme, fulfils this role. A database of abstracts of reviews of effectiveness prepared by the Centre is available in the Cochrane Library7The Cochrane Library Oxford: Update Software.1999http://www.update-software.com/ccweb/cochrane/cdsr.htmGoogle Scholar, which is available on the Internet and on CD-ROM. Internationally, members of the Cochrane Collaboration are continually updating systematic reviews of the effects of health technologies. Such reviews can contribute to the establishment of clinical guidelines, many of which have been produced by organisations such as surgical specialty associations, which make explicit the quality of the evidence on which specific recommendations are based8Guidelines for the management of colorectal cancer. Royal College of Surgeons of England and Association of Coloproctology of Great Britain and Ireland, London1996Google ScholarProblems that surgical procedures pose for health-technology assessmentProblems can arise at each stage in the assessment of surgical procedures. At the very start of the process, it is difficult to know when to give a new procedure priority for evaluation. If an assessment is done too early, before surgeons have mastered the technique, there is a risk of rejection of an effective procedure. If too late, the technique may have diffused and become established, by which time surgeons will consider it unethical to withhold the procedure. The uptake of minimally invasive surgical techniques provides an example of some of the problems that can arise. 
Laparoscopic cholecystectomy was adopted in preference to minicholecystectomy by many surgeons without evidence of its effectiveness and is believed by many to have resulted in a higher rate of bile duct injuries while surgeons were learning the technique. Formal evaluations were hampered by widespread optimism about the effectiveness of the minimally invasive approach, which was subsequently found to be exaggerated.9Downs SH Black NA Devlin HB Royston CMS Russell RCG Systematic review of the effectiveness and safety of laparoscopic cholecystectomy.Ann Roy Coll Surg Engl. 1996; 78: 241-323Google Scholar The preferred study design for primary assessments of the effects of a procedure is the randomised controlled trial;10Guyatt GH Sackett DL Cook DJ Users' guides to the medical literature. II. How to use an article about therapy or prevention. A. Are the results of the study valid?.JAMA. 1993; 270: 2598-2601Crossref PubMed Scopus (980) Google Scholar this design is the most likely to result in similarity of the groups being compared and so to minimise confounding by differences in known and unknown prognostic factors. This property is important for two main reasons. First, small effects of similar size to those arising from bias and confounding are often clinically important;11Peto R Collins R Gray R Large-scale randomized evidence: large, simple trials and overviews of trials.Ann N Y Acad Sci. 1993; 703: 314-340Crossref PubMed Scopus (57) Google Scholar Second, quantifying with adequate precision the effect of an intervention, rather than merely reporting whether or not it is significantly different from an alternative, is essential for weighing up the relative magnitudes of the benefits and the costs. Randomised controlled trials are more straightforward to conduct for the assessment of therapies adjuvant to surgery than for the comparison of alternative surgical procedures, for the assessment of complex means of promoting recuperation (such as therapeutic or educational interventions to improve mobilisation or to reduce anxiety), or for the assessment of features of service delivery (such as the extent to which infrequently done or technically advanced surgical procedures should be restricted to specialist centres).12Black N Why we need observational studies to evaluate the effectiveness of health care.BMJ. 1996; 312: 1215-1218Crossref PubMed Scopus (1225) Google Scholar The infrastructure required for a randomised controlled trial is generally expensive to set up, and this rigour of design is not always necessary; the effects of some surgical procedures may be large and unlikely to be confused with sources of bias or confounding. When randomised trials are impracticable, reliance may have to be placed on evidence of effectiveness from non-randomised study designs. Such studies need to be carried out and interpreted with extreme care, since they are highly susceptible to confounding and bias.13Davey Smith G Cross design synthesis: a new strategy for studying medical outcomes?.Lancet. 1992; 340: 944-946Summary PubMed Scopus (140) Google Scholar Nevertheless, there is evidence that non-randomised designs can provide valid estimates of effectiveness if standard epidemiological principles are applied to the design of studies and analyses of data.14Reeves BC MacLehose RM Harvey IM Sheldon TA Russell IT Black AMS Comparisons of effect sizes derived from randomised and non-randomised studies.in: Black N Brazier J Fitzpatrick R Reeves BCA Health services research methods: a guide to best practice. 
BMJ Books, London1998: 73-85Google Scholar The discrepancies that have been observed between randomised and non-randomised studies can in many cases be attributed to differences in the study populations and residual confounding,15Hlatky MA Facc MD Califf RM et al.Comparison of predictions based on observational data with the results of RCT of coronary artery bypass surgery.J Am Coll Cardiol. 1988; 11: 237-245Summary Full Text PDF PubMed Scopus (108) Google Scholar or to the precise nature of the technique being evaluated, rather than to biases. The possibility that some surgeons may have better outcomes with one procedure and other surgeons with an alternative procedure—ie, there is an interaction between surgeon and technique—creates a particular difficulty. The theoretically ideal solution is to randomise eligible patients of each participating surgeon to one or other procedure. There are practical drawbacks with this approach since surgeons who prefer one procedure may be unwilling to participate. More importantly, if there is an interaction between surgeon and procedure, pooling the results across surgeons will give a misleading answer, and quantifying the interaction requires a very large sample size. Randomising patients to surgeons who use different procedures, or studying the patients of different surgeons observationally, may represent a pragmatic alternative but addresses a different question—namely, what are the effects of the alternative procedures when carried out by surgeons who prefer them? A lack of randomised controlled trials makes quantitative synthesis of evidence by meta-analysis dangerous, since it is difficult to control for differences in the study populations, the precise nature of the technique evaluated, or the outcomes reported when the results of non-randomised studies are pooled.16Egger M Schneider M Davey Smith G Meta-analysis: spurious precision?.in: Meta-analysis of observational studies. Br Med J. 316. 1998: 140-144Google Scholar The paucity of randomised controlled trials of new surgical procedures means that in the latest update of the Cochrane library there are few systematic reviews of the effects of surgical procedures7The Cochrane Library Oxford: Update Software.1999http://www.update-software.com/ccweb/cochrane/cdsr.htmGoogle Scholar, although there are many more reviews of adjuvant surgical therapies. There can be difficulties in weighing up the benefits and costs of new surgical procedures. Adopting a new surgical technique may not be straightforward, since it is likely to require a surgeon to acquire new practical skills and to develop competence over a number of cases. The costs of mastering a new procedure are likely to be substantial for patients, surgeons, and health services, since surgeons who are learning a new technique typically take longer to carry out a procedure and have a higher rate of complications than do experienced ones. Since the gradient of the learning curve may vary considerably between surgeons, the inclusion of general recommendations in high-quality evidence that is being disseminated can be difficult. Changing of attitudes to health-technology assessment in surgery: the surgical community has been accused of not recognising the need for high-quality evaluation.17Horton R Surgical research or comic opera: questions, but few answers.Lancet. 
1996; 347: 984-985Abstract PubMed Scopus (304) Google Scholar Reviews of surgical journals in 199018Solomon MJ McLeod RS Clinical studies in surgical journals: have we improved?.Dis Colon Rectum. 1993; 36: 43-48Crossref PubMed Scopus (125) Google Scholar and 199617Horton R Surgical research or comic opera: questions, but few answers.Lancet. 1996; 347: 984-985Abstract PubMed Scopus (304) Google Scholar revealed that randomised controlled trials accounted for only 7% of clinical studies, and that the most commonly reported study designs were uncontrolled case-studies or series (84%18Solomon MJ McLeod RS Clinical studies in surgical journals: have we improved?.Dis Colon Rectum. 1993; 36: 43-48Crossref PubMed Scopus (125) Google Scholar and 46%17Horton R Surgical research or comic opera: questions, but few answers.Lancet. 1996; 347: 984-985Abstract PubMed Scopus (304) Google Scholar). Historically, surgery has been largely unregulated, and there have been few obstacles, other than the obtaining of consent of the patients for the operation, to prevent surgeons from introducing innovative practices. By contrast, a scientific evaluation almost always requires approval by an ethics committee, which may seek assurances about the inclusion of a control group, adequacy of the proposed sample size, data collection, and monitoring.The UK Department of Health has recently outlined a process for assessment of new health technologies. The process includes horizon scanning, selection of the most significant technologies for assessment, submission of evidence from sponsoring companies where relevant, and critical review of the evidence by regulatory agencies and the National Institute for Clinical Excellence.19A first class service. Department of Health, London1998Google Scholar However, there is no assurance that this process will solve all of the problems of underevaluation of surgical procedures. What makes a surgical technique new is not always easy to define because surgical procedures generally evolve in small steps, which makes it difficult to decide when a procedure has changed sufficiently to justify formal evaluation.Even when new technologies are given priority for evaluation, the required studies are often difficult to establish. There are now several methods for treating benign prostate hyperplasia, including transurethral resection, and laser, ultrasonographic, microwave, and pharmacological treatments. However, a randomised controlled trial set up in the UK to compare the effectiveness and cost-effectiveness of some of the alternative techniques was halted because sufficient patients could not be recruited.The difficulty of assessing a moving target is illustrated by the changing way in which minimally invasive coronary artery surgery is being used. Evaluation of this procedure was given priority by the UK National Health Service in 1997, after early case-series had indicated the success of a minithoracotomy approach for bypass grafting without the need for extracorporeal circulation for patients with single-Vessel disease, who are usually treated by angioplasty.20Calafiore AM Angelini GD Left anterior small thoracotomy (LAST) for coronary artery revascularisation.Surgery. 
1997; 15: 61-63Google Scholar The most important question about this generic technology can be said to have now changed, since some surgeons prefer to use a median sternotomy incision, even for patients with single-vessel disease, and are grafting multiple vessels without the use of extracorporeal cardiopulmonary circulation for patients who would otherwise have undergone standard coronary bypass surgery. The introduction of new prosthetic joints in orthopaedics illustrates another problem—ie, the need to assess long-term outcomes for some procedures. Clinically and economically important differences in the failure rate of alternative prostheses are unlikely to emerge for several years and manufacturers are understandably reluctant to invest in the long-term and expensive evaluations that are needed to show benefit. Devising ways of encouraging surgeons to recognise uncertainty about the effects of surgical procedures and to be less susceptible to the lure of new and expensive technology that has not been fully evaluated probably represents the greatest challenge to health-technology assessment in surgery. A greater awareness of the need to assess surgical technologies should lead to more and higher-quality evaluations of effectiveness, the opportunity to synthesise evidence from individual studies in systematic reviews, and the incorporation of high-quality evidence into guidelines. There also needs to be wider acknowledgment of the difficulty of carrying out randomised trials in some circumstances and a greater appreciation of the potential value of assessments with non-randomised designs when randomised trials prove to be impracticable.
DOI: 10.1136/bjo.84.10.1198
2000
Cited 30 times
Appraising evaluations of screening/diagnostic tests: the importance of the study populations
Sensitivity and specificity are the indices most commonly reported when describing the performance of a screening or diagnostic test. These indices, and their corresponding predictive values or likelihood ratios, are fundamental test properties since they allow the user to determine the consequences of selecting a particular cut off criterion for referral or further investigative tests. (Sensitivity is the proportion of diseased individuals correctly identified as diseased and specificity is the proportion of non-diseased individuals correctly identified as non-diseased. The positive predictive value (PPV) is the proportion of patients with positive screening test results who are found to have disease and the negative predictive value (NPV) is the proportion of patients with negative screening test results who are found not to have disease, based on the gold standard. Given a test result, a likelihood ratio describes how many times more likely a patient with disease is to have that test result, compared with a patient without the disease.) A recent article1 has highlighted the importance of complying with methodological standards2-4 when evaluating diagnostic or screening tests, in order that the findings of a study can be applied with confidence to clinical practice. These standards need to be considered at both the study design stage and the reporting stage (see Table 1). Standards 1 and 2 are closely related, since they are both concerned with the way in which the sensitivity and specificity of a test may vary depending on the clinical and demographic characteristics of a population (for example, disease stage, age, sex). These standards allow clinicians wishing to use a test to judge whether the sensitivity/specificity reported by the evaluation can be applied to their own population of patients. Standard 3 also relates to a study's population, being concerned with the bias which can arise if only a …
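In the usual 2 x 2 notation, with TP, FP, FN and TN denoting true positives, false positives, false negatives and true negatives against the gold standard, the indices defined above can be written as follows (standard definitions, not specific to this study):

```latex
% Standard definitions of the test indices described above (2x2 notation).
\[
\mathrm{Sensitivity} = \frac{TP}{TP+FN}, \qquad
\mathrm{Specificity} = \frac{TN}{TN+FP},
\]
\[
\mathrm{PPV} = \frac{TP}{TP+FP}, \qquad
\mathrm{NPV} = \frac{TN}{TN+FN},
\]
\[
LR^{+} = \frac{\mathrm{Sensitivity}}{1-\mathrm{Specificity}}, \qquad
LR^{-} = \frac{1-\mathrm{Sensitivity}}{\mathrm{Specificity}}.
\]
```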
DOI: 10.1097/01.blo.0000137558.97346.fb
2004
Cited 27 times
Systematic Reviews of Nonrandomized Clinical Studies in the Orthopaedic Literature
We systematically reviewed systematic reviews of surgical orthopaedic interventions published between 1996 and 2001 to document when and how nonrandomized studies were included. From more than 10,000 citations examined in various electronic databases, 58 orthopaedic systematic reviews were eligible for inclusion based on specific criteria. Thirty of these (52%) included nonrandomized studies, 15 of which found no randomized controlled trials. Systematic reviews were more likely to include randomized controlled trials if nondistinguishable operations were compared (if participants could be blinded). Only six of the systematic reviews that included nonrandomized studies (20%) assessed the quality of primary studies. Heterogeneity of studies was a major concern. In 21 of the systematic reviews that included nonrandomized studies (70%), data for groups treated similarly were pooled across studies, and outcomes for pooled groups were compared. The conclusions of systematic reviews that included nonrandomized studies are weakened by the limitations of nonrandomized study designs. The absence of established methods for including nonrandomized studies in systematic reviews, and consequently variability in the methods adopted, also limits the comparability of such reviews. Therefore the findings of systematic reviews that include nonrandomized studies should be interpreted with caution.
DOI: 10.1111/j.1369-7625.2007.00482.x
2008
Cited 20 times
What do patients really want? Patients’ preferences for treatment for angina
Abstract Objective To measure preferences for angina treatments among patients admitted from accident and emergency with acute coronary syndrome. Background Evidence suggests variability in treatment allocations amongst certain socio‐demographic groups (e.g. related to age and sex), although it is unclear whether this reflects patient choice, as research on patients’ treatment preferences is sparse. Given current policy emphasis on ‘patient choice’, providers need to anticipate patients’ preferences to plan appropriate and acceptable health services. Design Self‐administered questionnaire survey. Setting In‐patients in a UK hospital. Participants A convenience sample of 53 newly admitted patients with acute coronary syndrome. Exclusion criteria were: a previous cardiologist consultation (including previous revascularization); a clinical judgement of too ill to participate; post‐admission death; non‐cardiac reasons for chest pain. Main outcome measures Patients’ preferences for coronary artery bypass graft (CABG); angioplasty; and two medication alternatives. Results Angioplasty was the preferred treatment (for 80% of respondents), and CABG was second (most preferred by 19%, but second most preferred for 60%). The two least preferred (and least acceptable) treatments were medications. The majority of patients (83%) would ‘choose treatment based on the extent of benefits’ and ‘accept any treatment, no matter how extreme, to return to health’. There were some differences in preference related to age (>70 years preferred medication to a greater degree than <70 years) and sex (males preferred CABG surgery more than females). Conclusions There was general preference for procedural interventions over medication, but most patients would accept any treatment, however extreme, to return to former health. There was some evidence of differences in preferences related to age and sex. Furthermore, most patients preferred to have some input into treatment choice (e.g. nearly half wanted to share decision responsibility with their doctor), with only 4% preferring to leave the decision entirely to their doctor. Given these findings, and past findings that suggest there may be variability in treatment allocation according to certain socio‐demographic factors, this study suggests a need to develop and use preference measures, and makes a step towards this.
DOI: 10.1016/j.athoracsur.2004.05.010
2004
Cited 22 times
Basal Metabolic State of Hearts of Patients With Congenital Heart Disease: The Effects of Cyanosis, Age, and Pathology
Background: Experimental models have established numerous myocardial metabolic changes with chronic hypoxia and maturation. We conducted this study to specifically look at the effects of cyanosis, age, and pathology upon the basal metabolic state of the immature human heart. Methods: One hundred and eighty-one pediatric patients (37 cyanotic, 144 acyanotic) undergoing open heart surgery were recruited. A myocardial biopsy was collected before ischemia and analyzed for adenine nucleotides, purines, and lactate. The effect of cyanosis was estimated by an analysis of age-matched pairs of children with either ventricular septal defects or tetralogy of Fallot, and by multiple regression modeling. The effects of age and pathology were estimated in acyanotic children also by multiple regression modeling (adjustments were made for baseline differences). Results: The only effect of cyanosis was for lactate where the paired t test, and unadjusted and adjusted regression analyses were all consistent (ranging from 1.33 to 1.48 times higher in cyanotic than acyanotic children). The concentrations of adenosine triphosphate (ATP), adenosine diphosphate (ADP), and adenosine monophosphate (AMP) declined with age, whereas the ATP/ADP ratio increased; these associations remained significant even in the adjusted regression analysis. None of the effects of acyanotic pathology were highly significant (p < 0.01), implying that few important metabolic differences were attributable to pathology. Conclusions: Cyanosis and age are important factors that determine the basal metabolic state of the pediatric heart. Cyanotic patients have higher myocardial lactate concentrations, whereas young age is associated with lower ATP/ADP ratios and higher adenine nucleotide levels.
DOI: 10.1038/sj.ejcn.1602190
2005
Cited 21 times
Systematic reviews incorporating evidence from nonrandomized study designs: reasons for caution when estimating health effects
Systematic reviews that include nonrandomized studies (NRS) face a number of logistical challenges. However, the greatest threat to the validity of such reviews arises from the differing susceptibility of randomized controlled trials (RCTs) and NRS to selection bias. Groups compared in NRS are unlikely to be balanced because of the reasons leading study participants to adopt different health behaviours or to be treated differentially. Researchers can try to minimize the susceptibility of NRS to selection bias both at the design stage, for example, by matching participants on key prognostic factors, and during data analysis, for example, by regression modelling. However, because of logistical difficulties in matching, imperfect knowledge about the relationships between prognostic factors and between prognostic factors and outcome, and measurement limitations, it is inevitable that estimates of effect size derived from NRS will be confounded to some extent. Researchers, reviewers and users of evidence alike need to be aware of the consequences of residual confounding. In poor quality RCTs, selection bias tends to favour the new treatment being evaluated. Selection bias need not necessarily lead to systematic bias in favour of one treatment but, even if it acts in an unpredictable way, it will still give rise to additional, nonstatistical uncertainty bias around the estimate of effect size. Systematic reviews of NRS studies run the risk of compounding these biases. Nutritional choices and uptake of health education about nutrition are very likely to be associated with potential confounding factors. Therefore, pooled estimates of the effects of nutritional exposures and their confidence intervals are likely to be misleading; reviewers need to take into account both systematic and uncertainty bias.
DOI: 10.1097/opx.0b013e318073c2f2
2007
Cited 17 times
Visual Acuity and Fixation Characteristics in Age-Related Macular Degeneration
To compare "single letter" (SL) acuity, "crowded letter" (CL) acuity, and "repeated letter" (RL) acuity for patients with age-related macular degeneration (AMD) and investigate if differences between these visual acuities are associated with fixation characteristics.A total of 243 patients with AMD had their best-corrected visual acuity measured on an ETDRS chart. SL, CL, and RL acuities were measured using Landolt C targets on a monitor. Fifty-degree-field red-free fundus photographs were taken and a static target was used to calculate the Preferred Retinal Locus (PRL) distance and direction from the fovea. Quality of fixation (consistency and oculomotor response) was also assessed using a fundus camera and a dynamic target.RL acuity was almost always better than CL acuity and SL acuity was almost always better than CL acuity. The mean (+/-SD) RL-CL and SL-CL acuity differences were -0.13 (+/-0.15) logMAR and -0.11 (+/-0.13) logMAR respectively. The median PRL distance was 3.73 degrees and the preferred retinal areas for the location of the PRL were the left (left quadrant of visual field; 39.5% of cases) and superior (inferior quadrant of visual field; 25.4%). Visual acuity was significantly associated with PRL distance but PRL distance only explained 10% of the variation in visual acuity. PRL distance was found to be a significant but weak predictor of the SL-CL acuity difference but fixation quality was not a good predictor of the RL-CL acuity difference.Although the acuity measured under different stimulus conditions varies, the absolute differences are small. This suggests that these techniques would not be helpful in determining fixation characteristics, or predicting the outcome of rehabilitation in individual patients with AMD.
DOI: 10.1016/j.hlc.2011.03.012
2011
Cited 13 times
Mitral Valve Repair or Replacement for Ischaemic Mitral Regurgitation: A Systematic Review
A literature review was undertaken according to Cochrane guidelines to identify whether mitral valve repair (MV-Repair) or replacement (MV-Replacement) is more effective in patients with moderate to severe ischaemic mitral regurgitation. The literature suggests MV-Repair may have improved 30-day mortality and long-term survival. All 12 studies identified, however, were non-randomised, retrospective, and at significant risk of bias due to heterogeneous surgical techniques and mismatched patient characteristics. Data describing the need for reoperation were not sufficiently well reported to analyse. Functional outcomes and health-related quality of life were not reported. In conclusion, high-quality randomised comparison of MV-Repair and MV-Replacement is urgently needed.
DOI: 10.3310/hta16060
2012
Cited 13 times
Verteporfin photodynamic therapy for neovascular age-related macular degeneration: cohort study for the UK.
The Health Technology Assessment (HTA) programme, part of the National Institute for Health Research (NIHR), was set up in 1993. It produces high-quality research information on the effectiveness, costs and broader impact of health technologies for those who use, manage and provide care in the NHS. 'Health technologies' are broadly defined as all interventions used to promote health, prevent and treat disease, and improve rehabilitation and long-term care. The research findings from the HTA programme directly influence decision-making bodies such as the National Institute for Health and Clinical Excellence (NICE) and the National Screening Committee (NSC). HTA findings also help to improve the quality of clinical practice in the NHS indirectly in that they form a key component of the 'National Knowledge Service'. The HTA programme is needs led in that it fills gaps in the evidence needed by the NHS. There are three routes to the start of projects. First is the commissioned route. Suggestions for research are actively sought from people working in the NHS, from the public and consumer groups and from professional bodies such as royal colleges and NHS trusts. These suggestions are carefully prioritised by panels of independent experts (including NHS service users). The HTA programme then commissions the research by competitive tender. Second, the HTA programme provides grants for clinical trials for researchers who identify research questions. These are assessed for importance to patients and the NHS, and scientific rigour. Third, through its Technology Assessment Report (TAR) call-off contract, the HTA programme commissions bespoke reports, principally for NICE, but also for other policy-makers. TARs bring together evidence on the value of specific technologies. Some HTA research projects, including TARs, may take only months, others need several years. They can cost from as little as £40,000 to over £1 million, and may involve synthesising existing evidence, undertaking a trial, or other research collecting new data to answer a research problem. The final reports from HTA projects are peer reviewed by a number of independent expert
DOI: 10.14283/jpad.2014.35
2014
Cited 12 times
DEMENTIA PREVENTION: OPTIMIZING THE USE OF OBSERVATIONAL DATA FOR PERSONAL, CLINICAL, AND PUBLIC HEALTH DECISION-MAKING
Worldwide, over 35 million people suffer from Alzheimer's disease and related dementias. This number is expected to triple over the next 40 years. How can we improve the evidence supporting strategies to reduce the rate of dementia in future generations? The risk of dementia is likely influenced by modifiable factors such as exercise, cognitive activity, and the clinical management of diabetes and hypertension. However, the quality of evidence is limited and it remains unclear whether specific interventions to reduce these modifiable risk factors can, in turn, reduce the risk of dementia. Although randomized controlled trials are the gold-standard for causality, the majority of evidence for long-term dementia prevention derives from, and will likely continue to derive from, observational studies. Although observational research has some unavoidable limitations, its utility for dementia prevention might be improved by, for example, better distinction between confirmatory and exploratory research, higher reporting standards, investment in effectiveness research enabled by increased data-pooling, and standardized exposure and outcome measures. Informed decision-making by the general public on low-risk health choices that could have broad potential benefits could be enabled by internet-based tools and decision-aids to communicate the evidence, its quality, and the estimated magnitude of effect.
DOI: 10.1016/j.resuscitation.2020.09.026
2020
Cited 10 times
Randomized trial of the i-gel supraglottic airway device versus tracheal intubation during out of hospital cardiac arrest (AIRWAYS-2): Patient outcomes at three and six months
Abstract Aim: The AIRWAYS-2 cluster randomised controlled trial compared the i-gel supraglottic airway device (SGA) with tracheal intubation (TI) as the first advanced airway management (AAM) strategy used by Emergency Medical Service clinicians (paramedics) treating adult patients with non-traumatic out-of-hospital cardiac arrest (OHCA). It showed no difference between the two groups in the primary outcome of modified Rankin Scale (mRS) score at 30 days/hospital discharge. This paper reports outcomes to 6 months. Methods: Paramedics from four ambulance services in England were randomised 1:1 to use an i-gel SGA (759 paramedics) or TI (764 paramedics) as their initial approach to AAM. Adults who had a non-traumatic OHCA and were attended by a participating paramedic were enrolled automatically under a waiver of consent. Survivors were invited to complete questionnaires at three and six months after OHCA. Outcomes were analysed using regression methods. Results: 767/9296 (8.3%) enrolled patients survived to 30 days/hospital discharge and 317/767 survivors (41.3%) consented and were followed up to six months. No significant differences were found between the two treatment groups in the primary outcome measure (mRS score: 3 months: odds ratio (OR) for good recovery (i-gel/TI, OR) 0.89, 95% CI 0.69–1.14; 6 months OR 0.91, 95% CI 0.71–1.16). EQ-5D-5L scores were also similar between groups and sensitivity analyses did not alter the findings. Conclusion: There were no statistically significant differences between the TI and i-gel groups at three and six months. We therefore conclude that the initially reported finding of no significant difference between groups at 30 days/hospital discharge was sustained when the period of follow-up was extended to six months.
DOI: 10.1093/heapol/16.4.421
2001
Cited 23 times
The effects of different kinds of user fee on prescribing costs in rural Nepal
(1) To estimate the cost of irrational prescribing, and (2) to compare the effect of three different kinds of user fee on prescribing costs, in rural Nepal. A controlled before-after study was conducted in 33 government primary health care facilities in rural eastern Nepal during 1992-95. A fee per prescription (covering all drugs in whatever amounts) was regarded as the control against which two types of fee per drug item (covering a full course of treatment for each item) were compared. The average total cost to the patient for two drug items was the same in all fee systems. Total cost, expected cost (according to standard treatment guidelines) and wastage costs (total minus expected cost) per prescription were calculated from an average of 400 prescribing episodes per facility per year. The proportion of prescriptions conforming to standard treatment guidelines was calculated from 30 prescriptions per facility per year. 20-52% of total drug costs were due to inappropriate drug prescription. A fee per drug item, as compared with a fee per prescription, was associated with (1) significantly fewer drug items prescribed per patient, (2) significantly lower drug costs per prescription, (3) significantly lower wastage due to inappropriate drug prescription, and (4) a significantly greater proportion of prescriptions conforming to standard treatment guidelines. Average drug cost per prescription (which was 24-33 Nepali rupees [NRs] across districts and time) was 5.7 NRs (95% confidence interval 1.0 to 10.4) and 9.3 NRs (95% confidence interval 4.8 to 13.8) less with the two different item fees, respectively, than with the fee per prescription. The economic consequences of irrational prescribing are severe, particularly in association with charging a fee per prescription. Item fees in the public sector reduce irrational prescribing and associated costs.
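As a toy illustration of the wastage measure described above (total prescription cost minus the cost expected under standard treatment guidelines), the sketch below uses invented prices and an invented guideline cost; it is not the study's costing method in detail.

```python
# Toy illustration of the wastage measure described above: total cost of the drugs
# actually prescribed minus the cost expected under standard treatment guidelines.
# Prices and the guideline cost are invented for illustration only.
def wastage_cost(prescribed_items, guideline_cost):
    """prescribed_items: list of (unit_price_NRs, quantity); guideline_cost: expected cost in NRs."""
    total = sum(price * qty for price, qty in prescribed_items)
    return total, total - guideline_cost  # a negative value means cheaper than the guideline

# Example: items costing 29 NRs in total against an expected 20 NRs -> 9 NRs wastage.
print(wastage_cost([(2.0, 10), (1.5, 6)], guideline_cost=20.0))
```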
DOI: 10.1046/j.1475-1313.2000.00528.x
2000
Cited 22 times
Observer variability in optic disc assessment: implications for glaucoma shared care
Summary Demonstrating that optometrists can make valid and reliable assessments of optic disc features is an important prerequisite for establishing schemes for shared care/co‐management. Previous studies have estimated observer variability among experts in the assessment of optic disc cupping, but there has been a paucity of information on observer variability amongst optometrists. This paper describes a study to investigate intra‐ and inter‐observer variability for a range of disc features, as graded by both ophthalmologists and optometrists. Five observers (three optometrists and two ophthalmologists) graded 48 stereo‐pairs of optic disc photographs from 48 patients on two separate occasions. Each observer graded the following features: vertical and horizontal C/D ratios, narrowest rim width, the presence/absence of a disc haemorrhage, focal pallor of the neuroretinal rim, peri‐papillary atrophy, the steepness of the cup‐edge and the presence/absence of the cribriform sign. The average intra‐ and inter‐observer standard deviations (SD) of differences are, respectively, 0.11 and 0.19 for the vertical C/D ratios and 0.10 and 0.18 for the horizontal C/D ratios. For the vertical C/D ratio the average weighted kappa (κw) is 0.79 within observers and 0.46 between observers. Percentage agreements for the presence/absence of a disc haemorrhage range from 96 to 100% (average κ = 0.92) within observers and from 90 to 98% (average κ = 0.77) between observers. For other disc features, average κw values range from 0.67 to 0.71 within observers and from 0.23 to 0.46 between observers. Intra‐ and inter‐observer comparisons (within and between different professionals) across all disc features are comparable for the optometrists and ophthalmologists, thus demonstrating that optometrists can make valid assessments of disc features. The implications for shared care are discussed.
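For reference, the weighted kappa (κw) reported above is conventionally defined with disagreement weights w_ij (for example, linear or quadratic in the distance between grades), observed cell proportions p_ij and chance-expected proportions e_ij. This is the standard definition rather than anything specific to the study:

```latex
% Standard definition of weighted kappa with disagreement weights w_ij.
\[
\kappa_w \;=\; 1 \;-\; \frac{\sum_{i,j} w_{ij}\, p_{ij}}{\sum_{i,j} w_{ij}\, e_{ij}},
\qquad e_{ij} = p_{i\cdot}\, p_{\cdot j},
\]
% with w_ij = |i - j| (linear) or (i - j)^2 (quadratic) as common choices;
% kappa_w = 1 indicates perfect agreement and 0 agreement no better than chance.
```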
DOI: 10.1016/j.ijcard.2011.09.059
2011
Cited 11 times
Prasugrel and bivalirudin for primary angioplasty: Early results on stent thrombosis and bleeding
Optimal treatment of STEMI relies on early mechanical reperfusion with PPCI [ [1] Wijns W. Kolh P. Danchin N. et al. Guidelines on myocardial revascularization: the Task Force on Myocardial Revascularization of the European Society of Cardiology (ESC) and the European Association for Cardio-Thoracic Surgery (EACTS). Eur Heart J. 2010; 31: 2501-2555 Crossref PubMed Scopus (26) Google Scholar ]. Successful revascularisation requires effective adjunctive pharmacology, ideally with agents that provide rapid and reliable anti-platelet and anti-thrombotic effects. A delicate balance exists between effective inhibition of thrombosis and treatment-related bleeding. Incomplete suppression of platelet and thrombin activity may result in stent thrombosis.
DOI: 10.1002/bjs.10274
2016
Cited 9 times
Feasibility work to inform the design of a randomized clinical trial of wound dressings in elective and unplanned abdominal surgery
Abstract Background Designing RCTs in surgery requires consideration of existing evidence, stakeholders' views and emerging interventions, to ensure that research questions are relevant to patients, surgeons and the health service. When there is uncertainty about RCT design, feasibility work is recommended. This study aimed to assess how feasibility work could inform the design of a future pilot study and RCT (Bluebelle, HTA - 12/200/04). Methods This was a prospective survey of dressings used to cover abdominal wounds. Surgical trainees from 25 hospitals were invited to participate. Information on patient risk factors, operation type and type of wound dressings used was recorded for elective and unplanned abdominal procedures over a 2-week interval. The types of dressing used were summarized, and associations with operation type and patient risk factors explored. Results Twenty hospitals participated, providing data from 727 patients (1794 wounds). Wounds were predominantly covered with basic dressings (1203 of 1769, 68·0 per cent) and tissue adhesive was used in 27·4 per cent (485 of 1769); dressing type was missing for 25 wounds. Just 3·6 per cent of wounds (63 of 1769) did not have a dressing applied at the end of the procedure. There was no evidence of an association between type of dressing used and patient risk factors, type of operation, or elective and unscheduled surgery. Conclusion Based on the findings from this large study of current practice, the pilot study design has evolved. The inclusion criteria have expanded to encompass patients undergoing unscheduled surgery, and tissue adhesive as a dressing will be evaluated as an additional intervention group. Collaborative methods are recommended to inform the design of RCTs in surgery, helping to ensure they are relevant to current practice.
DOI: 10.1093/ejcts/ezad041
2023
Warm versus cold blood cardioplegia in paediatric congenital heart surgery: a randomized trial
Abstract OBJECTIVES Intermittent cold blood cardioplegia is commonly used in children, whereas intermittent warm blood cardioplegia is widely used in adults. We aimed to compare clinical and biochemical outcomes with these 2 methods. METHODS A single-centre, randomized controlled trial was conducted to compare the effectiveness of warm (≥34°C) versus cold (4–6°C) antegrade cardioplegia in children. The primary outcome was cardiac troponin T over the 1st 48 postoperative hours. Intensive care teams were blinded to group allocation. Outcomes were compared by intention-to-treat using linear mixed-effects, logistic or Cox regression. RESULTS 97 participants with median age of 1.2 years were randomized (49 to warm, 48 to cold cardioplegia); 59 participants (61%) had a risk-adjusted congenital heart surgery score of 3 or above. There were no deaths and 92 participants were followed to 3-months. Troponin release was similar in both groups [geometric mean ratio 1.07; 95% confidence interval (CI) 0.79–1.44; P = 0.66], as were other cardiac function measures (echocardiography, arterial and venous blood gases, vasoactive-inotrope score, arrhythmias). Intensive care stay was on average 14.6 h longer in the warm group (hazard ratio 0.52; 95% CI 0.34–0.79; P = 0.003), with a trend towards longer overall hospital stays (hazard ratio 0.66; 95% CI 0.43–1.02; P = 0.060) compared with the cold group. This could be related to more unplanned reoperations on bypass in the warm group compared to cold group (3 vs 1). CONCLUSIONS Warm blood cardioplegia is a safe and reproducible technique but does not provide superior myocardial protection in paediatric heart surgery.
DOI: 10.1046/j.1475-1313.2001.00569.x
2001
Cited 20 times
Randomised controlled trial of an integrated versus an optometric low vision rehabilitation service for patients with age‐related macular degeneration: study design and methodology
Summary A number of studies have measured the outcomes of low vision care but these have usually been longitudinal case series, thus constituting very low quality of evidence for effectiveness. To date, there have been no randomised controlled trials (RCTs) which have evaluated the effectiveness and cost effectiveness of different models of care in low vision. The size of the low vision population and the paucity of systematic evaluation have created a pressing need for evidence about cost‐effectiveness in order to inform service developments for low vision rehabilitation. This paper describes the study design and methodology of a three‐arm RCT currently under way in Manchester. The baseline population recruited is also described. A traditional hospital‐based optometric service is being compared with an integrated service (comprising the addition of community‐based rehabilitation officer input) and with more generic community input (which is non‐integrated and is not vision specific). A wide range of outcome measures are being assessed at recruitment and 12 months post‐intervention, including low vision specific and generic quality of life measures, patterns of low vision aid use, and task performance. The rationale for the trial is discussed and the main study outcomes are described.