
Daniel Talmor

Here are all the papers by Daniel Talmor that you can download and read on OA.mg.

DOI: 10.1056/nejmsa1410639
2015
Cited 1,969 times
Driving Pressure and Survival in the Acute Respiratory Distress Syndrome
Mechanical-ventilation strategies that use lower end-inspiratory (plateau) airway pressures, lower tidal volumes (VT), and higher positive end-expiratory pressures (PEEPs) can improve survival in patients with the acute respiratory distress syndrome (ARDS), but the relative importance of each of these components is uncertain. Because respiratory-system compliance (CRS) is strongly related to the volume of aerated remaining functional lung during disease (termed functional lung size), we hypothesized that driving pressure (ΔP=VT/CRS), in which VT is intrinsically normalized to functional lung size (instead of predicted lung size in healthy persons), would be an index more strongly associated with survival than VT or PEEP in patients who are not actively breathing. Using a statistical tool known as multilevel mediation analysis to analyze individual data from 3562 patients with ARDS enrolled in nine previously reported randomized trials, we examined ΔP as an independent variable associated with survival. In the mediation analysis, we estimated the isolated effects of changes in ΔP resulting from randomized ventilator settings while minimizing confounding due to the baseline severity of lung disease. Among ventilation variables, ΔP was most strongly associated with survival. A 1-SD increment in ΔP (approximately 7 cm of water) was associated with increased mortality (relative risk, 1.41; 95% confidence interval [CI], 1.31 to 1.51; P<0.001), even in patients receiving "protective" plateau pressures and VT (relative risk, 1.36; 95% CI, 1.17 to 1.58; P<0.001). Individual changes in VT or PEEP after randomization were not independently associated with survival; they were associated only if they were among the changes that led to reductions in ΔP (mediation effects of ΔP, P=0.004 and P=0.001, respectively). We found that ΔP was the ventilation variable that best stratified risk. Decreases in ΔP owing to changes in ventilator settings were strongly associated with increased survival. (Funded by Fundação de Amparo à Pesquisa do Estado de São Paulo and others.)
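The driving-pressure index described in this abstract can be computed in two equivalent ways, as the plateau pressure minus PEEP or as VT divided by respiratory-system compliance. The minimal sketch below just illustrates that arithmetic; the patient values are hypothetical, not study data.

```python
# Minimal sketch of the driving-pressure relationship described above.
# All patient values below are hypothetical illustrations, not study data.

def driving_pressure_from_plateau(plateau_cm_h2o: float, peep_cm_h2o: float) -> float:
    """Driving pressure as plateau airway pressure minus PEEP (cm H2O)."""
    return plateau_cm_h2o - peep_cm_h2o

def driving_pressure_from_compliance(tidal_volume_ml: float, compliance_ml_per_cm_h2o: float) -> float:
    """Equivalent form: delta-P = VT / CRS, i.e. tidal volume normalized to
    respiratory-system compliance (a proxy for functional lung size)."""
    return tidal_volume_ml / compliance_ml_per_cm_h2o

# Example: VT 420 mL with CRS 30 mL/cm H2O gives delta-P = 14 cm H2O,
# the same value obtained from a plateau of 24 cm H2O with PEEP of 10 cm H2O.
print(driving_pressure_from_compliance(420, 30))   # 14.0
print(driving_pressure_from_plateau(24, 10))       # 14
```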
DOI: 10.1001/jama.2010.218
2010
Cited 1,269 times
Higher vs Lower Positive End-Expiratory Pressure in Patients With Acute Lung Injury and Acute Respiratory Distress Syndrome
Trials comparing higher vs lower levels of positive end-expiratory pressure (PEEP) in adults with acute lung injury or acute respiratory distress syndrome (ARDS) have been underpowered to detect small but potentially important effects on mortality or to explore subgroup differences. To evaluate the association of higher vs lower PEEP with patient-important outcomes in adults with acute lung injury or ARDS who are receiving ventilation with low tidal volumes and to investigate whether these associations differ across prespecified subgroups. Search of MEDLINE, EMBASE, and Cochrane Central Register of Controlled Trials (1996-January 2010) plus a hand search of conference proceedings (2004-January 2010). Two reviewers independently screened articles to identify studies randomly assigning adults with acute lung injury or ARDS to treatment with higher vs lower PEEP (with low tidal volume ventilation) and also reporting mortality. Data from 2299 individual patients in 3 trials were analyzed using uniform outcome definitions. Prespecified effect modifiers were tested using multivariable hierarchical regression, adjusting for important prognostic factors and clustering effects. There were 374 hospital deaths in 1136 patients (32.9%) assigned to treatment with higher PEEP and 409 hospital deaths in 1163 patients (35.2%) assigned to lower PEEP (adjusted relative risk [RR], 0.94; 95% confidence interval [CI], 0.86-1.04; P = .25). Treatment effects varied with the presence or absence of ARDS, defined by a value of 200 mm Hg or less for the ratio of partial pressure of oxygen to fraction of inspired oxygen concentration (P = .02 for interaction). In patients with ARDS (n = 1892), there were 324 hospital deaths (34.1%) in the higher PEEP group and 368 (39.1%) in the lower PEEP group (adjusted RR, 0.90; 95% CI, 0.81-1.00; P = .049); in patients without ARDS (n = 404), there were 50 hospital deaths (27.2%) in the higher PEEP group and 44 (19.4%) in the lower PEEP group (adjusted RR, 1.37; 95% CI, 0.98-1.92; P = .07). Rates of pneumothorax and vasopressor use were similar. Treatment with higher vs lower levels of PEEP was not associated with improved hospital survival. However, higher levels were associated with improved survival among the subgroup of patients with ARDS.
DOI: 10.1164/rccm.201703-0548st
2017
Cited 1,127 times
An Official American Thoracic Society/European Society of Intensive Care Medicine/Society of Critical Care Medicine Clinical Practice Guideline: Mechanical Ventilation in Adult Patients with Acute Respiratory Distress Syndrome
This document provides evidence-based clinical practice guidelines on the use of mechanical ventilation in adult patients with acute respiratory distress syndrome (ARDS). A multidisciplinary panel conducted systematic reviews and meta-analyses of the relevant research and applied Grading of Recommendations, Assessment, Development, and Evaluation methodology for clinical recommendations. For all patients with ARDS, the recommendation is strong for mechanical ventilation using lower tidal volumes (4-8 ml/kg predicted body weight) and lower inspiratory pressures (plateau pressure < 30 cm H2O) (moderate confidence in effect estimates). For patients with severe ARDS, the recommendation is strong for prone positioning for more than 12 h/d (moderate confidence in effect estimates). For patients with moderate or severe ARDS, the recommendation is strong against routine use of high-frequency oscillatory ventilation (high confidence in effect estimates) and conditional for higher positive end-expiratory pressure (moderate confidence in effect estimates) and recruitment maneuvers (low confidence in effect estimates). Additional evidence is necessary to make a definitive recommendation for or against the use of extracorporeal membrane oxygenation in patients with severe ARDS. The panel formulated and provided the rationale for recommendations on selected ventilatory interventions for adult patients with ARDS. Clinicians managing patients with ARDS should personalize decisions for their patients, particularly regarding the conditional recommendations in this guideline.
DOI: 10.1503/cmaj.090206
2009
Cited 1,003 times
Intensive insulin therapy and mortality among critically ill patients: a meta-analysis including NICE-SUGAR study data
Hyperglycemia is associated with increased mortality in critically ill patients. Randomized trials of intensive insulin therapy have reported inconsistent effects on mortality and increased rates of severe hypoglycemia. We conducted a meta-analysis to update the totality of evidence regarding the influence of intensive insulin therapy compared with conventional insulin therapy on mortality and severe hypoglycemia in the intensive care unit (ICU). We conducted searches of electronic databases, abstracts from scientific conferences and bibliographies of relevant articles. We included published randomized controlled trials conducted in the ICU that directly compared intensive insulin therapy with conventional glucose management and that documented mortality. We included in our meta-analysis the data from the recent NICE-SUGAR (Normoglycemia in Intensive Care Evaluation - Survival Using Glucose Algorithm Regulation) study. We included 26 trials involving a total of 13 567 patients in our meta-analysis. Among the 26 trials that reported mortality, the pooled relative risk (RR) of death with intensive insulin therapy compared with conventional therapy was 0.93 (95% confidence interval [CI] 0.83-1.04). Among the 14 trials that reported hypoglycemia, the pooled RR with intensive insulin therapy was 6.0 (95% CI 4.5-8.0). The ICU setting was a contributing factor, with patients in surgical ICUs appearing to benefit from intensive insulin therapy (RR 0.63, 95% CI 0.44-0.91); patients in the other ICU settings did not (medical ICU: RR 1.0, 95% CI 0.78-1.28; mixed ICU: RR 0.99, 95% CI 0.86-1.12). The different targets of intensive insulin therapy (glucose level ≤ 6.1 mmol/L v. ≤ 8.3 mmol/L) did not influence either mortality or risk of hypoglycemia. Intensive insulin therapy significantly increased the risk of hypoglycemia and conferred no overall mortality benefit among critically ill patients. However, this therapy may be beneficial to patients admitted to a surgical ICU.
DOI: 10.1056/nejmoa0708638
2008
Cited 963 times
Mechanical Ventilation Guided by Esophageal Pressure in Acute Lung Injury
Survival of patients with acute lung injury or the acute respiratory distress syndrome (ARDS) has been improved by ventilation with small tidal volumes and the use of positive end-expiratory pressure (PEEP); however, the optimal level of PEEP has been difficult to determine. In this pilot study, we estimated transpulmonary pressure with the use of esophageal balloon catheters. We reasoned that the use of pleural-pressure measurements, despite the technical limitations to the accuracy of such measurements, would enable us to find a PEEP value that could maintain oxygenation while preventing lung injury due to repeated alveolar collapse or overdistention. We randomly assigned patients with acute lung injury or ARDS to undergo mechanical ventilation with PEEP adjusted according to measurements of esophageal pressure (the esophageal-pressure-guided group) or according to the Acute Respiratory Distress Syndrome Network standard-of-care recommendations (the control group). The primary end point was improvement in oxygenation. The secondary end points included respiratory-system compliance and patient outcomes. The study reached its stopping criterion and was terminated after 61 patients had been enrolled. The ratio of the partial pressure of arterial oxygen to the fraction of inspired oxygen at 72 hours was 88 mm Hg higher in the esophageal-pressure-guided group than in the control group (95% confidence interval, 78.1 to 98.3; P=0.002). This effect was persistent over the entire follow-up time (at 24, 48, and 72 hours; P=0.001 by repeated-measures analysis of variance). Respiratory-system compliance was also significantly better at 24, 48, and 72 hours in the esophageal-pressure-guided group (P=0.01 by repeated-measures analysis of variance). As compared with the current standard of care, a ventilator strategy using esophageal pressures to estimate the transpulmonary pressure significantly improves oxygenation and compliance. Multicenter clinical trials are needed to determine whether this approach should be widely adopted. (ClinicalTrials.gov number, NCT00127491.)
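The trial adjusted PEEP using esophageal-pressure estimates of pleural pressure. The sketch below is only a simplified illustration of that idea (raise PEEP until the estimated end-expiratory transpulmonary pressure, PEEP minus Pes, is no longer negative); it is not the trial's actual titration protocol, and the step size and upper cap are assumptions.

```python
# Simplified illustration (not the trial protocol) of esophageal-pressure-guided
# PEEP: increase PEEP until the estimated end-expiratory transpulmonary pressure
# (PEEP - Pes) is no longer negative. Step size and cap are arbitrary assumptions.
def peep_for_nonnegative_pl(pes_end_exp_cm_h2o: float, current_peep: float,
                            max_peep: float = 24.0, step: float = 2.0) -> float:
    peep = current_peep
    while peep - pes_end_exp_cm_h2o < 0 and peep + step <= max_peep:
        peep += step   # raise PEEP until estimated end-expiratory PL >= 0
    return peep

# Hypothetical patient: end-expiratory Pes 16 cm H2O, current PEEP 10 cm H2O.
print(peep_for_nonnegative_pl(pes_end_exp_cm_h2o=16.0, current_peep=10.0))  # 16.0
```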
DOI: 10.1016/j.annemergmed.2004.12.006
2005
Cited 632 times
Serum Lactate as a Predictor of Mortality in Emergency Department Patients with Infection
Little is known about risk-stratification biomarkers in emergency department (ED) patients with suspected infection, and lactate is a biologically plausible candidate. We determine whether a serum venous lactate is associated with an increased risk of death in ED patients with infection. This was a prospective cohort study in an urban, academic medical center with 50,000 annual ED visits. A total of 1,278 consecutive patient visits met enrollment criteria between July 24, 2003, and March 24, 2004, and all patients were enrolled. Inclusion criteria were age 18 years or older, serum lactate level obtained, and admission to the hospital with an infection-related diagnosis. The main outcome measure was all-cause 28-day inhospital mortality and death within 3 days of presentation. Among 1,278 patient visits, there were 105 (8.2%) deaths during hospitalization, with 55 (4.3%) of 1,278 deaths occurring in the first 3 days. Mortality rates increased as lactate increased: 43 (4.9%) of 877 patients with a lactate level between 0 and 2.5 mmol/L died, 24 (9.0%) of 267 patients with a lactate level between 2.5 and 4.0 mmol/L died, and 38 (28.4%) of 134 patients with a lactate level greater than or equal to 4.0 mmol/L died. Lactate level greater than or equal to 4.0 mmol/L was 36% (95% confidence interval [CI] 27% to 45%) sensitive and 92% (95% CI 90% to 93%) specific for any death; it was 55% (95% CI 41% to 68%) sensitive and 91% (95% CI 90% to 93%) specific for death within 3 days. In this cohort of ED patients with signs and symptoms suggestive of infection, our results support serum venous lactate level as a promising risk-stratification tool. Multicenter validation, as well as comparison of the lactate level with clinical predictors, needs to be done before widespread implementation.
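The reported 36% sensitivity and 92% specificity of a lactate of 4.0 mmol/L or greater for any in-hospital death can be reproduced from the counts given in the abstract, as in this short sketch.

```python
# Sketch reproducing the reported test characteristics of lactate >= 4.0 mmol/L
# for in-hospital death, using only the counts stated in the abstract.
total_visits = 1278
total_deaths = 105
high_lactate = 134          # visits with lactate >= 4.0 mmol/L
deaths_high_lactate = 38    # deaths among those with lactate >= 4.0 mmol/L

sensitivity = deaths_high_lactate / total_deaths              # ~0.36
survivors = total_visits - total_deaths
false_positives = high_lactate - deaths_high_lactate
specificity = (survivors - false_positives) / survivors       # ~0.92
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```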
DOI: 10.1164/rccm.201004-0549oc
2011
Cited 513 times
Early Identification of Patients at Risk of Acute Lung Injury
Accurate, early identification of patients at risk for developing acute lung injury (ALI) provides the opportunity to test and implement secondary prevention strategies. To determine the frequency and outcome of ALI development in patients at risk and validate a lung injury prediction score (LIPS). In this prospective multicenter observational cohort study, predisposing conditions and risk modifiers predictive of ALI development were identified from routine clinical data available during initial evaluation. The discrimination of the model was assessed with area under receiver operating curve (AUC). The risk of death from ALI was determined after adjustment for severity of illness and predisposing conditions. Twenty-two hospitals enrolled 5,584 patients at risk. ALI developed a median of 2 (interquartile range 1-4) days after initial evaluation in 377 (6.8%; 148 ALI-only, 229 adult respiratory distress syndrome) patients. The frequency of ALI varied according to predisposing conditions (from 3% in pancreatitis to 26% after smoke inhalation). LIPS discriminated patients who developed ALI from those who did not with an AUC of 0.80 (95% confidence interval, 0.78-0.82). When adjusted for severity of illness and predisposing conditions, development of ALI increased the risk of in-hospital death (odds ratio, 4.1; 95% confidence interval, 2.9-5.7). ALI occurrence varies according to predisposing conditions and carries an independently poor prognosis. Using routinely available clinical data, LIPS identifies patients at high risk for ALI early in the course of their illness. This model will alert clinicians about the risk of ALI and facilitate testing and implementation of ALI prevention strategies. Clinical trial registered with www.clinicaltrials.gov (NCT00889772).
DOI: 10.1016/s0140-6736(16)31637-3
2016
Cited 512 times
Early, goal-directed mobilisation in the surgical intensive care unit: a randomised controlled trial
Background Immobilisation predicts adverse outcomes in patients in the surgical intensive care unit (SICU). Attempts to mobilise critically ill patients early after surgery are frequently restricted, but we tested whether early mobilisation leads to improved mobility, decreased SICU length of stay, and increased functional independence of patients at hospital discharge. Methods We did a multicentre, international, parallel-group, assessor-blinded, randomised controlled trial in SICUs of five university hospitals in Austria (n=1), Germany (n=1), and the USA (n=3). Eligible patients (aged 18 years or older, who had been mechanically ventilated for <48 h, and were expected to require mechanical ventilation for ≥24 h) were randomly assigned (1:1) by use of a stratified block randomisation via restricted web platform to standard of care (control) or early, goal-directed mobilisation using an inter-professional approach of closed-loop communication and the SICU optimal mobilisation score (SOMS) algorithm (intervention), which describes patients’ mobilisation capacity on a numerical rating scale ranging from 0 (no mobilisation) to 4 (ambulation). We had three main outcomes hierarchically tested in a prespecified order: the mean SOMS level patients achieved during their SICU stay (primary outcome), and patient's length of stay on SICU and the mini-modified functional independence measure score (mmFIM) at hospital discharge (both secondary outcomes). This trial is registered with ClinicalTrials.gov (NCT01363102). Findings Between July 1, 2011, and Nov 4, 2015, we randomly assigned 200 patients to receive standard treatment (control; n=96) or intervention (n=104). Intention-to-treat analysis showed that the intervention improved the mobilisation level (mean achieved SOMS 2·2 [SD 1·0] in intervention group vs 1·5 [0·8] in control group, p<0·0001), decreased SICU length of stay (mean 7 days [SD 5–12] in intervention group vs 10 days [6–15] in control group, p=0·0054), and improved functional mobility at hospital discharge (mmFIM score 8 [4–8] in intervention group vs 5 [2–8] in control group, p=0·0002). More adverse events were reported in the intervention group (25 cases [2·8%]) than in the control group (ten cases [0·8%]); no serious adverse events were observed. Before hospital discharge 25 patients died (17 [16%] in the intervention group, eight [8%] in the control group). 3 months after hospital discharge 36 patients died (21 [22%] in the intervention group, 15 [17%] in the control group). Interpretation Early, goal-directed mobilisation improved patient mobilisation throughout SICU admission, shortened patient length of stay in the SICU, and improved patients’ functional mobility at hospital discharge. Funding Jeffrey and Judy Buzen.
DOI: 10.1164/rccm.201312-2193ci
2014
Cited 456 times
The Application of Esophageal Pressure Measurement in Patients with Respiratory Failure
DOI: 10.1016/s2213-2600(16)00057-6
2016
Cited 420 times
Association between driving pressure and development of postoperative pulmonary complications in patients undergoing mechanical ventilation for general anaesthesia: a meta-analysis of individual patient data
Protective mechanical ventilation strategies using low tidal volume or high levels of positive end-expiratory pressure (PEEP) improve outcomes for patients who have had surgery. The role of the driving pressure, which is the difference between the plateau pressure and the level of positive end-expiratory pressure, is not known. We investigated the association of tidal volume, the level of PEEP, and driving pressure during intraoperative ventilation with the development of postoperative pulmonary complications. We did a meta-analysis of individual patient data from randomised controlled trials of protective ventilation during general anaesthesia for surgery published up to July 30, 2015. The main outcome was development of postoperative pulmonary complications (postoperative lung injury, pulmonary infection, or barotrauma). We included data from 17 randomised controlled trials, including 2250 patients. Multivariate analysis suggested that driving pressure was associated with the development of postoperative pulmonary complications (odds ratio [OR] for one unit increase of driving pressure 1·16, 95% CI 1·13-1·19; p<0·0001), whereas we detected no association for tidal volume (1·05, 0·98-1·13; p=0·179). PEEP did not have a large enough effect in univariate analysis to warrant inclusion in the multivariate analysis. In a mediator analysis, driving pressure was the only significant mediator of the effects of protective ventilation on development of pulmonary complications (p=0·027). In two studies that compared low with high PEEP during low tidal volume ventilation, an increase in the level of PEEP that resulted in an increase in driving pressure was associated with more postoperative pulmonary complications (OR 3·11, 95% CI 1·39-6·96; p=0·006). In patients having surgery, intraoperative high driving pressure and changes in the level of PEEP that result in an increase of driving pressure are associated with more postoperative pulmonary complications. However, a randomised controlled trial comparing ventilation based on driving pressure with usual care is needed to confirm these findings. Funding: none.
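For a logistic model like the one summarized above, the odds ratio for a multi-unit change in driving pressure is the per-unit odds ratio raised to the power of the change. A small sketch using the abstract's per-unit estimate; the 5 cm H2O increment is an illustrative assumption, not from the paper.

```python
# How a per-unit odds ratio scales in a logistic model: the OR for a k-unit
# increase is the per-unit OR raised to the power k. The 1.16 figure is from
# the abstract; the 5 cm H2O increment is a hypothetical choice.
or_per_cm_h2o = 1.16
increment_cm_h2o = 5
print(or_per_cm_h2o ** increment_cm_h2o)   # ~2.10
```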
DOI: 10.1001/archinternmed.2010.89
2010
Cited 401 times
Iatrogenic Gastric Acid Suppression and the Risk of Nosocomial Clostridium difficile Infection
The incidence and severity of Clostridium difficile infections are increasing. Acid-suppressive therapy has been suggested as a risk factor for C difficile, but this remains controversial. We conducted a pharmacoepidemiologic cohort study, performing a secondary analysis of data collected prospectively on 101 796 discharges from a tertiary care medical center during a 5-year period. The primary exposure of interest was acid suppression therapy, classified by the most intense acid suppression therapy received (no acid suppression, histamine(2)-receptor antagonist [H(2)RA] therapy, daily proton pump inhibitor [PPI], and PPI more frequently than daily). As the level of acid suppression increased, the risk of nosocomial C difficile infection increased, from 0.3% (95% confidence interval [CI], 0.21%-0.31%) in patients not receiving acid suppressive therapy to 0.6% (95% CI, 0.49%-0.79%) in those receiving H(2)RA therapy, to 0.9% (95% CI, 0.80%-0.98%) in those receiving daily PPI treatment, and to 1.4% (1.15%-1.71%) in those receiving more frequent PPI therapy. After adjustment for comorbid conditions, age, antibiotics, and propensity score-based likelihood of receipt of acid-suppression therapy, the association persisted, increasing from an odds ratio of 1 (no acid suppression [reference]) to 1.53 (95% CI, 1.12-2.10) (H(2)RA), to 1.74 (95% CI, 1.39-2.18) (daily PPI), and to 2.36 (95% CI, 1.79-3.11) (more frequent PPI). Similar estimates were found with a matched cohort analysis and with nested case-control techniques. Increasing levels of pharmacologic acid suppression are associated with increased risks of nosocomial C difficile infection. This evidence of a dose-response effect provides further support for the potentially causal nature of iatrogenic acid suppression in the development of nosocomial C difficile infection.
DOI: 10.1007/s00134-016-4400-x
2016
Cited 365 times
Esophageal and transpulmonary pressure in the clinical setting: meaning, usefulness and perspectives
Esophageal pressure (Pes) is a minimally invasive advanced respiratory monitoring method with the potential to guide management of ventilation support and enhance specific diagnoses in acute respiratory failure patients. To date, the use of Pes in the clinical setting is limited, and it is often seen as a research tool only. This is a review of the relevant technical, physiological and clinical details that support the clinical utility of Pes. After appropriate positioning of the esophageal balloon, Pes monitoring allows titration of controlled and assisted mechanical ventilation to achieve personalized protective settings and the desired level of patient effort from the acute phase through to weaning. Moreover, Pes monitoring permits accurate measurement of transmural vascular pressure and intrinsic positive end-expiratory pressure and facilitates detection of patient-ventilator asynchrony, thereby supporting specific diagnoses and interventions. Finally, some Pes-derived measures may also be obtained by monitoring electrical activity of the diaphragm. Pes monitoring provides unique bedside measures for a better understanding of the pathophysiology of acute respiratory failure patients. Including Pes monitoring in the intensivist's clinical armamentarium may enhance treatment to improve clinical outcomes.
DOI: 10.1097/01.ccm.0000206104.18647.a8
2006
Cited 350 times
Implementation and outcomes of the Multiple Urgent Sepsis Therapies (MUST) protocol*
Objectives: To describe the effectiveness of a comprehensive, interdisciplinary sepsis treatment protocol with regard to both implementation and outcomes and to compare the mortality rates and therapies of patients with septic shock with similar historical controls. Design: Prospective, interventional cohort study with a historical control comparison group. Setting: Urban, tertiary care, university hospital with 46,000 emergency department visits and 4,100 intensive care unit admissions annually. Patients: Inclusion criteria were a) emergency department patients aged ≥18 yrs, b) suspected infection, and c) lactate of >4 mmol/L or septic shock. Exclusion criteria were a) emergent operation, b) prehospital cardiac arrest, and c) comfort measures only. Time period: protocol, November 10, 2003, through November 9, 2004; historical controls, February 1, 2000, through January 31, 2001. Intervention: A sepsis treatment pathway incorporating empirical antibiotics, early goal-directed therapy, drotrecogin alfa, steroids, intensive insulin therapy, and lung-protective ventilation. Measurements and Main Results: There were 116 protocol patients, with a mortality rate of 18% (11–25%), of which 79 patients had septic shock. Comparing these patients with 51 historical controls, protocol patients received more fluid (4.0 vs. 2.5 L crystalloid, p < .001), earlier antibiotics (90 vs. 120 mins, p < .013), more appropriate empirical coverage (97% vs. 88%, p < .05), more vasopressors in the first 6 hrs (80% vs. 45%, p < .001), tighter glucose control (mean morning glucose, 123 vs. 140, p < .001), and more frequent assessment of adrenal function (82% vs. 10%, p < .001), with a nonstatistically significant increase in dobutamine use (14% vs. 4%, p = .06) and red blood cell transfusions (30% vs. 18%, p = .07) in the first 24 hrs. For protocol patients with septic shock, 28-day in-hospital mortality was 20.3% compared with 29.4% for historical controls (p = .3). Conclusions: Clinical implementation of a comprehensive sepsis treatment protocol is feasible and is associated with changes in therapies such as time to antibiotics, intravenous fluid delivery, and vasopressor use in the first 6 hrs. No statistically significant decrease in mortality was demonstrated, as this trial was not sufficiently powered to assess mortality benefits.
DOI: 10.1007/s00134-007-0680-5
2007
Cited 320 times
Occult hypoperfusion and mortality in patients with suspected infection
DOI: 10.1164/rccm.201503-0584oc
2016
Cited 311 times
Hospital Incidence and Outcomes of the Acute Respiratory Distress Syndrome Using the Kigali Modification of the Berlin Definition
Estimates of the incidence of the acute respiratory distress syndrome (ARDS) in high- and middle-income countries vary from 10.1 to 86.2 per 100,000 person-years in the general population. The epidemiology of ARDS has not been reported for a low-income country at the level of the population, hospital, or intensive care unit (ICU). The Berlin definition may not allow identification of ARDS in resource-constrained settings. To estimate the incidence and outcomes of ARDS at a Rwandan referral hospital using the Kigali modification of the Berlin definition: without requirement for positive end-expiratory pressure, hypoxia cutoff of SpO2/FiO2 less than or equal to 315, and bilateral opacities on lung ultrasound or chest radiograph. We screened every adult patient for hypoxia at a public referral hospital in Rwanda for 6 weeks. For every patient with hypoxia, we collected data on demographics and ARDS risk factors, performed lung ultrasonography, and evaluated chest radiography when available. Forty-two (4.0%) of 1,046 hospital admissions met criteria for ARDS. Using various prespecified cutoffs for the SpO2/FiO2 ratio resulted in almost identical hospital incidence values. Median age for patients with ARDS was 37 years, and infection was the most common risk factor (44.1%). Only 30.9% of patients with ARDS were admitted to an ICU, and hospital mortality was 50.0%. Using traditional Berlin criteria, no patients would have met criteria for ARDS. ARDS seems to be a common and fatal syndrome in a hospital in Rwanda, with few patients admitted to an ICU. The Berlin definition is likely to underestimate the impact of ARDS in low-income countries, where resources to meet the definition requirements are lacking. Although the Kigali modification requires validation before widespread use, we hope this study stimulates further work in refining an ARDS definition that can be consistently used in all settings.
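The Kigali modification's hypoxia criterion is an SpO2/FiO2 ratio of 315 or less. A minimal sketch of that check, with hypothetical values:

```python
# Minimal check of the Kigali hypoxia criterion (SpO2/FiO2 <= 315) described
# above. The example values are hypothetical.
def sf_ratio(spo2_percent: float, fio2_fraction: float) -> float:
    return spo2_percent / fio2_fraction

print(sf_ratio(88, 0.35))            # ~251.4
print(sf_ratio(88, 0.35) <= 315)     # True -> meets the hypoxia cutoff
```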
DOI: 10.1001/jama.2019.0555
2019
Cited 293 times
Effect of Titrating Positive End-Expiratory Pressure (PEEP) With an Esophageal Pressure–Guided Strategy vs an Empirical High PEEP-Fio2 Strategy on Death and Days Free From Mechanical Ventilation Among Patients With Acute Respiratory Distress Syndrome
Adjusting positive end-expiratory pressure (PEEP) to offset pleural pressure might attenuate lung injury and improve patient outcomes in acute respiratory distress syndrome (ARDS). To determine whether PEEP titration guided by esophageal pressure (PES), an estimate of pleural pressure, was more effective than empirical high PEEP-fraction of inspired oxygen (Fio2) in moderate to severe ARDS. Phase 2 randomized clinical trial conducted at 14 hospitals in North America. Two hundred mechanically ventilated patients aged 16 years and older with moderate to severe ARDS (Pao2:Fio2 ≤200 mm Hg) were enrolled between October 31, 2012, and September 14, 2017; long-term follow-up was completed July 30, 2018. Participants were randomized to PES-guided PEEP (n = 102) or empirical high PEEP-Fio2 (n = 98). All participants received low tidal volumes. The primary outcome was a ranked composite score incorporating death and days free from mechanical ventilation among survivors through day 28. Prespecified secondary outcomes included 28-day mortality, days free from mechanical ventilation among survivors, and need for rescue therapy. Two hundred patients were enrolled (mean [SD] age, 56 [16] years; 46% female) and completed 28-day follow-up. The primary composite end point was not significantly different between treatment groups (probability of more favorable outcome with PES-guided PEEP: 49.6% [95% CI, 41.7% to 57.5%]; P = .92). At 28 days, 33 of 102 patients (32.4%) assigned to PES-guided PEEP and 30 of 98 patients (30.6%) assigned to empirical PEEP-Fio2 died (risk difference, 1.7% [95% CI, -11.1% to 14.6%]; P = .88). Days free from mechanical ventilation among survivors was not significantly different (median [interquartile range]: 22 [15-24] vs 21 [16.5-24] days; median difference, 0 [95% CI, -1 to 2] days; P = .85). Patients assigned to PES-guided PEEP were significantly less likely to receive rescue therapy (4/102 [3.9%] vs 12/98 [12.2%]; risk difference, -8.3% [95% CI, -15.8% to -0.8%]; P = .04). None of the 7 other prespecified secondary clinical end points were significantly different. Adverse events included gross barotrauma, which occurred in 6 patients with PES-guided PEEP and 5 patients with empirical PEEP-Fio2. Among patients with moderate to severe ARDS, PES-guided PEEP, compared with empirical high PEEP-Fio2, resulted in no significant difference in death and days free from mechanical ventilation. These findings do not support PES-guided PEEP titration in ARDS. ClinicalTrials.gov Identifier: NCT01681225.
DOI: 10.1097/aln.0000000000000706
2015
Cited 284 times
Protective versus Conventional Ventilation for Surgery
Recent studies show that intraoperative mechanical ventilation using low tidal volumes (VT) can prevent postoperative pulmonary complications (PPCs). The aim of this individual patient data meta-analysis is to evaluate the individual associations between VT size and positive end-expiratory pressure (PEEP) level and occurrence of PPC. Randomized controlled trials comparing protective ventilation (low VT with or without high levels of PEEP) and conventional ventilation (high VT with low PEEP) in patients undergoing general surgery. The primary outcome was development of PPC. Predefined prognostic factors were tested using multivariate logistic regression. Fifteen randomized controlled trials were included (2,127 patients). There were 97 cases of PPC in 1,118 patients (8.7%) assigned to protective ventilation and 148 cases in 1,009 patients (14.7%) assigned to conventional ventilation (adjusted relative risk, 0.64; 95% CI, 0.46 to 0.88; P < 0.01). There were 85 cases of PPC in 957 patients (8.9%) assigned to ventilation with low VT and high PEEP levels and 63 cases in 525 patients (12%) assigned to ventilation with low VT and low PEEP levels (adjusted relative risk, 0.93; 95% CI, 0.64 to 1.37; P = 0.72). A dose-response relationship was found between the appearance of PPC and VT size (R² = 0.39) but not between the appearance of PPC and PEEP level (R² = 0.08). These data support the beneficial effects of ventilation with use of low VT in patients undergoing surgery. Further trials are necessary to define the role of intraoperative higher PEEP to prevent PPC during nonopen abdominal surgery.
DOI: 10.1007/s00134-012-2629-6
2012
Cited 219 times
ICU admission characteristics and mortality rates among elderly and very elderly patients
The effect of advanced age per se versus severity of chronic and acute diseases on the short- and long-term survival of older patients admitted to the intensive care unit (ICU) remains unclear. Intensive care unit admissions to the surgical ICU and medical ICU of patients older than 65 years were analyzed. Patients were divided into three age groups: 65-74, 75-84, and 85 and above. The primary endpoints were 28-day and 1-year mortality. The analysis focused on 7,265 patients above the age of 65, representing 45.7 % of the total ICU population. From the first to third age group there was increased prevalence of heart failure (25.9-40.3 %), cardiac arrhythmia (24.6-43.5 %), and valvular heart disease (7.5-15.8 %). There was reduced prevalence of diabetes complications (7.5-2.4 %), alcohol abuse (4.1-0.6 %), chronic obstructive pulmonary disease (COPD) (24.4-17.4 %), and liver failure (5.0-1.0 %). Logistic regression analysis adjusted for gender, sequential organ failure assessment, do not resuscitate, and Elixhauser score found that patients from the second and third age group had odds ratios of 1.38 [95 % confidence interval (CI) 1.19-1.59] and 1.53 (95 % CI 1.29-1.81) for 28-day mortality as compared with the first age group. Cox regression analysis for 1-year mortality in all populations and in 28-day survivors showed the same trend. The proportion of elderly patients from the total ICU population is high. With advancing age, the proportion of various preexisting comorbidities and the primary reason for ICU admission change. Advanced age should be regarded as a significant independent risk factor for mortality, especially for ICU patients older than 75.
DOI: 10.1097/ccm.0000000000001189
2015
Cited 206 times
Lung-Protective Ventilation With Low Tidal Volumes and the Occurrence of Pulmonary Complications in Patients Without Acute Respiratory Distress Syndrome
Protective mechanical ventilation with low tidal volumes is standard of care for patients with acute respiratory distress syndrome. The aim of this individual patient data analysis was to determine the association between tidal volume and the occurrence of pulmonary complications in ICU patients without acute respiratory distress syndrome and the association between occurrence of pulmonary complications and outcome in these patients. Individual patient data analysis. ICU patients not fulfilling the consensus criteria for acute respiratory distress syndrome at the onset of ventilation. Mechanical ventilation with low tidal volume. The primary endpoint was development of a composite of acute respiratory distress syndrome and pneumonia during hospital stay. Based on the tertiles of tidal volume size in the first 2 days of ventilation, patients were assigned to a "low tidal volume group" (tidal volumes ≤ 7 mL/kg predicted body weight), an "intermediate tidal volume group" (> 7 and < 10 mL/kg predicted body weight), and a "high tidal volume group" (≥ 10 mL/kg predicted body weight). Seven investigations (2,184 patients) were included. Acute respiratory distress syndrome or pneumonia occurred in 23% of patients in the low tidal volume group, in 28% of patients in the intermediate tidal volume group, and in 31% of the patients in the high tidal volume group (adjusted odds ratio [low vs high tidal volume group], 0.72; 95% CI, 0.52-0.98; p = 0.042). Occurrence of pulmonary complications was associated with a lower number of ICU-free and hospital-free days and alive at day 28 (10.0 ± 10.9 vs 13.8 ± 11.6 d; p < 0.01 and 6.1 ± 8.1 vs 8.9 ± 9.4 d; p < 0.01) and an increased hospital mortality (49.5% vs 35.6%; p < 0.01). Ventilation with low tidal volumes is associated with a lower risk of development of pulmonary complications in patients without acute respiratory distress syndrome.
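The tidal-volume groups above are defined in mL per kg of predicted body weight. The sketch below applies those cutoffs using one commonly used predicted-body-weight formula; the formula choice and the example patient are assumptions for illustration, not necessarily what the study used.

```python
# Sketch of the tidal-volume grouping described above, using a commonly used
# predicted body weight (PBW) formula (an assumption; the study may have used
# a different one). Example numbers are hypothetical.
def predicted_body_weight_kg(height_cm: float, male: bool) -> float:
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def tidal_volume_group(vt_ml: float, height_cm: float, male: bool) -> str:
    ml_per_kg = vt_ml / predicted_body_weight_kg(height_cm, male)
    if ml_per_kg <= 7:
        return "low tidal volume group"
    if ml_per_kg < 10:
        return "intermediate tidal volume group"
    return "high tidal volume group"

# 500 mL in a 170 cm male (~66 kg PBW) is ~7.6 mL/kg -> intermediate group.
print(tidal_volume_group(vt_ml=500, height_cm=170, male=True))
```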
DOI: 10.1001/jama.2019.0234
2019
Cited 204 times
Effect of Intravenous Acetaminophen vs Placebo Combined With Propofol or Dexmedetomidine on Postoperative Delirium Among Older Patients Following Cardiac Surgery
Postoperative delirium is common following cardiac surgery and may be affected by choice of analgesic and sedative. To evaluate the effect of postoperative intravenous (IV) acetaminophen (paracetamol) vs placebo combined with IV propofol vs dexmedetomidine on postoperative delirium among older patients undergoing cardiac surgery. Randomized, placebo-controlled, factorial clinical trial among 120 patients aged 60 years or older undergoing on-pump coronary artery bypass graft (CABG) surgery or combined CABG/valve surgeries at a US center. Enrollment was September 2015 to April 2018, with follow-up ending in April 2019. Patients were randomized to 1 of 4 groups receiving postoperative analgesia with IV acetaminophen or placebo every 6 hours for 48 hours and postoperative sedation with dexmedetomidine or propofol starting at chest closure and continued for up to 6 hours (acetaminophen and dexmedetomidine: n = 29; placebo and dexmedetomidine: n = 30; acetaminophen and propofol: n = 31; placebo and propofol: n = 30). The primary outcome was incidence of postoperative in-hospital delirium by the Confusion Assessment Method. Secondary outcomes included delirium duration, cognitive decline, breakthrough analgesia within the first 48 hours, and ICU and hospital length of stay. Among 121 patients randomized (median age, 69 years; 19 women [15.8%]), 120 completed the trial. Patients treated with IV acetaminophen had a significant reduction in delirium (10% vs 28% placebo; difference, -18% [95% CI, -32% to -5%]; P = .01; HR, 2.8 [95% CI, 1.1-7.8]). Patients receiving dexmedetomidine vs propofol had no significant difference in delirium (17% vs 21%; difference, -4% [95% CI, -18% to 10%]; P = .54; HR, 0.8 [95% CI, 0.4-1.9]). There were significant differences favoring acetaminophen vs placebo for 3 prespecified secondary outcomes: delirium duration (median, 1 vs 2 days; difference, -1 [95% CI, -2 to 0]), ICU length of stay (median, 29.5 vs 46.7 hours; difference, -16.7 [95% CI, -20.3 to -0.8]), and breakthrough analgesia (median, 322.5 vs 405.3 µg morphine equivalents; difference, -83 [95% CI, -154 to -14]). For dexmedetomidine vs propofol, only breakthrough analgesia was significantly different (median, 328.8 vs 397.5 µg; difference, -69 [95% CI, -155 to -4]; P = .04). Fourteen patients in both the placebo-dexmedetomidine and acetaminophen-propofol groups (46% and 45%) and 7 in the acetaminophen-dexmedetomidine and placebo-propofol groups (24% and 23%) had hypotension. Among older patients undergoing cardiac surgery, postoperative scheduled IV acetaminophen, combined with IV propofol or dexmedetomidine, reduced in-hospital delirium vs placebo. Additional research, including comparison of IV vs oral acetaminophen and other potentially opioid-sparing analgesics, on the incidence of postoperative delirium is warranted. ClinicalTrials.gov Identifier: NCT02546765.
DOI: 10.1007/s00134-020-06312-y
2020
Cited 203 times
The role for high flow nasal cannula as a respiratory support strategy in adults: a clinical practice guideline
High flow nasal cannula (HFNC) is a relatively recent respiratory support technique which delivers high flow, heated and humidified controlled concentration of oxygen via the nasal route. Recently, its use has increased for a variety of clinical indications. To guide clinical practice, we developed evidence-based recommendations regarding use of HFNC in various clinical settings. We formed a guideline panel composed of clinicians, methodologists and experts in respiratory medicine. Using GRADE, the panel developed recommendations for four actionable questions. The guideline panel made a strong recommendation for HFNC in hypoxemic respiratory failure compared to conventional oxygen therapy (COT) (moderate certainty), a conditional recommendation for HFNC following extubation (moderate certainty), no recommendation regarding HFNC in the peri-intubation period (moderate certainty), and a conditional recommendation for postoperative HFNC in high risk and/or obese patients following cardiac or thoracic surgery (moderate certainty). This clinical practice guideline synthesizes current best-evidence into four recommendations for HFNC use in patients with hypoxemic respiratory failure, following extubation, in the peri-intubation period, and postoperatively for bedside clinicians.
DOI: 10.1001/jama.2016.6330
2016
Cited 188 times
Effect of Aspirin on Development of ARDS in At-Risk Patients Presenting to the Emergency Department
Importance: Management of acute respiratory distress syndrome (ARDS) remains largely supportive. Whether early intervention can prevent development of ARDS remains unclear. Objective: To evaluate the efficacy and safety of early aspirin administration for the prevention of ARDS. Design, Setting, and Participants: A multicenter, double-blind, placebo-controlled, randomized clinical trial conducted at 16 US academic hospitals. Between January 2, 2012, and November 17, 2014, 7673 patients at risk for ARDS (Lung Injury Prediction Score ≥4) in the emergency department were screened and 400 were randomized. Ten patients were excluded, leaving 390 in the final modified intention-to-treat analysis cohort. Interventions: Administration of aspirin, 325-mg loading dose followed by 81 mg/d (n = 195) or placebo (n = 195) within 24 hours of emergency department presentation and continued to hospital day 7, discharge, or death. Main Outcomes and Measures: The primary outcome was the development of ARDS by study day 7. Secondary measures included ventilator-free days, hospital and intensive care unit length of stay, 28-day and 1-year survival, and change in serum biomarkers associated with ARDS. A final α level of .0737 (α = .10 overall) was required for statistical significance of the primary outcome. Results: Among 390 analyzed patients (median age, 57 years; 187 [48%] women), the median (IQR) hospital length of stay was 6 (3-10) days. Administration of aspirin, compared with placebo, did not significantly reduce the incidence of ARDS at 7 days (10.3% vs 8.7%, respectively; odds ratio, 1.24 [92.6% CI, 0.67 to 2.31]; P = .53). No significant differences were seen in secondary outcomes: ventilator-free days to day 28, mean (SD), 24.9 (7.4) days vs 25.2 (7.0) days (mean [90% CI] difference, −0.26 [−1.46 to 0.94] days; P = .72); ICU length of stay, mean (SD), 5.2 (7.0) days vs 5.4 (7.0) days (mean [90% CI] difference, −0.16 [−1.75 to 1.43] days; P = .87); hospital length of stay, mean (SD), 8.8 (10.3) days vs 9.0 (9.9) days (mean [90% CI] difference, −0.27 [−1.96 to 1.42] days; P = .79); or 28-day survival, 90% vs 90% (hazard ratio [90% CI], 1.03 [0.60 to 1.79]; P = .92) or 1-year survival, 73% vs 75% (hazard ratio [90% CI], 1.06 [0.75 to 1.50]; P = .79). Bleeding-related adverse events were infrequent in both groups (aspirin vs placebo, 5.6% vs 2.6%; odds ratio [90% CI], 2.27 [0.92 to 5.61]; P = .13). Conclusions and Relevance: Among at-risk patients presenting to the ED, the use of aspirin compared with placebo did not reduce the risk of ARDS at 7 days. The findings of this phase 2b trial do not support continuation to a larger phase 3 trial. Trial Registration: clinicaltrials.gov Identifier: NCT01504867
DOI: 10.1097/ccm.0b013e318265ea46
2013
Cited 176 times
Methods of Blood Pressure Measurement in the ICU*
Objective: Minimal clinical research has investigated the significance of different blood pressure monitoring techniques in the ICU and whether systolic vs. mean blood pressures should be targeted in therapeutic protocols and in defining clinical study cohorts. The objectives of this study are to compare real-world invasive arterial blood pressure with noninvasive blood pressure, and to determine if differences between the two techniques have clinical implications. Design: We conducted a retrospective study comparing invasive arterial blood pressure and noninvasive blood pressure measurements using a large ICU database. We performed pairwise comparison between concurrent measures of invasive arterial blood pressure and noninvasive blood pressure. We studied the association of systolic and mean invasive arterial blood pressure and noninvasive blood pressure with acute kidney injury, and with ICU mortality. Setting: Adult intensive care units at a tertiary care hospital. Patients: Adult patients admitted to intensive care units between 2001 and 2007. Interventions: None. Measurements and Main Results: Pairwise analysis of 27,022 simultaneously measured invasive arterial blood pressure/noninvasive blood pressure pairs indicated that noninvasive blood pressure overestimated systolic invasive arterial blood pressure during hypotension. Analysis of acute kidney injury and ICU mortality involved 1,633 and 4,957 patients, respectively. Our results indicated that hypotensive systolic noninvasive blood pressure readings were associated with a higher acute kidney injury prevalence (p = 0.008) and ICU mortality (p < 0.001) than systolic invasive arterial blood pressure in the same range (≤70 mm Hg). Noninvasive blood pressure and invasive arterial blood pressure mean arterial pressures showed better agreement; acute kidney injury prevalence (p = 0.28) and ICU mortality (p = 0.76) associated with hypotensive mean arterial pressure readings (≤60 mm Hg) were independent of measurement technique. Conclusions: Clinically significant discrepancies exist between invasive and noninvasive systolic blood pressure measurements during hypotension. Mean blood pressure from both techniques may be interpreted in a consistent manner in assessing patients’ prognosis. Our results suggest that mean rather than systolic blood pressure is the preferred metric in the ICU to guide therapy.
DOI: 10.1007/s00134-013-3194-3
2014
Cited 173 times
Prone positioning reduces mortality from acute respiratory distress syndrome in the low tidal volume era: a meta-analysis
Prone positioning for ARDS has been performed for decades without definitive evidence of clinical benefit. A recent multicenter trial demonstrated for the first time significantly reduced mortality with prone positioning. This meta-analysis was performed to integrate these findings with existing literature and test whether differences in tidal volume explain conflicting results among randomized trials. Studies were identified using MEDLINE, EMBASE, Cochrane Register of Controlled Trials, LILACS, and citation review. Included were randomized trials evaluating the effect on mortality of prone versus supine positioning during conventional ventilation for ARDS. The primary outcome was risk ratio of death at 60 days meta-analyzed using random effects models. Analysis stratified by high (>8 ml/kg predicted body weight) or low (≤ 8 ml/kg PBW) mean baseline tidal volume was planned a priori. Seven trials were identified including 2,119 patients, of whom 1,088 received prone positioning. Overall, prone positioning was not significantly associated with the risk ratio of death (RR 0.83; 95% CI 0.68-1.02; p = 0.073; I² = 64%). When stratified by high or low tidal volume, prone positioning was associated with a significant decrease in RR of death only among studies with low baseline tidal volume (RR 0.66; 95% CI 0.50-0.86; p = 0.002; I² = 25%). Stratification by tidal volume explained over half the between-study heterogeneity observed in the unstratified analysis. Prone positioning is associated with significantly reduced mortality from ARDS in the low tidal volume era. Substantial heterogeneity across studies can be explained by differences in tidal volume.
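The pooled risk ratios above come from random-effects meta-analysis. As a rough illustration of that kind of calculation (not the published analysis), the sketch below pools risk ratios with a DerSimonian-Laird estimate of between-study variance, using made-up study counts.

```python
# Minimal DerSimonian-Laird random-effects pooling of risk ratios, as a sketch
# of the kind of analysis described above. The study counts are hypothetical,
# not the trials included in the meta-analysis.
import math

def pooled_risk_ratio(studies):
    """studies: list of (events_tx, n_tx, events_ctl, n_ctl)."""
    logs, variances = [], []
    for a, n1, c, n2 in studies:
        logs.append(math.log((a / n1) / (c / n2)))
        variances.append(1/a - 1/n1 + 1/c - 1/n2)   # variance of log risk ratio
    w_fixed = [1 / v for v in variances]
    mean_fixed = sum(w * y for w, y in zip(w_fixed, logs)) / sum(w_fixed)
    q = sum(w * (y - mean_fixed) ** 2 for w, y in zip(w_fixed, logs))
    df = len(studies) - 1
    c_term = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - df) / c_term)               # between-study variance
    w_rand = [1 / (v + tau2) for v in variances]
    mean_rand = sum(w * y for w, y in zip(w_rand, logs)) / sum(w_rand)
    se = math.sqrt(1 / sum(w_rand))
    rr = math.exp(mean_rand)
    ci = (math.exp(mean_rand - 1.96 * se), math.exp(mean_rand + 1.96 * se))
    return rr, ci

print(pooled_risk_ratio([(30, 100, 40, 100), (25, 120, 35, 118), (18, 90, 22, 95)]))
```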
DOI: 10.1056/nejmoa2212663
2023
Cited 114 times
Early Restrictive or Liberal Fluid Management for Sepsis-Induced Hypotension
Among patients with sepsis-induced hypotension, the restrictive fluid strategy that was used in this trial did not result in significantly lower (or higher) mortality before discharge home by day 90 than the liberal fluid strategy. (Funded by the National Heart, Lung, and Blood Institute; CLOVERS ClinicalTrials.gov number, NCT03434028.)
DOI: 10.1164/rccm.202108-1972oc
2022
Cited 46 times
Association of Positive End-Expiratory Pressure and Lung Recruitment Selection Strategies with Mortality in Acute Respiratory Distress Syndrome: A Systematic Review and Network Meta-analysis
Rationale: The most beneficial positive end-expiratory pressure (PEEP) selection strategy in patients with acute respiratory distress syndrome (ARDS) is unknown, and current practice is variable. Objectives: To compare the relative effects of different PEEP selection strategies on mortality in adults with moderate to severe ARDS. Methods: We conducted a network meta-analysis using a Bayesian framework. Certainty of evidence was evaluated using grading of recommendations assessment, development and evaluation methodology. Measurements and Main Results: We included 18 randomized trials (4,646 participants). Compared with a lower PEEP strategy, the posterior probability of mortality benefit from a higher PEEP without lung recruitment maneuver (LRM) strategy was 99% (risk ratio [RR], 0.77; 95% credible interval [CrI], 0.60-0.96, high certainty), the posterior probability of benefit of the esophageal pressure-guided strategy was 87% (RR, 0.77; 95% CrI, 0.48-1.22, moderate certainty), the posterior probability of benefit of a higher PEEP with brief LRM strategy was 96% (RR, 0.83; 95% CrI, 0.67-1.02, moderate certainty), and the posterior probability of increased mortality from a higher PEEP with prolonged LRM strategy was 77% (RR, 1.06; 95% CrI, 0.89-1.22, low certainty). Compared with a higher PEEP without LRM strategy, the posterior probability of increased mortality from a higher PEEP with prolonged LRM strategy was 99% (RR, 1.37; 95% CrI, 1.04-1.81, moderate certainty). Conclusions: In patients with moderate to severe ARDS, higher PEEP without LRM is associated with a lower risk of death than lower PEEP. A higher PEEP with prolonged LRM strategy is associated with increased risk of death when compared with higher PEEP without LRM.
DOI: 10.1016/s2213-2600(22)00449-0
2023
Cited 42 times
Weaning from mechanical ventilation in intensive care units across 50 countries (WEAN SAFE): a multicentre, prospective, observational cohort study
Background Current management practices and outcomes in weaning from invasive mechanical ventilation are poorly understood. We aimed to describe the epidemiology, management, timings, risk for failure, and outcomes of weaning in patients requiring at least 2 days of invasive mechanical ventilation. Methods WEAN SAFE was an international, multicentre, prospective, observational cohort study done in 481 intensive care units in 50 countries. Eligible participants were older than 16 years, admitted to a participating intensive care unit, and receiving mechanical ventilation for 2 calendar days or longer. We defined weaning initiation as the first attempt to separate a patient from the ventilator, successful weaning as no reintubation or death within 7 days of extubation, and weaning eligibility criteria based on positive end-expiratory pressure, fractional concentration of oxygen in inspired air, and vasopressors. The primary outcome was the proportion of patients successfully weaned at 90 days. Key secondary outcomes included weaning duration, timing of weaning events, factors associated with weaning delay and weaning failure, and hospital outcomes. This study is registered with ClinicalTrials.gov, NCT03255109. Findings Between Oct 4, 2017, and June 25, 2018, 10 232 patients were screened for eligibility, of whom 5869 were enrolled. 4523 (77·1%) patients underwent at least one separation attempt and 3817 (65·0%) patients were successfully weaned from ventilation at day 90. 237 (4·0%) patients were transferred before any separation attempt, 153 (2·6%) were transferred after at least one separation attempt and not successfully weaned, and 1662 (28·3%) died while invasively ventilated. The median time from fulfilling weaning eligibility criteria to first separation attempt was 1 day (IQR 0–4), and 1013 (22·4%) patients had a delay in initiating first separation of 5 or more days. Of the 4523 (77·1%) patients with separation attempts, 2927 (64·7%) had a short wean (≤1 day), 457 (10·1%) had intermediate weaning (2–6 days), 433 (9·6%) required prolonged weaning (≥7 days), and 706 (15·6%) had weaning failure. Higher sedation scores were independently associated with delayed initiation of weaning. Delayed initiation of weaning and higher sedation scores were independently associated with weaning failure. 1742 (31·8%) of 5479 patients died in the intensive care unit and 2095 (38·3%) of 5465 patients died in hospital. Interpretation In critically ill patients receiving at least 2 days of invasive mechanical ventilation, only 65% were weaned at 90 days. A better understanding of factors that delay the weaning process, such as delays in weaning initiation or excessive sedation levels, might improve weaning success rates. Funding European Society of Intensive Care Medicine, European Respiratory Society.
DOI: 10.1097/01.ccm.0000215515.49001.a2
2006
Cited 237 times
Esophageal and transpulmonary pressures in acute respiratory failure*
Pressure inflating the lung during mechanical ventilation is the difference between pressure applied at the airway opening (Pao) and pleural pressure (Ppl). Depending on the chest wall's contribution to respiratory mechanics, a given positive end-expiratory and/or end-inspiratory plateau pressure may be appropriate for one patient but inadequate or potentially injurious for another. Thus, failure to account for chest wall mechanics may affect results in clinical trials of mechanical ventilation strategies in acute respiratory distress syndrome. By measuring esophageal pressure (Pes), we sought to characterize influence of the chest wall on Ppl and transpulmonary pressure (PL) in patients with acute respiratory failure. Prospective observational study. Medical and surgical intensive care units at Beth Israel Deaconess Medical Center. Seventy patients with acute respiratory failure. Placement of esophageal balloon-catheters. Airway, esophageal, and gastric pressures recorded at end-exhalation and end-inflation. Pes averaged 17.5 +/- 5.7 cm H2O at end-expiration and 21.2 +/- 7.7 cm H2O at end-inflation and were not significantly correlated with body mass index or chest wall elastance. Estimated PL was 1.5 +/- 6.3 cm H2O at end-expiration, 21.4 +/- 9.3 cm H2O at end-inflation, and 18.4 +/- 10.2 cm H2O (n = 40) during an end-inspiratory hold (plateau). Although PL at end-expiration was significantly correlated with positive end-expiratory pressure (p < .0001), only 24% of the variance in PL was explained by Pao (R² = 0.243), and 52% was due to variation in Pes. In patients in acute respiratory failure, elevated esophageal pressures suggest that chest wall mechanical properties often contribute substantially and unpredictably to total respiratory impedance, and therefore Pao may not adequately predict PL or lung distention. Systematic use of esophageal manometry has the potential to improve ventilator management in acute respiratory failure by providing more direct assessment of lung distending pressure.
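The transpulmonary pressure referred to above is estimated by subtracting esophageal pressure, used as a surrogate for pleural pressure, from airway pressure. A minimal sketch, with hypothetical values chosen near the reported end-expiratory means:

```python
# Sketch of the transpulmonary-pressure estimate described above: esophageal
# pressure (Pes) stands in for pleural pressure, so PL = Pao - Pes.
# The numbers are hypothetical, chosen near the reported end-expiratory means.
def transpulmonary_pressure(pao_cm_h2o: float, pes_cm_h2o: float) -> float:
    return pao_cm_h2o - pes_cm_h2o

# End-expiration: airway pressure 18 cm H2O with Pes 17.5 cm H2O gives
# PL of only 0.5 cm H2O, consistent with the low end-expiratory PL reported.
print(transpulmonary_pressure(18.0, 17.5))   # 0.5
```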
DOI: 10.1016/j.annemergmed.2006.07.007
2006
Cited 178 times
The Association of Sepsis Syndrome and Organ Dysfunction With Mortality in Emergency Department Patients With Suspected Infection
The critical care community has used standard criteria for defining the sepsis syndromes and organ dysfunction for more than 15 years; however, these criteria are not well validated in the emergency department (ED) setting. The study objectives in our ED population of patients admitted to the hospital are to determine the prevalence of the sepsis syndromes, quantify inhospital mortality and 1-year survival associated with the sepsis syndromes, and assess the inhospital and 1-year survival associated with organ dysfunctions. This was a prospective, observational, cohort study from February 1, 2000, to February 1, 2001 in an urban university hospital ED with 50,000 annual visits. There were 3,102 (96% of eligible) consecutive adult patients (aged 18 years or older) with suspected infection (as indicated by the clinical decision to obtain a blood culture) who were enrolled. Patients were screened for systemic inflammatory response syndrome (SIRS) (2 or more indicators of inflammatory response), sepsis (SIRS plus suspected infection), severe sepsis (sepsis plus organ dysfunction), septic shock (sepsis plus hypotension refractory to an initial fluid challenge), and number of organs with acute dysfunction. The main outcome measures were inhospital and 1-year mortality. Overall inhospital mortality was 4.1% and 1-year mortality was 22%. The inhospital mortality rates were suspected infection without SIRS 2.1%, sepsis 1.3%, severe sepsis 9.2%, and septic shock 28%. Compared to suspected infection without SIRS, adjusted risks of inhospital mortality were severe sepsis (odds ratio [OR] 4.0; 95% confidence interval [CI] 2.6 to 6.3) and septic shock (OR 13.8; 95% CI 6.6 to 29). Severe sepsis (OR 2.2; 95% CI 1.8 to 2.6) and septic shock (OR 3.5; 95% CI 2.3 to 5.3) also predicted 1-year mortality. The presence of SIRS criteria alone had no prognostic value for either endpoint. Each additional organ dysfunction increased the adjusted 1-year mortality hazard by 82% (hazard ratio 1.82, 95% CI 1.7 to 2.0). Immediate identification of acute organ dysfunction in ED patients with suspected infection may help select patients at increased short- and long-term mortality risk. SIRS criteria offered no additional prognostic value, whereas each additional organ dysfunction increased the 1-year mortality risk.
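The staging hierarchy screened for above (suspected infection, SIRS, severe sepsis, septic shock) lends itself to a simple decision cascade. A hedged sketch only; the inputs are simplified assumptions, not the study's screening instrument.

# Hypothetical sketch of the sepsis-syndrome hierarchy used in the study above
# (SIRS criteria count, suspected infection, organ dysfunction, refractory hypotension).
def sepsis_category(sirs_criteria_met, suspected_infection,
                    organ_dysfunction, refractory_hypotension):
    if not suspected_infection:
        return "no suspected infection"
    if sirs_criteria_met < 2:
        return "suspected infection without SIRS"
    if refractory_hypotension:
        return "septic shock"
    if organ_dysfunction:
        return "severe sepsis"
    return "sepsis"

print(sepsis_category(3, True, organ_dysfunction=True, refractory_hypotension=False))
# -> severe sepsis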
DOI: 10.1097/01.ccm.0000173411.06574.d5
2005
Cited 169 times
Augmentation of hospital critical care capacity after bioterrorist attacks or epidemics: Recommendations of the Working Group on Emergency Mass Critical Care‡
The Working Group on Emergency Mass Critical Care was convened by the Center for Biosecurity of the University of Pittsburgh Medical Center and the Society of Critical Care Medicine to provide recommendations to hospital and clinical leaders regarding the delivery of critical care services in the wake of a bioterrorist attack resulting in hundreds or thousands of critically ill patients. In these conditions, traditional hospital and clinical care standards in general, and critical care standards in particular, likely could no longer be maintained, and clinical guidelines for U.S. hospitals facing these situations have not been developed. The Working Group offers recommendations for this situation.
DOI: 10.1097/ta.0b013e3182858a3e
2013
Cited 168 times
Lactate clearance as a predictor of mortality in trauma patients
Initial serum lactate has been associated with mortality in trauma patients. It is not known if lactate clearance is predictive of death in a broad cohort of trauma patients. We enrolled 4,742 trauma patients who had an initial lactate measured during a 10-year period. Patients were identified via the trauma registry. Lactate clearance was calculated at 6 hours. Multivariable logistic regression was used to identify the independent contribution of both initial lactate and lactate clearance to mortality, after adjustment for severity of injury. Initial lactate level was strongly correlated with mortality: when lactate was less than 2.5 mmol/L, 5.4% (95% confidence interval [CI], 4.5-6.2%) of patients died; with lactate 2.5 mmol/L to 4.0 mmol/L, mortality was 6.4% (95% CI, 5.1-7.8%); with lactate 4.0 mmol/L or greater, mortality was 18.8% (95% CI, 15.7-21.9%). After adjustment for age, Injury Severity Score (ISS), Glasgow Coma Scale (GCS) score, heart rate, and blood pressure, initial lactate remained independently associated with increased mortality, with adjusted odds ratios of 1.0, 1.5 (95% CI, 1.1-2.0) and 3.8 (95% CI, 2.8-5.3), for lactate less than 2.5 mmol/L, 2.5 mmol/L to 4.0 mmol/L, and 4.0 mmol/L or greater, respectively. Among patients with an initially elevated lactate (≥4.0 mmol/L), lower lactate clearance at 6 hours strongly and independently predicted an increased risk of death. For lactate clearances of 60% or greater, 30% to 59%, and less than 30%, the adjusted odds ratios for death were 1.0, 3.5 (95% CI 1.2-10.4), and 4.3 (95% CI, 1.5-12.6), respectively. Both initial lactate and lactate clearance at 6 hours independently predict death in trauma patients. Prognostic study, level III.
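Lactate clearance at 6 hours is conventionally expressed as the percentage fall from the initial value. A minimal sketch of that calculation with hypothetical values; this is not the study's registry code.

# Minimal sketch of a 6-hour lactate clearance calculation as commonly defined:
# (initial - later) / initial * 100%. Example values are hypothetical.
def lactate_clearance_pct(initial_mmol_l, six_hour_mmol_l):
    return (initial_mmol_l - six_hour_mmol_l) / initial_mmol_l * 100.0

# A patient whose lactate falls from 6.0 to 3.0 mmol/L has 50% clearance; per the
# study above, clearance below 30% in patients starting at >=4.0 mmol/L carried
# the highest adjusted odds of death.
print(lactate_clearance_pct(6.0, 3.0))  # 50.0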
DOI: 10.1378/chest.07-1121
2008
Cited 161 times
Risk Factors for ARDS in Patients Receiving Mechanical Ventilation for > 48 h
Background Low tidal volume (Vt) ventilation for ARDS is a well-accepted concept. However, controversy persists regarding the optimal ventilator settings for patients without ARDS receiving mechanical ventilation. This study tested the hypothesis that ventilator settings influence the development of new ARDS. Methods Retrospective analysis of patients from the Multi Parameter Intelligent Monitoring of Intensive Care-II project database who received mechanical ventilation for ≥ 48 h between 2001 and 2005. Results A total of 2,583 patients required > 48 h of ventilation. Of 789 patients who did not have ARDS at hospital admission, ARDS developed in 152 patients (19%). Univariate analysis revealed high peak inspiratory pressure (odds ratio [OR], 1.53 per SD; 95% confidence interval [CI], 1.28 to 1.84), increasing positive end-expiratory pressure (OR, 1.35 per SD; 95% CI, 1.15 to 1.58), and Vt (OR, 1.36 per SD; 95% CI, 1.12 to 1.64) to be significant risk factors. Major nonventilator risk factors for ARDS included sepsis, low pH, elevated lactate, low albumin, transfusion of packed RBCs, transfusion of plasma, high net fluid balance, and low respiratory compliance. Multivariable logistic regression showed that peak pressure (OR, 1.31 per SD; 95% CI, 1.08 to 1.59), high net fluid balance (OR, 1.3 per SD; 95% CI, 1.09 to 1.56), transfusion of plasma (OR, 1.26 per SD; 95% CI, 1.07 to 1.49), sepsis (OR, 1.57; 95% CI, 1.00 to 2.45), and Vt (OR, 1.29 per SD; 95% CI, 1.02 to 1.52) were significantly associated with the development of ARDS. Conclusions The associations between the development of ARDS and clinical interventions, including high airway pressures, high Vt, positive fluid balance, and transfusion of blood products, suggest that ARDS may be a preventable complication in some cases.
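The odds ratios in the abstract above are reported per 1-SD increment of each predictor. A generic sketch (hypothetical numbers, not the study's model) of how a per-SD odds ratio follows from a logistic-regression coefficient estimated on the predictor's raw scale.

# Generic illustration of converting a logistic coefficient to an odds ratio per
# 1-SD increment, the form of reporting used in the abstract above. Numbers are
# hypothetical, not taken from the study.
import math

def odds_ratio_per_sd(beta_per_unit, predictor_sd):
    """exp(beta * SD): multiplicative change in odds for a one-SD increase."""
    return math.exp(beta_per_unit * predictor_sd)

# E.g. a coefficient of 0.03 per cm H2O of peak pressure with SD = 9 cm H2O
# (assumed values) corresponds to an OR of about 1.31 per SD.
print(round(odds_ratio_per_sd(0.03, 9), 2))  # 1.31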
DOI: 10.1097/ccm.0b013e3182281f1b
2011
Cited 146 times
Outcome of critically ill patients with acute kidney injury using the Acute Kidney Injury Network criteria*
Acute kidney injury affects 5% to 7% of all hospitalized patients with a much higher incidence in the critically ill. The Acute Kidney Injury Network proposed a definition in which serum creatinine rises (>0.3 mg/dL) and/or oliguria (<0.5 mL/kg/hr) for a period of 6 hrs are used to detect acute kidney injury. Accurate urine output measurements as well as serum creatinine values from our database were used to detect patients with acute kidney injury and calculate their corresponding mortality risk and length of stay. Retrospective cohort study. Seven intensive care units at a large, academic, tertiary medical center. Adult patients without evidence of end-stage renal disease with more than two creatinine measurements and at least a 6-hr urine output recording who were admitted to the intensive care unit between 2001 and 2007. Medical records of all the patients were reviewed. Demographic information, laboratory results, charted data, discharge diagnoses, physiological data, and patient outcomes were extracted from the Multiparameter Intelligent Monitoring in Intensive Care II database using a SQL query. From 19,677 adult patient records, 14,524 patients met the inclusion criteria. Fifty-seven percent developed acute kidney injury during their intensive care unit stay. Inhospital mortality rates were: 13.9%, 16.4%, 33.8% for acute kidney injury 1, 2, and 3, respectively, compared with only 6.2% in patients without acute kidney injury (p < .0001). After adjusting for multiple covariates, acute kidney injury was associated with increased hospital mortality (odds ratio 1.4 and 1.3 for acute kidney injury 1 and acute kidney injury 2 and 2.5 for acute kidney injury 3; p < .0001). Using multivariate logistic regression, we found that in patients who developed acute kidney injury, urine output alone was a better mortality predictor than creatinine alone or the combination of both. More than 50% of our critically ill patients developed some stage of acute kidney injury resulting in a stagewise increased mortality risk. However, the mortality risk associated with acute kidney injury stages 1 and 2 does not differ significantly. In light of these findings, re-evaluation of the Acute Kidney Injury Network staging criteria should be considered.
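The screening triggers quoted above (creatinine rise >0.3 mg/dL and/or urine output <0.5 mL/kg/hr over 6 hours) can be expressed as a simple check. A hedged sketch only; staging, baseline selection, and the database extraction are deliberately omitted, and this is not the study's query.

# Hypothetical sketch of the Acute Kidney Injury Network screening triggers quoted
# above. Not the study's extraction code; staging is omitted.
def meets_akin_trigger(creatinine_rise_mg_dl, urine_output_ml_6h, weight_kg):
    creatinine_criterion = creatinine_rise_mg_dl > 0.3
    urine_rate_ml_kg_hr = urine_output_ml_6h / (weight_kg * 6.0)
    oliguria_criterion = urine_rate_ml_kg_hr < 0.5
    return creatinine_criterion or oliguria_criterion

# A 70-kg patient with a 0.4 mg/dL creatinine rise, or 150 mL of urine over 6 h
# (about 0.36 mL/kg/hr), would screen positive.
print(meets_akin_trigger(0.4, 500, 70))   # True (creatinine criterion)
print(meets_akin_trigger(0.1, 150, 70))   # True (oliguria criterion)
print(meets_akin_trigger(0.1, 500, 70))   # False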
DOI: 10.1161/jaha.114.001462
2015
Cited 138 times
Effect of Cardiogenic Shock Hospital Volume on Mortality in Patients With Cardiogenic Shock
Cardiogenic shock (CS) is associated with significant morbidity, and mortality rates approach 40% to 60%. Treatment for CS requires an aggressive, sophisticated, complex, goal-oriented, therapeutic regimen focused on early revascularization and adjunctive supportive therapies, suggesting that hospitals with greater CS volume may provide better care. The association between CS hospital volume and inpatient mortality for CS is unclear.We used the Nationwide Inpatient Sample to examine 533 179 weighted patient discharges from 2675 hospitals with CS from 2004 to 2011 and divided them into quartiles of mean annual hospital CS case volume. The primary outcome was in-hospital mortality. Multivariate adjustments were performed to account for severity of illness, relevant comorbidities, hospital characteristics, and differences in treatment. Compared with the highest volume quartile, the adjusted odds ratio for inpatient mortality for persons admitted to hospitals in the lowest-volume quartile (≤27 weighted cases per year) was 1.27 (95% CI 1.15 to 1.40), whereas for admission to hospitals in the low-volume and medium-volume quartiles, the odds ratios were 1.20 (95% CI 1.08 to 1.32) and 1.12 (95% CI 1.01 to 1.24), respectively. Similarly, improved survival was observed across quartiles, with an adjusted inpatient mortality incidence of 41.97% (95% CI 40.87 to 43.08) for hospitals with the lowest volume of CS cases and a drop to 37.01% (95% CI 35.11 to 38.96) for hospitals with the highest volume of CS cases. Analysis of treatments offered between hospital quartiles revealed that the centers with volumes in the highest quartile demonstrated significantly higher numbers of patients undergoing coronary artery bypass grafting, percutaneous coronary intervention, or intra-aortic balloon pump counterpulsation. A similar relationship was demonstrated with the use of mechanical circulatory support (ventricular assist devices and extracorporeal membrane oxygenation), for which there was significantly higher use in the higher volume quartiles.We demonstrated an association between lower CS case volume and higher mortality. There is more frequent use of both standard supportive and revascularization techniques at the higher volume centers. Future directions may include examining whether early stabilization and transfer improve outcomes of patients with CS who are admitted to lower volume centers.
DOI: 10.1097/ccm.0b013e3182037a8e
2011
Cited 136 times
Proof of principle: The predisposition, infection, response, organ failure sepsis staging system*
Objective: In an effort to improve upon the traditional sepsis syndrome definitions, the predisposition, infection, response, organ dysfunction (PIRO) model was proposed to better characterize sepsis. The objective of this investigation was to derive and validate a sepsis staging system based on the PIRO concept that risk stratifies patients with suspected infection. Design: Three independent, observational, prospective cohorts were studied. A derivation cohort (n = 2,132) was used to create the PIRO score, identifying independent predictors of mortality. Individual values were assigned to create the weighted integer score for each parameter, yielding the final PIRO score. The prognostic performance was then investigated in independent internal (n = 4,618) and external (n = 1,004) validation cohorts. Setting: Two large U.S. tertiary care centers. Patients: Patients admitted to the hospital from the emergency department with suspected infection. Interventions: None. Measurements and Main Results: The PIRO staging system was created by combining components of predisposition (age, chronic obstructive pulmonary disease, liver disease, nursing home residency, and malignancy with and without metastasis), infection (pneumonia and cellulitis), response (tachypnea, bandemia, and tachycardia), and organ dysfunction (renal, respiratory, cardiac, metabolic, and hematologic). The derived PIRO score showed stepwise increase in mortality with increasing points and high discriminatory ability with an area under the curve of 0.90 in the derivation cohort, 0.86 in internal validation, and 0.83 in external validation. Conclusions: This study provides evidence-based support for the PIRO approach to sepsis staging. Future efforts may utilize this approach with additional parameters (e.g., genetics and novel biochemical markers) to develop further the PIRO stratification system.
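The abstract above describes assigning weighted integer points to each parameter and summing them into a staging score. A sketch in that spirit; the parameter names and point values below are placeholders for illustration, not the published PIRO weights.

# Sketch of a weighted-integer staging score in the spirit of the PIRO system
# described above. EXAMPLE_POINTS is hypothetical, not the derived PIRO weighting.
EXAMPLE_POINTS = {
    "age_over_65": 1,
    "metastatic_malignancy": 2,
    "pneumonia": 1,
    "tachypnea": 1,
    "renal_dysfunction": 2,
}

def piro_style_score(present_parameters):
    """Sum the assigned points for every parameter present in a patient."""
    return sum(EXAMPLE_POINTS.get(name, 0) for name in present_parameters)

print(piro_style_score(["age_over_65", "pneumonia", "renal_dysfunction"]))  # 4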
DOI: 10.1007/s00134-016-4423-3
2016
Cited 136 times
Quantifying unintended exposure to high tidal volumes from breath stacking dyssynchrony in ARDS: the BREATHE criteria
Breath stacking dyssynchrony generates higher tidal volumes than intended, potentially increasing lung injury risk in acute respiratory distress syndrome (ARDS). Lack of validated criteria to quantify breath stacking dyssynchrony contributes to its under-recognition. This study evaluates performance of novel, objective criteria for quantifying breath stacking dyssynchrony (BREATHE criteria) compared to existing definitions and tests if neuromuscular blockade eliminates high-volume breath stacking dyssynchrony in ARDS.Airway flow and pressure were recorded continuously for up to 72 h in 33 patients with ARDS receiving volume-preset assist-control ventilation. The flow-time waveform was integrated to calculate tidal volume breath-by-breath. The BREATHE criteria considered five domains in evaluating for breath stacking dyssynchrony: ventilator cycling, interval expiratory volume, cumulative inspiratory volume, expiratory time, and inspiratory time.The observed tidal volume of BREATHE stacked breaths was 11.3 (9.7-13.3) mL/kg predicted body weight, significantly higher than the preset volume [6.3 (6.0-6.8) mL/kg; p < 0.001]. BREATHE identified more high-volume breaths (≥2 mL/kg above intended volume) than the other existing objective criteria for breath stacking [27 (7-59) vs 19 (5-46) breaths/h; p < 0.001]. Agreement between BREATHE and visual waveform inspection was high (raw agreement 96.4-98.1 %; phi 0.80-0.92). Breath stacking dyssynchrony was near-completely eliminated during neuromuscular blockade [0 (0-1) breaths/h; p < 0.001].The BREATHE criteria provide an objective definition of breath stacking dyssynchrony emphasizing occult exposure to high tidal volumes. BREATHE identified high-volume breaths missed by other methods for quantifying this dyssynchrony. Neuromuscular blockade prevented breath stacking dyssynchrony, assuring provision of the intended lung-protective strategy.
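The study above derives tidal volume breath by breath by integrating the flow-time waveform. A generic sketch of that integration step using the trapezoidal rule; the sample data are synthetic and this is not the BREATHE implementation.

# Generic flow-to-volume integration of the kind described above (trapezoidal rule).
# Synthetic data; not the BREATHE criteria code.
def inspired_volume_ml(flow_l_per_s, dt_s):
    """Integrate inspiratory flow (L/s) sampled every dt_s seconds; return mL."""
    volume_l = 0.0
    for f0, f1 in zip(flow_l_per_s[:-1], flow_l_per_s[1:]):
        volume_l += 0.5 * (f0 + f1) * dt_s
    return volume_l * 1000.0

# A constant inspiratory flow of 0.5 L/s sustained for 1 s delivers ~500 mL.
flow = [0.5] * 101          # 101 samples at 10-ms spacing = 1 s of inspiration
print(round(inspired_volume_ml(flow, 0.01)))  # 500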
DOI: 10.1007/s00134-017-4750-z
2017
Cited 136 times
Current challenges in the management of sepsis in ICUs in resource-poor settings and suggestions for the future
Sepsis is a major reason for intensive care unit (ICU) admission, including in resource-poor settings. ICUs in low- and middle-income countries (LMICs) face many challenges that could affect patient outcome. Our aim is to describe differences between resource-poor and resource-rich settings regarding the epidemiology, pathophysiology, economics, and research aspects of sepsis. We restricted this manuscript to the ICU setting even though many sepsis patients in LMICs are treated outside an ICU. Although many bacterial pathogens causing sepsis in LMICs are similar to those in high-income countries, resistance patterns to antimicrobial drugs can be very different; in addition, causes of sepsis in LMICs often include tropical diseases in which direct damaging effects of pathogens and their products can sometimes be more important than the response of the host. There are substantial and persisting differences in ICU capacities around the world; not surprisingly, the lowest capacities are found in LMICs, but with important heterogeneity within individual LMICs. Although many aspects of sepsis management developed in rich countries are applicable in LMICs, implementation requires strong consideration of cost implications and the important differences in resources. Addressing both disease-specific and setting-specific factors is important to improve performance of ICUs in LMICs. Although critical care for severe sepsis is likely cost-effective in the LMIC setting, more detailed evaluation at both the macro- and micro-economic levels is necessary. Sepsis management in resource-limited settings is a largely unexplored frontier with important opportunities for research, training, and other initiatives for improvement.
DOI: 10.1097/shk.0b013e31815dd92f
2008
Cited 125 times
CIRCULATING ANGIOPOIETIN 2 CORRELATES WITH MORTALITY IN A SURGICAL POPULATION WITH ACUTE LUNG INJURY/ADULT RESPIRATORY DISTRESS SYNDROME
There are few blood biomarkers predictive of mortality in adult respiratory distress syndrome (ARDS), and none that currently serve as therapeutic targets. Here, we ask whether a circulating protein angiopoietin 2 (Ang2) correlates with severity of lung injury and mortality in a surgical intensive care unit cohort with acute lung injury (ALI)/ARDS. Tie 2 is a tyrosine kinase receptor expressed on endothelial cells. One ligand, angiopoietin 1, phosphorylates Tie 2 and stabilizes adult vasculature. An alternate ligand, Ang2, serves as a context-dependent antagonist and disrupts barrier function. Previously, our laboratory detected high circulating Ang2 levels in septic patients and a correlation with low Pao2/Fio2. In this study, daily plasma was collected in 63 surgical intensive care unit patients. Eighteen patients met clinical criteria for ALI or ARDS. The median Ang2 at admission in patients who never developed ALI/ARDS was 3.7 ng/mL (interquartile range [IQR], 5.6; n = 45). The Ang2 on the day a patient met criteria for ALI/ARDS was 5.3 ng/mL (IQR, 6.7) for survivors (n = 11) and 19.8 ng/mL (IQR, 19.2) for nonsurvivors (n = 7; P = 0.004). To explore the mechanism of high Ang 2 leading to increased permeability, plasma from patients with ALI was applied to cultured lung endothelial cells and found to disrupt normal junctional architecture. This effect can be rescued with the Tie 2 agonist angiopoietin 1. A patient's convalescent (low Ang2) plasma did not disrupt junctional architecture. Although further studies with larger sample sizes will be needed to confirm these results, high Ang2 in critically ill patients with ALI/ARDS is associated with a poor outcome. These data, coupled with our cell culture experiments, suggest that antagonism of Ang2 may provide a future novel therapeutic target for ARDS.
DOI: 10.1378/chest.10-0891
2011
Cited 124 times
Prehospitalization Antiplatelet Therapy Is Associated With a Reduced Incidence of Acute Lung Injury
Acute lung injury (ALI) is a potentially fatal lung disease with few treatment options. Platelet activation is a key component of ALI pathophysiology and may provide an opportunity for prevention strategies. We examined the association of prehospitalization antiplatelet therapy with development of ALI in critically ill patients.All Olmsted County, Minnesota, residents with a medical ICU admission in the year 2006 were evaluated. Patients with at least one major risk factor for ALI who did not meet criteria for ALI at the time of hospital admission were included in the analysis. Baseline characteristics, major risk factors for ALI, the presence of antiplatelet therapy at the time of hospitalization, and the propensity to receive this therapy were determined. The primary outcome was ALI or ARDS during the hospitalization. Secondary outcomes were ICU and hospital-free days and ICU and hospital mortality.A total of 161 patients were evaluated. Seventy-nine (49%) were receiving antiplatelet therapy at hospital admission; 33 (21%) developed ALI/ARDS. Antiplatelet therapy was associated with a reduced incidence of ALI/ARDS (12.7% vs 28.0%; OR, 0.37; 95% CI, 0.16-0.84; P = .02). This association remained significant after adjusting for confounding variables.Prehospitalization antiplatelet therapy was associated with a reduced incidence of ALI/ARDS. If confirmed in a more diverse patient population, these results would support the use of antiplatelet agents in an ALI prevention trial.
DOI: 10.1152/japplphysiol.00835.2009
2010
Cited 121 times
Esophageal pressures in acute lung injury: do they represent artifact or useful information about transpulmonary pressure, chest wall mechanics, and lung stress?
Acute lung injury can be worsened by inappropriate mechanical ventilation, and numerous experimental studies suggest that ventilator-induced lung injury is increased by excessive lung inflation at end inspiration or inadequate lung inflation at end expiration. Lung inflation depends not only on airway pressures from the ventilator but also on pleural pressure within the chest wall. Although esophageal pressure (Pes) measurements are often used to estimate pleural pressures in healthy subjects and patients, they are widely mistrusted and rarely used in critical illness. To assess the credibility of Pes as an estimate of pleural pressure in critically ill patients, we compared Pes measurements in 48 patients with acute lung injury with simultaneously measured gastric and bladder pressures (Pga and Pblad). End-expiratory Pes, Pga, and Pblad were high and varied widely among patients, averaging 18.6 ± 4.7, 18.4 ± 5.6, and 19.3 ± 7.8 cm H2O, respectively (mean ± SD). End-expiratory Pes was correlated with Pga (P = 0.0004) and Pblad (P = 0.0104) and unrelated to chest wall compliance. Pes-Pga differences were consistent with expected gravitational pressure gradients and transdiaphragmatic pressures. Transpulmonary pressure (airway pressure − Pes) was −2.8 ± 4.9 cm H2O at end exhalation and 8.3 ± 6.2 cm H2O at end inflation, values consistent with effects of mediastinal weight, gravitational gradients in pleural pressure, and airway closure at end exhalation. Lung parenchymal stress measured directly as end-inspiratory transpulmonary pressure was much less than stress inferred from the plateau airway pressures and lung and chest wall compliances. We suggest that Pes can be used to estimate transpulmonary pressures that are consistent with known physiology and can provide meaningful information, otherwise unavailable, in critically ill patients.
DOI: 10.1016/j.annemergmed.2018.03.039
2018
Cited 121 times
Liberal Versus Restrictive Intravenous Fluid Therapy for Early Septic Shock: Rationale for a Randomized Trial
Prompt intravenous fluid therapy is a fundamental treatment for patients with septic shock. However, the optimal approach for administering intravenous fluid in septic shock resuscitation is unknown. Two competing strategies are emerging: a liberal fluids approach, consisting of a larger volume of initial fluid (50 to 75 mL/kg [4 to 6 L in an 80-kg adult] during the first 6 hours) and later use of vasopressors, versus a restrictive fluids approach, consisting of a smaller volume of initial fluid (≤30 mL/kg [≤2 to 3 L]), with earlier reliance on vasopressor infusions to maintain blood pressure and perfusion. Early fluid therapy may enhance or maintain tissue perfusion by increasing venous return and cardiac output. However, fluid administration may also have deleterious effects by causing edema within vital organs, leading to organ dysfunction and impairment of oxygen delivery. Conversely, a restrictive fluids approach primarily relies on vasopressors to reverse hypotension and maintain perfusion while limiting the administration of fluid. Both strategies have some evidence to support their use but lack robust data to confirm the benefit of one strategy over the other, creating clinical and scientific equipoise. As part of the National Heart, Lung, and Blood Institute Prevention and Early Treatment of Acute Lung Injury Network, we designed a randomized clinical trial to compare the liberal and restrictive fluids strategies, the Crystalloid Liberal or Vasopressor Early Resuscitation in Sepsis trial. The purpose of this article is to review the current literature on approaches to early fluid resuscitation in adults with septic shock and outline the rationale for the upcoming trial.
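The per-kilogram volumes quoted above translate directly into the litre ranges given in the abstract. A small arithmetic illustration; the 80-kg weight matches the abstract's example, and this is not trial protocol code.

# Arithmetic illustration of the fluid volumes quoted above
# (liberal: 50-75 mL/kg over the first 6 hours; restrictive: <=30 mL/kg).
def initial_fluid_range_l(weight_kg, low_ml_per_kg, high_ml_per_kg):
    return (weight_kg * low_ml_per_kg / 1000.0, weight_kg * high_ml_per_kg / 1000.0)

print(initial_fluid_range_l(80, 50, 75))  # (4.0, 6.0)  liberal strategy
print(initial_fluid_range_l(80, 0, 30))   # (0.0, 2.4)  restrictive ceiling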
DOI: 10.1007/s00134-014-3318-4
2014
Cited 117 times
Association between tidal volume size, duration of ventilation, and sedation needs in patients without acute respiratory distress syndrome: an individual patient data meta-analysis
Mechanical ventilation with lower tidal volumes (≤6 ml/kg of predicted body weight, PBW) could benefit patients without acute respiratory distress syndrome (ARDS). However, tidal volume reduction could be associated with increased patient discomfort and sedation needs, and consequent longer duration of ventilation. The aim of this individual patient data meta-analysis was to assess the associations between tidal volume size, duration of mechanical ventilation, and sedation needs in patients without ARDS.Studies comparing ventilation with different tidal volume sizes in patients without ARDS were screened for inclusion. Corresponding authors were asked to provide individual participant data. Patients were assigned to three groups based on tidal volume size (≤6 ml/kg PBW, 6-10 ml/kg PBW, or ≥10 ml/kg PBW). Ventilator-free days, alive at day 28, and dose and duration of sedation (propofol and midazolam), analgesia (fentanyl and morphine), and neuromuscular blockade (NMB) were compared.Seven investigations (2,184 patients) were included in the analysis. The number of patients breathing without assistance by day 28 was higher in the group ventilated with tidal volume ≤6 ml/kg PBW compared to those ventilated with tidal volume ≥10 ml/kg PBW (93.1 vs. 88.6%; p = 0.027, respectively). Only two investigations (187 patients) could be included in the meta-analysis of sedation needs. There were neither differences in the percentage of study days that patients received sedatives, opioids, or NMBA nor in the total dose of benzodiazepines, propofol, opioids, and NMBA.This meta-analysis suggests that use of lower tidal volumes in patients without ARDS at the onset of mechanical ventilation could be associated with shorter duration of ventilation. Use of lower tidal volumes seems not to affect sedation or analgesia needs, but this must be confirmed in a robust, well-powered randomized controlled trial.
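The grouping above is based on tidal volume per kilogram of predicted body weight (PBW). A sketch of assigning those groups using the widely used ARDSNet PBW equations; the pooled trials may have applied their own variants, so treat this only as an illustration.

# Sketch of assigning the tidal-volume groups used above (<=6, 6-10, >=10 mL/kg PBW).
# PBW is computed with the standard ARDSNet equations; individual trials may differ.
def predicted_body_weight_kg(height_cm, male):
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def tidal_volume_group(vt_ml, height_cm, male):
    vt_per_kg = vt_ml / predicted_body_weight_kg(height_cm, male)
    if vt_per_kg <= 6:
        return "<=6 mL/kg PBW"
    if vt_per_kg < 10:
        return "6-10 mL/kg PBW"
    return ">=10 mL/kg PBW"

# A 175-cm man has a PBW of about 70.6 kg, so a 420-mL breath is ~6 mL/kg PBW.
print(tidal_volume_group(420, 175, male=True))  # <=6 mL/kg PBW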
DOI: 10.3109/10641955.2015.1090581
2015
Cited 116 times
Racial Disparities in Comorbidities, Complications, and Maternal and Fetal Outcomes in Women With Preeclampsia/eclampsia
Objective: The mechanisms leading to worse outcomes in African-American (AA) women with preeclampsia/eclampsia remain unclear. Our objective was to identify racial differences in maternal comorbidities, peripartum characteristics, and maternal and fetal outcomes. Methods/Results: When compared to white women with preeclampsia/eclampsia, AA women had an increased unadjusted risk of inpatient maternal mortality (OR 3.70, 95% CI: 2.19–6.24). After adjustment for covariates, in-hospital mortality for AA women remained higher than that for white women (OR 2.85, 95% CI: 1.38–5.53), while the adjusted risk of death among Hispanic women did not differ from that for white women. We also found an increased risk of intrauterine fetal death (IUFD) among AA women. When compared to white women with preeclampsia, AA women had an increased unadjusted odds of IUFD (OR 2.78, 95% CI: 2.49–3.11), which remained significant after adjustment for covariates (adjusted OR 2.45, 95% CI: 2.14–2.82). In contrast, IUFD among Hispanic women did not differ from that for white women after adjusting for covariates. Conclusions and Relevance: Our data suggest that African-American women are more likely to have risk factors for preeclampsia and more likely to suffer an adverse outcome during peripartum care. Future research should examine whether controlling co-morbidities and other risk factors will help to alleviate racial disparities in outcomes in this cohort of women.
DOI: 10.1097/aln.0b013e318215e254
2011
Cited 113 times
Influence of Low Tidal Volume Ventilation on Time to Extubation in Cardiac Surgical Patients
Background Low tidal volumes have been associated with improved outcomes in patients with established acute lung injury. The role of low tidal volume ventilation in patients without lung injury is still unresolved. We hypothesized that such a strategy in patients undergoing elective surgery would reduce ventilator-associated lung injury and that this improvement would lead to a shortened time to extubation. Methods A single-center randomized controlled trial was undertaken in 149 patients undergoing elective cardiac surgery. Ventilation with 6 versus 10 ml/kg tidal volume was compared. Ventilator settings were applied immediately after anesthesia induction and continued throughout surgery and the subsequent intensive care unit stay. The primary endpoint of the study was time to extubation. Secondary endpoints included the proportion of patients extubated at 6 h and indices of lung mechanics and gas exchange as well as patient clinical outcomes. Results Median ventilation time was not significantly different in the low tidal volume group; a median (interquartile range) of 450 (264-1,044) min was achieved compared with 643 (417-1,032) min in the control group (P = 0.10). However, a higher proportion of patients in the low tidal volume group was free of any ventilation at 6 h: 37.3% compared with 20.3% in the control group (P = 0.02). In addition, fewer patients in the low tidal volume group required reintubation (1.3 vs. 9.5%; P = 0.03). Conclusions Although reduction of tidal volume in mechanically ventilated patients undergoing elective cardiac surgery did not significantly shorten time to extubation, several improvements were observed in secondary outcomes. When these data are combined with a lack of observed complications, a strategy of reduced tidal volume could still be beneficial in this patient population.
DOI: 10.1161/circimaging.112.973818
2012
Cited 102 times
Subclinical Left Ventricular Dysfunction in Preeclamptic Women With Preserved Left Ventricular Ejection Fraction
Background— Patients with preeclampsia are at risk for cardiovascular disease. Changes in cardiac function are subtle in preeclampsia and are difficult to quantify with conventional imaging. Strain measurements using speckle-tracking echocardiography have been used to sensitively quantify abnormalities in other disease settings. Methods and Results— We evaluated the feasibility and sensitivity of strain imaging using speckle-tracking echocardiography in women with preeclampsia. Forty-seven women were enrolled in this pilot study and 39 were analyzed: 11 with preeclampsia, 17 without a hypertensive disorder, and 11 with nonproteinuric hypertension. Echocardiographic ejection fraction and global peak longitudinal, radial, and circumferential strain were measured. Longitudinal strain was significantly worsened in women with preeclampsia compared with women without a hypertensive disorder ( P =0.0001). Similar results were observed for radial strain ( P =0.006) and circumferential strain ( P =0.03). Women with preeclampsia also had significantly worsened longitudinal ( P =0.04), radial ( P =0.01), and circumferential ( P =0.002) strain compared with women with nonproteinuric hypertension. Women with preeclampsia did not have a significantly different ejection fraction compared with women without a hypertensive disorder ( P =0.16) and women with nonproteinuric hypertension ( P =0.44). Conclusions— Myocardial strain imaging using speckle tracking is more sensitive than left ventricular ejection fraction to detect differences in left ventricular systolic function in women with and without preeclampsia.
DOI: 10.1007/s00134-016-4403-7
2016
Cited 97 times
Mortality and pulmonary mechanics in relation to respiratory system and transpulmonary driving pressures in ARDS
DOI: 10.1513/annalsats.201704-338ot
2017
Cited 95 times
Higher PEEP versus Lower PEEP Strategies for Patients with Acute Respiratory Distress Syndrome. A Systematic Review and Meta-Analysis
Rationale: Higher positive end-expiratory pressure (PEEP) levels may reduce atelectrauma, but increase over-distention lung injury. Whether higher PEEP improves clinical outcomes among patients with acute respiratory distress syndrome (ARDS) is unclear. Objectives: To compare clinical outcomes of mechanical ventilation strategies using higher PEEP levels versus lower PEEP strategies in patients with ARDS. Methods: We performed a systematic review and meta-analysis of clinical trials investigating mechanical ventilation strategies using higher versus lower PEEP levels. We used random effects models to evaluate the effect of higher PEEP on 28-day mortality, organ failure, ventilator-free days, barotrauma, oxygenation, and ventilation. Results: We identified eight randomized trials comparing higher versus lower PEEP strategies, enrolling 2,728 patients with ARDS. Patients were 55 (±16) (mean ± SD) years old and 61% were men. Mean PEEP in the higher PEEP groups was 15.1 (±3.6) cm H2O as compared with 9.1 (±2.7) cm H2O in the lower PEEP groups. Primary analysis excluding two trials that did not use lower Vt ventilation in the lower PEEP control groups did not demonstrate significantly reduced mortality for patients receiving higher PEEP as compared with a lower PEEP (six trials; 2,580 patients; relative risk, 0.91; 95% confidence interval [CI] = 0.80–1.03). A higher PEEP strategy also did not significantly decrease barotrauma, new organ failure, or ventilator-free days when compared with a lower PEEP strategy (moderate-level evidence). Quality of evidence for primary analyses was downgraded for imprecision, as CIs of outcomes included estimates that would result in divergent recommendations for use of higher PEEP. Secondary analysis, including trials that did not use low Vt in low-PEEP control groups, showed significant mortality reduction for high-PEEP strategies (eight trials; 2,728 patients; relative risk, 0.84; 95% CI = 0.71–0.99), with greater mortality benefit observed for high PEEP in trials that did not use lower Vts in the low-PEEP control group (P = 0.02). Analyses stratifying by use of recruitment maneuvers (P for interaction = 0.69), or use of physiological targets to set PEEP versus PEEP/FiO2 tables (P for interaction = 0.13), did not show significant effect modification. Conclusions: Use of higher PEEP is unlikely to improve clinical outcomes among unselected patients with ARDS.
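The pooled relative risks above come from random-effects models. A compact, generic sketch of DerSimonian-Laird random-effects pooling of log relative risks, the general technique named in the abstract; the three (log RR, variance) pairs are synthetic placeholders, not data from these trials.

# Generic DerSimonian-Laird random-effects pooling of log relative risks.
# Synthetic inputs; not the meta-analysis dataset from the study above.
import math

def random_effects_pool(log_rrs, variances):
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, log_rrs))
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_rrs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return math.exp(pooled), math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

rr, lo, hi = random_effects_pool([-0.12, -0.05, 0.02], [0.010, 0.008, 0.015])
print(round(rr, 2), round(lo, 2), round(hi, 2))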
DOI: 10.1213/ane.0000000000000100
2014
Cited 94 times
Increased Glycemic Variability in Patients with Elevated Preoperative HbA1C Predicts Adverse Outcomes Following Coronary Artery Bypass Grafting Surgery
BACKGROUND: In the setting of protocolized glycemic control, the relationship between postoperative glycemic variability and major adverse events (MAEs) after cardiac surgery is unknown for patients with increased preoperative hemoglobin A1C (HbA1C >6.5%). In this study, we sought to establish (a) whether postoperative glycemic variability is associated with MAEs after CABG surgery and (b) whether preoperative HbA1C could identify patients at increased risk of postoperative glycemic variability. METHODS: Patients undergoing coronary artery bypass grafting with or without valvular surgery from January 2008 to May 2011 were enrolled in this prospective, single-center, observational cohort study. Demographic, intraoperative, and postoperative outcome data were obtained from institutional data collected for the Society of Thoracic Surgery (STS) database. The primary outcome, MAE, was a composite of in-hospital death, myocardial infarction (MI), reoperations, sternal infection, cardiac tamponade, pneumonia, stroke, or renal failure. Glycemic variability in the postoperative period was assessed by the coefficient of variation (CV). CV was used as quartiles for the multivariate logistic regression. Variable selection in multivariable modeling was based on clinical and statistical significance and was performed in a hierarchical fashion. RESULTS: Of the 1461 patients enrolled, 9.8% had an MAE. Based on the established target of HbA1C <6.5% for the diagnosis of diabetes mellitus, we considered HbA1C as a binary variable (<6.5% and ≥6.5%) in our primary analysis. Multivariate logistic regression analyses for the preoperative variables only revealed that preoperative HbA1C (odds ratio [OR], 1.6; 95% confidence interval [CI], 1.1–2.3; P = 0.02), history of MI (OR, 1.9; 95% CI, 1.3–2.8; P = 0.001), and STS risk score per quartile (OR, 1.7; 95% CI, 1.4–2.1; P < 0.001) were associated with MAEs. When postoperative variables were included in the analyses, postoperative glycemic variability (CV per quartile) in the intensive care unit (OR, 1.3; 95% CI, 1.1–1.5; P = 0.03), mean glucose levels averaged over the first 4 postoperative hours (OR, 1.2; 95% CI, 1.0–1.4; P = 0.03), history of MI (OR, 1.8; 95% CI, 1.2–2.6; P = 0.004), and STS risk score per quartile (OR, 1.6; 95% CI, 1.3–2.0; P < 0.001) were associated with MAEs. Glycemic variability as assessed by CV was increased postoperatively in patients with preoperative HbA1C ≥6.5% (0.20 ± 0.09 vs 0.16 ± 0.07, P < 0.001). CONCLUSIONS: Postoperative glycemic variability is associated with MAEs after cardiac surgery. Glycemic variability, however, is only measured once the patient leaves the intensive care unit, leaving no opportunity to intervene earlier. Preoperative HbA1C identifies risk for postoperative glycemic variability and may provide a more rational guide for targeting measures to reduce variability.
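Glycemic variability above is summarized as the coefficient of variation, i.e. the standard deviation of postoperative glucose divided by its mean. A minimal sketch with hypothetical glucose values; the study's exact sampling scheme is not reproduced here.

# Minimal sketch of the coefficient of variation (SD / mean) used above to summarize
# postoperative glycemic variability. Glucose values are hypothetical.
import statistics

def coefficient_of_variation(values):
    return statistics.stdev(values) / statistics.mean(values)

glucose_mg_dl = [145, 180, 120, 200, 135, 160]
print(round(coefficient_of_variation(glucose_mg_dl), 2))  # roughly 0.19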
DOI: 10.1513/annalsats.201704-337ot
2017
Cited 94 times
Low Tidal Volume versus Non–Volume-Limited Strategies for Patients with Acute Respiratory Distress Syndrome. A Systematic Review and Meta-Analysis
Trials investigating use of lower tidal volumes and inspiratory pressures for patients with acute respiratory distress syndrome (ARDS) have shown mixed results. To compare clinical outcomes of mechanical ventilation strategies that limit tidal volumes and inspiratory pressures (LTV) to strategies with tidal volumes of 10 to 15 ml/kg among patients with ARDS. This is a systematic review and meta-analysis of clinical trials investigating LTV mechanical ventilation strategies. We used random effects models to evaluate the effect of LTV on 28-day mortality, organ failure, ventilator-free days, barotrauma, oxygenation, and ventilation. Our primary analysis excluded trials for which the LTV strategy was combined with the additional strategy of higher positive end-expiratory pressure (PEEP), but these trials were included in a stratified sensitivity analysis. We performed metaregression of tidal volume gradient achieved between intervention and control groups on mortality effect estimates. We used Grading of Recommendations Assessment, Development, and Evaluation methodology to determine the quality of evidence. Seven randomized trials involving 1,481 patients met eligibility criteria for this review. Mortality was not significantly lower for patients receiving an LTV strategy (33.6%) as compared with control strategies (40.4%) (relative risk [RR], 0.87; 95% confidence interval [CI], 0.70-1.08; heterogeneity statistic I2 = 46%), nor did an LTV strategy significantly decrease barotrauma or ventilator-free days when compared with non-volume-limited control strategies. Quality of evidence for clinical outcomes was downgraded for imprecision. Metaregression showed a significant inverse association between larger tidal volume gradient between LTV and control groups and log odds ratios for mortality (β, -0.1587; P = 0.0022). Sensitivity analysis including trials that protocolized an LTV/high PEEP cointervention showed lower mortality associated with LTV (nine trials and 1,629 patients; RR, 0.80; 95% CI, 0.66-0.98; I2 = 46%). Compared with trials not using a high PEEP cointervention, trials using a strategy of LTV combined with high PEEP showed a greater mortality benefit (RR, 0.58; 95% CI, 0.41-0.82; P for interaction = 0.05). The trend toward lower mortality with LTV ventilation in the primary analysis and the significant relationship between the degree of tidal volume reduction and the mortality effect together suggest, but do not prove, that LTV ventilation improves mortality among critically ill adults with ARDS.
DOI: 10.1007/s12028-016-0328-9
2016
Cited 87 times
The Effect of Positive End-Expiratory Pressure on Intracranial Pressure and Cerebral Hemodynamics
Lung protective ventilation has not been evaluated in patients with brain injury. It is unclear whether applying positive end-expiratory pressure (PEEP) adversely affects intracranial pressure (ICP) and cerebral perfusion pressure (CPP). We aimed to evaluate the effect of PEEP on ICP and CPP in a large population of patients with acute brain injury and varying categories of acute lung injury, defined by PaO2/FiO2. Retrospective data were collected from 341 patients with severe acute brain injury admitted to the ICU between 2008 and 2015. These patients experienced a total of 28,644 paired PEEP and ICP observations. Demographic, hemodynamic, physiologic, and ventilator data at the time of the paired PEEP and ICP observations were recorded. In the adjusted analysis, a statistically significant relationship between PEEP and ICP and PEEP and CPP was found only among observations occurring during periods of severe lung injury. For every centimeter H2O increase in PEEP, there was a 0.31 mmHg increase in ICP (p = 0.04; 95 % CI [0.07, 0.54]) and a 0.85 mmHg decrease in CPP (p = 0.02; 95 % CI [−1.48, −0.22]). Our results suggest that PEEP can be applied safely in patients with acute brain injury as it does not have a clinically significant effect on ICP or CPP. Further prospective studies are required to assess the safety of applying a lung protective ventilation strategy in brain-injured patients with lung injury.
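The adjusted coefficients reported above imply simple linear projections of ICP and CPP change for a given PEEP change, within the severe lung injury observations. An arithmetic sketch of that projection only, using the study's published slopes; it is an illustration, not a clinical decision tool.

# Arithmetic sketch applying the adjusted coefficients reported above
# (+0.31 mmHg ICP and -0.85 mmHg CPP per cm H2O of PEEP, in severe lung injury).
# Illustration only; not a bedside tool.
ICP_SLOPE_MMHG_PER_CMH2O = 0.31
CPP_SLOPE_MMHG_PER_CMH2O = -0.85

def projected_changes(delta_peep_cmh2o):
    return (ICP_SLOPE_MMHG_PER_CMH2O * delta_peep_cmh2o,
            CPP_SLOPE_MMHG_PER_CMH2O * delta_peep_cmh2o)

# Raising PEEP by 5 cm H2O projects roughly +1.6 mmHg ICP and -4.3 mmHg CPP.
print(tuple(round(x, 2) for x in projected_changes(5)))  # (1.55, -4.25)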
DOI: 10.1097/aln.0000000000000216
2014
Cited 85 times
Predicting Risk of Postoperative Lung Injury in High-risk Surgical Patients
Abstract Background: Acute respiratory distress syndrome (ARDS) remains a serious postoperative complication. Although ARDS prevention is a priority, the inability to identify patients at risk for ARDS remains a barrier to progress. The authors tested and refined the previously reported surgical lung injury prediction (SLIP) model in a multicenter cohort of at-risk surgical patients. Methods: This is a secondary analysis of a multicenter, prospective cohort investigation evaluating high-risk patients undergoing surgery. Preoperative ARDS risk factors and risk modifiers were evaluated for inclusion in a parsimonious risk-prediction model. Multiple imputation and domain analysis were used to facilitate development of a refined model, designated SLIP-2. Area under the receiver operating characteristic curve and the Hosmer–Lemeshow goodness-of-fit test were used to assess model performance. Results: Among 1,562 at-risk patients, ARDS developed in 117 (7.5%). Nine independent predictors of ARDS were identified: sepsis, high-risk aortic vascular surgery, high-risk cardiac surgery, emergency surgery, cirrhosis, admission location other than home, increased respiratory rate (20 to 29 and ≥30 breaths/min), Fio2 greater than 35%, and Spo2 less than 95%. The original SLIP score performed poorly in this heterogeneous cohort with baseline risk factors for ARDS (area under the receiver operating characteristic curve [95% CI], 0.56 [0.50 to 0.62]). In contrast, SLIP-2 score performed well (area under the receiver operating characteristic curve [95% CI], 0.84 [0.81 to 0.88]). Internal validation indicated similar discrimination, with an area under the receiver operating characteristic curve of 0.84. Conclusions: In this multicenter cohort of patients at risk for ARDS, the SLIP-2 score outperformed the original SLIP score. If validated in an independent sample, this tool may help identify surgical patients at high risk for ARDS.
DOI: 10.1186/s13054-017-1820-0
2017
Cited 85 times
Respiratory support in patients with acute respiratory distress syndrome: an expert opinion
Acute respiratory distress syndrome (ARDS) is a common condition in intensive care unit patients and remains a major concern, with mortality rates of around 30-45% and considerable long-term morbidity. Respiratory support in these patients must be optimized to ensure adequate gas exchange while minimizing the risks of ventilator-induced lung injury. The aim of this expert opinion document is to review the available clinical evidence related to ventilator support and adjuvant therapies in order to provide evidence-based and experience-based clinical recommendations for the management of patients with ARDS.
DOI: 10.1007/s00134-018-5231-8
2018
Cited 80 times
Outcome in patients perceived as receiving excessive care across different ethical climates: a prospective study in 68 intensive care units in Europe and the USA
Whether the quality of the ethical climate in the intensive care unit (ICU) improves the identification of patients receiving excessive care and affects patient outcomes is unknown. In this prospective observational study, perceptions of excessive care (PECs) by clinicians working in 68 ICUs in Europe and the USA were collected daily during a 28-day period. The quality of the ethical climate in the ICUs was assessed via a validated questionnaire. We compared the combined endpoint (death, not at home or poor quality of life at 1 year) of patients with PECs and the time from PECs until written treatment-limitation decisions (TLDs) and death across the four climates defined via cluster analysis. Of the 4747 eligible clinicians, 2992 (63%) evaluated the ethical climate in their ICU. Of the 321 and 623 patients not admitted for monitoring only in ICUs with a good (n = 12, 18%) and poor (n = 24, 35%) climate, 36 (11%) and 74 (12%), respectively, were identified with PECs by at least two clinicians. Of the 35 and 71 identified patients with an available combined endpoint, 100% (95% CI 90.0-100.0) and 85.9% (75.4-92.0) (P = 0.02) attained that endpoint. The risk of death (HR 1.88, 95% CI 1.20-2.92) or receiving a written TLD (HR 2.32, CI 1.11-4.85) in patients with PECs by at least two clinicians was higher in ICUs with a good climate than in those with a poor one. The differences between ICUs with an average climate, with (n = 12, 18%) or without (n = 20, 29%) nursing involvement at the end of life, and ICUs with a poor climate were less obvious but still in favour of the former. Enhancing the quality of the ethical climate in the ICU may improve both the identification of patients receiving excessive care and the decision-making process at the end of life.
DOI: 10.1164/rccm.201609-1771oc
2017
Cited 77 times
Favorable Neurocognitive Outcome with Low Tidal Volume Ventilation after Cardiac Arrest
Neurocognitive outcome after out-of-hospital cardiac arrest (OHCA) is often poor, even when initial resuscitation succeeds. Lower tidal volumes (Vts) attenuate extrapulmonary organ injury in other disease states and are neuroprotective in preclinical models of critical illness.To evaluate the association between Vt and neurocognitive outcome after OHCA.We performed a propensity-adjusted analysis of a two-center retrospective cohort of patients experiencing OHCA who received mechanical ventilation for at least the first 48 hours of hospitalization. Vt was calculated as the time-weighted average over the first 48 hours, in milliliters per kilogram of predicted body weight (PBW). The primary endpoint was favorable neurocognitive outcome (cerebral performance category of 1 or 2) at discharge.Of 256 included patients, 38% received time-weighted average Vt greater than 8 ml/kg PBW during the first 48 hours. Lower Vt was independently associated with favorable neurocognitive outcome in propensity-adjusted analysis (odds ratio, 1.61; 95% confidence interval [CI], 1.13-2.28 per 1-ml/kg PBW decrease in Vt; P = 0.008). This finding was robust to several sensitivity analyses. Lower Vt also was associated with more ventilator-free days (β = 1.78; 95% CI, 0.39-3.16 per 1-ml/kg PBW decrease; P = 0.012) and shock-free days (β = 1.31; 95% CI, 0.10-2.51; P = 0.034). Vt was not associated with hypercapnia (P = 1.00). Although the propensity score incorporated several biologically relevant covariates, only height, weight, and admitting hospital were independent predictors of Vt less than or equal to 8 ml/kg PBW.Lower Vt after OHCA is independently associated with favorable neurocognitive outcome, more ventilator-free days, and more shock-free days. These findings suggest a role for low-Vt ventilation after cardiac arrest.
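The exposure above is a time-weighted average tidal volume over the first 48 hours, i.e. each ventilator setting contributes in proportion to the time spent at it. A generic sketch of that computation with synthetic epochs; not the study's analysis code.

# Generic time-weighted average, the exposure form used above (tidal volume in
# mL/kg PBW over the first 48 h, weighted by time at each setting). Synthetic data.
def time_weighted_average(values, durations_h):
    total_time = sum(durations_h)
    return sum(v * d for v, d in zip(values, durations_h)) / total_time

# 8 mL/kg PBW for 12 h, then 6.5 mL/kg for 24 h, then 6 mL/kg for 12 h.
vt_ml_per_kg = [8.0, 6.5, 6.0]
hours = [12, 24, 12]
print(round(time_weighted_average(vt_ml_per_kg, hours), 2))  # 6.75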
DOI: 10.1097/ta.0000000000002694
2020
Cited 76 times
Is there a role for tissue plasminogen activator as a novel treatment for refractory COVID-19 associated acute respiratory distress syndrome?
The global pandemic of COVID-19 has begun to oversaturate the world's medical capacity to accommodate a large surge of patients with acute respiratory distress syndrome (ARDS).1 Acute respiratory distress syndrome has no effective treatment besides supportive care, with the use of ventilatory strategies encompassing low tidal volumes that limit transpulmonary pressures being the mainstay.2 A consistent finding in ARDS is the deposition of fibrin in the airspaces and lung parenchyma, along with fibrin-platelet microthrombi in the pulmonary vasculature, which contribute to the development of progressive respiratory dysfunction and right heart failure.3–5 Similar pathologic findings have now been observed in lung specimens from patients infected with COVID-19.6 This physiologically destructive activation of the clotting system in ARDS results from enhanced activation and propagation of clot formation together with suppression of fibrinolysis,7–9 and is thought to be mediated by dysfunction of the pulmonary endothelium in the case of influenza A.10 Targeting the coagulation and fibrinolytic systems to improve the treatment of ARDS has been proposed since at least 2003.11–14 In particular, the use of plasminogen activators to limit ARDS progression and reduce ARDS-induced death has received strong support from animal models15–17 and a phase 1 human clinical trial. In 2001, Hardaway and colleagues18 showed that administration of either urokinase or streptokinase to patients with terminal ARDS reduced the expected mortality from 100% to 70% with no adverse bleeding events. Importantly, the majority of patients who ultimately succumbed died from renal or hepatic failure, rather than pulmonary failure. A recent American Hospital Association assessment indicates that up to 960,000 patients may require mechanical ventilation, for which only 62,000 fully-featured ventilators are currently available, based on a 2009 survey.19 The estimated mortality rate for critically ill patients infected with COVID-19 is 22% to 64%, using statistics from early reports from Hubei Province.18,20,21 Taken together, these statistics indicate an emergent need for effective therapeutics to treat and attenuate ARDS secondary to COVID-19 and, particularly, to salvage patients who have decompensating respiratory status but no access to a mechanical ventilator or extracorporeal membrane oxygenation (ECMO). We posit that administration of tissue plasminogen activator (tPA), as a compassionate salvage approach, may have merit in this situation. Consideration of therapies that are widely available but not recognized for this indication and traditionally considered “high-risk,” such as fibrinolytic agents, is warranted in this unprecedented public health emergency, since the risk of adverse events from tPA is far outweighed by the certainty of death in patients meeting the eligibility criteria for this treatment. While the prior study by Hardaway et al. evaluating fibrinolytic therapy for treatment of ARDS used urokinase and streptokinase, the more contemporary approach to thrombolytic therapy involves the use of tPA due to higher efficacy of clot lysis with comparable bleeding risk to the other fibrinolytic agents. 
In addition, tPA treatment was reported to have a greater reduction of death, a larger increase in arterial Po2 and a larger decrease in arterial Pco2, compared with untreated controls, than either urokinase-plasminogen activator (uPA) or plasmin in a comprehensive meta-analysis of animal studies of acute lung injury, although none of those studies included viral-induced ARDS.16 The dose, route of administration, and duration of treatment remain to be defined, but modeling efforts by individuals interested in this approach are both needed and underway. In animal models of acute lung injury, intratracheal and intravenous dosing of fibrinolytic agents was more effective than nebulized delivery. Based on a large body of experience using tPA for acute treatment of strokes and myocardial infarctions,22,23 intravenous administration may be the easiest to implement. However, unlike the brief treatment used in those situations where a defined nidus of clot is present without ongoing widespread disruption of the hemostatic system, we believe an initial approach might be to administer 25 mg of tPA over 2 hours followed by a 25-mg tPA infusion administered over the subsequent 22 hours, with a dose not to exceed 0.9 mg/kg. The same exclusion criteria currently in place for stroke and MI treatment could be used, with responders maintained for some period on a heparin infusion after completion of the tPA treatment. Exactly which patients would qualify for this salvage treatment similarly remains to be defined, but patients with COVID-19-induced ARDS who have a pO2/FiO2 ratio less than 60 and a Pco2 greater than 60 despite prone positioning and maximal mechanical ventilatory support would seem to be ideal candidates, particularly in settings where ECMO is not a possibility. Furthermore, in scenarios where there is no further mechanical ventilation capacity, this may be appropriate for those with progressive pulmonary deterioration. Extraordinary times may call for extraordinary measures. If an observational trial of this treatment in the first series of patients is effective and safe, the approach could be readily broadened. This would have multiple patient-related and public health benefits including: (1) earlier weaning from the ventilator to free up more ventilators for other patients in need; (2) preventing patients from progressing to a need for ECMO support, which is likely to be limited in a resource-limited crisis; and (3) leveraging the availability, modest cost, and wide preexisting clinical familiarity with tPA.
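For readers following the arithmetic of the proposed regimen (25 mg over 2 hours, then 25 mg over the subsequent 22 hours, with a total not to exceed 0.9 mg/kg), a minimal sketch is below. It only illustrates the weight-based cap described in the text; the function name and example weights are hypothetical, and this is not dosing guidance.

```python
# Minimal arithmetic sketch of the proposed salvage regimen's dose cap.

def proposed_tpa_dose_mg(weight_kg: float) -> float:
    """Total dose for the proposed 25 mg + 25 mg regimen, capped at 0.9 mg/kg."""
    regimen_total = 25.0 + 25.0    # 2-h infusion plus 22-h infusion, in mg
    weight_cap = 0.9 * weight_kg   # weight-based ceiling stated in the text
    return min(regimen_total, weight_cap)

print(proposed_tpa_dose_mg(80))   # 50.0 mg (cap of 72 mg not reached)
print(proposed_tpa_dose_mg(50))   # 45.0 mg (0.9 mg/kg cap applies)
```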
DOI: 10.1136/bmj.k3030
2018
Cited 70 times
Perioperative lung protective ventilation
ABSTRACT Perioperative lung injury is a major source of postoperative morbidity, excess healthcare use, and avoidable mortality. Many potential inciting factors can lead to this condition, including intraoperative ventilator induced lung injury. Questions exist as to whether protective ventilation strategies used in the intensive care unit for patients with acute respiratory distress syndrome are equally beneficial for surgical patients, most of whom do not present with any pre-existing lung pathology. Studied both individually and in combination as a package of intraoperative lung protective ventilation, the use of low tidal volumes, moderate positive end expiratory pressure, and recruitment maneuvers have been shown to improve oxygenation and pulmonary physiology and to reduce postoperative pulmonary complications in at risk patient groups. Further work is needed to define the potential contributions of alternative ventilator strategies, limiting excessive intraoperative oxygen supplementation, use of non-invasive techniques in the postoperative period, and personalized mechanical ventilation. Although the weight of evidence strongly suggests a role for lung protective ventilation in moderate risk patient groups, definitive evidence of its benefit for the general surgical population does not exist. However, given the shift in understanding of what is needed for adequate oxygenation and ventilation under anesthesia, the largely historical arguments against the use of intraoperative lung protective ventilation may soon be outdated, on the basis of its expanding track record of safety and efficacy in multiple settings.
DOI: 10.1007/s00134-019-05829-1
2019
Cited 66 times
Ethical climate and intention to leave among critical care clinicians: an observational study in 68 intensive care units across Europe and the United States
Apart from organizational issues, quality of inter-professional collaboration during ethical decision-making may affect the intention to leave one’s job. To determine whether ethical climate is associated with the intention to leave after adjustment for country, ICU and clinicians characteristics. Perceptions of the ethical climate among clinicians working in 68 adult ICUs in 12 European countries and the US were measured using a self-assessment questionnaire, together with job characteristics and intent to leave as a sub-analysis of the Dispropricus study. The validated ethical decision-making climate questionnaire included seven factors: not avoiding decision-making at end-of-life (EOL), mutual respect within the interdisciplinary team, open interdisciplinary reflection, ethical awareness, self-reflective physician leadership, active decision-making at end-of-life by physicians, and involvement of nurses in EOL. Hierarchical mixed effect models were used to assess associations between these factors, and the intent to leave in clinicians within ICUs, within the different countries. Of 3610 nurses and 1137 physicians providing ICU bedside care, 63.1% and 62.9% participated, respectively. Of 2992 participating clinicians, 782 (26.1%) had intent to leave, of which 27% nurses, 24% junior and 22.7% senior physicians. After adjustment for country, ICU and clinicians characteristics, mutual respect OR 0.77 (95% CI 0.66- 0.90), open interdisciplinary reflection (OR 0.73 [95% CI 0.62–0.86]) and not avoiding EOL decisions (OR 0.87 [95% CI 0.77–0.98]) were all associated with a lower intent to leave. This is the first large multicenter study showing an independent association between clinicians’ intent to leave and the quality of the ethical climate in the ICU. Interventions to reduce intent to leave may be most effective when they focus on improving mutual respect, interdisciplinary reflection and active decision-making at EOL.
DOI: 10.1097/ccm.0000000000004895
2021
Cited 57 times
The Surviving Sepsis Campaign: Research Priorities for Coronavirus Disease 2019 in Critical Illness
To identify research priorities in the management, pathophysiology, and host response of coronavirus disease 2019 in critically ill patients.The Surviving Sepsis Research Committee, a multiprofessional group of 17 international experts representing the European Society of Intensive Care Medicine and Society of Critical Care Medicine, was virtually convened during the coronavirus disease 2019 pandemic. The committee iteratively developed the recommendations and subsequent document.Each committee member submitted a list of what they believed were the most important priorities for coronavirus disease 2019 research. The entire committee voted on 58 submitted questions to determine top priorities for coronavirus disease 2019 research.The Surviving Sepsis Research Committee provides 13 priorities for coronavirus disease 2019. Of these, the top six priorities were identified and include the following questions: 1) Should the approach to ventilator management differ from the standard approach in patients with acute hypoxic respiratory failure?, 2) Can the host response be modulated for therapeutic benefit?, 3) What specific cells are directly targeted by severe acute respiratory syndrome coronavirus 2, and how do these cells respond?, 4) Can early data be used to predict outcomes of coronavirus disease 2019 and, by extension, to guide therapies?, 5) What is the role of prone positioning and noninvasive ventilation in nonventilated patients with coronavirus disease?, and 6) Which interventions are best to use for viral load modulation and when should they be given?Although knowledge of both biology and treatment has increased exponentially in the first year of the coronavirus disease 2019 pandemic, significant knowledge gaps remain. The research priorities identified represent a roadmap for investigation in coronavirus disease 2019.
DOI: 10.1164/rccm.202009-3539oc
2021
Cited 51 times
Effect of Esophageal Pressure–guided Positive End-Expiratory Pressure on Survival from Acute Respiratory Distress Syndrome: A Risk-based and Mechanistic Reanalysis of the EPVent-2 Trial
Rationale: In acute respiratory distress syndrome (ARDS), the effect of positive end-expiratory pressure (PEEP) may depend on the extent to which multiorgan dysfunction contributes to risk of death, and the precision with which PEEP is titrated to attenuate atelectrauma without exacerbating overdistension. Objectives: To evaluate whether multiorgan dysfunction and lung mechanics modified treatment effect in the EPVent-2 (Esophageal Pressure-guided Ventilation 2) trial, a multicenter trial of esophageal pressure (Pes)-guided PEEP versus empirical high PEEP in moderate to severe ARDS. Methods: This post hoc reanalysis of the EPVent-2 trial evaluated for heterogeneity of treatment effect on mortality by baseline multiorgan dysfunction, determined via Acute Physiology and Chronic Health Evaluation II (APACHE-II). It also evaluated whether PEEP titrated to end-expiratory transpulmonary pressure near 0 cm H2O was associated with survival. Measurements and Main Results: All 200 trial participants were included. Treatment effect on 60-day mortality differed by multiorgan dysfunction severity (P = 0.03 for interaction). Pes-guided PEEP was associated with lower mortality among patients with APACHE-II less than the median value (hazard ratio, 0.43; 95% confidence interval, 0.20-0.92) and may have had the opposite effect in patients with higher APACHE-II (hazard ratio, 1.69; 95% confidence interval, 0.93-3.05). Independent of treatment group or multiorgan dysfunction severity, mortality was lowest when PEEP titration achieved end-expiratory transpulmonary pressure near 0 cm H2O. Conclusions: The effect on survival of Pes-guided PEEP, compared with empirical high PEEP, differed by multiorgan dysfunction severity. Independent of multiorgan dysfunction, PEEP titrated to end-expiratory transpulmonary pressure closer to 0 cm H2O was associated with greater survival than more positive or negative values. These findings warrant prospective testing in a future trial.
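As a rough sketch of the quantity this reanalysis centers on, the snippet below computes end-expiratory transpulmonary pressure as airway pressure minus esophageal pressure, the standard Pes-based estimate, assuming airway pressure at end-expiration equals the set PEEP (no auto-PEEP). The trial's exact measurement pipeline is not reproduced here, and names and values are illustrative.

```python
# Minimal sketch: end-expiratory transpulmonary pressure for Pes-guided PEEP titration.

def end_expiratory_transpulmonary(peep_cmh2o: float, pes_end_exp_cmh2o: float) -> float:
    """PL,ee (cm H2O) = end-expiratory airway pressure (PEEP) minus esophageal pressure."""
    return peep_cmh2o - pes_end_exp_cmh2o

# Example: PEEP of 16 cm H2O against an end-expiratory Pes of 15 cm H2O
# gives a value near 0, the region associated with the lowest mortality here.
print(end_expiratory_transpulmonary(16, 15))  # 1 cm H2O
```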
DOI: 10.1164/rccm.201907-1427oc
2021
Cited 41 times
Reverse Trigger Phenotypes in Acute Respiratory Distress Syndrome
Rationale: Reverse triggering is an underexplored form of dyssynchrony with important clinical implications in patients with acute respiratory distress syndrome.Objectives: This retrospective study identified reverse trigger phenotypes and characterized their impacts on Vt and transpulmonary pressure.Methods: Fifty-five patients with acute respiratory distress syndrome on pressure-regulated ventilator modes were included. Four phenotypes of reverse triggering with and without breath stacking and their impact on lung inflation and deflation were investigated.Measurements and Main Results: Inflation volumes, respiratory muscle pressure generation, and transpulmonary pressures were determined and phenotypes differentiated using Campbell diagrams of respiratory activity. Reverse triggering was detected in 25 patients, 15 with associated breath stacking, and 13 with stable reverse triggering consistent with respiratory entrainment. Phenotypes were associated with variable levels of inspiratory effort (mean 4-10 cm H2O per phenotype). Early reverse triggering with early expiratory relaxation increased Vts (88 [64-113] ml) and inspiratory transpulmonary pressures (3 [2-3] cm H2O) compared with passive breaths. Early reverse triggering with delayed expiratory relaxation increased Vts (128 [86-170] ml) and increased inspiratory and mean-expiratory transpulmonary pressure (7 [5-9] cm H2O and 5 [4-6] cm H2O). Mid-cycle reverse triggering (initiation during inflation and maximal effort during deflation) increased Vt (51 [38-64] ml), increased inspiratory and mean-expiratory transpulmonary pressure (3 [2-4] cm H2O and 3 [2-3] cm H2O), and caused incomplete exhalation. Late reverse triggering (occurring exclusively during exhalation) increased mean expiratory transpulmonary pressure (2 [1-2] cm H2O) and caused incomplete exhalation. Breath stacking resulted in large delivered volumes (176 [155-197] ml).Conclusions: Reverse triggering causes variable physiological effects, depending on the phenotype. Differentiation of phenotype effects may be important to understand the clinical impacts of these events.
DOI: 10.1016/j.chest.2021.05.047
2021
Cited 41 times
Variation in Early Management Practices in Moderate-to-Severe ARDS in the United States
Although specific interventions previously demonstrated benefit in patients with ARDS, use of these interventions is inconsistent, and patient mortality remains high. The impact of variability in center management practices on ARDS mortality rates remains unknown. What is the impact of treatment variability on mortality in patients with moderate to severe ARDS in the United States? We conducted a multicenter, observational cohort study of mechanically ventilated adults with ARDS and Pao2 to Fio2 ratio of ≤ 150 with positive end-expiratory pressure of ≥ 5 cm H2O, who were admitted to 29 US centers between October 1, 2016, and April 30, 2017. The primary outcome was 28-day in-hospital mortality. Center variation in ventilator management, adjunctive therapy use, and mortality also were assessed. A total of 2,466 patients were enrolled. Median baseline Pao2 to Fio2 ratio was 105 (interquartile range, 78.0-129.0). In-hospital 28-day mortality was 40.7%. Initial adherence to lung protective ventilation (LPV; tidal volume, ≤ 6.5 mL/kg predicted body weight; plateau pressure, or when unavailable, peak inspiratory pressure, ≤ 30 cm H2O) was 31.4% and varied between centers (0%-65%), as did rates of adjunctive therapy use (27.1%-96.4%), methods used (neuromuscular blockade, prone positioning, systemic steroids, pulmonary vasodilators, and extracorporeal support), and mortality (16.7%-73.3%). Center standardized mortality ratios (SMRs), calculated using baseline patient-level characteristics to derive expected mortality rate, ranged from 0.33 to 1.98. Of the treatment-level factors explored, only center adherence to early LPV was correlated with SMR. Substantial center-to-center variability exists in ARDS management, suggesting that further opportunities for improving ARDS outcomes exist. Early adherence to LPV was associated with lower center mortality and may be a surrogate for overall quality of care processes. Future collaboration is needed to identify additional treatment-level factors influencing center-level outcomes. ClinicalTrials.gov; No.: NCT03021824; URL: www.clinicaltrials.gov.
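A minimal sketch of the initial lung-protective-ventilation adherence definition reported above (tidal volume ≤ 6.5 mL/kg PBW, and plateau pressure, or peak inspiratory pressure when plateau is unavailable, ≤ 30 cm H2O). Function names and example settings are illustrative, not from the study.

```python
# Minimal sketch of the study's initial LPV adherence definition.
from typing import Optional

def adheres_to_lpv(vt_ml: float, pbw_kg: float,
                   plateau_cmh2o: Optional[float],
                   peak_cmh2o: float) -> bool:
    """True if settings meet the reported LPV definition."""
    pressure = plateau_cmh2o if plateau_cmh2o is not None else peak_cmh2o
    return (vt_ml / pbw_kg) <= 6.5 and pressure <= 30.0

print(adheres_to_lpv(vt_ml=420, pbw_kg=70, plateau_cmh2o=27, peak_cmh2o=32))    # True (6.0 mL/kg)
print(adheres_to_lpv(vt_ml=540, pbw_kg=70, plateau_cmh2o=None, peak_cmh2o=28))  # False (7.7 mL/kg)
```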
DOI: 10.1097/aln.0000000000004256
2022
Cited 38 times
Mechanical Power during General Anesthesia and Postoperative Respiratory Failure: A Multicenter Retrospective Cohort Study
Mechanical power during ventilation estimates the energy delivered to the respiratory system through integrating inspiratory pressures, tidal volume, and respiratory rate into a single value. It has been linked to lung injury and mortality in the acute respiratory distress syndrome, but little evidence exists regarding whether the concept relates to lung injury in patients with healthy lungs. This study hypothesized that higher mechanical power is associated with greater postoperative respiratory failure requiring reintubation in patients undergoing general anesthesia.In this multicenter, retrospective study, 230,767 elective, noncardiac adult surgical out- and inpatients undergoing general anesthesia between 2008 and 2018 at two academic hospital networks in Boston, Massachusetts, were included. The risk-adjusted association between the median intraoperative mechanical power, calculated from median values of tidal volume (Vt), respiratory rate (RR), positive end-expiratory pressure (PEEP), plateau pressure (Pplat), and peak inspiratory pressure (Ppeak), using the following formula: mechanical power (J/min) = 0.098 × RR × Vt × (PEEP + ½[Pplat - PEEP] + [Ppeak - Pplat]), and postoperative respiratory failure requiring reintubation within 7 days, was assessed.The median intraoperative mechanical power was 6.63 (interquartile range, 4.62 to 9.11) J/min. Postoperative respiratory failure occurred in 2,024 (0.9%) patients. The median (interquartile range) intraoperative mechanical power was higher in patients with postoperative respiratory failure than in patients without (7.67 [5.64 to 10.11] vs. 6.62 [4.62 to 9.10] J/min; P < 0.001). In adjusted analyses, a higher mechanical power was associated with greater odds of postoperative respiratory failure (adjusted odds ratio, 1.31 per 5 J/min increase; 95% CI, 1.21 to 1.42; P < 0.001). The association between mechanical power and postoperative respiratory failure was robust to additional adjustment for known drivers of ventilator-induced lung injury, including tidal volume, driving pressure, and respiratory rate, and driven by the dynamic elastic component (adjusted odds ratio, 1.35 per 5 J/min; 95% CI, 1.05 to 1.73; P = 0.02).Higher mechanical power during ventilation is statistically associated with a greater risk of postoperative respiratory failure requiring reintubation.
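The abstract gives the mechanical power formula explicitly; the sketch below simply evaluates it, assuming tidal volume is expressed in liters so that the 0.098 factor converts cm H2O·L into joules. Function name and example settings are illustrative.

```python
# Minimal sketch evaluating the mechanical power formula quoted in the abstract.

def mechanical_power_j_per_min(rr: float, vt_l: float, peep: float,
                               pplat: float, ppeak: float) -> float:
    """MP (J/min) = 0.098 * RR * Vt * (PEEP + 0.5*(Pplat - PEEP) + (Ppeak - Pplat))."""
    return 0.098 * rr * vt_l * (peep + 0.5 * (pplat - peep) + (ppeak - pplat))

# Example: RR 12/min, Vt 0.5 L, PEEP 5, Pplat 16, Ppeak 20 cm H2O.
print(round(mechanical_power_j_per_min(12, 0.5, 5, 16, 20), 2))  # 8.53 J/min
```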
DOI: 10.1016/j.chest.2021.09.024
2022
Cited 35 times
Study of Alteplase for Respiratory Failure in SARS-CoV-2 COVID-19
Pulmonary vascular microthrombi are a proposed mechanism of COVID-19 respiratory failure. We hypothesized that early administration of tissue plasminogen activator (tPA) followed by therapeutic heparin would improve pulmonary function in these patients. Does tPA improve pulmonary function in severe COVID-19 respiratory failure, and is it safe? Adults with COVID-19-induced respiratory failure were randomized from May 14, 2020 through March 3, 2021, in two phases. Phase 1 (n = 36) comprised a control group (standard-of-care treatment) vs a tPA bolus (50-mg tPA IV bolus followed by 7 days of heparin; goal activated partial thromboplastin time [aPTT], 60-80 s) group. Phase 2 (n = 14) comprised a control group vs a tPA drip (50-mg tPA IV bolus, followed by tPA drip 2 mg/h plus heparin 500 units/h over 24 h, then heparin to maintain aPTT of 60-80 s for 7 days) group. Patients were excluded from enrollment if they had not undergone a neurologic examination or cross-sectional brain imaging within the previous 4.5 h to rule out stroke and potential for hemorrhagic conversion. The primary outcome was Pao2 to Fio2 ratio improvement from baseline at 48 h after randomization. Secondary outcomes included Pao2 to Fio2 ratio improvement of > 50% or Pao2 to Fio2 ratio of ≥ 200 at 48 h (composite outcome), ventilator-free days (VFD), and mortality. Fifty patients were randomized: 17 in the control group and 19 in the tPA bolus group in phase 1 and eight in the control group and six in the tPA drip group in phase 2. No severe bleeding events occurred. In the tPA bolus group, the Pao2 to Fio2 ratio values were significantly (P < .017) higher than baseline at 6 through 168 h after randomization; the control group showed no significant improvements. Among patients receiving a tPA bolus, the percent change of Pao2 to Fio2 ratio at 48 h (16.9% control [interquartile range (IQR), -8.3% to 36.8%] vs 29.8% tPA bolus [IQR, 4.5%-88.7%]; P = .11), the composite outcome (11.8% vs 47.4%; P = .03), VFD (0.0 [IQR, 0.0-9.0] vs 12.0 [IQR, 0.0-19.0]; P = .11), and in-hospital mortality (41.2% vs 21.1%; P = .19) did not reach statistically significant differences when compared with those of control participants. The patients who received a tPA drip did not experience benefit. The combination of tPA bolus plus heparin is safe in severe COVID-19 respiratory failure. A phase 3 study is warranted given the improvements in oxygenation and promising observations in VFD and mortality. ClinicalTrials.gov; No.: NCT04357730; URL: www.clinicaltrials.gov.
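A small sketch of the trial's 48-hour composite oxygenation outcome as described above (PaO2/FiO2 improvement of more than 50% from baseline, or a ratio of at least 200 at 48 h). Helper names and example values are illustrative.

```python
# Minimal sketch of the composite oxygenation outcome described in the abstract.

def pf_ratio(pao2_mmhg: float, fio2_fraction: float) -> float:
    """PaO2/FiO2 ratio, with FiO2 expressed as a fraction (e.g., 0.6)."""
    return pao2_mmhg / fio2_fraction

def composite_outcome_met(pf_baseline: float, pf_48h: float) -> bool:
    """>50% improvement from baseline, or PaO2/FiO2 >= 200 at 48 h."""
    percent_change = (pf_48h - pf_baseline) / pf_baseline * 100
    return percent_change > 50 or pf_48h >= 200

print(composite_outcome_met(pf_baseline=90, pf_48h=140))   # True  (+55.6%)
print(composite_outcome_met(pf_baseline=120, pf_48h=150))  # False (+25%, and <200)
```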
DOI: 10.1186/s40560-023-00662-7
2023
Cited 11 times
Mechanical power and 30-day mortality in mechanically ventilated, critically ill patients with and without Coronavirus Disease-2019: a hospital registry study
Previous studies linked a high intensity of ventilation, measured as mechanical power, to mortality in patients suffering from "classic" ARDS. By contrast, mechanically ventilated patients with a diagnosis of COVID-19 may present with intact pulmonary mechanics while undergoing mechanical ventilation for longer periods of time. We investigated whether an association between higher mechanical power and mortality is modified by a diagnosis of COVID-19. This retrospective study included critically ill, adult patients who were mechanically ventilated for at least 24 h between March 2020 and December 2021 at a tertiary healthcare facility in Boston, Massachusetts. The primary exposure was median mechanical power during the first 24 h of mechanical ventilation, calculated using a previously validated formula. The primary outcome was 30-day mortality. As co-primary analysis, we investigated whether a diagnosis of COVID-19 modified the primary association. We further investigated the association between mechanical power and days being alive and ventilator free and effect modification of this by a diagnosis of COVID-19. Multivariable logistic regression, effect modification and negative binomial regression analyses, adjusted for baseline patient characteristics, severity of disease and in-hospital factors, were applied. A total of 1,737 mechanically ventilated patients were included, of whom 411 (23.7%) suffered from COVID-19; 509 (29.3%) died within 30 days. The median mechanical power during the first 24 h of ventilation was 19.3 [14.6-24.0] J/min in patients with and 13.2 [10.2-18.0] J/min in patients without COVID-19. A higher mechanical power was associated with 30-day mortality (ORadj 1.26 per 1-SD, 7.1 J/min increase; 95% CI 1.09-1.46; p = 0.002). Effect modification and interaction analysis did not support that this association was modified by a diagnosis of COVID-19 (95% CI, 0.81-1.38; p-for-interaction = 0.68). A higher mechanical power was associated with a lower number of days alive and ventilator free until day 28 (IRRadj 0.83 per 7.1 J/min increase; 95% CI 0.75-0.91; p < 0.001; adjusted risk difference −2.7 days per 7.1 J/min increase; 95% CI −4.1 to −1.3). A higher mechanical power is associated with elevated 30-day mortality. While patients with COVID-19 received mechanical ventilation with higher mechanical power, this association was independent of a concomitant diagnosis of COVID-19.
DOI: 10.1164/rccm.202307-1168oc
2024
Adjustments of Ventilator Parameters during Operating Room–to–ICU Transition and 28-Day Mortality
Rationale: Lung-protective mechanical ventilation strategies have been proven beneficial in the operating room (OR) and the ICU. However, differential practices in ventilator management persist, often resulting in adjustments of ventilator parameters when transitioning patients from the OR to the ICU. Objectives: To characterize patterns of ventilator adjustments during the transition of mechanically ventilated surgical patients from the OR to the ICU and assess their impact on 28-day mortality. Methods: Hospital registry study including patients undergoing general anesthesia with continued, controlled mechanical ventilation in the ICU between 2008 and 2022. Ventilator parameters were assessed 1 hour before and 6 hours after the transition. Measurements and Main Results: Of 2,103 patients, 212 (10.1%) died within 28 days. Upon OR-to-ICU transition, VT and driving pressure decreased (-1.1 ml/kg predicted body weight [IQR, -2.0 to -0.2]; P < 0.001; and -4.3 cm H2O [-8.2 to -1.2]; P < 0.001). Concomitantly, respiratory rates increased (+5.0 breaths/min [2.0 to 7.5]; P < 0.001), resulting overall in slightly higher mechanical power (MP) in the ICU (+0.7 J/min [-1.9 to 3.0]; P < 0.001). In adjusted analysis, increases in MP were associated with a higher 28-day mortality rate (adjusted odds ratio, 1.10; 95% confidence interval, 1.06-1.14; P < 0.001; adjusted risk difference, 0.7%; 95% confidence interval, 0.4-1.0, both per 1 J/min). Conclusion: During transition of mechanically ventilated patients from the OR to the ICU, ventilator adjustments resulting in higher MP were associated with a greater risk of 28-day mortality.
DOI: 10.1097/01.ccm.0000241159.18620.ab
2006
Cited 123 times
When is critical care medicine cost-effective? A systematic review of the cost-effectiveness literature*
Receiving care in an intensive care unit can greatly influence patients' survival and quality of life. Such treatments can, however, be extremely resource intensive. Therefore, it is increasingly important to understand the costs and consequences associated with interventions aimed at reducing mortality and morbidity of critically ill patients. Cost-effectiveness analyses (CEAs) have become increasingly common to aid decisions about the allocation of scarce healthcare resources.To identify published original CEAs presenting cost/quality-adjusted life year or cost/life-year ratios for treatments used in intensive care units, to summarize the results in an accessible format, and to identify areas in critical care medicine that merit further economic evaluation.We conducted a systematic search of the English-language literature for original CEAs of critical care interventions published from 1993 through 2003. We collected data on the target population, therapy or program, study results, analytic methods employed, and the cost-effectiveness ratios presented.We identified 19 CEAs published through 2003 with 48 cost-effectiveness ratios pertaining to treatment of severe sepsis, acute respiratory failure, and general critical care interventions. These ratios ranged from cost saving to 958,423 US dollars/quality-adjusted life year and from 1,150 to 575,054 US dollars/life year gained. Many studies reported favorable cost-effectiveness profiles (i.e., below 50,000 US dollars/life year or quality-adjusted life year).Specific interventions such as activated protein C for patients with severe sepsis have been shown to provide good value for money. However, overall there is a paucity of CEA literature on the management of the critically ill, and further high-quality CEA is needed. In particular, research should focus on costly interventions such as 24-hr intensivist availability, early goal-directed therapy, and renal replacement therapy. Recent guidelines for the conduct of CEAs in critical care may increase the number and improve the quality of future CEAs.
DOI: 10.1097/ccm.0b013e318168f649
2008
Cited 112 times
The costs and cost-effectiveness of an integrated sepsis treatment protocol
Sepsis is associated with high mortality and treatment costs. International guidelines recommend the implementation of integrated sepsis protocols; however, the true cost and cost-effectiveness of these are unknown.To assess the cost-effectiveness of an integrated sepsis protocol, as compared with conventional care.Prospective cohort study of consecutive patients presenting with septic shock and enrolled in the institution's integrated sepsis protocol. Clinical and economic outcomes were compared with a historical control cohort.Beth Israel Deaconess Medical Center.Overall, 79 patients presenting to the emergency department with septic shock in the treatment cohort and 51 patients in the control group.An integrated sepsis treatment protocol incorporating empirical antibiotics, early goal-directed therapy, intensive insulin therapy, lung-protective ventilation, and consideration for drotrecogin alfa and steroid therapy.In-hospital treatment costs were collected using the hospital's detailed accounting system. The cost-effectiveness analysis was performed from the perspective of the healthcare system using a lifetime horizon. The primary end point for the cost-effectiveness analysis was the incremental cost per quality-adjusted life year gained.Mortality in the treatment group was 20.3% vs. 29.4% in the control group (p = .23). Implementing an integrated sepsis protocol resulted in a mean increase in cost of approximately $8,800 per patient, largely driven by increased intensive care unit length of stay. Life expectancy and quality-adjusted life years were higher in the treatment group; 0.78 and 0.54, respectively. The protocol was associated with an incremental cost of $11,274 per life-year saved and a cost of $16,309 per quality-adjusted life year gained.In patients with septic shock, an integrated sepsis protocol, although not cost-saving, appears to be cost-effective and compares very favorably to other commonly delivered acute care interventions.
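The ratios reported above follow the usual incremental cost-effectiveness calculation (incremental cost divided by incremental effectiveness). The sketch below reproduces that arithmetic with the rounded ~$8,800 incremental cost, so the outputs only approximate the published $11,274 per life-year and $16,309 per QALY; names are illustrative.

```python
# Minimal sketch of the incremental cost-effectiveness ratio (ICER) arithmetic.

def icer(delta_cost: float, delta_effect: float) -> float:
    """ICER = incremental cost / incremental effectiveness."""
    return delta_cost / delta_effect

print(round(icer(8800, 0.78)))  # 11282 -> roughly $11,300 per life-year gained
print(round(icer(8800, 0.54)))  # 16296 -> roughly $16,300 per QALY gained
```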
DOI: 10.1097/01.ccm.0000262385.95721.cc
2007
Cited 111 times
Simple triage scoring system predicting death and the need for critical care resources for use during epidemics
Objectives: In the event of pandemic influenza, the number of critically ill victims will likely overwhelm critical care capacity. To date, no standardized method for allocating scarce resources when the number of patients in need far exceeds capacity exists. We sought to derive and validate such a triage scheme. Design: Retrospective analysis of prospectively collected data. Setting: Emergency departments of two urban tertiary care hospitals. Patients: Three separate cohorts of emergency department patients with suspected infection, comprising a total of 5,133 patients. Interventions: None. Measurements: A triage decision rule for use in an epidemic was developed using only those vital signs and patient characteristics that were readily available at initial presentation to the emergency department. The triage schema was derived from a cohort at center 1, validated on a second cohort from center 1, and then validated on a third cohort of patients from center 2. The primary outcome for the analysis was in-hospital mortality. Secondary outcomes were intensive care unit admission and use of mechanical ventilation. Main Results: Multiple logistic regression demonstrated the following as independent predictors of death: a) age of >65 yrs, b) altered mental status, c) respiratory rate of >30 breaths/min, d) low oxygen saturation, and e) shock index of >1 (heart rate > blood pressure). This model had an area under the receiver operating characteristic curve of 0.80 in the derivation set and 0.74 and 0.76 in the validation sets. When converted to a simple rule assigning 1 point per covariate, the discrimination of the model remained essentially unchanged. The model was equally effective at predicting need for intensive care unit admission and mechanical ventilation. Conclusions: If, as expected, patient demand far exceeds the capability to provide critical care services in an epidemic, a fair and just system to allocate limited resources will be essential. The triage rule we have developed can serve as an initial guide for such a process.
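A minimal sketch of the 1-point-per-covariate triage rule described above. The abstract does not state the cutoff for "low oxygen saturation", so that threshold is an explicit, assumed parameter here; all names and example values are illustrative.

```python
# Minimal sketch of the simple epidemic triage score (1 point per predictor).

def triage_score(age_years: float, altered_mental_status: bool,
                 resp_rate: float, spo2_percent: float,
                 heart_rate: float, systolic_bp: float,
                 low_spo2_threshold: float = 90.0) -> int:
    score = 0
    score += int(age_years > 65)                     # age > 65 yr
    score += int(altered_mental_status)              # altered mental status
    score += int(resp_rate > 30)                     # RR > 30 breaths/min
    score += int(spo2_percent < low_spo2_threshold)  # "low" saturation (assumed cutoff)
    score += int(heart_rate > systolic_bp)           # shock index > 1
    return score

print(triage_score(72, False, 34, 88, 118, 100))  # 4
```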
DOI: 10.1378/chest.06-1247
2007
Cited 110 times
Diagnostic and Prognostic Utility of Brain Natriuretic Peptide in Subjects Admitted to the ICU With Hypoxic Respiratory Failure Due to Noncardiogenic and Cardiogenic Pulmonary Edema
Background Brain natriuretic peptide (BNP) is useful in diagnosing congestive heart failure (CHF) in patients presenting in the emergency department with acute dyspnea. We prospectively tested the utility of BNP for discriminating ARDS vs cardiogenic pulmonary edema (CPE). Methods We enrolled ICU patients with acute hypoxemic respiratory failure and bilateral pulmonary infiltrates who were undergoing right-heart catheterization (RHC) to aid in diagnosis. Patients with acute coronary syndrome, end-stage renal disease, recent coronary artery bypass graft surgery, or preexisting left ventricular ejection fraction ≤ 30% were excluded. BNP was measured at RHC. Two intensivists independently reviewed the records to determine the final diagnosis. Results Eighty patients were enrolled. Median BNP was 325 pg/mL (interquartile range [IQR], 82 to 767 pg/mL) in acute lung injury/ARDS patients, vs 1,260 pg/mL (IQR, 541 to 2,020 pg/mL) in CPE patients (p = 0.0001). The correlation between BNP and pulmonary capillary wedge pressure was modest (r = 0.27, p = 0.02). BNP offered good discriminatory performance for the final diagnosis (C-statistic, 0.80). At a cut point ≤ 200 pg/mL, BNP provided specificity of 91% for ARDS. At a cut point ≥ 1,200 pg/mL, BNP had a specificity of 92% for CPE. Higher levels of BNP were associated with a decreased odds for ARDS (odds ratio, 0.4 per log increase; p = 0.007) after adjustment for age, history of CHF, and right atrial pressure. BNP was associated with in-hospital mortality (p = 0.03) irrespective of the final diagnosis and independent of APACHE (acute physiology and chronic health evaluation) II score. Conclusion In ICU patients with hypoxemic respiratory failure, BNP appears useful in excluding CPE and identifying patients with a high probability of ARDS, and was associated with mortality in patients with both ARDS and CPE. Larger studies are necessary to validate these findings.
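Purely as an illustration of the reported cut points (BNP ≤ 200 pg/mL highly specific for ARDS, ≥ 1,200 pg/mL highly specific for cardiogenic pulmonary edema, values in between indeterminate), a small sketch follows; it is not a validated diagnostic rule and the labels are illustrative.

```python
# Minimal sketch applying the reported BNP cut points; illustrative only.

def interpret_bnp(bnp_pg_ml: float) -> str:
    if bnp_pg_ml <= 200:
        return "favors ARDS (CPE unlikely)"
    if bnp_pg_ml >= 1200:
        return "favors cardiogenic pulmonary edema"
    return "indeterminate"

print(interpret_bnp(150))   # favors ARDS (CPE unlikely)
print(interpret_bnp(1500))  # favors cardiogenic pulmonary edema
```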
DOI: 10.1213/01.ane.0000147705.94738.31
2005
Cited 102 times
Perioperative and Long-Term Morbidity and Mortality After Above-Knee and Below-Knee Amputations in Diabetics and Nondiabetics
In Brief We performed a retrospective review of a vascular surgery quality assurance database to evaluate the perioperative and long-term morbidity and mortality of above-knee amputations (AKA, n = 234) and below-knee amputations (BKA, n = 720) and to examine the effect of diabetes mellitus (DM) (181 of AKA and 606 of BKA patients). All patients in the database who had AKA or BKA from 1990 to May 2001 were included in the study. Perioperative 30-day cardiac morbidity and mortality and 3-yr and 10-yr mortality after AKA or BKA were assessed. The effect of DM on 30-day cardiac outcome was assessed by multivariate logistic regression and the effect on long-term survival was assessed by Cox regression analysis. The perioperative cardiac event rate (cardiac death or nonfatal myocardial infarction) was at least 6.8% after AKA and at most 3.6% after BKA. Median survival was significantly less after AKA (20 mo) than BKA (52 mo) (P < 0.001). DM was not a significant predictor of perioperative 30-day mortality (odds ratio, 0.76 [0.39–1.49]; P = 0.43) or 3-yr survival (Hazard ratio, 1.03 [0.86–1.24]; P = 0.72) but predicted 10-yr mortality (Hazard ratio, 1.34 [1.04–1.73]; P = 0.026). Significant predictors of the 30-day perioperative mortality were the site of amputation (odds ratio, 4.35 [2.56–7.14]; P < 0.001) and history of renal insufficiency (odds ratio, 2.15 [1.13–4.08]; P = 0.019). AKA should be triaged as a high-risk surgery while BKA is an intermediate-risk surgery. Long-term survival after AKA or BKA is poor, regardless of the presence of DM. IMPLICATIONS: Above-knee amputations are a high-risk surgery, whereas below-knee amputations are an intermediate-risk surgery, as defined by the American College of Cardiology/American Heart Association guidelines. Their perioperative mortality is independent of diabetes mellitus but is predicted by renal insufficiency and the site of amputation.
DOI: 10.1097/ccm.0b013e318225757f
2011
Cited 95 times
Association of prehospitalization aspirin therapy and acute lung injury: Results of a multicenter international observational study of at-risk patients*
To evaluate the association between prehospitalization aspirin therapy and incident acute lung injury in a heterogeneous cohort of at-risk medical patients.This is a secondary analysis of a prospective multicenter international cohort investigation.Multicenter observational study including 20 US hospitals and two hospitals in Turkey.Consecutive, adult, nonsurgical patients admitted to the hospital with at least one major risk factor for acute lung injury.None.Baseline characteristics and acute lung injury risk factors/modifiers were identified. The presence of aspirin therapy and the propensity to receive this therapy were determined. The primary outcome was acute lung injury during hospitalization. Secondary outcomes included intensive care unit and hospital mortality and intensive care unit and hospital length of stay. Twenty-two hospitals enrolled 3855 at-risk patients over a 6-month period. Nine hundred seventy-six (25.3%) were receiving aspirin at the time of hospitalization. Two hundred forty (6.2%) patients developed acute lung injury. Univariate analysis noted a reduced incidence of acute lung injury in those receiving aspirin therapy (odds ratio [OR], 0.65; 95% confidence interval [CI], 0.46-0.90; p = .010). This association was attenuated in a stratified analysis based on deciles of aspirin propensity scores (Cochran-Mantel-Haenszel pooled OR, 0.70; 95% CI, 0.48-1.03; p = .072).After adjusting for the propensity to receive aspirin therapy, no statistically significant associations between prehospitalization aspirin therapy and acute lung injury were identified; however, a prospective clinical trial to further evaluate this association appears warranted.
DOI: 10.1097/aln.0b013e3181a1005b
2009
Cited 94 times
Continuous Perioperative Insulin Infusion Decreases Major Cardiovascular Events in Patients Undergoing Vascular Surgery
A growing body of evidence suggests that hyperglycemia is an independent predictor of increased cardiovascular risk. Aggressive glycemic control in the intensive care decreases mortality. The benefit of glycemic control in noncardiac surgery is unknown.In a single-center, prospective, unblinded, active-control study, 236 patients were randomly assigned to continuous insulin infusion (target glucose 100-150 mg/dl) or to a standard intermittent insulin bolus (treat glucose > 150 mg/dl) in patients undergoing peripheral vascular bypass, abdominal aortic aneurysm repair, or below- or above-knee amputation. The treatments began at the start of surgery and continued for 48 h. The primary endpoint was a composite of all-cause death, myocardial infarction, and acute congestive heart failure. The secondary endpoints were blood glucose concentrations, rates of hypoglycemia (< 60 mg/dl) and hyperglycemia (> 150 mg/dl), graft failure or reintervention, wound infection, acute renal insufficiency, and duration of stay.The groups were well balanced for baseline characteristics, except for older age in the intervention group. There was a significant reduction in primary endpoint (3.5%) in the intervention group compared with the control group (12.3%) (relative risk, 0.29; 95% confidence interval, 0.10-0.83; P = 0.013). The secondary endpoints were similar. Hypoglycemia occurred in 8.8% of the intervention group compared with 4.1% of the control group (P = 0.14). Multivariate analysis demonstrated that continuous insulin infusion was a negative independent predictor (odds ratio, 0.28; 95% confidence interval, 0.09-0.87; P = 0.027), whereas previous coronary artery disease was a positive predictor of adverse events.Continuous insulin infusion reduces perioperative myocardial infarction after vascular surgery.
DOI: 10.1378/chest.07-2690
2008
Cited 90 times
Definitive Care for the Critically Ill During a Disaster: A Framework for Optimizing Critical Care Surge Capacity
Background: Plausible disasters may yield hundreds or thousands of critically ill victims. However, most countries, including those with widely available critical care services, lack sufficient specialized staff, medical equipment, and ICU space to provide timely, usual critical care for a large influx of additional patients. Shifting critical care disaster preparedness efforts to augment limited, essential critical care (emergency mass critical care [EMCC]), rather than to marginally increase unrestricted, individual-focused critical care may provide many additional people with access to life-sustaining interventions. In 2007, in response to the increasing concern over a severe influenza pandemic, the Task Force on Mass Critical Care (hereafter called the Task Force) convened to suggest the essential critical care therapeutics and interventions for EMCC. Task Force suggestions: EMCC should include the following: (1) mechanical ventilation, (2) IV fluid resuscitation, (3) vasopressor administration, (4) medication administration for specific disease states (eg, antimicrobials and antidotes), (5) sedation and analgesia, and (6) select practices to reduce adverse consequences of critical illness and critical care delivery. Also, all hospitals with ICUs should prepare to deliver EMCC for a daily critical care census at three times their usual ICU capacity for up to 10 days. Discussion: By using the Task Force suggestions for EMCC, communities may better prepare to deliver augmented critical care in response to disasters. In light of current mass critical care data limitations, the Task Force suggestions were developed to guide preparedness but are not intended as strict policy mandates. Additional research is required to evaluate EMCC and revise the strategy as warranted.
The severe acute respiratory syndrome epidemic of 2002–2003, recent natural disasters, burgeoning concern for intentional catastrophes, and the looming threat of a severe influenza pandemic have stimulated much recent debate about how to care for a surge of critically ill people.1Hick JL O'Laughlin DT Concept of operations for triage of mechanical ventilation in an epidemic.Acad Emerg Med. 2006; 13: 223-229Crossref PubMed Google Scholar2Gomersall CD Tai DY Loo S et al.Expanding ICU facilities in an epidemic: recommendations based on experience from the SARS epidemic in Hong Kong and Singapore.Intensive Care Med. 2006; 32: 1004-1013Crossref PubMed Scopus (68) Google Scholar3Roccaforte JD Cushman JG Disaster preparation and management for the intensive care unit.Curr Opin Crit Care. 2002; 8: 607-615Crossref PubMed Scopus (20) Google Scholar4Rubinson L Nuzzo JB Talmor DS et al.Augmentation of hospital critical care capacity after bioterrorist attacks or epidemics: recommendations of the Working Group on Emergency Mass Critical Care.Crit Care Med. 2005; 33: 2393-2403Crossref PubMed Scopus (146) Google Scholar5Booth CM Stewart TE Severe acute respiratory syndrome and critical care medicine: the Toronto experience.Crit Care Med. 2005; 33: S53-S60Crossref PubMed Scopus (57) Google Scholar6Rubinson L O'Toole T Critical care during epidemics.Crit Care. 2005; 9: 311-313Crossref PubMed Scopus (29) Google Scholar7Sariego J CCATT: a Military Model for Civilian Disaster Management.Disaster Manag Response. 2006; 4: 114-117Abstract Full Text Full Text PDF PubMed Scopus (26) Google Scholar8Farmer JC Carlton Jr, PK Providing critical care during a disaster: the interface between disaster response agencies and hospitals.Crit Care Med. 2006; 34: S56-S59Crossref PubMed Scopus (56) Google Scholar9Hawryluck L Lapinsky SE Stewart TE Clinical review: SARS; lessons in disaster management.Crit Care. 2005; 9: 384-389Crossref PubMed Scopus (53) Google Scholar10Anderson TA Hart GK Kainer MA Pandemic influenza-implications for critical care resources in Australia and New Zealand.J Crit Care. 2003; 18: 173-180Abstract Full Text Full Text PDF PubMed Scopus (26) Google Scholar11Menon DK Taylor BL Ridley SA Modelling the impact of an influenza pandemic on critical care services in England.Anaesthesia. 2005; 60: 952-954Crossref PubMed Scopus (57) Google Scholar12Kvetan V Critical care medicine, terrorism and disasters: are we ready?.Crit Care Med. 1999; 27: 873-874Crossref PubMed Scopus (7) Google Scholar Most countries, though, including those with widely available critical care services and investment in disaster preparedness, lack sufficient specialized staff, medical equipment, and ICU space to provide timely, usual critical care for a large influx of additional patients (see "Definitive Care for the Critically Ill During a Disaster: Current Capabilities and Limitations"). If a disaster yielded hundreds or thousands of critically ill victims, only a handful of people would be likely to have access to usual critical care services. The remaining victims might receive chaotically assigned therapies or even have to forgo critical care entirely. Provision of essential rather than limitless critical care will be needed to allow many additional community members to have access to key life-sustaining interventions during disasters. 
This is one of several documents prepared by the Task Force for Mass Critical Care (hereafter referred to as the Task Force) [see the Executive Summary, "Summary of Suggestions From the Task Force on Mass Casualty Critical Care Summit"]. This document suggests a key set of critical care therapeutics and interventions for responding to mass critical illness. Additionally, this document offers benchmarks for critical care surge capacity, a general approach to optimizing resource availability, and criteria for when to use essential rather than usual critical care in response to disasters. The Task Force convened to update and further develop emergency mass critical care (EMCC), a conceptual framework for critical care surge capacity first put forth in 2005.4Rubinson L Nuzzo JB Talmor DS et al.Augmentation of hospital critical care capacity after bioterrorist attacks or epidemics: recommendations of the Working Group on Emergency Mass Critical Care.Crit Care Med. 2005; 33: 2393-2403Crossref PubMed Scopus (146) Google Scholar Mass critical care events require a transition from individual patient-focused critical care to a population-oriented approach intended to provide the best possible outcomes for a large cohort of critical care patients. EMCC was developed as a framework for such a transition. EMCC is a set of changes from everyday critical care staffing, medical equipment, and treatment spaces (Table 1),4Rubinson L Nuzzo JB Talmor DS et al.Augmentation of hospital critical care capacity after bioterrorist attacks or epidemics: recommendations of the Working Group on Emergency Mass Critical Care.Crit Care Med. 2005; 33: 2393-2403Crossref PubMed Scopus (146) Google Scholar which were developed to maximize survival for the overall critically ill population in need and, at the same time, to minimize the adverse outcomes that might occur as a result of changes in usual practice.13Phillips SJ Knebel A Providing mass medical care with scarce resources: a community planning guide. 2006; (Agency for Healthcare Research and Quality. Washington DC:)Google Scholar Still, some individual patients may have worse outcomes when receiving EMCC instead of usual critical care services. Hence, EMCC should be used only for disasters when numbers of critically ill patients far surpass the capability of traditional, available critical care capacity. In other words, EMCC should be considered for disasters when, without modifying usual critical care practices, shortfalls in capacity will lead to many victims being expected to die with random, limited, or no access to potentially life-sustaining critical care interventions.Table 1.Original 2005 Recommendations for Hospital Planning and Response for EMCC*2005 Working Group on Emergency Mass Critical Care. 
Adapted from Rubinson et al, Crit Care Med 2005; 33: 2393–2403 (2005 Working Group on Emergency Mass Critical Care); PPE = personal protective equipment.

Modifying usual standards of care: Hospitals develop a set of EMCC practices that could be implemented in the event that the critical care capacity of the hospital is exceeded.

Decisions regarding which critical care interventions should be provided (essential elements of critical care): To ensure the availability of essential critical care interventions, the Working Group recommends that hospitals give priority to interventions that (1) have been shown, or are deemed by critical care expert best professional judgment, to improve survival, and without which death is likely; (2) do not require extraordinarily expensive equipment; and (3) can be implemented without consuming extensive staff or hospital resources. Hospitals should plan to be able to deliver the following during EMCC: basic mode(s) of mechanical ventilation, hemodynamic support, antibiotic or other disease-specific countermeasure therapy, and a small set of prophylactic interventions recognized to reduce the serious adverse consequences of critical illness. Hospitals should plan to be able to administer IV fluid resuscitation and vasopressors to large numbers of hemodynamically unstable victims and to stockpile sufficient equipment to do this, without relying on external resources, for at least the first 48 h of the hospital medical response. Hospitals should plan to provide at least two widely accepted prophylactic interventions used every day in critical care: (1) maintaining the head of a mechanically ventilated patient's bed at a 45° angle to prevent ventilator-associated pneumonia, and (2) thromboembolism prophylaxis.

Decisions regarding who receives critical care services: If hospital resources are limited and many critically ill patients are in need, triage decisions regarding the provision of critical care should be guided by the principle of seeking to help the greatest number of people survive the crisis. This includes patients already receiving ICU care who are not casualties of an attack.

Who should provide EMCC? In the event that critical care needs in a hospital cannot be met by intensivists and critical care nurses, usual ICU staffing should be modified to include nonintensivist clinicians and non-critical care nurses, using a two-tiered staffing model. When there are inadequate numbers of intensivists, hospitals should plan for nonintensivists to manage approximately six critically ill patients each and for intensivists to coordinate the efforts of up to four nonintensivists. If a hospital has insufficient numbers of critical care nurses to appropriately manage patients, non-critical care nurses should be assigned primary responsibility for patient assessment, nursing care documentation, administration of medications, and bedside care (eg, head of bed at 45°, repositioning patients to prevent pressure ulcers), and critical care nurses should advise non-critical care nurses on critical care issues such as vasopressor and sedation administration. If possible, a non-critical care nurse should be assigned no more than two critically ill patients, and up to three non-critical care nurses would work in collaboration with one critical care nurse. Bioterrorism training for non-critical care practitioners should include basic principles of critical care management.

Infection control for EMCC: Hospitals should develop pre-event plans to augment usual or modified airborne infection isolation capacity for critically ill victims of a bioattack with a contagious pathogen. Hospitals should stockpile enough PPE to care for mass casualties of a bioterrorist attack for up to 48 h. All hospital clinical staff should also receive initial and periodic training on principles of health-care delivery using PPE.

Where should EMCC be located? When traditional critical care capacity is full, additional critically ill patients should receive care in non-ICU hospital rooms concentrated on specific hospital wards or floors. Hospitals should plan to be able to measure oxygen saturation, temperature, BP, and urine output for the victims of bioattacks in EMCC conditions.

Learning during EMCC: Hospitals should have information technology capabilities for analyzing clinical data from patients receiving EMCC and for quickly sharing new observations with the broader clinical community.

What medications are needed for EMCC? Hospitals should develop a list of drugs to stockpile for up to a 48-h response to a mass casualty event, using selection criteria that include the likelihood that the drug would be required for the care of most patients; efficacy that is proven or generally accepted by most practitioners; cost; ease of administration; the ability to rotate the drug into the hospital formulary prior to expiration; and the resources required for medication storage.

Given the increasing concern for an influenza pandemic, Task Force suggestions were developed with specific consideration of the anticipated circumstances of a severe pandemic. Nonetheless, the Task Force intends EMCC to be applicable to all hazards causing moderate or large surges in critically ill patients, as well as to those that compromise existing critical care infrastructure (see "Definitive Care for the Critically Ill During a Disaster: Current Capabilities and Limitations"). Even when additional specialized interventions (eg, burn care or renal replacement therapy for crush syndrome) are required (Table 2) (Disaster management and the ABA plan, J Burn Care Rehabil 2005; 26: 102–106; Lameire et al, Adv Ren Replace Ther 2003; 10: 93–99; Sever et al, N Engl J Med 2006; 354: 1052–1063; Sever et al, Kidney Int 2002; 62: 2264–2271; Haberal, Burns 2006; 32: 933–939), EMCC is still appropriate for the general, supportive critical care foundation these patients will need.

Table 2. Task Force-Suggested Additions to EMCC

Capability goals: Every hospital with an ICU should plan and prepare to provide EMCC, and should do so in coordination with regional hospital planning efforts. Hospitals with ICUs should plan and prepare to provide EMCC every day of the response for a critically ill patient census of at least 300% of usual ICU capacity. Hospitals should prepare to deliver EMCC for 10 days without sufficient external assistance.

Critical care therapeutics and interventions: EMCC should include mechanical ventilation; IV fluid resuscitation; vasopressor administration; antidote or antimicrobial administration for specific disease processes, if applicable; sedation and analgesia; strategies to reduce adverse consequences of critical care and critical illness; and optimal therapeutics and interventions, such as renal replacement therapy and nutrition for patients unable to take food by mouth, if warranted by hospital or regional preference. Hospitals should have an additional 30% of disposable equipment available for EMCC to account for patient turnover (death, or improvement such that critical care is no longer required) during the 10-day response.

Initiation and cessation: All communities should develop a graded response plan for events across the spectrum from multiple-casualty to catastrophic critical care events. These plans should clearly delineate what levels of modification of critical care practices are expected for the different surge requirements. Use of EMCC should be restricted to overwhelming mass critical care events.

The Task Force believes that all critical care centers should be committed to preparing for and responding to disasters. EMCC planning and implementation, though, cannot occur in isolation from the rest of a hospital's preparedness and response efforts. Individual hospitals, too, are cautioned against preparing in isolation and are encouraged to coordinate with other local health-care entities, because resource and planning obligations can be met more efficiently when shared among all local health-care institutions (health-care coalition; Barbera and Macintyre, Medical Surge Capacity and Capability, CNA Corporation, 2004). For this article, health-care coalition refers to an organization that coordinates local health-care entities; for communities without formal coalition organizations, the reader should consider the term to refer to the loosely organized local health-care system entities together with the local public health organization. Critical care providers should therefore work with both hospital and coalition partners to ensure that critical care services are considered for, and integrated into, planning for health-care system surge capacity. This coordination of preparedness activities will allow for uniform implementation of altered critical care processes by all hospitals, when warranted during a disaster (Rubinson et al, Crit Care Med 2005; 33: 2393–2403).
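The capability goals and staffing ratios above reduce to simple arithmetic. The following sketch (not part of the article) applies the Task Force figures (a census of 300% of usual ICU capacity for 10 days, an extra 30% of disposable equipment for patient turnover, roughly six patients per nonintensivist with one intensivist coordinating up to four nonintensivists, and no more than two patients per non-critical care nurse with one critical care nurse collaborating with up to three of them) to a hypothetical hospital; the 20-bed baseline is an assumption chosen only for illustration.

```python
import math

def emcc_plan(usual_icu_beds: int) -> dict:
    """Rough EMCC planning arithmetic using the Task Force figures quoted above.

    The baseline bed count passed in is purely illustrative; real plans would
    also account for shift rotation, staff illness, and regional coordination.
    """
    surge_census = 3 * usual_icu_beds                 # 300% of usual ICU capacity
    disposable_sets = math.ceil(1.3 * surge_census)   # +30% disposables for turnover over 10 days

    # Two-tiered physician staffing: ~6 patients per nonintensivist,
    # 1 intensivist coordinating up to 4 nonintensivists.
    nonintensivists = math.ceil(surge_census / 6)
    intensivists = math.ceil(nonintensivists / 4)

    # Two-tiered nursing: <=2 patients per non-critical care nurse,
    # 1 critical care nurse collaborating with up to 3 non-critical care nurses.
    non_cc_nurses = math.ceil(surge_census / 2)
    cc_nurses = math.ceil(non_cc_nurses / 3)

    return {
        "surge_census": surge_census,
        "disposable_equipment_sets": disposable_sets,
        "nonintensivists_on_duty": nonintensivists,
        "intensivists_on_duty": intensivists,
        "non_critical_care_nurses_on_duty": non_cc_nurses,
        "critical_care_nurses_on_duty": cc_nurses,
    }

if __name__ == "__main__":
    # Hypothetical 20-bed ICU: 60 surge patients, 78 disposable equipment sets,
    # 10 nonintensivists coordinated by 3 intensivists, and 30 non-critical care
    # nurses working with 10 critical care nurses at any one time.
    print(emcc_plan(20))
```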
Hospitals cannot be expected to prepare for endless quantities of critically ill patients, so critical care surge capacity benchmarks must be defined; guidance to date, however, has remained elusive. Loosely derived benchmarks for mass casualty surge capacity have been promulgated previously (eg, triage, treat, and initially stabilize 500 victims with an infectious disease per 1 million people) (Closing the Seams: Developing an Integrated Approach to Health System Disaster Preparedness, PricewaterhouseCoopers, www.astho.org/pubs/ClosingtheSeams.pdf?PHPSESSID-bafb6ad; Department of Health and Human Services, Interim Public Health and Healthcare Supplement to the National Preparedness Goal, http://www.hhs.gov/aspr/opeo/documents/npgs.html), but they lack enough detail to translate into critical care surge capacity goals. Scientifically rigorous derivation of the benchmarks is desirable. The Task Force spoke with a number of modeling experts to see whether accurate surge capacity goals could be developed across the range of plausible mass critical care events (eg, earthquakes, epidemics, chemical exposures). Owing to the limited historical data for such events and the numerous imprecise assumptions within the models, the Task Force was informed that current uncertainties prevent even sophisticated models from confidently predicting critical care capacity goals. Even a publicly available tool such as FluSurge, which can be used to predict critical care needs for an influenza pandemic, is fraught with limitations (Zhang et al, Med Decis Making 2006; 26: 617–623). If influenza A (H5N1) becomes the strain that causes the next pandemic (Abdel-Ghafar et al, N Engl J Med 2008; 358: 261–273), uncertainties regarding virulence once human-to-human transmission is sustained, the response to antivirals (Yen et al, J Virol 2007; 81: 12418–12426; Hayden, N Engl J Med 2006; 354: 785–788), the timeliness and effectiveness of a vaccine (McKenna, http://www.cidrap.umn.edu/cidrap/content/influenza/panflu/news/nov1507panvax.html), and the impact of community mitigation (Markel et al, JAMA 2007; 298: 644–654; Barry, JAMA 2007; 298: 2260–2261) all make estimating critical care need very difficult.
Furthermore, the lack of a severe influenza pandemic since modern critical care became available limits the accuracy of extrapolating historical clinical descriptions to anticipated clinical resource requirements for the next pandemic (Zhang et al, Med Decis Making 2006; 26: 617–623). Thus, the Task Force believes that derivation of capacity goals from current models, no matter how sophisticated, offers no more defensible estimates than benchmarks derived empirically by expert consensus. A 100% increase in critical care capacity was considered by the Task Force to be insufficient for most regions to provide adequate regional critical care surge capacity for the major national planning scenarios (from the US Department of Homeland Security) likely to cause mass critical illness (National Planning Scenarios, http://media.washingtonpost.com/wp-srv/nation/nationalsecurity/earlywarning/NationalPlanningScenariosApril2005.pdf). At the same time, it seemed unrealistic to the Task Force to expect most or all of the 3,600 to 4,440 US nonfederal hospitals with an ICU to comply with threefold or fourfold increases above baseline regional capacity (Haupt et al, Crit Care Med 2003; 31: 2677–2683; Halpern et al, Crit Care Med 2004; 32: 1254–1259; Angus et al, Crit Care Med 2006; 34: 1016–1024; Critical Care Units: A Descriptive Analysis, Society of Critical Care Medicine, 2005) (see "Definitive Care for the Critically Ill During a Disaster: Current Capabilities and Limitations"). In light of current uncertainties, the Task Force capacity benchmarks are intended as suggestions for consideration rather than strict policy mandates, and the Task Force encourages future development of formal, quantitative methods for accurately determining critical care surge capacity goals. If such future methods rest on well-considered assumptions and rigorous data, the Task Force suggests that the resulting goals should supersede the current suggestions. Additional critical care capacity above the suggested benchmark may be required in geographic regions that (1) are at high risk for mass critical care events; (2) at baseline have inadequate numbers of ICU beds for the population of their catchment area; or (3) are remote. In such regions, the increased capacity should similarly be accomplished through a health-care coalition when possible.
Previously, national panels had recommended that hospitals plan to respond to disasters without federal medical assistance for up to 3 days (Joint Commission on Accreditation of Healthcare Organizations, Healthcare at the Crossroads, www.jointcommission.org/NR/rdonlyres/9C8DE572-5D7A-4F28-AB84-3741EC82AF98/0/emergency_preparedness.pdf). Events anticipated to cause mass critical illness, however, are likely to extend the time to arrival of sufficient external medical assistance or to completion of medical evacuation. When assistance does arrive, the immediate benefits for critically ill victims may still be inadequate, because most deployable North American medical assets are not designed, staffed, or equipped for large-scale critical care response (Rubinson et al, Crit Care Med 2005; 33: 2393–2403; Franco et al, Biosecur Bioterror 2007; 5: 319–326). Additionally, medical evacuation capacity for critically ill patients is much smaller than for noncritical patients and is insufficient to immediately meet large critical care demands (Toner et al, Biosecur Bioterror 2007; 5: 192–193; Atlas and Database of Air Medical Services, http://www.adamsairmed.org/pubs/AMTC07_poster.pdf) (see "Definitive Care for the Critically Ill During a Disaster: Current Capabilities and Limitations"). Hence, hospitals should anticipate having to care for the critically ill longer than for other patients because of the challenges of large-scale critical care evacuation. These concerns are not merely theoretical: Charity Hospital in New Orleans had to improvise care for days before complete evacuation of its critically ill patients in the wake of Hurricane Katrina (deBoisblanc, Am J Respir Crit Care Med 2005; 172: 1239–1240). The suggestion of a 10-day period is intended to ensure that life-sustaining care can be maintained throughout the entire period until rescue is completed. The Task Force believes that 10 days is a reasonable timeframe because victims' critical care needs are not expected to resolve rapidly in most scenarios (see "Definitive Care for the Critically Ill During a Disaster: Current Capabilities and Limitations"). Clinical syndromes similar to those anticipated for mass critical care (eg, ARDS) generally require critical care management for more than 1 week (Rubenfeld et al, N Engl J Med 2005; 353: 1685–1693; Esteban et al, JAMA 2002; 287: 345–355). Of course, a severe influenza pandemic wave may last much longer than 10 days in a community (Barry, JAMA 2007; 298: 2260–2261).
DOI: 10.1097/ccm.0b013e3182a66903
2014
Cited 79 times
When Policy Gets It Right
Objective: The Centers for Disease Control has recently proposed a major change in how ventilator-associated pneumonia is defined. This has profound implications for public reporting, reimbursement, and accountability measures for ICUs. We sought to provide evidence for or against this change by quantifying limitations of the national definition of ventilator-associated pneumonia that was in place until January 2013, particularly with regard to comparisons between, and ranking of, hospitals and ICUs. Design: A prospective survey of a nationally representative group of 43 hospitals, randomly selected from the American Hospital Association Guide (2009). Subjects classified six standardized vignettes of possible cases of ventilator-associated pneumonia as pneumonia or no pneumonia. Subjects: Individuals responsible for ventilator-associated pneumonia surveillance at 43 U.S. hospitals. Interventions: None. Measurements and Main Results: We measured the proportion of standardized cases classified as ventilator-associated pneumonia. Of 138 hospitals consented, 61 partially completed the survey and 43 fully completed the survey (response rate 44% and 31%, respectively). Agreement among hospitals about classification of cases as ventilator-associated pneumonia/not ventilator-associated pneumonia was nearly random (Fleiss κ 0.13). Some hospitals rated 0% of cases as having pneumonia; others classified 100% as having pneumonia (median, 50%; interquartile range, 33–66%). Although region of the country did not predict case assignment, respondents who described their region as “rural” were more likely to judge a case to be pneumonia than respondents elsewhere (relative risk, 1.25, Kruskal-Wallis chi-square, p = 0.03). Conclusions: In this nationally representative study of hospitals, assignment of ventilator-associated pneumonia is extremely variable, enough to render comparisons between hospitals worthless, even when standardized cases eliminate variability in clinical data abstraction. The magnitude of this variability highlights the limitations of using poorly performing surveillance definitions as methods of hospital evaluation and comparison, and our study provides very strong support for moving to a more objective definition of ventilator-associated complications.
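The headline statistic in this survey is Fleiss' κ, a chance-corrected measure of agreement among more than two raters; values near zero mean agreement no better than chance, which is essentially what the hospitals showed (κ = 0.13). The snippet below is a small, self-contained implementation of the standard Fleiss' κ formula applied to a made-up ratings matrix (three raters and six vignettes that are illustrative only and are not the study's data).

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa for a subjects-by-categories matrix of rating counts.

    counts[i][j] = number of raters assigning subject i to category j;
    every subject must be rated by the same number of raters.
    """
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    n_categories = len(counts[0])
    n_total_ratings = n_subjects * n_raters

    # Overall proportion of ratings falling in each category.
    p_j = [sum(row[j] for row in counts) / n_total_ratings for j in range(n_categories)]

    # Observed agreement for each subject.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]

    p_bar = sum(p_i) / n_subjects      # mean observed agreement
    p_exp = sum(p * p for p in p_j)    # agreement expected by chance
    return (p_bar - p_exp) / (1 - p_exp)

if __name__ == "__main__":
    # Hypothetical matrix: 6 vignettes, each rated VAP / not-VAP by 3 raters.
    ratings = [
        [3, 0],  # unanimous "VAP"
        [0, 3],  # unanimous "not VAP"
        [2, 1],  # split decisions pull kappa toward zero
        [1, 2],
        [2, 1],
        [1, 2],
    ]
    print(round(fleiss_kappa(ratings), 3))  # ~0.11 for this toy matrix
```

This toy matrix yields a κ of roughly 0.11, illustrating how mixed case-by-case judgments drag the statistic toward zero.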
DOI: 10.1016/j.ccc.2011.09.003
2012
Cited 76 times
The Economics of Sepsis
Sepsis, severe sepsis, and septic shock impose a growing economic burden on health care systems globally. This article first describes the epidemiology of sepsis within the United States and internationally. It then reviews costs associated with sepsis and its management in the United States and internationally, including general cost sources in intensive care, direct costs of sepsis, and indirect costs of the burden of illness imposed by sepsis. Finally, it examines the cost-effectiveness of sepsis interventions, focusing on formal cost-effectiveness analyses of nosocomial sepsis prevention strategies, drotrecogin alfa (activated), and integrated sepsis protocols.
DOI: 10.1097/ccm.0000000000002284
2017
Cited 68 times
Randomized Clinical Trial of a Combination of an Inhaled Corticosteroid and Beta Agonist in Patients at Risk of Developing the Acute Respiratory Distress Syndrome*
Objectives: Effective pharmacologic treatments directly targeting lung injury in patients with the acute respiratory distress syndrome are lacking. Early treatment with inhaled corticosteroids and beta agonists may reduce progression to acute respiratory distress syndrome by reducing lung inflammation and enhancing alveolar fluid clearance. Design: Double-blind, randomized clinical trial (ClinicalTrials.gov: NCT01783821). The primary outcome was longitudinal change in oxygen saturation divided by the FIO2 (S/F) through day 5. We also analyzed categorical change in S/F by greater than 20%. Other outcomes included need for mechanical ventilation and development of acute respiratory distress syndrome. Setting: Five academic centers in the United States. Patients: Adult patients admitted through the emergency department at risk for acute respiratory distress syndrome. Interventions: Aerosolized budesonide/formoterol versus placebo bid for up to 5 days. Measurements and Main Results: Sixty-one patients were enrolled from September 3, 2013, to June 9, 2015. Median time from presentation to first study drug was less than 9 hours. More patients in the control group had shock at enrollment (14 vs 3 patients). The longitudinal increase in S/F was greater in the treatment group (p = 0.02) and independent of shock (p = 0.04). Categorical change in S/F improved (p = 0.01) but not after adjustment for shock (p = 0.15). More patients in the placebo group developed acute respiratory distress syndrome (7 vs 0) and required mechanical ventilation (53% vs 21%). Conclusions: Early treatment with inhaled budesonide/formoterol in patients at risk for acute respiratory distress syndrome is feasible and improved oxygenation as assessed by S/F. These results support further study to test the efficacy of inhaled corticosteroids and beta agonists for prevention of acute respiratory distress syndrome.
DOI: 10.1097/ccm.0b013e31828a3de5
2013
Cited 67 times
Pleural Pressure and Optimal Positive End-Expiratory Pressure Based on Esophageal Pressure Versus Chest Wall Elastance
1) To compare two published methods for estimating pleural pressure, one based on directly measured esophageal pressure and the other based on chest wall elastance. 2) To evaluate the agreement between two published positive end-expiratory pressure optimization strategies based on these methods, one targeting an end-expiratory esophageal pressure-based transpulmonary pressure of 0 cm H2O and the other targeting an end-inspiratory elastance-based transpulmonary pressure of 26 cm H2O.Retrospective study using clinical data.Medical and surgical ICUs.Sixty-four patients mechanically ventilated for acute respiratory failure with esophageal balloons placed for clinical management.Esophageal pressure and chest wall elastance-based methods for estimating pleural pressure and setting positive end-expiratory pressure were retrospectively applied to each of the 64 patients. In patients who were ventilated at two positive end-expiratory pressure levels, chest wall and respiratory system elastances were calculated at each positive end-expiratory pressure level.The pleural pressure estimates using both methods were discordant and differed by as much as 10 cm H2O for a given patient. The two positive end-expiratory pressure optimization strategies recommended positive end-expiratory pressure changes in opposite directions in 33% of patients. The ideal positive end-expiratory pressure levels recommended by the two methods for each patient were discordant and uncorrelated (R = 0.05). Chest wall and respiratory system elastances grew with increases in positive end-expiratory pressure in patients with positive end-expiratory esophageal pressure-based transpulmonary pressures (p < 0.05).Esophageal pressure and chest wall elastance-based methods for estimating pleural pressure do not yield similar results. The strategies of targeting an end-expiratory esophageal pressure-based transpulmonary pressure of 0 cm H2O and targeting an end-inspiratory elastance-based transpulmonary pressure of 26 cm H2O cannot be considered interchangeable. Finally, chest wall and respiratory system elastances may vary unpredictably with changes in positive end-expiratory pressure.
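The two estimation strategies compared here rest on simple formulas as commonly described in the literature: the direct method takes transpulmonary pressure as airway pressure minus measured esophageal pressure (PL = Paw - Pes), while the elastance-based method scales airway pressure by the lung's share of respiratory-system elastance (PL ≈ Paw × EL/ERS). The sketch below uses made-up numbers, not study data, to show how the two estimates (and the PEEP decisions built on them) can point in different directions for the same patient.

```python
def pl_esophageal(p_airway: float, p_esophageal: float) -> float:
    """Directly measured transpulmonary pressure: PL = Paw - Pes (cm H2O)."""
    return p_airway - p_esophageal

def pl_elastance(p_airway: float, e_lung: float, e_chest_wall: float) -> float:
    """Elastance-derived transpulmonary pressure: PL ~= Paw * EL / (EL + ECW)."""
    return p_airway * e_lung / (e_lung + e_chest_wall)

if __name__ == "__main__":
    # Illustrative values only (not taken from the study).
    peep_total, pes_exp = 12.0, 16.0        # end-expiratory airway and esophageal pressures (cm H2O)
    pplat, e_lung, e_cw = 28.0, 20.0, 10.0  # plateau pressure (cm H2O) and elastances (cm H2O/L)

    # Esophageal strategy: the end-expiratory PL is negative (-4 cm H2O), so PEEP
    # would be raised until PL reaches 0 cm H2O.
    print("End-expiratory PL, esophageal method:", pl_esophageal(peep_total, pes_exp))

    # Elastance strategy: the end-inspiratory PL (~18.7 cm H2O) sits well below the
    # 26 cm H2O ceiling, so the current settings would be judged acceptable.
    print("End-inspiratory PL, elastance method:", pl_elastance(pplat, e_lung, e_cw))
```

As the abstract emphasizes, the two approaches answer different physiologic questions (end-expiratory collapse versus end-inspiratory stress), so their recommendations can diverge, as the study observed in a third of patients.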
DOI: 10.1136/bmjqs-2017-007390
2018
Cited 64 times
Ethical decision-making climate in the ICU: theoretical framework and validation of a self-assessment tool
Literature depicts differences in ethical decision-making (EDM) between countries and intensive care units (ICU). To better conceptualise EDM climate in the ICU and to validate a tool to assess EDM climates. Using a modified Delphi method, we built a theoretical framework and a self-assessment instrument consisting of 35 statements. This Ethical Decision-Making Climate Questionnaire (EDMCQ) was developed to capture three EDM domains in healthcare: interdisciplinary collaboration and communication; leadership by physicians; and ethical environment. This instrument was subsequently validated among clinicians working in 68 adult ICUs in 13 European countries and the USA. Exploratory and confirmatory factor analysis was used to determine the structure of the EDM climate as perceived by clinicians. Measurement invariance was tested to make sure that variables used in the analysis were comparable constructs across different groups. Of 3610 nurses and 1137 physicians providing ICU bedside care, 2275 (63.1%) and 717 (62.9%) participated respectively. Statistical analyses revealed that a shortened 32-item version of the EDMCQ scale provides a factorial valid measurement of seven facets of the extent to which clinicians perceive an EDM climate: self-reflective and empowering leadership by physicians; practice and culture of open interdisciplinary reflection; culture of not avoiding end-of-life decisions; culture of mutual respect within the interdisciplinary team; active involvement of nurses in end-of-life care and decision-making; active decision-making by physicians; and practice and culture of ethical awareness. Measurement invariance of the EDMCQ across occupational groups was shown, reflecting that nurses and physicians interpret the EDMCQ items in a similar manner. The 32-item version of the EDMCQ might enrich the EDM climate measurement, clinicians' behaviour and the performance of healthcare organisations. This instrument offers opportunities to develop tailored ICU team interventions.
DOI: 10.1136/bmjqs-2017-007525
2018
Cited 61 times
Speaking up about care concerns in the ICU: patient and family experiences, attitudes and perceived barriers
Little is known about patient/family comfort voicing care concerns in real time, especially in the intensive care unit (ICU) where stakes are high and time is compressed. Experts advocate patient and family engagement in safety, which will require that patients/families be able to voice concerns. Data on patient/family attitudes and experiences regarding speaking up are sparse, and mostly include reporting events retrospectively, rather than pre-emptively, to try to prevent harm. We aimed to (1) assess patient/family comfort speaking up about common ICU concerns; (2) identify patient/family-perceived barriers to speaking up; and (3) explore factors associated with patient/family comfort speaking up. In collaboration with patients/families, we developed a survey to evaluate speaking up attitudes and behaviours. We surveyed current ICU families in person at an urban US academic medical centre, supplemented with a larger national internet sample of individuals with prior ICU experience. 105/125 (84%) of current families and 1050 internet panel participants with ICU history completed the surveys. Among the current ICU families, 50%-70% expressed hesitancy to voice concerns about possible mistakes, mismatched care goals, confusing/conflicting information and inadequate hand hygiene. Results among prior ICU participants were similar. Half of all respondents reported at least one barrier to voicing concerns, most commonly not wanting to be a 'troublemaker', 'team is too busy' or 'I don't know how'. Older, female participants and those with personal or family employment in healthcare were more likely to report comfort speaking up. Speaking up may be challenging for ICU patients/families. Patient/family education about how to speak up and assurance that raising concerns will not create 'trouble' may help promote open discussions about care concerns and possible errors in the ICU.
DOI: 10.1097/shk.0000000000000386
2015
Cited 58 times
Kinetics and Role of Plasma Matrix Metalloproteinase-9 Expression in Acute Lung Injury and the Acute Respiratory Distress Syndrome
Primed neutrophils that are capable of releasing matrix metalloproteinases (MMPs) into the circulation are thought to play a significant role in the pathophysiology of acute respiratory distress syndrome (ARDS). We hypothesized that direct measurement of plasma MMP-9 activity may be a predictor of incipient tissue damage and subsequent lung injury, which was investigated in both an animal model of ARDS and a small cohort of 38 critically ill human patients. In a mouse model of ARDS involving instillation of intratracheal lipopolysaccharide (LPS) to induce lung inflammation, we measured neutrophil-mediated inflammation, along with MMP-9 activity in the airways and lung tissue and MMP-9 expression in the plasma. Neutrophil recruitment, inflammation, and MMP-9 activity in the airways and lung tissue increased throughout the 72 h after LPS instillation, whereas plasma MMP-9 expression was greatest at 12 to 24 h after LPS instillation. The results suggest that the peak in plasma MMP-9 activity may precede the peak of neutrophil inflammation in the airways and lung tissue in the setting of ARDS. Based on this animal study, a retrospective observational cohort study involving 38 patients admitted to a surgical intensive care unit at a tertiary care university hospital with acute respiratory failure requiring intubation and mechanical ventilation was conducted. Plasma samples were collected daily, and MMP-9 activity was compared with lung function as determined by the PaO2/FiO2 ratio. In patients who developed ARDS, a notable increase in plasma MMP-9 activity on a particular day correlated with a decrease in the PaO2/FiO2 ratio on the following day (r = -0.503, P < 0.006). Taken together, these results suggest that plasma MMP-9 activity changes, as a surrogate for primed neutrophils may have predictive value for the development of ARDS in a selected subset of critically ill patients.
DOI: 10.1161/hypertensionaha.116.07252
2016
Cited 57 times
Circulating Antiangiogenic Factors and Myocardial Dysfunction in Hypertensive Disorders of Pregnancy
Hypertensive disorders of pregnancy (HDP) are associated with subclinical changes in cardiac function. Although the mechanism underlying this finding is unknown, elevated levels of soluble antiangiogenic proteins such as soluble fms-like tyrosine kinase-1 (sFlt1) and soluble endoglin (sEng) are associated with myocardial dysfunction and may play a role. We hypothesized that these antiangiogenic proteins may contribute to the development of cardiac dysfunction in HDP. We prospectively studied 207 pregnant women with HDP and nonhypertensive controls and evaluated whether changes in global longitudinal strain (GLS) observed on echocardiography is specific for HDP and whether these changes correlate with HDP biomarkers, sFlt1 and sEng. A total of 62 (30%) patients were diagnosed with preeclampsia (group A), 105 (51%) did not have an HDP (group B), and 40 (19%) were diagnosed with chronic or gestational hypertension (group C). Blood was drawn and sFlt1 and sEng levels measured using enzyme-linked immunosorbent assay. Comprehensive echocardiograms, including measurement of GLS, were performed on all patients. Overall, GLS was worse in women in group A (preeclampsia) than those in group B or C. Increasing sFlt1 and sEng levels correlated with worsening GLS (r=0.44 for sFlt1 and r=0.46 for sEng, both P<0.001), which remained significant after multivariable analysis (r=0.18 and r=0.22, both P≤0.01). Increasing levels also correlated with increasing left ventricular mass index, which also remained significant after multivariable analysis (r=0.20 for sFlt1 and 0.19 for sEng, both P=0.01). Elevated circulating levels of antiangiogenic proteins in HDP correlate with and may contribute to myocardial dysfunction as measured by GLS.
DOI: 10.1513/annalsats.201608-629ot
2017
Cited 54 times
Design and Rationale of the Reevaluation of Systemic Early Neuromuscular Blockade Trial for Acute Respiratory Distress Syndrome
The Reevaluation of Systemic Early Neuromuscular Blockade (ROSE) trial is a multicenter, randomized trial designed to assess the efficacy and safety of early neuromuscular blockade in patients with moderate to severe acute respiratory distress syndrome. This document provides background for interpretation of the trial results, and highlights unique design approaches that may inform future trials of acute illness. We describe the process by which ROSE was chosen as the inaugural trial of the multidisciplinary Prevention and Early Treatment of Acute Lung Injury Network, provide the trial methodology using the Consolidated Standards of Reporting Trials framework, and discuss key design challenges and their resolution. Four key design issues proved challenging: feasibility, choice of sedation depth in the control group, impact of emphasizing early treatment on enrollment criteria and protocol execution, and choice of positive end-expiratory pressure strategy. We used literature, an iterative consensus model, and internal surveys of current practice to inform design choice. ROSE will provide definitive, Consolidated Standards of Reporting Trials-adherent data on early neuromuscular blockade for future patients with acute respiratory distress syndrome. Our multidisciplinary approach to trial design may be of use to other trials of acute illness. Clinical trial registered with www.clinicaltrials.gov (NCT02509078).
DOI: 10.1164/rccm.201708-1676cp
2018
Cited 50 times
The Practice of Respect in the ICU
Although "respect" and "dignity" are intuitive concepts, little formal work has addressed their systematic application in the ICU setting. After convening a multidisciplinary group of relevant experts, we undertook a review of relevant literature and collaborative discussions focused on the practice of respect in the ICU. We report the output of this process, including a summary of current knowledge, a conceptual framework, and a research program for understanding and improving the practice of respect and dignity in the ICU. We separate our report into findings and proposals. Findings include the following: 1) dignity and respect are interrelated; 2) ICU patients and families are vulnerable to disrespect; 3) violations of respect and dignity appear to be common in the ICU and overlap substantially with dehumanization; 4) disrespect may be associated with both primary and secondary harms; and 5) systemic barriers complicate understanding and the reliable practice of respect in the ICU. Proposals include: 1) initiating and/or expanding a field of research on the practice of respect in the ICU; 2) treating "failures of respect" as analogous to patient safety events and using existing quality and safety mechanisms for improvement; and 3) identifying both benefits and potential unintended consequences of efforts to improve the practice of respect. Respect and dignity are important considerations in the ICU, even as substantial additional research remains to be done.
DOI: 10.1002/rth2.12357
2020
Cited 41 times
Fibrinolytic therapy for refractory COVID‐19 acute respiratory distress syndrome: Scientific rationale and review
The coronavirus disease 2019 (COVID-19) pandemic has caused respiratory failure and associated mortality in numbers that have overwhelmed global health systems. Thrombotic coagulopathy is present in nearly three quarters of patients with COVID-19 admitted to the intensive care unit, and both the clinical picture and pathologic findings are consistent with microvascular occlusive phenomena being a major contributor to their unique form of respiratory failure. Numerous studies are ongoing focusing on anticytokine therapies, antibiotics, and antiviral agents, but none to date have focused on treating the underlying thrombotic coagulopathy in an effort to improve respiratory failure in COVID-19. There are animal data and a previous human trial demonstrating a survival advantage with fibrinolytic therapy to treat acute respiratory distress syndrome. Here, we review the extant and emerging literature on the relationship between thrombotic coagulopathy and pulmonary failure in the context of COVID-19 and present the scientific rationale for consideration of targeting the coagulation and fibrinolytic systems to improve pulmonary function in these patients.
DOI: 10.1097/ccm.0000000000004951
2021
Cited 29 times
Optimal Sedation in Patients Who Receive Neuromuscular Blocking Agent Infusions for Treatment of Acute Respiratory Distress Syndrome—A Retrospective Cohort Study From a New England Health Care Network*
Two previously published trials (ARDS et Curarisation Systematique [ACURASYS] and Reevaluation of Systemic Early Neuromuscular Blockade [ROSE]) presented equivocal evidence on the effect of neuromuscular blocking agent infusions in patients with acute respiratory distress syndrome (ARDS). The sedation regimen differed between these trials and also within the ROSE trial between treatment and control groups. We hypothesized that the proportion of deeper sedation is a mediator of the effect of neuromuscular blocking agent infusions on mortality. Retrospective cohort study. Seven ICUs in an academic hospital network, Beth Israel Deaconess Medical Center (Boston, MA). Intubated and mechanically ventilated ICU patients with acute respiratory distress syndrome (Berlin definition) admitted between January 2008 and June 2019. None. The proportion of deeper sedation was defined as days with nonlight sedation as a fraction of mechanical ventilation days in the ICU after acute respiratory distress syndrome diagnosis. Using clinical data obtained from a hospital network registry, 3,419 patients with acute respiratory distress syndrome were included, of whom 577 (16.9%) were treated with neuromuscular blocking agent infusions, for a mean (sd) duration of 1.8 (±1.9) days. The duration of deeper sedation was prolonged in patients receiving neuromuscular blocking agent infusions (4.6 ± 2.2 d) compared with patients without neuromuscular blocking agent infusions (2.4 ± 2.2 d; p < 0.001). The proportion of deeper sedation completely mediated the negative effect of neuromuscular blocking agent infusions on in-hospital mortality (p < 0.001). Exploratory analysis in patients who received deeper sedation revealed a beneficial effect of neuromuscular blocking agent infusions on mortality (49% vs 51%; adjusted odds ratio, 0.80; 95% CI, 0.63-0.99; adjusted absolute risk difference, -0.05; p = 0.048). In acute respiratory distress syndrome patients who receive neuromuscular blocking agent infusions, a prolonged, high proportion of deeper sedation is associated with increased mortality. Our data support the view that clinicians should minimize the duration of deeper sedation after recovery from neuromuscular blocking agent infusion.
DOI: 10.1007/s00134-022-06809-8
2022
Cited 17 times
Imaging the acute respiratory distress syndrome: past, present and future
In patients with the acute respiratory distress syndrome (ARDS), lung imaging is a fundamental tool in the study of the morphological and mechanistic features of the lungs. Chest computed tomography studies led to major advances in the understanding of ARDS physiology. They allowed the in vivo study of the syndrome's lung features in relation to its impact on respiratory physiology, but also explored the lungs' response to mechanical ventilation, be it alveolar recruitment or ventilator-induced lung injuries. Coupled with positron emission tomography, morphological findings were put in relation with ventilation, perfusion or acute lung inflammation. Lung imaging has always been central in the care of patients with ARDS, with modern point-of-care tools such as electrical impedance tomography or lung ultrasound guiding clinical reasoning beyond macro-respiratory mechanics. Finally, artificial intelligence and machine learning now assist imaging post-processing software, which allows real-time analysis of quantitative parameters that describe the syndrome's complexity. This narrative review aims to draw a didactic and comprehensive picture of how modern imaging techniques improved our understanding of the syndrome, and have the potential to help the clinician guide ventilatory treatment and refine patient prognostication.
DOI: 10.1213/00000539-199808000-00006
1998
Cited 91 times
Ketamine Attenuates the Interleukin-6 Response After Cardiopulmonary Bypass
Cardiopulmonary bypass (CPB) has been proposed as a model for studying the inflammatory cascade associated with the systemic inflammatory response syndrome. Serum interleukin-6 (IL-6) concentration seems to be a good indicator of activation of the inflammatory cascade and predictor of subsequent organ dysfunction and death. Prolonged increases of circulating IL-6 are associated with morbidity and mortality after cardiac operations. In the present study, we compared the effects of adding ketamine 0.25 mg/kg to general anesthesia on serum IL-6 levels during and after elective coronary artery bypass grafting (CABG). Thirty-one patients undergoing elective CABG were randomized to one of two groups and prospectively studied in a double-blind manner. The patients received either ketamine 0.25 mg/kg or a similar volume of isotonic sodium chloride solution in addition to large-dose fentanyl anesthesia. Blood samples for analysis of serum IL-6 levels were drawn before the operation; after CPB; 4, 24, and 48 h after surgery; and daily for 6 days beginning the third day postoperatively. Ketamine suppressed the serum IL-6 response immediately after CPB and 4, 24, and 48 h postoperatively (P < 0.05). During the first 7 days after surgery, the serum IL-6 levels in the ketamine group were significantly lower than those in the control group (P < 0.05). On Day 8 after surgery, IL-6 levels were no different from baseline values in both groups. A single dose of ketamine 0.25 mg/kg administered before CPB suppresses the increase of serum IL-6 during and after CABG. Implications: In this randomized, double-blind, prospective study of patients during and after coronary artery bypass surgery, we examined whether small-dose ketamine added to general anesthesia before cardiopulmonary bypass suppresses the increase of the serum interleukin-6 (IL-6) concentration. Serum IL-6 levels correlate with the patient's clinical course during and after coronary artery bypass. Ketamine suppresses the increase of serum IL-6 during and after coronary artery bypass surgery. (Anesth Analg 1998;87:266-71)
DOI: 10.1097/01.ccm.0000251508.12555.3e
2007
Cited 84 times
Mortality in Emergency Department Sepsis (MEDS) score predicts 1-year mortality*
To assess the predictive performance for 1-yr mortality of the previously derived and validated Mortality in Emergency Department Sepsis (MEDS) score. Prospective cohort study. Consecutive adult (aged ≥18 yrs) emergency department patients presenting to an urban, tertiary care, university hospital were eligible if they had a clinically suspected infection as indicated by the decision to obtain a blood culture. The enrollment period was between February 1, 2000, and February 1, 2001. Of 3,926 eligible patient visits, 3,762 (96%) were enrolled and 3,102 unique first visits were analyzed. None. A total of 667 patients (21.5%) died within 1 yr. The unadjusted 1-yr mortality rates for the MEDS risk groups were: very low risk, 7%; low risk, 20%; moderate risk, 37%; high risk, 64%; very high risk, 80%. Using a Cox proportional hazard model that controlled for age, sex, and Charlson co-morbidity index, the 1-yr hazard ratios compared with the baseline very low-risk group were: low risk, 2.2 (1.7-2.9); moderate risk, 3.5 (2.7-4.6); high risk, 6.7 (4.9-9.3); and very high risk, 10.5 (7.2-15.4). The groups were significantly different (p < .0001). Although the score was initially derived for 28-day in-hospital mortality, our results indicate that the MEDS score also predicts patient survival at 1 yr after index hospital visit with suspected infection. The score needs external validation before widespread use.
DOI: 10.1197/j.aem.2007.02.036
2007
Cited 79 times
Performance of Severity of Illness Scoring Systems in Emergency Department Patients with Infection
Objectives To validate the Mortality in Emergency Department Sepsis (MEDS) score, the Confusion, Urea nitrogen, Respiratory rate, Blood pressure, 65 years of age and older (CURB‐65) score, and a modified Rapid Emergency Medicine Score (mREMS) in patients with suspected infection. Methods This was a prospective cohort study. Adult patients with clinically suspected infection admitted from December 10, 2003, to September 30, 2004, in an urban emergency department with approximately 50,000 annual visits were eligible. The MEDS and CURB‐65 scores were calculated as originally described, but REMS was modified in neurologic scoring because a full Glasgow Coma Scale score was not uniformly available. Discrimination of each score was assessed with the area under the receiver operating characteristics curve (AUC). Results Of 2,132 patients, 3.9% (95% confidence interval [CI] = 3.1% to 4.7%) died. Mortality stratified by the MEDS score was as follows: 0–4 points, 0.4% (95% CI = 0.0 to 0.7%); 5–7 points, 3.3% (95% CI = 1.7% to 4.9%); 8–12 points, 6.6% (95% CI = 4.4% to 8.8%); and ≥13 points, 31.6% (95% CI = 22.4% to 40.8%). Mortality stratified by CURB‐65 was as follows: 0 points, 0% (0 of 457 patients); 1 point, 1.6% (95% CI = 0.6% to 2.6%); 2 points, 4.1% (95% CI = 2.3% to 6.0%); 3 points, 4.9% (95% CI = 2.8% to 6.9%); 4 points, 18.1% (95% CI = 11.9% to 24.3%); and 5 points, 28.0% (95% CI = 10.4% to 45.6%). Mortality stratified by the mREMS was as follows: 0–2 points, 0.6% (95% CI = 0 to 1.2%); 3–5 points, 2.0% (95% CI = 0.8% to 3.1%); 6–8 points, 2.3% (95% CI = 1.1% to 3.5%); 9–11 points, 7.1% (95% CI = 4.2% to 10.1%); 12–14 points, 20.0% (95% CI = 12.5% to 27.5%); and ≥15 points, 40.0% (95% CI = 22.5% to 57.5%). The AUCs were 0.85, 0.80, and 0.79 for MEDS, mREMS, and CURB‐65, respectively. Conclusions In this large cohort of patients with clinically suspected infection, MEDS, mREMS, and CURB‐65 all correlated well with 28‐day in‐hospital mortality.
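Of the three instruments, CURB-65 is the simplest to compute: one point for each of the components spelled out in its name. The sketch below is a hedged illustration of that tally; the specific cut-offs used (urea > 7 mmol/L, respiratory rate ≥ 30/min, systolic BP < 90 or diastolic ≤ 60 mm Hg, age ≥ 65 years) are the commonly published ones and are stated here as assumptions rather than values taken from this study.

```python
def curb65(confusion: bool, urea_mmol_l: float, resp_rate: int,
           sbp: int, dbp: int, age: int) -> int:
    """CURB-65 score: one point per criterion, 0-5 overall.

    Thresholds below are the commonly published cut-offs, assumed here for
    illustration rather than extracted from the study.
    """
    score = 0
    score += int(confusion)                  # C: new-onset confusion
    score += int(urea_mmol_l > 7.0)          # U: urea > 7 mmol/L (roughly BUN > 19 mg/dL)
    score += int(resp_rate >= 30)            # R: respiratory rate >= 30/min
    score += int(sbp < 90 or dbp <= 60)      # B: low blood pressure
    score += int(age >= 65)                  # 65: age >= 65 years
    return score

if __name__ == "__main__":
    # Hypothetical patient: confused, urea 9 mmol/L, RR 32, BP 85/55, age 72 -> 5 points.
    print(curb65(confusion=True, urea_mmol_l=9.0, resp_rate=32, sbp=85, dbp=55, age=72))
```

In the cohort above, mortality rose steeply across CURB-65 strata (0% at 0 points to 28% at 5 points), which is exactly the gradient such a tally is meant to expose.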
DOI: 10.1378/chest.07-2691
2008
Cited 72 times
Definitive Care for the Critically Ill During a Disaster: Medical Resources for Surge Capacity
Mass numbers of critically ill disaster victims will stress the abilities of health-care systems to maintain usual critical care services for all in need. To enhance the number of patients who can receive life-sustaining interventions, the Task Force on Mass Critical Care (hereafter termed the Task Force) has suggested a framework for providing limited, essential critical care, termed emergency mass critical care (EMCC). This article suggests medical equipment, concepts to expand treatment spaces, and staffing models for EMCC. Consensus suggestions for EMCC were derived from published clinical practice guidelines and medical resource utilization data for the everyday critical care conditions that are anticipated to predominate during mass critical care events. When necessary, expert opinion was used. TASK FORCE MAJOR SUGGESTIONS: The Task Force makes the following suggestions: (1) one mechanical ventilator that meets specific characteristics, as well as a set of consumable and durable medical equipment, should be provided for each EMCC patient; (2) EMCC should be provided in hospitals or similarly equipped structures; after ICUs, postanesthesia care units, and emergency departments all reach capacity, hospital locations should be repurposed for EMCC in the following order: (A) step-down units and large procedure suites, (B) telemetry units, and (C) hospital wards; and (3) hospitals can extend the provision of critical care using non-critical care personnel via a deliberate model of delegation to match staff competencies with patient needs. By using the Task Force suggestions for adequate supplies of medical equipment, appropriate treatment space, and trained staff, communities may better prepare to deliver augmented essential critical care in response to disasters.
DOI: 10.1097/ccm.0b013e31829e4dc5
2013
Cited 58 times
Focused Critical Care Echocardiography
Objective: Portable ultrasound is now used routinely in many ICUs for various clinical applications. Echocardiography performed by noncardiologists, both transesophageal and transthoracic, has evolved to broad applications in diagnosis, monitoring, and management of critically ill patients. This review provides a current update on focused critical care echocardiography for the management of critically ill patients. Method: Source data were obtained from a PubMed search of the medical literature, including the PubMed “related articles” search methodology. Summary and Conclusions: Although studies demonstrating improved clinical outcomes for critically ill patients managed by focused critical care echocardiography are generally lacking, there is evidence to suggest that some intermediate outcomes are improved. Furthermore, noncardiologists can learn focused critical care echocardiography and adequately interpret the information obtained. Noncardiologists can also successfully incorporate focused critical care echocardiography into advanced cardiopulmonary life support. Formal training and proctoring are important for safe application of focused critical care echocardiography in clinical practice. Further outcomes-based research is urgently needed to evaluate the efficacy of focused critical care echocardiography.
DOI: 10.1136/bmjopen-2012-001606
2012
Cited 54 times
Lung Injury Prevention with Aspirin (LIPS-A): a protocol for a multicentre randomised clinical trial in medical patients at high risk of acute lung injury
Introduction: Acute lung injury (ALI) is a devastating condition that places a heavy burden on public health resources. Although the need for effective ALI prevention strategies is increasingly recognised, no effective preventative strategies exist. The Lung Injury Prevention Study with Aspirin (LIPS-A) aims to test whether aspirin (ASA) could prevent and/or mitigate the development of ALI. Methods and analysis: LIPS-A is a multicentre, double-blind, randomised clinical trial testing the hypothesis that the early administration of ASA will result in a reduced incidence of ALI in adult patients at high risk. This investigation will enrol 400 study participants from 14 hospitals across the USA. Conditional logistic regression will be used to test the primary hypothesis that early ASA administration will decrease the incidence of ALI. Ethics and dissemination: Safety oversight will be under the direction of an independent Data and Safety Monitoring Board (DSMB). Approval of the protocol was obtained from the DSMB prior to enrolling the first study participant. Approval of both the protocol and informed consent documents was also obtained from the institutional review board of each participating institution prior to enrolling study participants at the respective site. In addition to providing important clinical and mechanistic information, this investigation will inform the scientific merit and feasibility of a phase III trial on ASA as an ALI prevention agent. The findings of this investigation, as well as associated ancillary studies, will be disseminated in the form of oral and abstract presentations at major national and international medical specialty meetings. The primary objective and other significant findings will also be presented in manuscript form. All final, published manuscripts resulting from this protocol will be submitted to PubMed Central in accordance with the National Institutes of Health Public Access Policy.
DOI: 10.1378/chest.13-2255
2014
Cited 53 times
Automated Surveillance for Ventilator-Associated Events
The US Centers for Disease Control and Prevention has implemented a new, multitiered definition for ventilator-associated events (VAEs) to replace their former definition of ventilator-associated pneumonia (VAP). We hypothesized that the new definition could be implemented in an automated, efficient, and reliable manner using the electronic health record and that the new definition would identify different patients than those identified under the previous definition. We conducted a retrospective cohort analysis using an automated algorithm to analyze all patients admitted to the ICU at a single urban, tertiary-care hospital from 2008 to 2013. We identified 26,466 consecutive admissions to the ICU, 10,998 (42%) of whom were mechanically ventilated and 675 (3%) of whom were identified as having any VAE. Any VAE was associated with an adjusted increased risk of death (OR, 1.91; 95% CI, 1.53-2.37; P < .0001). The automated algorithm was reliable (sensitivity of 93.5%, 95% CI, 77.2%-98.8%; specificity of 100%, 95% CI, 98.8%-100% vs a human abstractor). Comparison of patients with a VAE and with the former VAP definition yielded little agreement (κ = 0.06). A fully automated method of identifying VAEs is efficient and reliable within a single institution. Although VAEs are strongly associated with worse patient outcomes, additional research is required to evaluate whether and which interventions can successfully prevent VAEs.
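What makes fully automated surveillance feasible is that the CDC's ventilator-associated event definition is built from routinely charted ventilator settings rather than clinical judgment. The sketch below implements only a simplified version of the first tier, the ventilator-associated condition (VAC): after at least two calendar days of stable or decreasing daily minimum PEEP and FiO2, a sustained rise (at least 3 cm H2O in daily minimum PEEP or at least 0.20 in daily minimum FiO2 for at least two days) flags an event. The thresholds follow the commonly cited CDC criteria and are stated as assumptions; this is not the study's validated algorithm.

```python
from typing import List, Optional

def detect_vac(daily_min_peep: List[float], daily_min_fio2: List[float]) -> Optional[int]:
    """Simplified ventilator-associated condition (VAC) screen.

    Inputs are per-calendar-day minimum PEEP (cm H2O) and FiO2 (fraction)
    while ventilated. Returns the index of the first worsening day, or None.
    Thresholds mirror commonly cited CDC criteria (assumed here); the study's
    production algorithm handles more detail (tiers, culture data, and so on).
    """
    n = len(daily_min_peep)
    for day in range(2, n - 1):  # need 2 baseline days before and 2 worsening days from `day`
        b0, b1 = day - 2, day - 1
        baseline_stable = (daily_min_peep[b1] <= daily_min_peep[b0]
                           and daily_min_fio2[b1] <= daily_min_fio2[b0])
        if not baseline_stable:
            continue
        base_peep = min(daily_min_peep[b0], daily_min_peep[b1])
        base_fio2 = min(daily_min_fio2[b0], daily_min_fio2[b1])
        sustained_rise = all(
            daily_min_peep[d] >= base_peep + 3 or daily_min_fio2[d] >= base_fio2 + 0.20
            for d in (day, day + 1)
        )
        if sustained_rise:
            return day
    return None

if __name__ == "__main__":
    # Hypothetical course: settings improve on days 0-2, then a sustained PEEP
    # rise on days 3-4 flags a VAC starting on day 3.
    peep = [8, 8, 5, 10, 10]
    fio2 = [0.50, 0.40, 0.40, 0.45, 0.50]
    print(detect_vac(peep, fio2))  # -> 3
```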
DOI: 10.1136/bmjopen-2014-006356
2014
Cited 52 times
The Esophageal Pressure-Guided Ventilation 2 (EPVent2) trial protocol: a multicentre, randomised clinical trial of mechanical ventilation guided by transpulmonary pressure
Introduction: Optimal ventilator management for patients with acute respiratory distress syndrome (ARDS) remains uncertain. Lower tidal volume ventilation appears to be beneficial, but optimal management of positive end-expiratory pressure (PEEP) remains unclear. The Esophageal Pressure-Guided Ventilation 2 Trial (EPVent2) aims to examine the impact of mechanical ventilation directed at maintaining a positive transpulmonary pressure (PTP) in patients with moderate-to-severe ARDS. Methods and analysis: EPVent2 is a multicentre, prospective, randomised, phase II clinical trial testing the hypothesis that the use of a PTP-guided ventilation strategy will lead to improvement in composite outcomes of mortality and time off the ventilator at 28 days as compared with a high-PEEP control. This study will enrol 200 study participants from 11 hospitals across North America. The trial will utilise a primary composite end point that incorporates death and days off the ventilator at 28 days to test the primary hypothesis that adjusting ventilator pressure to achieve positive PTP values will result in improved mortality and ventilator-free days. Ethics and dissemination: Safety oversight will be under the direction of an independent Data and Safety Monitoring Board (DSMB). Approval of the protocol was obtained from the DSMB prior to enrolling the first study participant. Approvals of the protocol as well as informed consent documents were also obtained from the Institutional Review Board of each participating institution prior to enrolling study participants at each respective site. The findings of this investigation, as well as associated ancillary studies, will be disseminated in the form of oral and abstract presentations at major national and international medical specialty meetings. The primary objective and other significant findings will also be presented in manuscript form. All final, published manuscripts resulting from this protocol will be submitted to PubMed Central in accordance with the National Institutes of Health Public Access Policy. Trial registration number: NCT01681225 (ClinicalTrials.gov).
DOI: 10.1213/ane.0000000000000943
2015
Cited 50 times
Detection of Myocardial Dysfunction in Septic Shock
BACKGROUND: Patients with septic shock are at increased risk of myocardial dysfunction. However, the left ventricular ejection fraction (EF) typically remains preserved in septic shock. Strain measurement using speckle-tracking echocardiography may quantify abnormalities in myocardial function not detected by conventional echocardiography. To investigate whether septic shock results in greater strain changes than sepsis alone, we evaluated strain in patients with sepsis and septic shock. METHODS: We prospectively identified 35 patients with septic shock and 15 with sepsis. These patients underwent serial transthoracic echocardiograms at enrollment and 24 hours later. Measurements included longitudinal, radial, and circumferential strain in addition to standard echocardiographic assessments of left ventricular function. RESULTS: Longitudinal strain worsened significantly over 24 hours in patients with septic shock (P < 0.0001) but did not change in patients with sepsis alone (P = 0.43). No significant changes in radial or circumferential strain or EF were observed in either group over the 24-hour measurement period. In patients with septic shock, the significant worsening in longitudinal strain persisted after adjustment for left ventricular end-diastolic volume and vasopressor use (P < 0.0001). In patients with sepsis, adjustment for left ventricular end-diastolic volume and vasopressor use did not alter the finding of no significant differences in longitudinal strain (P = 0.48) or EF (P = 0.96). CONCLUSIONS: In patients with septic shock, but not sepsis, myocardial strain imaging using speckle-tracking echocardiography identified myocardial dysfunction in the absence of changes in EF. These data suggest that strain imaging may play a role in cardiovascular assessment during septic shock.
DOI: 10.1371/journal.pone.0155858
2016
Cited 44 times
Predicting Mortality in Low-Income Country ICUs: The Rwanda Mortality Probability Model (R-MPM)
Intensive Care Unit (ICU) risk prediction models are used to compare outcomes for quality improvement initiatives, benchmarking, and research. While such models provide robust tools in high-income countries, an ICU risk prediction model has not been validated in a low-income country where ICU population characteristics are different from those in high-income countries, and where laboratory-based patient data are often unavailable. We sought to validate the Mortality Probability Admission Model, version III (MPM0-III) in two public ICUs in Rwanda and to develop a new Rwanda Mortality Probability Model (R-MPM) for use in low-income countries. We prospectively collected data on all adult patients admitted to Rwanda's two public ICUs between August 19, 2013 and October 6, 2014. We described demographic and presenting characteristics and outcomes. We assessed the discrimination and calibration of the MPM0-III model. Using stepwise selection, we developed a new logistic model for risk prediction, the R-MPM, and used bootstrapping techniques to test for optimism in the model. Among 427 consecutive adults, the median age was 34 (IQR 25-47) years and mortality was 48.7%. Mechanical ventilation was initiated for 85.3%, and 41.9% received vasopressors. The MPM0-III predicted mortality with area under the receiver operating characteristic curve of 0.72 and Hosmer-Lemeshow chi-square statistic p = 0.024. We developed a new model using five variables: age, suspected or confirmed infection within 24 hours of ICU admission, hypotension or shock as a reason for ICU admission, Glasgow Coma Scale score at ICU admission, and heart rate at ICU admission. Using these five variables, the R-MPM predicted outcomes with area under the ROC curve of 0.81 with 95% confidence interval of (0.77, 0.86), and Hosmer-Lemeshow chi-square statistic p = 0.154. The MPM0-III has modest ability to predict mortality in a population of Rwandan ICU patients. The R-MPM is an alternative risk prediction model with fewer variables and better predictive power. If validated in other critically ill patients in a broad range of settings, the model has the potential to improve the reliability of comparisons used for critical care research and quality improvement initiatives in low-income countries.
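Structurally, the R-MPM is an ordinary logistic regression over five bedside variables: age, suspected or confirmed infection within 24 hours of admission, hypotension or shock as the reason for admission, Glasgow Coma Scale score, and heart rate. The sketch below shows only that structure; the coefficients are deliberate placeholders (the published model's fitted values are not reproduced here), so the function illustrates the form of the prediction rather than its calibrated output.

```python
import math

# Placeholder coefficients -- NOT the published R-MPM values. They exist only
# to show the shape of a five-variable logistic risk model.
COEF = {
    "intercept": 0.0,
    "age": 0.0,          # per year
    "infection": 0.0,    # suspected/confirmed infection within 24 h of admission (0/1)
    "shock": 0.0,        # hypotension or shock as reason for admission (0/1)
    "gcs": 0.0,          # Glasgow Coma Scale score at admission (3-15)
    "heart_rate": 0.0,   # beats/min at admission
}

def r_mpm_probability(age: float, infection: bool, shock: bool,
                      gcs: int, heart_rate: float) -> float:
    """Predicted mortality probability from a logistic model with the same
    predictors as the R-MPM (placeholder coefficients)."""
    z = (COEF["intercept"]
         + COEF["age"] * age
         + COEF["infection"] * int(infection)
         + COEF["shock"] * int(shock)
         + COEF["gcs"] * gcs
         + COEF["heart_rate"] * heart_rate)
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    # With all-zero placeholders this prints 0.5 for any input; substituting the
    # fitted coefficients would yield the calibrated risk estimate.
    print(r_mpm_probability(age=34, infection=True, shock=True, gcs=10, heart_rate=120))
```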
DOI: 10.1164/rccm.201712-2530oc
2018
Cited 41 times
Early Intravascular Events Are Associated with Development of Acute Respiratory Distress Syndrome. A Substudy of the LIPS-A Clinical Trial
Rationale: Acute respiratory distress syndrome (ARDS) is a devastating illness with limited therapeutic options. A better understanding of early biochemical and immunological events in ARDS could inform the development of new preventive and treatment strategies. Objectives: To determine select peripheral blood lipid mediator and leukocyte responses in patients at risk for ARDS. Methods: Patients at risk for ARDS were randomized as part of a multicenter, double-blind clinical trial of aspirin versus placebo (the LIPS-A [Lung Injury Prevention Study with Aspirin] trial; NCT01504867). Plasma thromboxane B2 (TXB2), aspirin-triggered lipoxin A4 (15-epi-LXA4, ATL), and peripheral blood leukocyte number and activation were determined on enrollment and after treatment with either aspirin or placebo. Measurements and Main Results: Thirty-three of 367 subjects (9.0%) developed ARDS after randomization. Baseline ATL levels, total monocyte counts, intermediate monocyte counts, and monocyte-platelet aggregates were associated with the development of ARDS. Peripheral blood neutrophil count and monocyte-platelet aggregates significantly decreased over time. Of note, nine subjects developed ARDS after randomization yet before study drug initiation, including seven subjects assigned to aspirin treatment. Subjects without ARDS at the time of first dose demonstrated a lower incidence of ARDS with aspirin treatment. Compared with placebo, aspirin significantly decreased TXB2 and increased the ATL/TXB2 ratio. Conclusions: Biomarkers of intravascular monocyte activation in at-risk patients were associated with development of ARDS. The potential clinical benefit of early aspirin for prevention of ARDS remains uncertain. Together, results of the biochemical and immunological analyses provide a window into the early pathogenesis of human ARDS and represent potential vascular biomarkers of ARDS risk. Clinical trial registered with www.clinicaltrials.gov (NCT01504867).
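The kind of baseline-biomarker association described above can be sketched as follows. This is not the LIPS-A analysis code; the column names, units, simulated data, and the choice of a Mann-Whitney comparison are all illustrative assumptions.

```python
# Minimal sketch (hypothetical data and columns, not the LIPS-A analysis):
# comparing baseline mediator levels and monocyte counts between subjects
# who did and did not develop ARDS, and deriving the ATL/TXB2 ratio.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
n = 367
df = pd.DataFrame({
    "developed_ards": rng.binomial(1, 33 / 367, n),   # ~9% incidence, as in the abstract
    "atl_pg_ml": rng.lognormal(3.0, 0.6, n),          # aspirin-triggered lipoxin A4 (ATL)
    "txb2_pg_ml": rng.lognormal(4.0, 0.7, n),         # thromboxane B2 (TXB2)
    "monocytes_per_ul": rng.normal(500, 150, n),
})
df["atl_txb2_ratio"] = df["atl_pg_ml"] / df["txb2_pg_ml"]

for col in ["atl_pg_ml", "monocytes_per_ul", "atl_txb2_ratio"]:
    ards = df.loc[df["developed_ards"] == 1, col]
    no_ards = df.loc[df["developed_ards"] == 0, col]
    stat, p = mannwhitneyu(ards, no_ards)
    print(f"{col}: median ARDS {ards.median():.2f} vs no ARDS {no_ards.median():.2f}, p={p:.2f}")
```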
DOI: 10.1097/ccm.0000000000002451
2017
Cited 40 times
Ultrasound as a Screening Tool for Central Venous Catheter Positioning and Exclusion of Pneumothorax*
Although real-time ultrasound guidance during central venous catheter insertion has become a standard of care, the postinsertion chest radiograph remains the gold standard to confirm central venous catheter tip position and rule out associated lung complications such as pneumothorax. We hypothesized that a combination of transthoracic echocardiography and lung ultrasound is noninferior to chest radiograph for accurately assessing central venous catheter positioning and screening for pneumothorax. This single-center, prospective noninferiority study enrolled patients receiving ultrasound-guided subclavian or internal jugular central venous catheters in all operating rooms and in the surgical and trauma ICUs at the institution. During ultrasound-guided central venous catheter placement, correct positioning was confirmed by real-time visualization of the guide wire and a positive right atrial swirl sign on the subcostal four-chamber view. After insertion, pneumothorax was ruled out by the presence of lung sliding and the seashore sign on M-mode. Data analysis was done for 137 patients. Chest radiograph ruled out pneumothorax in 137 of 137 patients (100%). Lung ultrasound was performed in 123 of 137 patients and successfully screened for pneumothorax in 123 of 123 (100%). Chest radiograph approximated accurate catheter tip position in 136 of 137 patients (99.3%). Adequate subcostal four-chamber views could not be obtained in 13 patients. Accurate positioning of the central venous catheter with ultrasound was confirmed in 121 of 124 patients (97.6%) as described above. Transthoracic echocardiography and lung ultrasound are noninferior to chest radiograph for screening for pneumothorax and confirming accurate central venous catheter positioning. Point-of-care use of ultrasound can therefore reduce the time from central venous catheter insertion to use, reduce exposure to radiation, and improve patient safety.
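For orientation, here is a small Python sketch of how the proportions quoted above could be compared for noninferiority. It is illustrative only: the abstract does not state the trial's statistical method or margin, so the Wald interval and the 10-percentage-point margin are assumptions.

```python
# Minimal sketch (illustrative only; method and margin are assumptions):
# Wald confidence interval for the difference in proportions of correct
# catheter-tip assessments, ultrasound minus chest radiograph.
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald CI for p1 - p2; crude near p = 1, but adequate for a sketch."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, d - z * se, d + z * se

# Catheter-tip position: ultrasound 121/124 correct, radiograph 136/137.
diff, lo, hi = prop_diff_ci(121, 124, 136, 137)
margin = -0.10  # hypothetical noninferiority margin of 10 percentage points
print(f"difference {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f}), "
      f"noninferior at 10% margin: {lo > margin}")
```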
DOI: 10.1097/ta.0000000000002786
2020
Cited 33 times
Rescue therapy for severe COVID-19–associated acute respiratory distress syndrome with tissue plasminogen activator: A case series
The coronavirus disease 2019 (COVID-19) pandemic has led to unprecedented stresses on modern medical systems, overwhelming the resource infrastructure in numerous countries while presenting a unique series of pathophysiologic clinical findings. Thrombotic coagulopathy is common in critically ill patients suffering from COVID-19, with associated high rates of respiratory failure requiring prolonged periods of mechanical ventilation. Here, we report a case series of five patients suffering from profound, medically refractory COVID-19–associated respiratory failure who were treated with fibrinolytic therapy using tissue plasminogen activator (tPA; alteplase). All five patients appeared to have an improved respiratory status following tPA administration: one patient had an initial marked improvement that partially regressed after several hours, one patient had transient improvements that were not sustained, and three patients had sustained clinical improvements. LEVEL OF EVIDENCE: Therapeutic, Level V.
DOI: 10.1097/aln.0000000000003650
2020
Cited 32 times
Intraoperative Oxygen Concentration and Neurocognition after Cardiac Surgery
Background Despite evidence suggesting detrimental effects of perioperative hyperoxia, hyperoxygenation remains commonplace in cardiac surgery. Hyperoxygenation may increase oxidative damage and neuronal injury leading to potential differences in postoperative neurocognition. Therefore, this study tested the primary hypothesis that intraoperative normoxia, as compared to hyperoxia, reduces postoperative cognitive dysfunction in older patients having cardiac surgery. Methods A randomized double-blind trial was conducted in patients aged 65 yr or older having coronary artery bypass graft surgery with cardiopulmonary bypass. A total of 100 patients were randomized to one of two intraoperative oxygen delivery strategies. Normoxic patients (n = 50) received a minimum fraction of inspired oxygen of 0.35 to maintain a Pao2 above 70 mmHg before and after cardiopulmonary bypass and between 100 and 150 mmHg during cardiopulmonary bypass. Hyperoxic patients (n = 50) received a fraction of inspired oxygen of 1.0 throughout surgery, irrespective of Pao2 levels. The primary outcome was neurocognitive function measured on postoperative day 2 using the Telephonic Montreal Cognitive Assessment. Secondary outcomes included neurocognitive function at 1, 3, and 6 months, as well as postoperative delirium, mortality, and durations of mechanical ventilation, intensive care unit stay, and hospital stay. Results The median age was 71 yr (interquartile range, 68 to 75), and the median baseline neurocognitive score was 17 (16 to 19). The median intraoperative Pao2 was 309 (285 to 352) mmHg in the hyperoxia group and 153 (133 to 168) mmHg in the normoxia group (P &amp;lt; 0.001). The median Telephonic Montreal Cognitive Assessment score on postoperative day 2 was 18 (16 to 20) in the hyperoxia group and 18 (14 to 20) in the normoxia group (P = 0.42). Neurocognitive function at 1, 3, and 6 months, as well as secondary outcomes, were not statistically different between groups. Conclusions In this randomized controlled trial, intraoperative normoxia did not reduce postoperative cognitive dysfunction when compared to intraoperative hyperoxia in older patients having cardiac surgery. Although the optimal intraoperative oxygenation strategy remains uncertain, the results indicate that intraoperative hyperoxia does not worsen postoperative cognition after cardiac surgery. Editor’s Perspective What We Already Know about This Topic What This Article Tells Us That Is New