Background
Intensive care resources are heavily utilized during the COVID-19 pandemic. However, risk stratification and prediction of clinical outcomes of SARS-CoV-2 patients upon ICU admission remain inadequate. This study aimed to develop a machine learning model, based on retrospective and prospective clinical data, to stratify patient risk and predict ICU survival and outcomes.
Methods
A Germany-wide electronic registry was established to pseudonymously collect admission, therapeutic and discharge information of SARS-CoV-2 ICU patients retrospectively and prospectively. Machine learning approaches were evaluated for the accuracy and interpretability of predictions. The Explainable Boosting Machine approach was selected as the most suitable method. Individual, non-linear shape functions for predictive parameters and parameter interactions are reported.
Results
1039 patients were included in the Explainable Boosting Machine model: 596 collected retrospectively and 443 prospectively. The model for prediction of general ICU outcome was more reliable in predicting “survival”. Age, inflammatory and thrombotic activity, and severity of ARDS at ICU admission were predictive of ICU survival. Patients’ age, pulmonary dysfunction and transfer from an external institution were predictors of ECMO therapy. The interaction of patient age with D-dimer levels on admission, and of creatinine levels with the SOFA score without GCS, were predictors of renal replacement therapy.
Conclusions
Using Explainable Boosting Machine analysis, we confirmed and weighted previously reported predictors and identified novel predictors of outcome in critically ill COVID-19 patients. Using this strategy, predictive modeling of COVID-19 ICU patient outcomes can be performed while overcoming the limitations of linear regression models.
Trial registration: ClinicalTrials.gov, NCT04455451.
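The Explainable Boosting Machine referenced above is a gradient-boosted generalized additive model that learns one non-linear shape function per predictor by boosting on one feature at a time. The following is a toy pure-Python sketch of that core idea only, on made-up data; it is not the study's actual pipeline, registry data, or any production EBM implementation.

```python
def fit_ebm(X, y, n_rounds=150, lr=0.1, n_bins=8):
    """Toy cyclic-boosting GAM: each round, for one feature at a time,
    fit the mean residual in each bin and add a damped update to that
    feature's piecewise-constant shape function (EBM's core idea)."""
    d = len(X[0])
    edges = []
    for j in range(d):  # equal-width bin edges per feature
        col = [row[j] for row in X]
        lo, hi = min(col), max(col)
        step = (hi - lo) / n_bins or 1.0
        edges.append([lo + step * k for k in range(1, n_bins)])

    def bin_of(v, ed):
        for k, e in enumerate(ed):
            if v < e:
                return k
        return len(ed)

    bias = sum(y) / len(y)
    shapes = [[0.0] * n_bins for _ in range(d)]

    def predict(row):
        return bias + sum(shapes[j][bin_of(row[j], edges[j])]
                          for j in range(d))

    for _ in range(n_rounds):
        for j in range(d):  # cycle through features round-robin
            sums, counts = [0.0] * n_bins, [0] * n_bins
            for row, yi in zip(X, y):
                b = bin_of(row[j], edges[j])
                sums[b] += yi - predict(row)
                counts[b] += 1
            for b in range(n_bins):
                if counts[b]:
                    shapes[j][b] += lr * sums[b] / counts[b]
    return predict, shapes

# Illustrative additive ground truth (made-up data, not the registry):
X = [[i / 50, (i * 0.37) % 1.0] for i in range(50)]
y = [2 * x1 + (1.0 if x2 > 0.5 else 0.0) for x1, x2 in X]
predict, shapes = fit_ebm(X, y)
```

After fitting, `shapes[j]` holds the per-feature shape function the abstract refers to; plotting it against the bin edges visualizes each predictor's individual, non-linear contribution.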
Background
Recent data from the randomized SUSTAIN CSX trial could not confirm clinical benefits of perioperative selenium treatment in high-risk cardiac surgery patients. Underlying reasons may involve inadequate biosynthesis of glutathione peroxidase 3 (GPx3), a key mediator of selenium's antioxidant effects. This secondary analysis aimed to identify patients with an increase in GPx3 activity following selenium treatment. We hypothesized that these responders might benefit from perioperative selenium treatment.
Methods
Patients were selected based on the availability of selenium biomarker information. Four subgroups were defined according to the patient's baseline status, including those with normal kidney function, reduced kidney function, selenium deficiency, and submaximal GPx3 activity.
Results
Two hundred and forty-four patients were included in this analysis. Overall, higher serum concentrations of selenium, selenoprotein P (SELENOP) and GPx3 were correlated with less organ injury. GPx3 activity at baseline was predictive of 6-month survival (AUC 0.73; p = 0.03). While selenium treatment elevated serum selenium and SELENOP concentrations but not GPx3 activity in the full patient cohort, subgroup analyses revealed that GPx3 activity increased in patients with reduced kidney function, selenium deficiency and low to moderate GPx3 activity. Clinical outcomes did not vary between selenium treatment and placebo in any of these subgroups, though the study was not powered to conclusively detect differences in outcomes.
Conclusions
The identification of GPx3 responders encourages further refined investigations into the treatment effects of selenium in high-risk cardiac surgery patients.
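An AUC such as the 0.73 reported for baseline GPx3 activity is the probability that a randomly chosen survivor has a higher marker value than a randomly chosen non-survivor. A minimal pairwise (Mann-Whitney) sketch, with made-up scores rather than the study's data:

```python
def roc_auc(scores, labels):
    """Pairwise (Mann-Whitney) AUC: the fraction of
    (positive, negative) pairs ranked correctly, counting
    ties as one half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up biomarker values; labels 1 = survivor, 0 = non-survivor.
auc = roc_auc([0.9, 0.6, 0.5, 0.7], [1, 1, 0, 0])  # → 0.75
```

An AUC of 0.5 would mean the marker ranks survivors no better than chance; 1.0 would mean perfect separation.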
Background
Iron deficiency (ID) is the leading cause of anemia worldwide. The prevalence of preoperative ID ranges from 23% to 33%. Preoperative anemia is associated with worse outcomes, making it important to diagnose and treat ID before elective surgery. Several studies have indicated the effectiveness of intravenous iron supplementation in iron deficiency with or without anemia (ID(A)). However, it remains challenging to establish reliable evidence due to heterogeneity in the utilized study outcomes. The development of a core outcome set (COS) can help to reduce this heterogeneity by proposing a minimal set of meaningful and standardized outcomes. The aim of our systematic review was to identify and assess outcomes reported in randomized controlled trials (RCTs) and observational studies investigating iron supplementation in iron-deficient patients with or without anemia.
Methods
We systematically searched MEDLINE, CENTRAL, and ClinicalTrials.gov from 2000 to April 1, 2022. RCTs and observational studies investigating iron supplementation in patients with a preoperative diagnosis of ID(A) were included. Study characteristics and reported outcomes were extracted. Outcomes were categorized according to an established outcome taxonomy. The quality of outcome reporting was assessed with a pre-specified tool. Reported clinically relevant differences for sample size calculation were extracted.
Results
Out of 2898 records, 346 underwent full-text screening, and 13 studies (five RCTs, eight observational studies) with sufficient diagnostic inclusion criteria for iron deficiency with or without anemia (ID(A)) were eligible. Notably, 49 studies were excluded because the diagnosis of ID(A) was not confirmed. Overall, 111 outcomes were structured into five core areas comprising nine domains. Most studies (92%) reported outcomes within the “blood and lymphatic system” domain, followed by “adverse event” (77%) and “need for further resources” (77%); all of the latter reported on the need for blood transfusion. Reported outcomes were heterogeneous in measures and timing. Only two (33%) of six prospective studies were registered prospectively, of which one (17%) showed no signs of selective outcome reporting.
Conclusion
This systematic review comprehensively depicts the heterogeneity of outcomes reported in studies investigating iron supplementation in ID(A) patients, with respect to both exact definitions and timing. Our analysis provides a systematic basis for reaching consensus on a minimal COS.
Systematic review registration
PROSPERO CRD42020214247
Background
Based on low-quality evidence, current nutrition guidelines recommend the delivery of high-dose protein in critically ill patients. The EFFORT Protein trial showed that a higher protein dose is not associated with improved outcomes, but its effects in critically ill patients who developed acute kidney injury (AKI) need further evaluation. The overall aim was to evaluate the effects of high-dose protein in critically ill patients who developed different stages of AKI.
Methods
In this post hoc analysis of the EFFORT Protein trial, we investigated the effect of high versus usual protein dose (≥ 2.2 vs. ≤ 1.2 g/kg body weight/day) on time-to-discharge alive from the hospital (TTDA) and 60-day mortality, overall and in subgroups, in critically ill patients with AKI as defined by the Kidney Disease Improving Global Outcomes (KDIGO) criteria within 7 days of ICU admission. The associations of protein dose with the incidence and duration of kidney replacement therapy (KRT) were also investigated.
Results
Of the 1329 randomized patients, 312 developed AKI and were included in this analysis (163 in the high and 149 in the usual protein dose group). High protein dose was associated with a slower TTDA (hazard ratio 0.5, 95% CI 0.4–0.8) and higher 60-day mortality (relative risk 1.4, 95% CI 1.1–1.8). Effect modification was not statistically significant for any subgroup, and no subgroup suggested a beneficial effect of higher protein, although the harmful effect of the higher protein target appeared to disappear in patients who received KRT. Protein dose was not significantly associated with the incidence of AKI and KRT or the duration of KRT.
Conclusions
In critically ill patients with AKI, high protein dose may be associated with worse outcomes in all AKI stages. The recommendation of higher protein dosing in AKI patients should be carefully re-evaluated to avoid potentially harmful effects, especially in patients who were not treated with KRT.
Trial registration: This study is registered at ClinicalTrials.gov (NCT03160547), May 17, 2017.
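Effect sizes such as the relative risk of 1.4 (95% CI 1.1–1.8) reported above follow from standard two-by-two arithmetic. A sketch with made-up event counts (not the trial's data), using the Wald confidence interval computed on the log scale:

```python
import math

def relative_risk(events_a, total_a, events_b, total_b, z=1.96):
    """Relative risk of group A vs. group B with a Wald CI on the
    log scale (z = 1.96 for a 95% interval). Counts are illustrative."""
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # Standard error of log(RR) for independent binomial counts
    se = math.sqrt(1 / events_a - 1 / total_a
                   + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical: 30/100 events vs. 20/100 events
rr, lo, hi = relative_risk(30, 100, 20, 100)  # RR 1.5, CI ≈ (0.92, 2.46)
```

Because the CI crosses 1.0 in this made-up example, the effect would not be statistically significant, unlike the trial's reported 1.1–1.8 interval.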
Background
Current COVID-19 guidelines recommend the early use of systemic corticosteroids in COVID-19 acute respiratory distress syndrome (ARDS). It remains unknown whether high-dose methylprednisolone pulse therapy (MPT) ameliorates refractory COVID-19 ARDS after many days of mechanical ventilation or rapid deterioration, with or without extracorporeal membrane oxygenation (ECMO).
Methods
This is a retrospective observational study. Consecutive patients with COVID-19 ARDS treated with parenteral high-dose methylprednisolone pulse therapy at the intensive care units (ICU) of two university hospitals between January 1st 2021 and November 30th 2022 were included. Clinical data were collected at ICU admission, at the start of MPT, and 3, 10 and 14 days after MPT.
Results
Thirty-seven patients (mean age 55 ± 12 years) were included in the study. MPT started at a mean of 17 ± 12 days after the start of mechanical ventilation. Nineteen patients (54%) were receiving ECMO support when commencing MPT. Mean paO2/FiO2 improved significantly 3 days (p = 0.034) and 10 days (p = 0.0313) after MPT, as did the required FiO2 10 days after MPT (p = 0.0240). There were no serious infectious complications. Twenty-four patients (65%) survived to ICU discharge, including 13 out of 20 (65%) needing ECMO support.
Conclusions
Late administration of high-dose MPT in a critical subset of refractory COVID-19 ARDS patients improved respiratory function and was associated with a higher-than-expected survival of 65%. These data suggest that high-dose MPT may be a viable salvage therapy in refractory COVID-19 ARDS.
Background
Data on the routine use of video-assisted laryngoscopy in perioperative intubations are rather inconsistent and ambiguous, in part owing to small populations and non-uniform outcome measures in past trials. Failed or prolonged intubation procedures are a source of relevant morbidity and mortality. This study aims to determine whether video-assisted laryngoscopy (with both Macintosh-shaped and hyperangulated blades) is at least equal to the standard method of direct laryngoscopy with respect to the first-pass success rate. Furthermore, validated tools from the field of human factors will be applied to examine within-team communication and task load during this critical medical procedure.
Methods
In this randomized, controlled, three-armed parallel group design, multi-centre trial, a total of more than 2500 adult patients scheduled for perioperative endotracheal intubation will be randomized. In equally large arms, video-assisted laryngoscopy with a Macintosh-shaped or a hyperangulated blade will be compared to the standard of care (direct laryngoscopy with a Macintosh blade). In a pre-defined hierarchical analysis, we will first test the primary outcome for non-inferiority. If non-inferiority is established, the design and projected statistical power also allow for subsequent testing for superiority of one of the interventions.
Various secondary outcomes will account for patient safety considerations as well as human factors interactions within the provider team and will allow for further exploratory data analysis and hypothesis generation.
Discussion
This randomized controlled trial will provide a solid base of data in a field where reliable evidence is of major clinical importance. With thousands of endotracheal intubations performed every day in operating rooms around the world, every bit of performance improvement translates into increased patient safety and comfort and may eventually prevent significant burden of disease. Therefore, we feel confident that a large trial has the potential to considerably benefit patients and anaesthetists alike.
Trial registration
ClinicalTrials.gov NCT05228288.
Protocol version
1.1, November 15, 2021.
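The hierarchical non-inferiority-then-superiority analysis described above rests on a one-sided comparison of first-pass success proportions against a margin. A minimal sketch of that logic follows; the margin, the counts, and the simple Wald z-test are illustrative assumptions, not the trial's pre-specified statistical analysis plan:

```python
import math

def noninferiority_z(success_new, n_new, success_std, n_std,
                     margin=0.02):
    """One-sided z-test that the new method's first-pass success
    rate is not worse than the standard's by more than `margin`.
    Returns (z, one_sided_p). All numbers here are illustrative."""
    p_new = success_new / n_new
    p_std = success_std / n_std
    se = math.sqrt(p_new * (1 - p_new) / n_new
                   + p_std * (1 - p_std) / n_std)
    z = (p_new - p_std + margin) / se
    # One-sided p-value P(Z > z) via the standard normal CDF
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p

# Hypothetical arms: 800/850 vs. 790/850 first-pass successes
z, p = noninferiority_z(800, 850, 790, 850, margin=0.02)
```

With these made-up counts the lower bound of the difference stays above the margin, so non-inferiority would be declared (p < 0.05), and only then would superiority testing proceed in the hierarchy.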
Long-term sequelae in hospitalized Coronavirus Disease 2019 (COVID-19) patients may result in limited quality of life. The current study aimed to determine health-related quality of life (HRQoL) after COVID-19 hospitalization in non-intensive care unit (non-ICU) and ICU patients. This single-center study was conducted at the University Hospital of Wuerzburg, Germany. Eligible patients were hospitalized with COVID-19 between March 2020 and December 2020 and were interviewed 3 and 12 months after hospital discharge. Questionnaires included the European Quality of Life 5 Dimensions 5 Level (EQ-5D-5L), the Patient Health Questionnaire-9 (PHQ-9), the Generalized Anxiety Disorder 7 scale (GAD-7), the FACIT Fatigue Scale, the Perceived Stress Scale (PSS-10) and the Posttraumatic Symptom Scale 10 (PTSS-10). 85 patients were included in the study. The EQ-5D-5L index differed significantly between non-ICU (0.78 ± 0.33 and 0.84 ± 0.23) and ICU (0.71 ± 0.27 and 0.74 ± 0.20) patients at 3 and 12 months. After 12 months, 87% of non-ICU and 80% of ICU survivors lived at home without support. One-third of ICU and half of the non-ICU patients returned to work. A higher percentage of ICU patients was limited in their activities of daily living compared to non-ICU patients. Depression and fatigue were present in one-fifth of the ICU patients. Stress levels remained high, with only 24% of non-ICU and 3% of ICU patients (p = 0.0186) reporting low perceived stress. Posttraumatic symptoms were present in 5% of non-ICU and 10% of ICU patients. HRQoL was limited in COVID-19 ICU patients 3 and 12 months after COVID-19 hospitalization, with significantly less improvement at 12 months compared to non-ICU patients. Mental disorders were common, highlighting the complexity of post-COVID-19 symptoms as well as the necessity to educate patients and primary care providers about monitoring mental well-being after COVID-19.
Summary
Blood oxygen saturation is an important clinical parameter, especially in postoperative hospitalized patients. In clinical practice it is monitored by arterial blood gas (ABG) analysis and/or pulse oximetry, neither of which is suitable for long-term continuous monitoring of patients during the entire hospital stay, or beyond. Technological advances recently developed for consumer-grade fitness trackers could, at least in theory, help to fill this gap, but benchmarks on the applicability and accuracy of these technologies in hospitalized patients are currently lacking. We therefore conducted a prospective clinical trial with 201 patients at the postanaesthesia care unit under controlled settings, comparing in total >1,000 blood oxygen saturation measurements by fitness trackers of three brands with the ABG gold standard and with pulse oximetry. Our results suggest that, despite an overall still tolerable measurement accuracy, comparatively high dropout rates severely limit the usefulness of fitness trackers, particularly during the immediate postoperative period of hospitalized patients.
Highlights
•The accuracy of O2 measurements by fitness trackers is tolerable (RMSE ≲4%)
•Correlation with arterial blood gas measurements is fair to moderate (PCC = [0.46; 0.64])
•Dropout rates of fitness trackers during O2 monitoring are high (∼1/3 values missing)
•Fitness trackers cannot be recommended for O2 measuring during critical monitoring
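The RMSE and Pearson correlation coefficient (PCC) figures in the highlights are computed from paired measurements. A minimal pure-Python sketch on made-up paired SpO2 readings (not the trial's data):

```python
import math

def rmse(a, b):
    """Root-mean-square error between paired measurement series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def pearson(a, b):
    """Pearson correlation coefficient between paired series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Made-up tracker vs. ABG SpO2 pairs (percent saturation):
tracker = [96, 94, 98, 91, 97]
abg = [95, 93, 99, 90, 96]
err = rmse(tracker, abg)
corr = pearson(tracker, abg)
```

Note that a low RMSE alone says nothing about dropout: values the tracker fails to deliver at all never enter these paired computations, which is exactly the limitation the highlights point out.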
Interpreting blood gas analysis results can be challenging for the clinician, especially in stressful situations under time pressure. To foster fast and correct interpretation of blood gas results, we developed Visual Blood. This computer-based, multicentre, noninferiority study compared Visual Blood with conventional arterial blood gas (ABG) printouts. We presented six scenarios to anaesthesiologists, once with Visual Blood and once with the conventional ABG printout. The primary outcome was ABG parameter perception. Secondary outcomes included correct clinical diagnoses, perceived diagnostic confidence, and perceived workload. To analyse the results, we used mixed models and matched odds ratios. Analysing 300 within-subject cases, we showed noninferiority of Visual Blood compared to ABG printouts with respect to the rate of correctly perceived ABG parameters (rate ratio, 0.96; 95% CI, 0.92–1.00; p = 0.06). Additionally, the study revealed two-times-higher odds of making the correct clinical diagnosis with Visual Blood (OR, 2.16; 95% CI, 1.42–3.29; p < 0.001) than with ABG printouts. There was no evidence for a difference in diagnostic confidence (OR, 0.84; 95% CI, 0.58–1.21; p = 0.34) and only weak evidence for a difference in perceived workload (coefficient, 2.44; 95% CI, −0.09 to 4.98; p = 0.06). This study showed that while participants did not perceive the ABG parameters better, using Visual Blood resulted in more correct clinical diagnoses than using conventional ABG printouts. This suggests that Visual Blood allows for a higher level of situation awareness beyond the perception of individual parameters. However, the study also highlighted the limitations of today's virtual reality headsets and of Visual Blood.
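The matched odds ratios mentioned above condition on the within-subject pairing of the two presentations. In the simplest (McNemar-style) form, only discordant pairs contribute, and the conditional odds ratio is b/c. A sketch under that simplification, with made-up data; the study's actual analysis used mixed models, which handle repeated measures more generally:

```python
def matched_odds_ratio(pairs):
    """Conditional (McNemar-style) odds ratio for paired binary
    outcomes. `pairs` holds (correct_with_A, correct_with_B) per
    subject-case; concordant pairs cancel out, so the OR is the
    ratio of the two discordant counts b/c."""
    b = sum(1 for a, other in pairs if a and not other)      # A only
    c = sum(1 for a, other in pairs if other and not a)      # B only
    if c == 0:
        raise ValueError("no discordant pairs favouring B; OR undefined")
    return b / c

# Made-up within-subject cases: (diagnosis correct with Visual Blood,
# diagnosis correct with ABG printout)
cases = ([(True, False)] * 10   # correct only with Visual Blood
         + [(False, True)] * 5  # correct only with printout
         + [(True, True)] * 20) # correct with both (ignored)
or_est = matched_odds_ratio(cases)  # → 2.0
```

With these illustrative counts the odds of a correct diagnosis are twice as high with the first presentation, mirroring the direction (though not the data) of the OR 2.16 reported above.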