Purpose
Anaemia is common in patients presenting with aneurysmal subarachnoid haemorrhage (aSAH) and intracerebral haemorrhage (ICH). In surgical patients, anaemia has been identified as an independent risk factor for postoperative mortality, prolonged hospital length of stay (LOS) and an increased risk of red blood cell (RBC) transfusion. This multicentre observational cohort study describes the incidence and effects of preoperative anaemia in this critical patient population over a 10-year period.
Methods
This multicentre observational study included adult in-hospital surgical patients diagnosed with aSAH or ICH from 21 German hospitals (discharged from 1 January 2010 to 30 September 2020). Descriptive, univariate and multivariate analyses were performed to investigate the incidence of preoperative anaemia and its association with RBC transfusion, in-hospital mortality and postoperative complications in patients with aSAH and ICH.
Results
A total of n = 9081 patients were analysed (aSAH n = 5008; ICH n = 4073). Preoperative anaemia was present in 28.3% of patients with aSAH and 40.9% of those with ICH. RBC transfusion rates were 29.9% in aSAH and 29.3% in ICH. Multivariate analysis revealed that preoperative anaemia is associated with a higher risk of RBC transfusion (OR = 3.25 in aSAH, OR = 4.16 in ICH, p < 0.001), of in-hospital mortality (OR = 1.48 in aSAH, OR = 1.53 in ICH, p < 0.001) and of several postoperative complications.
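To make the reported multivariable analysis concrete, the following is a minimal sketch of a logistic regression yielding odds ratios of this kind. All data are simulated; the variable names, the single example confounder, and the effect sizes are illustrative stand-ins, not the study data or the study's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data; names and effects are illustrative only
rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "anaemia": rng.integers(0, 2, n),  # preoperative anaemia (0/1)
    "age": rng.normal(62, 13, n),      # example confounder
})
# Simulate transfusion with a true OR of about 3.25 for anaemia (log 3.25 ~ 1.18)
logit_p = -1.5 + 1.18 * df["anaemia"] + 0.01 * (df["age"] - 62)
df["rbc_transfusion"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Multivariable logistic regression; exponentiated coefficients are odds ratios
fit = smf.logit("rbc_transfusion ~ anaemia + age", data=df).fit(disp=0)
print(np.exp(fit.params))      # ORs per covariate
print(np.exp(fit.conf_int()))  # 95% CIs
```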
Conclusions
Preoperative anaemia is associated with increased RBC transfusion rates, in-hospital mortality and postoperative complications in patients with aSAH and ICH.
The biomedical consequences of allogeneic blood transfusions and the possible pathomechanisms of transfusion-related morbidity and mortality are still not entirely understood. In retrospective studies, allogeneic transfusion was associated with increased rates of cancer recurrence, metastasis and death in patients with colorectal cancer. However, correlation does not imply causation. The purpose of this study was to elucidate this empirical observation further in order to address uncertainty among patients and clinicians. We focused on the in vitro effect of microparticles derived from red blood cell units (RMPs). We incubated different colon carcinoma cells with RMPs and analyzed the effects on growth, invasion, migration and tumor marker expression. Furthermore, effects on Wnt, Akt and ERK signaling were explored. Our results show that RMPs do not appear to affect the functional and phenotypic characteristics of different colon carcinoma cells, nor did they induce or inhibit Wnt, Akt or ERK signaling, albeit in cell culture models lacking a tumor microenvironment. Allogeneic blood transfusions are associated with poor prognosis, but RMPs do not seem to convey tumor-enhancing effects. Most likely, the circumstances that necessitate the transfusion, such as preoperative anemia, tumor stage, perioperative blood loss and extension of surgery, take center stage.
Artificial intelligence (AI) is predicted to play an increasingly important role in perioperative medicine in the very near future. However, little is known about what anesthesiologists know and think about AI in this context. This is important because the successful introduction of new technologies depends on the understanding and cooperation of end users. We sought to investigate how much anesthesiologists know about AI and what they think about the introduction of AI-based technologies into the clinical setting. In order to better understand what anesthesiologists think of AI, we recruited 21 anesthesiologists from 2 university hospitals for face-to-face structured interviews. The interview transcripts were subdivided sentence-by-sentence into discrete statements, and statements were then grouped into key themes. Subsequently, a survey of closed questions based on these themes was sent to 70 anesthesiologists from 3 university hospitals for rating. In the interviews, the base level of knowledge of AI was good at 86 of 90 statements (96%), although awareness of the potential applications of AI in anesthesia was poor at only 7 of 42 statements (17%). Regarding the implementation of AI in anesthesia, statements were split roughly evenly between pros (46 of 105, 44%) and cons (59 of 105, 56%). Interviewees considered that AI could usefully be used in diverse tasks such as risk stratification, the prediction of vital sign changes, or as a treatment guide. The validity of these themes was probed in a follow-up survey of 70 anesthesiologists with a response rate of 70%, which confirmed an overall positive view of AI in this group. Anesthesiologists hold a range of opinions, both positive and negative, regarding the application of AI in their field of work. Survey-based studies do not always uncover the full breadth of nuance of opinion amongst clinicians. Engagement with specific concerns, both technical and ethical, will prove important as this technology moves from research to the clinic.
Visual Blood, a 3D animated computer model to optimize the interpretation of blood gas analysis (2023)
Acid–base homeostasis is crucial for all physiological processes in the body and is evaluated using arterial blood gas (ABG) analysis. Screens or printouts of ABG results require the interpretation of many textual elements and numbers, which may delay intuitive comprehension. To optimise the presentation of the results for the specific strengths of human perception, we developed Visual Blood, an animated virtual model of ABG results. In this study, we compared its performance with a conventional result printout. Seventy physicians from three European university hospitals participated in a computer-based simulation study. Initially, after an educational video, we tested the participants’ ability to assign individual Visual Blood visualisations to their corresponding ABG parameters. As the primary outcome, we tested caregivers’ ability to correctly diagnose simulated clinical ABG scenarios with Visual Blood or conventional ABG printouts. For user feedback, participants rated their agreement with statements at the end of the study. Physicians correctly assigned 90% of the individual Visual Blood visualisations. Regarding the primary outcome, the participants made the correct diagnosis 86% of the time when using Visual Blood, compared to 68% when using the conventional ABG printout. A mixed logistic regression model showed an odds ratio for correct diagnosis of 3.4 (95%CI 2.00–5.79, p < 0.001) and an odds ratio for perceived diagnostic confidence of 1.88 (95%CI 1.67–2.11, p < 0.001) in favour of Visual Blood. A linear mixed model showed a coefficient for perceived workload of −3.2 (95%CI −3.77 to −2.64) in favour of Visual Blood. Fifty-one of seventy (73%) participants agreed or strongly agreed that Visual Blood was easy to use, and fifty-five of seventy (79%) agreed that it was fun to use. In conclusion, Visual Blood improved physicians’ ability to diagnose ABG results. It also increased perceived diagnostic confidence and reduced perceived workload. This study adds to the growing body of research showing that decision-support tools developed around human cognitive abilities can streamline caregivers’ decision-making and may improve patient care.
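As a rough illustration of a repeated-measures logistic analysis like the mixed model reported above, the sketch below fits a GEE logistic regression clustered by participant. This is a stand-in for the paper's mixed logistic regression, not its actual specification, and the data frame and variable names are invented.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x scenario
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "modality":    ["visual", "printout"] * 6,            # Visual Blood vs. printout
    "correct":     [1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1],  # correct diagnosis (0/1)
})

# GEE logistic regression with an exchangeable working correlation per participant
fit = smf.gee("correct ~ C(modality, Treatment('printout'))",
              groups="participant", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())  # exponentiate the modality coefficient to obtain an OR
```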
Interpreting blood gas analysis results can be challenging for the clinician, especially in stressful situations under time pressure. To foster fast and correct interpretation of blood gas results, we developed Visual Blood. This computer-based, multicentre, noninferiority study compared Visual Blood and conventional arterial blood gas (ABG) printouts. We presented six scenarios to anaesthesiologists, once with Visual Blood and once with the conventional ABG printout. The primary outcome was ABG parameter perception. The secondary outcomes included correct clinical diagnoses, perceived diagnostic confidence, and perceived workload. To analyse the results, we used mixed models and matched odds ratios. Analysing 300 within-subject cases, we showed noninferiority of Visual Blood compared to ABG printouts concerning the rate of correctly perceived ABG parameters (rate ratio, 0.96; 95% CI, 0.92–1.00; p = 0.06). Additionally, the study revealed two times higher odds of making the correct clinical diagnosis using Visual Blood (OR, 2.16; 95% CI, 1.42–3.29; p < 0.001) than using ABG printouts. There was no evidence for a difference in diagnostic confidence (OR, 0.84; 95% CI, 0.58–1.21; p = 0.34) and only weak evidence for a difference in perceived workload (coefficient, 2.44; 95% CI, −0.09 to 4.98; p = 0.06). This study showed that participants did not perceive the ABG parameters better with Visual Blood, but that its use resulted in more correct clinical diagnoses than conventional ABG printouts. This suggests that Visual Blood allows for a higher level of situation awareness beyond the perception of individual parameters. However, the study also highlighted the limitations of today's virtual reality headsets and of Visual Blood.
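Since every anaesthesiologist handled each scenario under both conditions, the correct-diagnosis outcomes are paired, and a matched odds ratio can be computed from the discordant pairs alone. The counts below are hypothetical, purely to show the arithmetic.

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical paired results: correct with Visual Blood vs. with the printout
b = 48  # pairs correct with Visual Blood only
c = 22  # pairs correct with the printout only (concordant pairs drop out)

matched_or = b / c                    # conditional (matched-pairs) odds ratio
se = np.sqrt(1 / b + 1 / c)           # standard error of log(OR)
ci = np.exp(np.log(matched_or) + np.array([-1.96, 1.96]) * se)
p = binomtest(b, b + c, 0.5).pvalue   # exact McNemar test on discordant pairs
print(f"matched OR = {matched_or:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p = {p:.4f}")
```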
Summary
Blood oxygen saturation is an important clinical parameter, especially in postoperative hospitalized patients. In clinical practice it is monitored by arterial blood gas (ABG) analysis and/or pulse oximetry, neither of which is suitable for long-term continuous monitoring of patients during the entire hospital stay, or beyond. Technological advances recently developed for consumer-grade fitness trackers could, at least in theory, help to fill this gap, but benchmarks on the applicability and accuracy of these technologies in hospitalized patients are currently lacking. We therefore conducted a prospective clinical trial with 201 patients at the postanaesthesia care unit under controlled settings, comparing in total >1,000 blood oxygen saturation measurements by fitness trackers of three brands with the ABG gold standard and with pulse oximetry. Our results suggest that, despite an overall still tolerable measurement accuracy, comparatively high dropout rates severely limit the possibilities of employing fitness trackers, particularly during the immediate postoperative period of hospitalized patients.
Highlights
•The accuracy of O2 measurements by fitness trackers is tolerable (RMSE ≲4%; see the sketch after these highlights)
•Correlation with arterial blood gas measurements is fair to moderate (PCC = [0.46; 0.64])
•Dropout rates of fitness trackers during O2 monitoring are high (∼1/3 values missing)
•Fitness trackers cannot be recommended for O2 measuring during critical monitoring
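The three headline metrics (dropout rate, RMSE, and Pearson correlation) can be computed from paired readings in a few lines; the sample values below are fabricated for illustration and are not the trial data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired SpO2 readings (%); NaN marks a tracker dropout
tracker = np.array([96.0, 94.0, np.nan, 92.0, 95.0, np.nan, 93.0, 96.5, 91.0])
abg     = np.array([97.0, 95.5, 96.0,   94.0, 95.0, 93.5,   91.0, 97.0, 93.0])

dropout_rate = np.isnan(tracker).mean()  # share of missing tracker values

ok = ~np.isnan(tracker)
rmse = np.sqrt(np.mean((tracker[ok] - abg[ok]) ** 2))
pcc, _ = pearsonr(tracker[ok], abg[ok])  # Pearson correlation coefficient
print(f"dropout = {dropout_rate:.0%}, RMSE = {rmse:.2f}%, PCC = {pcc:.2f}")
```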
Background
Iron deficiency (ID) is the leading cause of anemia worldwide. The prevalence of preoperative ID ranges from 23 to 33%. Preoperative anemia is associated with worse outcomes, making it important to diagnose and treat ID before elective surgery. Several studies indicated the effectiveness of intravenous iron supplementation in iron deficiency with or without anemia (ID(A)). However, it remains challenging to establish reliable evidence due to heterogeneity in utilized study outcomes. The development of a core outcome set (COS) can help to reduce this heterogeneity by proposing a minimal set of meaningful and standardized outcomes. The aim of our systematic review was to identify and assess outcomes reported in randomized controlled trials (RCTs) and observational studies investigating iron supplementation in iron-deficient patients with or without anemia.
Methods
We searched MEDLINE, CENTRAL, and ClinicalTrials.gov systematically from 2000 to April 1, 2022. RCTs and observational studies investigating iron supplementation in patients with a preoperative diagnosis of ID(A) were included. Study characteristics and reported outcomes were extracted. Outcomes were categorized according to an established outcome taxonomy. The quality of outcome reporting was assessed with a pre-specified tool. Reported clinically relevant differences for sample size calculation were extracted.
Results
Out of 2898 records, 346 underwent full-text screening, and 13 studies (five RCTs, eight observational studies) with sufficient diagnostic inclusion criteria for iron deficiency with or without anemia (ID(A)) were eligible. Notably, 49 studies were excluded because they lacked a confirmed diagnosis of ID(A). Overall, 111 outcomes were structured into five core areas comprising nine domains. Most studies (92%) reported outcomes within the "blood and lymphatic system" domain, followed by "adverse event" (77%) and "need for further resources" (77%). All of the latter reported on the need for blood transfusion. Reported outcomes were heterogeneous in measures and timing. Only two (33%) of six prospective studies were registered prospectively, of which one (17%) showed no signs of selective outcome reporting.
Conclusion
This systematic review comprehensively depicts the heterogeneity of outcomes reported in studies investigating iron supplementation in ID(A) patients, both in their exact definitions and in their timing. Our analysis provides a systematic basis for a consensus process to define a minimal COS.
Systematic review registration
PROSPERO CRD42020214247
Background
Based on low-quality evidence, current nutrition guidelines recommend the delivery of high-dose protein in critically ill patients. The EFFORT Protein trial showed that a higher protein dose is not associated with improved outcomes, but its effects in critically ill patients who develop acute kidney injury (AKI) need further evaluation. The overall aim is to evaluate the effects of high-dose protein in critically ill patients who developed different stages of AKI.
Methods
In this post hoc analysis of the EFFORT Protein trial, we investigated the effect of high versus usual protein dose (≥ 2.2 vs. ≤ 1.2 g/kg body weight/day) on time-to-discharge alive from the hospital (TTDA) and 60-day mortality, overall and in different subgroups, in critically ill patients with AKI as defined by the Kidney Disease: Improving Global Outcomes (KDIGO) criteria within 7 days of ICU admission. The associations of protein dose with the incidence and duration of kidney replacement therapy (KRT) were also investigated.
Results
Of the 1329 randomized patients, 312 developed AKI and were included in this analysis (163 in the high and 149 in the usual protein dose group). A high protein dose was associated with a slower TTDA (hazard ratio 0.5, 95% CI 0.4–0.8) and higher 60-day mortality (relative risk 1.4, 95% CI 1.1–1.8). Effect modification was not statistically significant for any subgroup, and no subgroup suggested a beneficial effect of higher protein, although the harmful effect of the higher protein target appeared to disappear in patients who received KRT. Protein dose was not significantly associated with the incidence of AKI and KRT or with the duration of KRT.
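A hazard ratio for time-to-discharge alive is typically estimated with a Cox proportional-hazards model in which in-hospital death counts as censoring for the discharge endpoint. The sketch below, using the lifelines package on invented records, shows the general shape of such an analysis; the column names, values, and model form are assumptions, not the trial's analysis code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient records (values illustrative only)
df = pd.DataFrame({
    "days":             [12, 30, 60, 8, 45, 60, 22, 60, 15, 33],  # follow-up capped at day 60
    "discharged_alive": [1,  1,  0,  1, 1,  0,  1,  0,  1,  1],   # 0 = censored (death or still admitted)
    "high_protein":     [1,  0,  1,  0, 1,  1,  0,  0,  1,  0],   # >= 2.2 vs. <= 1.2 g/kg/day
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="discharged_alive")
# exp(coef) < 1 for high_protein would indicate slower discharge alive
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```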
Conclusions
In critically ill patients with AKI, a high protein dose may be associated with worse outcomes across all AKI stages. The recommendation of higher protein dosing in AKI patients should be carefully re-evaluated to avoid potentially harmful effects, especially in patients not treated with KRT.
Trial registration: This study was registered at ClinicalTrials.gov (NCT03160547) on May 17, 2017.
Background
Data on the routine use of video-assisted laryngoscopy in peri-operative intubations are rather inconsistent and ambiguous, in part due to small populations and non-uniform outcome measures in past trials. Failed or prolonged intubation procedures are a source of relevant morbidity and mortality. This study aims to determine whether video-assisted laryngoscopy (with both Macintosh-shaped and hyperangulated blades) is at least equal to the standard method of direct laryngoscopy with respect to the first-pass success rate. Furthermore, validated tools from the field of human factors will be applied to examine within-team communication and task load during this critical medical procedure.
Methods
In this randomized, controlled, multi-centre trial with a three-armed parallel-group design, a total of more than 2500 adult patients scheduled for perioperative endotracheal intubation will be randomized. In three equally large arms, video-assisted laryngoscopy with either a Macintosh-shaped or a hyperangulated blade will be compared to the standard of care (direct laryngoscopy with a Macintosh blade). In a pre-defined hierarchical analysis, we will first test the primary outcome for non-inferiority. If this goal is met, the design and projected statistical power also allow for subsequent testing for superiority of one of the interventions.
Various secondary outcomes will account for patient safety considerations as well as human factors interactions within the provider team and will allow for further exploratory data analysis and hypothesis generation.
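The pre-defined hierarchical analysis first tests the first-pass success rate for non-inferiority against a margin. A minimal Wald-type sketch of such a test is given below; the margin, counts, and one-sided alpha are invented for illustration and are not taken from the trial protocol.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical first-pass success counts (margin and numbers illustrative only)
margin = 0.04           # non-inferiority margin on the success-rate difference
x_vl, n_vl = 720, 830   # video-assisted laryngoscopy arm
x_dl, n_dl = 700, 830   # direct laryngoscopy arm (standard of care)

p_vl, p_dl = x_vl / n_vl, x_dl / n_dl
diff = p_vl - p_dl
se = np.sqrt(p_vl * (1 - p_vl) / n_vl + p_dl * (1 - p_dl) / n_dl)

# One-sided test of H0: diff <= -margin (video arm inferior)
z = (diff + margin) / se
p_value = norm.sf(z)
lower = diff - norm.ppf(0.95) * se  # one-sided 95% lower confidence bound
print(f"diff = {diff:.3f}, lower bound = {lower:.3f}, "
      f"non-inferior: {lower > -margin} (p = {p_value:.4f})")
```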
Discussion
This randomized controlled trial will provide a solid base of data in a field where reliable evidence is of major clinical importance. With thousands of endotracheal intubations performed every day in operating rooms around the world, every bit of performance improvement translates into increased patient safety and comfort and may eventually prevent significant burden of disease. Therefore, we feel confident that a large trial has the potential to considerably benefit patients and anaesthetists alike.
Trial registration
ClinicalTrials.gov NCT05228288.
Protocol version
1.1, November 15, 2021.