Background and Purpose
In animal models, von Willebrand factor (VWF) is involved in thrombus formation and propagation of ischemic stroke. However, the pathophysiological relevance of this molecule in humans and its potential use as a biomarker for the risk and severity of ischemic stroke remain unclear. This study had two aims: to identify predictors of altered VWF levels and to examine whether VWF levels differ between acute cerebrovascular events and chronic cerebrovascular disease (CCD).
Methods
A case–control study was undertaken between 2010 and 2013 at our university hospital. In total, 116 patients with acute ischemic stroke (AIS) or transient ischemic attack (TIA), 117 patients with CCD, and 104 healthy volunteers (HV) were included. Blood was taken on days 0, 1, and 3 in patients with AIS or TIA, and once in CCD patients and HV. VWF serum levels were measured and correlated with demographic and clinical parameters by multivariate linear regression and ANOVA.
Results
Patients with CCD (158±46%) had significantly higher VWF levels than HV (113±36%, P<0.001), but lower levels than AIS/TIA patients (200±95%, P<0.001). Age, sex, and stroke severity influenced VWF levels (P<0.05).
Conclusions
VWF levels differed across disease subtypes and patient characteristics. Our study confirms increased VWF levels as a risk factor for cerebrovascular disease and, moreover, suggests that it may represent a potential biomarker for stroke severity, warranting further investigation.
Background/Aims:
Acute kidney injury (AKI) is a postoperative complication after cardiac surgery with a high impact on mortality and morbidity. Nephrocheck® [TIMP-2*IGFBP7] determines markers of tubular stress, which occurs prior to tubular damage. It is unknown at which time point [TIMP-2*IGFBP7] measurement should be performed to best predict AKI. We investigated the association of [TIMP-2*IGFBP7] at various time points with the incidence of AKI in patients undergoing elective cardiac surgery including cardiopulmonary bypass.
Methods: In a prospective cohort study, serial blood and urine samples were collected from 150 patients preoperatively, at ICU admission, and at 24 h and 48 h post-surgery. AKI was defined as a serum creatinine rise of >0.3 mg/dl within 48 h. Urinary [TIMP-2*IGFBP7] was measured preoperatively, at ICU admission, and at 24 h post-surgery; medical staff were kept blinded to these results.
Results: A total of 35 patients (23.5%) experienced AKI, with a higher incidence in those with high [TIMP-2*IGFBP7] values at ICU admission (57.1% vs. 10.1%, p<0.001). In logistic regression, [TIMP-2*IGFBP7] at ICU admission was independently associated with the occurrence of AKI (odds ratio 11.83; p<0.001; C-statistic = 0.74) after adjustment for EuroSCORE II and CPB time.
Conclusions: Early detection of elevated [TIMP-2*IGFBP7] at ICU admission was strongly predictive of postoperative AKI and appeared to be more precise than subsequent measurements.
We assume that a specific health constraint, e.g., a certain aspect of bodily function or quality of life that is measured by a variable X, is absent (or irrelevant) in a healthy reference population (Ref0), and materially present and precisely measured in a diseased reference population (Ref1). We further assume that some amount of this constraint of interest is suspected to be present in a population under study (SP). To quantify this amount, we propose an intuitive measure, the population comparison index (PCI), that relates the mean value of X in population SP to the mean values of X in populations Ref0 and Ref1. This measure is defined as PCI[X] = (mean[X|SP] − mean[X|Ref0])/(mean[X|Ref1] − mean[X|Ref0]) × 100[%], where mean[X|.] is the average value of X in the respective group of individuals. For interpretation, PCI[X] ≈ 0 indicates that the values of X in population SP are similar to those in population Ref0, and hence, the impairment measured by X is not materially present in the individuals in population SP. On the other hand, PCI[X] ≈ 100 means that the individuals in SP exhibit values of X comparable to those occurring in Ref1, i.e., the constraint of interest is equally present in populations SP and Ref1. A value of 0 < PCI[X] < 100 indicates that a certain percentage of the constraint is present in SP, more than in Ref0 but less than in Ref1. A value of PCI[X] > 100 means that population SP is even more affected by the constraint than population Ref1.
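The definition above translates directly into code. A minimal sketch (function and variable names are illustrative, not part of the original proposal):

```python
def pci(mean_sp: float, mean_ref0: float, mean_ref1: float) -> float:
    """Population comparison index PCI[X] in percent.

    Relates the mean of X in the study population (SP) to the means of X
    in the healthy (Ref0) and diseased (Ref1) reference populations.
    """
    if mean_ref1 == mean_ref0:
        raise ValueError("reference population means must differ")
    return (mean_sp - mean_ref0) / (mean_ref1 - mean_ref0) * 100.0

# Example: SP lies halfway between the two reference populations.
print(pci(30.0, 10.0, 50.0))  # -> 50.0
```

A value of 0 recovers the Ref0 case, 100 the Ref1 case, and values above 100 indicate that SP is more affected than the diseased reference population.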
Background:
Catechol-O-methyltransferase (COMT) is the key enzyme in catecholamine degradation. Recent studies suggest that the COMT rs4680 polymorphism is associated with the response to endogenous and exogenous catecholamines. There are, however, conflicting data on whether the COMT Met/Met genotype is associated with an increased risk of acute kidney injury (AKI) after cardiac surgery. The aim of the current study was to prospectively investigate the impact of the COMT rs4680 polymorphism on the incidence of AKI in patients undergoing cardiac surgery.
Methods:
In this prospective single-center cohort study, consecutive patients hospitalized for elective cardiac surgery including cardiopulmonary bypass (CPB) were screened for participation. Demographic and clinical data as well as blood, urine, and tissue samples were collected at predefined time points throughout the hospital stay. AKI was defined according to recent recommendations of the Kidney Disease: Improving Global Outcomes (KDIGO) group. Genetic analysis was performed after patient enrolment was completed.
Results:
Between April and December 2014, 150 patients were recruited. The COMT genotypes were distributed as follows: Val/Met 48.7%, Met/Met 29.3%, Val/Val 21.3%. No significant differences in demographics, comorbidities, or operative strategy were found across COMT genotypes. AKI occurred in 35 patients (23.5%) of the total cohort, with no differences between the COMT genotypes (20.5% Met/Met, 24.7% Val/Met, 25.0% Val/Val, p = 0.66). There were also no differences in the postoperative course, including length of ICU or in-hospital stay.
Conclusions:
We did not find statistically significant differences in the risk of postoperative AKI or in the length of ICU or in-hospital stay according to the underlying COMT genotype.
Risk prediction in patients with heart failure (HF) is essential to improve the tailoring of preventive, diagnostic, and therapeutic strategies for the individual patient, and to use health care resources effectively. Risk scores derived from controlled clinical studies can be used to calculate the risk of mortality and HF hospitalizations. However, these scores are poorly implemented in routine care, predominantly because their calculation requires considerable effort in practice and the necessary data are often not available in an interoperable format. In this work, we demonstrate the feasibility of a multi-site solution to derive and calculate two exemplary HF scores from clinical routine data (MAGGIC score with six continuous and eight categorical variables; Barcelona Bio-HF score with five continuous and six categorical variables). Within HiGHmed, a German Medical Informatics Initiative consortium, we implemented an interoperable solution, collecting a harmonized HF-phenotypic core data set (CDS) within the openEHR framework. Our approach minimizes the need for manual data entry by automatically retrieving data from primary systems. We show, across five participating medical centers, that the implemented structures to execute dedicated data queries, followed by harmonized data processing and score calculation, work well in practice. In summary, we demonstrated the feasibility of clinical routine data usage across multiple partner sites to compute HF risk scores. This solution can be extended to a large spectrum of applications in clinical care.
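Computationally, such a score reduces to summing weighted contributions of continuous and categorical variables once the data are retrievable. The sketch below uses purely illustrative point values and variable names, not the published MAGGIC or Barcelona Bio-HF weights:

```python
def heart_failure_points(age: int, lvef: float, smoker: bool, nyha_class: int) -> int:
    """Toy additive risk score mixing continuous and categorical
    variables (all point values are illustrative placeholders)."""
    points = 0
    points += max(0, (age - 55) // 5)        # continuous: banded age
    points += max(0, int((40 - lvef) // 5))  # continuous: banded ejection fraction
    points += 2 if smoker else 0             # categorical: smoking status
    points += {1: 0, 2: 2, 3: 4, 4: 6}[nyha_class]  # categorical: NYHA class
    return points

print(heart_failure_points(70, 30.0, False, 2))  # -> 7
```

In practice the total is then mapped to a published mortality or hospitalization risk; the hard part addressed in the study is not this arithmetic but retrieving the input variables interoperably from primary systems.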
Background
The guideline recommendation to not measure carotid intima-media thickness (CIMT) for cardiovascular risk prediction is based on the assessment of a single carotid segment. We evaluated whether there is a segment-specific association between different measurement locations of CIMT and cardiovascular risk factors.
Methods
Participants of the population-based STAAB cohort study, comprising individuals aged 30 to 79 years from the general population of Würzburg, Germany, were investigated. CIMT was measured on the far wall of both sides at three predefined locations: common carotid artery (CCA), bulb, and internal carotid artery (ICA). Diabetes, dyslipidemia, hypertension, smoking, and obesity were considered as risk factors. In multivariable logistic regression analysis, odds ratios of risk factors per location were estimated for the endpoint of exceeding the age- and sex-adjusted 75th percentile of CIMT.
Results
2492 subjects were included in the analysis. Segment-specific CIMT was highest in the bulb, followed by CCA, and lowest in the ICA. Dyslipidemia, hypertension, and smoking were associated with CIMT, but not diabetes and obesity. We observed no relevant segment-specific association between the three different locations and risk factors, except for a possible interaction between smoking and ICA.
Conclusions
As no segment-specific association between cardiovascular risk factors and CIMT became evident, one simple measurement of one location may suffice to assess the cardiovascular risk of an individual.
Background: Animal models have implicated an integral role for coagulation factors XI (FXI) and XII (FXII) in thrombus formation and propagation of ischemic stroke (IS). However, it is unknown if these molecules contribute to IS pathophysiology in humans, and might be of use as biomarkers for IS risk and severity. This study aimed to identify predictors of altered FXI and FXII levels and to determine whether there are differences in the levels of these coagulation factors between acute cerebrovascular events and chronic cerebrovascular disease (CCD). Methods: In this case-control study, 116 patients with acute ischemic stroke (AIS) or transient ischemic attack (TIA), 117 patients with CCD, and 104 healthy volunteers (HVs) were enrolled between 2010 and 2013 at our university hospital. Blood sampling was undertaken once in the CCD and HV groups and on days 0, 1, and 3 after stroke onset in patients with AIS or TIA. Correlations between serum FXI and FXII levels and demographic and clinical parameters were tested by linear regression and analysis of variance. Results: The mean age of AIS/TIA patients was 70 ± 12 years. Baseline clinical severity, measured with the NIHSS and Barthel Index, was 4.8 ± 6.0 and 74 ± 30, respectively. More than half of the patients had an AIS (58%). FXI levels were significantly correlated with different leukocyte subsets (p < 0.05). In contrast, FXII serum levels showed no significant correlation (p > 0.1). Neither FXI nor FXII levels correlated with CRP (p > 0.2). FXII levels were significantly higher in patients with CCD compared with those with AIS/TIA (mean ± SD 106 ± 26% vs. 97 ± 24%; univariate analysis: p < 0.05); these differences did not reach significance in multivariate analysis adjusted for sex and age. FXI levels did not differ significantly between study groups. Sex and age were significantly associated with FXI and/or FXII levels in patients with AIS/TIA (p < 0.05).
In contrast, no statistically significant influence was found for treatment modality (thrombolysis or not), pre-treatment with platelet inhibitors, or severity of stroke. Conclusions: In this study, there was no differential regulation of FXI and FXII levels between disease subtypes, but biomarker levels were associated with patient and clinical characteristics. FXI and FXII levels may not be valid biomarkers for predicting stroke risk.
Long-term sequelae in hospitalized Coronavirus Disease 2019 (COVID-19) patients may result in limited quality of life. The current study aimed to determine health-related quality of life (HRQoL) after COVID-19 hospitalization in non-intensive care unit (ICU) and ICU patients. This is a single-center study at the University Hospital of Wuerzburg, Germany. Eligible patients were hospitalized with COVID-19 between March 2020 and December 2020. Patients were interviewed 3 and 12 months after hospital discharge. Questionnaires included the European Quality of Life 5 Dimensions 5 Level (EQ-5D-5L), the Patient Health Questionnaire-9 (PHQ-9), the Generalized Anxiety Disorder 7 scale (GAD-7), the FACIT fatigue scale, the Perceived Stress Scale (PSS-10), and the Posttraumatic Symptom Scale 10 (PTSS-10). 85 patients were included in the study. The EQ-5D-5L index differed significantly between non-ICU (0.78 ± 0.33 and 0.84 ± 0.23) and ICU (0.71 ± 0.27 and 0.74 ± 0.2) patients at 3 and 12 months. 87% of non-ICU and 80% of ICU survivors lived at home without support after 12 months. One-third of ICU and half of the non-ICU patients returned to work. A higher percentage of ICU patients were limited in their activities of daily living compared with non-ICU patients. Depression and fatigue were present in one-fifth of the ICU patients. Stress levels remained high, with only 24% of non-ICU and 3% of ICU patients (p = 0.0186) reporting low perceived stress. Posttraumatic symptoms were present in 5% of non-ICU and 10% of ICU patients. HRQoL is limited in COVID-19 ICU patients 3 and 12 months after COVID-19 hospitalization, with significantly less improvement at 12 months compared with non-ICU patients. Mental disorders were common, highlighting the complexity of post-COVID-19 symptoms as well as the necessity to educate patients and primary care providers about monitoring mental well-being after COVID-19.
Background
Published models predicting nasal colonization with methicillin-resistant Staphylococcus aureus (MRSA) among hospital admissions predominantly focus on separating carriers from non-carriers and are frequently evaluated using measures of discrimination. In contrast, accurate estimation of carriage probability, which may inform decisions regarding treatment and infection control, is rarely assessed. Furthermore, no published model adjusts for MRSA prevalence.
Methods
Using logistic regression, a scoring system (values from 0 to 200) predicting nasal carriage of MRSA was created using a derivation cohort of 3091 individuals admitted to a European tertiary referral center between July 2007 and March 2008. The expected positive predictive value of a rapid diagnostic test (GeneOhm, Becton & Dickinson Co.) was modeled as a function of score using non-linear regression. Models were validated on a second cohort from the same hospital consisting of 2043 patients admitted between August 2008 and January 2012. Our suggested correction score for prevalence was proportional to the log-transformed odds ratio between cohorts. Calibration before and after correction, i.e., the agreement of predicted probabilities with observed rates within arbitrary strata, was assessed with the Hosmer-Lemeshow test.
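The interplay of prevalence, carriage probability, and expected positive predictive value can be sketched as follows. The sensitivity and specificity values are illustrative placeholders, not the measured characteristics of the GeneOhm test, and the additive logit shift merely illustrates why a prevalence correction proportional to a log odds ratio is natural for a logistic score:

```python
import math

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

def corrected_logit(score_logit: float, prev_old: float, prev_new: float) -> float:
    """Shift a logistic score by the log odds ratio between two
    prevalences; on the logit scale a prevalence change is additive."""
    logit = lambda p: math.log(p / (1.0 - p))
    return score_logit + (logit(prev_new) - logit(prev_old))

# Lower prevalence lowers the expected PPV of the same test.
print(round(ppv(0.90, 0.95, 0.023), 3))  # -> 0.298
print(round(ppv(0.90, 0.95, 0.017), 3))  # -> 0.237
```

This illustrates the finding that low positive predictive values are partly a consequence of low prevalence, independent of the test's intrinsic accuracy.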
Results
Treating culture as the reference, the rapid diagnostic test had positive predictive values of 64.8% and 54.0% in the derivation and internal validation cohorts, with prevalences of 2.3% and 1.7%, respectively. In addition to low prevalence, the low positive predictive values were due to a high proportion (>66%) of mecA-negative Staphylococcus aureus among false-positive results. Age, nursing home residence, admission through the medical emergency department, and ICD-10-GM admission diagnoses starting with “A” or “J” were associated with MRSA carriage and were thus included in the scoring system, which showed good calibration in predicting both the probability of carriage and the rapid diagnostic test’s expected positive predictive value. Calibration for both the probability of carriage and the expected positive predictive value in the internal validation cohort was improved by applying the correction score.
Conclusions
Given a set of patient parameters, the presented models accurately predict a) probability of nasal carriage of MRSA and b) a rapid diagnostic test’s expected positive predictive value. While the former can inform decisions regarding empiric antibiotic treatment and infection control, the latter can influence choice of screening method.
Estimation of the absolute risk of cardiovascular disease (CVD), preferably with population-specific risk charts, has become a cornerstone of CVD primary prevention. Regular recalibration of risk charts may be necessary due to decreasing CVD rates and CVD risk factor levels. The SCORE risk charts for fatal CVD risk assessment were first calibrated for Germany with 1998 risk factor level data and 1999 mortality statistics. We present an update of these risk charts based on the SCORE methodology, including estimates of relative risks from SCORE, risk factor levels from the German Health Interview and Examination Survey for Adults 2008–11 (DEGS1), and official mortality statistics from 2012. Competing-risks methods were applied and estimates were independently validated. Updated risk charts were calculated based on cholesterol, smoking, and systolic blood pressure levels, sex, and 5-year age groups. The absolute 10-year risk estimates of fatal CVD were lower according to the updated risk charts compared with the first calibration for Germany. In a nationwide sample of 3062 adults aged 40–65 years free of major CVD from DEGS1, the mean 10-year risk of fatal CVD estimated by the updated charts was 29% lower, and the estimated proportion of people at high risk (10-year risk ≥5%) was 50% lower, compared with the older risk charts. This recalibration shows the need for regular updates of risk charts according to changes in mortality and risk factor levels in order to sustain the identification of people with a high CVD risk.
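The SCORE methodology referred to above combines a baseline survival function with an exponentiated risk-factor weight. A minimal sketch of that structure, with placeholder coefficients (the Weibull parameters and betas below are illustrative, not the calibrated German values, and real SCORE charts additionally sum separate CHD and non-CHD components):

```python
import math

# Illustrative placeholder coefficients, not the calibrated German values.
ALPHA, P = -22.1, 4.71                       # Weibull-type baseline parameters
B_CHOL, B_SBP, B_SMOKER = 0.24, 0.018, 0.71  # risk-factor weights

def baseline_survival(age: float) -> float:
    """Baseline survival at a given age (Weibull-type form)."""
    return math.exp(-math.exp(ALPHA) * (age - 20.0) ** P)

def ten_year_fatal_cvd_risk(age: float, chol_mmol: float,
                            sbp: float, smoker: bool) -> float:
    """10-year risk of fatal CVD, SCORE-style: baseline survival raised
    to the exponentiated linear predictor of centered risk factors."""
    w = (B_CHOL * (chol_mmol - 6.0)
         + B_SBP * (sbp - 120.0)
         + B_SMOKER * (1.0 if smoker else 0.0))
    survival_now = baseline_survival(age) ** math.exp(w)
    survival_10y = baseline_survival(age + 10.0) ** math.exp(w)
    return 1.0 - survival_10y / survival_now
```

Recalibration in this framework means refitting the baseline parameters to current national mortality and mean risk factor levels while keeping the relative-risk weights from SCORE, which is why updated charts can yield systematically lower absolute risks.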