Major depressive disorder and the anxiety disorders are highly prevalent, disabling and moderately heritable. Depression and anxiety are also highly comorbid and have a strong genetic correlation (r_g ≈ 1). Cognitive behavioural therapy is a leading evidence-based treatment but has variable outcomes. Currently, there are no strong predictors of outcome. Therapygenetics research aims to identify genetic predictors of prognosis following therapy. We performed genome-wide association meta-analyses of symptoms following cognitive behavioural therapy in adults with anxiety disorders (n = 972), adults with major depressive disorder (n = 832) and children with anxiety disorders (n = 920; meta-analysis n = 2724). We estimated the variance in therapy outcomes that could be explained by common genetic variants (h²_SNP) and used polygenic scoring to examine genetic associations between therapy outcomes and psychopathology, personality and learning. No single nucleotide polymorphisms were strongly associated with treatment outcomes. No significant estimate of h²_SNP could be obtained, suggesting the heritability of therapy outcome is smaller than our analysis was powered to detect. Polygenic scoring failed to detect genetic overlap between therapy outcome and psychopathology, personality or learning. This study is the largest therapygenetics study to date. Results are consistent with previous, similarly powered genome-wide association studies of complex traits.
Background:
Employees insured under pension insurance who are incapable of working due to ill health are entitled to a disability pension. To assess whether an individual meets the medical requirements to be considered disabled, a work capacity evaluation is conducted. However, there are no official guidelines on how to perform external quality assurance for this evaluation process. Furthermore, the quality of medical reports in the field of insurance medicine can vary substantially, and systematic evaluations are scarce. Reliability studies using peer review have repeatedly shown insufficient ability to distinguish between high, moderate and low quality. Considering recommendations from the literature, we developed an instrument to examine the quality of medical experts’ reports.
Methods:
The peer review manual developed contains six quality domains (formal structure, clarity, transparency, completeness, medical-scientific principles, and efficiency) comprising 22 items. In addition, a superordinate criterion (survey confirmability) ranks the overall quality and usefulness of a report. This criterion evaluates problems of internal logic and reasoning. Development of the manual was assisted by experienced physicians in a pre-test. We examined the observable variance in peer judgements and reliability as the most important outcome criteria. To evaluate inter-rater reliability, 20 anonymised experts’ reports detailing the work capacity evaluation were reviewed by 19 trained raters (peers). Percentage agreement and Kendall’s W, a reliability measure of concordance between two or more peers, were calculated. A total of 325 reviews were conducted.
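As an aside, Kendall’s W (the coefficient of concordance used above) can be computed directly from the rank sums across raters. The following is a minimal sketch, assuming each rater assigns a complete ranking 1..n to the same n reports; the function name and example data are illustrative, not taken from the study.

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W.

    rankings: list of m lists, each a full ranking (1..n) of the same n items.
    Returns W in [0, 1]: 0 = no agreement, 1 = perfect agreement.
    """
    m = len(rankings)      # number of raters (peers)
    n = len(rankings[0])   # number of items (reports) ranked
    # Sum of ranks each item received across all raters
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    # Under no agreement, each rank sum would equal m(n+1)/2
    mean_sum = m * (n + 1) / 2
    # S: sum of squared deviations of the rank sums from their mean
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    # W = 12S / (m^2 (n^3 - n))
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Two raters ranking four reports identically -> perfect concordance
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4]]))  # -> 1.0
# Two raters ranking in exactly opposite order -> no concordance
print(kendalls_w([[1, 2, 3, 4], [4, 3, 2, 1]]))  # -> 0.0
```

Note this simple form does not include the tie correction that is needed when raters assign tied ranks.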
Results:
Agreement of peer judgements with respect to the superordinate criterion ranged from 29.2 to 87.5%. Kendall’s W for the quality domain items varied greatly, ranging from 0.09 to 0.88. With respect to the superordinate criterion, Kendall’s W was 0.39, which indicates fair agreement. The percentage agreement results revealed systematic peer preferences for certain deficit scale categories.
Conclusion:
The superordinate criterion was not sufficiently reliable. However, in comparison to other reliability studies, this criterion showed an equivalent reliability value. This report aims to encourage further efforts to improve evaluation instruments. To reduce disagreement between peer judgements, we propose revising the peer review instrument and developing and implementing a standardized rater training to improve reliability.