Document ID: OPUS4-24466
Document type: Scientific article
Authors: Endlich, Darius; Richter, Tobias; Marx, Peter; Lenhard, Wolfgang; Moll, Kristina; Witzel, Björn; Schulte-Körne, Gerd
Title: Spelling Error Detection: A Valid and Economical Task for Assessing Spelling Skills in Elementary-School Children
Title (German): Fehleridentifikation: Ein valides und ökonomisches Verfahren zur Erfassung von Rechtschreibkompetenzen in der Grundschule
Abstract: The ability to spell words correctly is a key competence for educational and professional achievement. Economical procedures are essential for identifying children with spelling problems as early as possible. Given the strong evidence that reading and spelling draw on the same orthographic knowledge, error-detection tasks (EDTs) could serve as such an economical procedure. Although EDTs are widely used in English-speaking countries, the few studies in German-speaking countries investigated only pupils in secondary school. The present study investigated N = 1,513 children in elementary school. We predicted spelling competencies (measured by dictation or gap-fill dictation) from an EDT via linear regression. Error-detection abilities significantly predicted spelling competencies (R² between .509 and .679), indicating a strong connection. The predictive values of the EDT for identifying children with poor spelling abilities proved sufficient. Error detection is therefore a valid instrument for assessing spelling skills in transparent languages as well.
Year: 2020
Journal: Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie
Volume: 52
Issue: 1-2
Pages: 25-40
URN: urn:nbn:de:bvb:20-opus-244665
DOI: 10.1026/0049-8637/a000227
Department: Institut für Psychologie

Document ID: OPUS4-28414
Document type: Scientific article
Authors: Gary, Sebastian; Lenhard, Wolfgang; Lenhard, Alexandra
Title: Modelling norm scores with the cNORM package in R
Abstract: In this article, we explain and demonstrate how to model norm scores with the cNORM package in R. This package is designed specifically to determine norm scores when the latent ability to be measured covaries with age or other explanatory variables such as grade level. The mathematical method used in this package draws on polynomial regression to model a three-dimensional hyperplane that smoothly and continuously captures the relation between raw scores, norm scores and the explanatory variable. By doing so, it overcomes typical problems of classical norming methods, such as overly large age intervals, missing norm scores, large sampling error in the subsamples, or excessive sample-size requirements. After a brief introduction to the mathematics of the model, we describe the individual methods of the package. We close the article with a practical example using data from a real reading comprehension test.
Year: 2021
Page count: 20
Journal: Psych
Volume: 3
Issue: 3
Pages: 501-521
URN: urn:nbn:de:bvb:20-opus-284143
DOI: 10.3390/psych3030033
Department: Institut für Psychologie
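The abstract above describes the core idea behind cNORM: preliminary norm scores are derived within groups and a polynomial regression then smooths the relation between raw score, norm score and the explanatory variable. The following minimal Python sketch illustrates that idea on simulated data; it is not the cNORM package itself, and the toy data, the age grouping and the fixed polynomial degree are illustrative assumptions.

```python
# Minimal sketch of polynomial-regression continuous norming (illustrative only).
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(42)

# Toy normative sample: age in years and a raw test score that grows with age.
n = 2000
age = rng.uniform(6.0, 12.0, n)
raw = np.clip(rng.normal(5 * age - 20, 6.0), 0, None)

# Step 1: within age groups, convert ranks into preliminary norm scores (z-scores).
groups = np.digitize(age, bins=np.arange(6.5, 12.0, 1.0))
norm = np.empty(n)
for g in np.unique(groups):
    idx = groups == g
    ranks = stats.rankdata(raw[idx])
    norm[idx] = stats.norm.ppf(ranks / (idx.sum() + 1))

# Step 2: fit a polynomial hyperplane raw = f(norm score, age) that smooths the
# relation between raw score, norm score and the explanatory variable.
X = PolynomialFeatures(degree=3, include_bias=False).fit_transform(
    np.column_stack([norm, age]))
model = LinearRegression().fit(X, raw)

# The fitted surface can then be inverted numerically to read off a continuous
# norm score for any raw score at any age, avoiding discrete age-bracket jumps.
print("R^2 of the polynomial model:", round(model.score(X, raw), 3))
```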
Document ID: OPUS4-25804
Document type: Scientific article
Authors: Kirschmann, Nicole; Lenhard, Wolfgang; Suggate, Sebastian
Title: Influences from working memory, word and sentence reading on passage comprehension and teacher ratings
Abstract: Reading fluency is a major determinant of reading comprehension but depends on moderating factors such as auditory working memory (AWM), word recognition and sentence reading skills. We investigated how word and sentence reading skills relate to reading comprehension differentially across the first six years of schooling and tested which reading variable best predicted teacher judgements. We conducted our research in a rather transparent language, namely German, drawing on two data sets. The first was derived from the normative sample of a reading comprehension test (ELFE-II) and included 2,056 first- to sixth-graders with reading tests at the word, sentence and text level. The second sample included 114 second- to fourth-graders, who completed a series of tests measuring word and sentence reading fluency, pseudoword reading, AWM, reading comprehension, self-concept and teacher ratings. We analysed the data via hierarchical regression analyses to predict reading comprehension and teacher judgements. The impact of reading fluency was strongest in second and third grade and was afterwards superseded by sentence comprehension. AWM contributed significantly to reading comprehension independently of reading fluency, whereas the contribution of basic decoding skills disappeared once fluency was taken into account. Students' AWM and reading comprehension predicted teacher judgements of reading fluency. Judgements of reading comprehension depended on both the students' self-concept and their reading comprehension. Our results underline that the role of word reading accuracy for reading comprehension quickly diminishes during elementary school and that teachers base their assessments mainly on current reading comprehension skill.
Year: 2021
Journal: Journal of Research in Reading
Volume: 44
Issue: 4
Pages: 817–836
URN: urn:nbn:de:bvb:20-opus-258043
DOI: 10.1111/1467-9817.12373
Department: Institut für Psychologie

Document ID: OPUS4-20048
Document type: Scientific article
Authors: Lenhard, Alexandra; Lenhard, Wolfgang; Gary, Sebastian
Title: Continuous norming of psychometric tests: A simulation study of parametric and semi-parametric approaches
Abstract: Continuous norming methods have seldom been subjected to scientific review. In this simulation study, we compared parametric with semi-parametric continuous norming methods for psychometric tests by constructing a fictitious population model in which a latent ability increases with age across seven age groups. We drew samples of different sizes (n = 50, 75, 100, 150, 250, 500 and 1,000 per age group) and simulated the results of an easy, a medium and a difficult test scale based on Item Response Theory (IRT). We subjected the resulting data to different continuous norming methods and compared the data fit under the different test conditions against a representative cross-validation dataset of n = 10,000 per age group. The most pronounced differences were found for suboptimal (i.e., too easy or too difficult) test scales and for ability levels far from the population mean. We discuss the results with regard to the selection of appropriate modelling techniques in psychometric test construction, the required sample sizes, and the need to report appropriate quantitative and qualitative quality criteria for continuous norming methods in test manuals.
Year: 2019
Journal: PLoS ONE
Volume: 14
Issue: 9
Article number: e0222279
URN: urn:nbn:de:bvb:20-opus-200480
DOI: 10.1371/journal.pone.0222279
Department: Institut für Psychologie
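As a rough illustration of the simulation design sketched in the abstract above (not the authors' actual code), the following Python snippet generates raw scores for seven age groups from a Rasch-type IRT model for an easy, a medium and a difficult scale; all parameter values, group sizes and difficulty distributions are assumptions chosen for the example.

```python
# Illustrative Rasch (1PL) simulation of raw scores across age groups.
import numpy as np

rng = np.random.default_rng(1)

n_per_group = 250          # assumed sample size per age group
ages = np.arange(7, 14)    # seven fictitious age groups
n_items = 30

# Item difficulties for an "easy", a "medium" and a "difficult" scale.
scales = {
    "easy":      rng.normal(-1.5, 1.0, n_items),
    "medium":    rng.normal( 0.0, 1.0, n_items),
    "difficult": rng.normal( 1.5, 1.0, n_items),
}

def simulate_raw_scores(difficulties):
    """Draw abilities that increase with age, then Bernoulli item responses."""
    results = {}
    for i, age in enumerate(ages):
        theta = rng.normal(loc=-1.5 + 0.5 * i, scale=1.0, size=n_per_group)
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulties[None, :])))
        responses = rng.random(p.shape) < p
        results[age] = responses.sum(axis=1)   # raw score = number of items solved
    return results

for name, b in scales.items():
    raw = simulate_raw_scores(b)
    means = {int(age): round(float(score.mean()), 1) for age, score in raw.items()}
    print(name, means)
```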
Document ID: OPUS4-2397
Document type: Report
Authors: Lenhard, Wolfgang
Title: Bridging the Gap to Natural Language: A Review on Intelligent Tutoring Systems based on Latent Semantic Analysis
Abstract: One of the major drawbacks in the implementation of intelligent tutoring systems is their limited capacity to process natural language and to deal automatically with unexpected or unknown vocabulary. Latent Semantic Analysis (LSA) is a statistical technique for automatic language processing that can attenuate the "language barrier" between humans and tutoring systems. LSA-based intelligent tutoring systems address the goals of modelling human tutoring dialogues (AutoTutor), enhancing text comprehension and summarisation skills (State-The-Essence, Summary Street®, conText, Apex), training comprehension strategies (iStart, a French system in development) and improving story and essay writing (Write To Learn, Select-a-Kibitzer, StoryStation). The systems are reviewed with regard to their efficacy in modelling skilled human tutors and their effects on the learner.
Year: 2008
URN: urn:nbn:de:bvb:20-opus-27980
Department: Institut für Psychologie
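For readers unfamiliar with the technique underlying these systems, the following minimal Python sketch shows the basic LSA pipeline: a term-document matrix is reduced with a truncated SVD, and texts are compared by cosine similarity in the resulting latent space. The toy documents and the number of dimensions are illustrative assumptions; the reviewed systems operate on far larger corpora and higher-dimensional spaces.

```python
# Minimal LSA sketch: term-document matrix -> truncated SVD -> cosine similarity.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "the student writes a summary of the text",
    "the tutor gives feedback on the summary",
    "latent semantic analysis compares word meanings",
    "the system compares the student's essay with model essays",
]

# Term-document matrix (documents as rows, as scikit-learn arranges it).
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(documents)

# Truncated SVD yields the low-dimensional "semantic" space of LSA.
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = lsa.fit_transform(dtm)

# A learner's answer is projected into the same latent space and scored by its
# similarity to the reference documents.
answer = vectorizer.transform(["the student gives a summary of the text"])
answer_vec = lsa.transform(answer)
print(cosine_similarity(answer_vec, doc_vectors))
```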