Refine
Has Fulltext
- yes (2)
Is part of the Bibliography
- yes (2)
Year of publication
- 2019 (2)
Document Type
- Journal article (2)
Language
- English (2)
Keywords
- age groups (1)
- language development (1)
- language intervention (1)
- normal distribution (1)
- polynomials (1)
- preschool (1)
- psychometrics (1)
- shared reading (1)
- simulation and modeling (1)
- skewness (1)
Positive effects of shared reading on children’s language development are boosted by including instruction of word meanings and by increasing interactivity. The effects of engaging children as storytellers on vocabulary development have been studied less thoroughly. We developed an approach, termed Interactive Elaborative Storytelling (IES), that employs both word-learning techniques and children’s storytelling in a shared-reading setting. To systematically investigate the potential benefits of having children act as storytellers, we contrasted this approach with two others: an Elaborative Storytelling group, which used word-learning techniques but no storytelling by children, and a Read-Aloud group, which used no additional techniques. The study was a 3 × 2 pre-posttest randomized design with 126 preschoolers spanning one week. Measured outcomes were receptive and expressive target vocabulary, story memory, and children’s behavior during story sessions. All three experimental groups made comparable gains on target words from pre- to posttest, and there was no difference between groups in story memory. However, children in the Elaborative Storytelling group were the least restless. Findings are discussed in terms of their contribution to optimizing shared reading as a method of fostering language.
Continuous norming methods have seldom been subjected to scientific review. In this simulation study, we compared parametric with semi-parametric continuous norming methods for psychometric tests by constructing a fictitious population model in which a latent ability increases with age across seven age groups. We drew samples of different sizes (n = 50, 75, 100, 150, 250, 500, and 1,000 per age group) and simulated the results of an easy, a medium, and a difficult test scale based on Item Response Theory (IRT). We subjected the resulting data to different continuous norming methods and compared the data fit under the different test conditions against a representative cross-validation dataset of n = 10,000 per age group. The most pronounced differences were found for suboptimal (i.e., too easy or too difficult) test scales and for ability levels far from the population mean. We discuss the results with regard to the selection of appropriate modeling techniques in psychometric test construction, the required sample sizes, and the need to report appropriate quantitative and qualitative test quality criteria for continuous norming methods in test manuals.
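The simulation pipeline the abstract describes (a latent ability that grows with age, IRT-based test scales of varying difficulty, and norm curves fitted across age groups) can be illustrated with a minimal sketch. The item parameters, the linear age–ability relation, and the polynomial norm fit below are illustrative assumptions, not the authors' actual models; the study's semi-parametric methods and cross-validation step are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_scores(n_per_group, n_items=30, difficulty_shift=0.0, ages=range(3, 10)):
    """Simulate sum scores on a 2PL IRT scale for seven age groups.

    Latent ability increases linearly with age (assumed relation);
    difficulty_shift < 0 yields an easier scale, > 0 a harder one.
    """
    a = rng.uniform(0.8, 2.0, n_items)               # item discriminations (assumed range)
    b = rng.normal(difficulty_shift, 1.0, n_items)   # item difficulties
    records = []
    for age in ages:
        theta = rng.normal(0.5 * (age - 6), 1.0, n_per_group)   # latent ability per person
        p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))     # 2PL response probability
        responses = rng.random((n_per_group, n_items)) < p      # simulated item responses
        records.append((age, responses.sum(axis=1)))            # sum score per person
    return records

def norm_curve(records, quantile=0.5, degree=2):
    """Parametric continuous norming sketch: fit a polynomial to a score
    quantile as a smooth function of age."""
    ages = np.array([age for age, _ in records], dtype=float)
    q = np.array([np.quantile(scores, quantile) for _, scores in records])
    return np.polynomial.Polynomial.fit(ages, q, degree)

scores = simulate_scores(n_per_group=100)
median_curve = norm_curve(scores)
print([round(float(median_curve(age)), 1) for age in (3, 6, 9)])
```

Varying `difficulty_shift` and `n_per_group` mimics the study's manipulation of scale difficulty and sample size; a richer comparison would fit several norming models to the same samples and evaluate them against a large hold-out sample per age group.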