
Psychological Test and Assessment Modeling

 

Published under Creative Commons: CC-BY-NC Licence


2021-3

The Diagnostic Rating System: Rater Behavior for an Alternative Performance Assessment Rating Method
Nnamdi C. Ezike, Allison J. Ames


Latent Dirichlet analysis to aid equity in the identification for Gifted Education
W. Holmes Finch, Maria E. Hernández Finch, Brian F. French, Claire Braun


Structure of pedagogical content knowledge in maths teacher education
Pascal Kilian, Judith Glaesser, Frank Loose, Augustin Kelava


A Continuous HYBRID IRT Model for Modeling Changes in Guessing Behavior in Proficiency Tests
Gabriel Nagy, Alexander Robitzsch


A Semiparametric Latent Trait Model for Response Times in Tests
Jochen Ranger, Jörg-Tobias Kuhn


Estimation of partially observed non-linear terms in a multilevel model
An evaluation of the robustness of ad hoc and state-of-the-art missing data methods

Kristian Kleinke

 


 

The Diagnostic Rating System: Rater Behavior for an Alternative Performance Assessment Rating Method

Nnamdi C. Ezike, Allison J. Ames


Abstract
The Diagnostic Rating System (DRS), a novel system for rating performance assessments, purports to reduce the cognitive load raters experience when applying traditional rubrics. This logic-based rating system, developed to be consistent with expert cognitive processes during performance assessment evaluation, asks essay raters a series of explicit questions. We applied the DRS approach to an established assessment protocol in the context of ethical reasoning. The fully crossed rating study had 12 raters, each rating 30 student essays. Each rater rated each essay twice: once using a DRS and once with a traditional rubric, with the rating method counterbalanced so that half of the raters used the DRS first and half used the traditional rubric first. Many-facet Rasch measurement equating methods revealed that the raters varied in their severity levels under both rating methods. Overall, the findings suggest that the two rating methods are comparable. The correlation between the estimated examinee proficiency levels on the DRS and the rubric was high. Novice and expert raters showed high levels of consistency on the rubric, but novice raters were more consistent in their ratings on the DRS.

Keywords: Diagnostic Rating System, performance assessment, rubric, raters, many-facet Rasch measurement
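
For orientation only: a minimal sketch of a many-facet Rasch formulation with a rater facet, in generic textbook notation rather than the authors' own specification. With examinee proficiency \theta_n, criterion difficulty \delta_i, rater severity \lambda_j, and category threshold \tau_k, the adjacent-category log-odds are

\log \frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \lambda_j - \tau_k

Under such a model, between-rater differences in the estimated \lambda_j correspond to the severity differences reported above, and equating across the DRS and rubric conditions places the \theta_n estimates from the two rating methods on a common scale so they can be compared.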

 

Corresponding author:
Nnamdi C. Ezike, M.S.

Ph.D. Candidate in Educational Statistics and Research Methods
University of Arkansas
257 Graduate Education Building
751 W Maple Street
Fayetteville, AR 72701
ncezike@uark.edu

 


 


Latent Dirichlet analysis to aid equity in the identification for Gifted Education

W. Holmes Finch, Maria E. Hernández Finch, Brian F. French, Claire Braun

Abstract 
The identification of students for participation in gifted education programs has traditionally involved some combination of cognitive ability measures and teacher recommendations. Research has demonstrated that these approaches may limit the participation of students from diverse ethnic and socioeconomic backgrounds. To address this issue, efforts have been made to develop assessments and techniques that are more equitable for all students. The current study focused on the development of a scale based on parental descriptions of aspects of giftedness displayed by their child. Topic modeling via latent Dirichlet analysis was used to extract themes from 13 written responses by parents. The topics were then used in a nonlinear principal components analysis to develop a scale score. Evidence for the validity of these scores was then investigated in a variety of ways. Results demonstrated that scores differed between student groups as expected, and correlations with other measures associated with giftedness were in the expected direction and magnitude. Implications and utility of this scale are discussed.

Keywords: Gifted identification, Topic modeling, Validity assessment, Nonlinear principal components analysis, HOPE scale
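
As a purely illustrative sketch of the topic-modeling step described above (not the authors' code, data, or settings), latent Dirichlet allocation can be fitted to short free-text parent responses with scikit-learn; the example responses and the number of topics below are invented placeholders.

# Illustrative only: extract topics from written parent descriptions of
# giftedness with latent Dirichlet allocation (LDA). The responses and the
# number of topics are placeholders, not values from the study.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

responses = [
    "She asks endless questions and reads far above her grade level",
    "He invents elaborate games and patiently explains them to younger children",
    "She notices patterns in numbers and music that adults overlook",
]

# Bag-of-words representation of the written responses
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(responses)

# Fit LDA; n_components (number of topics) is an arbitrary illustrative choice
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_weights = lda.fit_transform(doc_term)  # rows: responses, columns: topics

# The document-by-topic weights could then enter a downstream scaling step,
# e.g. a (nonlinear) principal components analysis, as outlined in the abstract.
print(topic_weights.round(2))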

 


 

Structure of pedagogical content knowledge in maths teacher education 

Pascal Kilian, Judith Glaesser, Frank Loose and Augustin Kelava

Abstract
A key challenge in maths education research is the identification of content knowledge (CK) and pedagogical content knowledge (PCK). Our study uses data from German pre-service teachers to investigate these dimensions and to differentiate PCK further into the domains of instructional and diagnostic competence. Empirical results support the existence of these two domains and show that they can be found in a strongly content-related context. A bifactor model is used to illustrate the internal structure of PCK. The model's validity is discussed with reference to different types of students and to relevant validity coefficients of scales from other studies. We discuss implications for the theoretical foundation of the organization of teacher training.

Keywords: mathematics, PCK domains, pedagogical content knowledge, maths teacher education
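
For readers less familiar with bifactor models, a generic sketch in standard notation (not the authors' measurement specification): every PCK item loads on a general PCK factor and on exactly one specific factor, here instructional or diagnostic competence, with the factors specified as orthogonal:

x_{i} = \lambda_{i}^{G}\,\eta_{G} + \lambda_{i}^{S(i)}\,\eta_{S(i)} + \varepsilon_{i}, \qquad \operatorname{Cov}(\eta_{G}, \eta_{S}) = 0

where \eta_{G} is the general PCK factor and \eta_{S(i)} is the specific factor (instructional or diagnostic) to which item i is assigned.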

 

Corresponding author:
Pascal Kilian

Eberhard Karls Universität Tübingen
Methods Center
Hölderlinstr. 29
72074 Tübingen, Germany
pascal.kilian@uni-tuebingen.de

 


 


A Continuous HYBRID IRT Model for Modeling Changes in Guessing Behavior in Proficiency Tests 

Gabriel Nagy, Alexander Robitzsch


Abstract
The results of low-stakes assessments are sensitive to individuals' persistence in maintaining a constant level of effort and precision over the course of a test. In this paper we present an item response theory (IRT) model that includes test-taking persistence as an additional latent variable. The proposed model is a continuous variant of the HYBRID IRT model. In contrast to Yamamoto's (1989) HYBRID model, our model allows for nondeterministic changes from solution to guessing behavior. It assumes that, over the course of a test, individuals might change their response behavior from solution behavior to random guessing behavior, and individual differences in the turning points are used to assess persistence. Individual differences in persistence can be correlated with proficiency, as well as with additional individual-level covariates. The new model is specified as a multilevel mixture IRT model and can be estimated by means of marginal maximum likelihood via the expectation-maximization algorithm. The model was scrutinized in a simulation study, which showed that the continuous HYBRID model provides good results in a variety of conditions. An empirical application provided further support for the model's utility, because the essence of test-taking persistence was replicated in two test forms assessing science achievement.

Keywords: Item Response Theory, Multilevel Mixture Modeling, Guessing, Persistence, Test Engagement
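
A deliberately simplified sketch of the switching idea behind HYBRID-type models (generic notation, not the authors' parameterization): up to a person-specific turning point \tau_p, responses follow an ordinary IRT item response function; afterwards they follow random guessing:

P(X_{pi} = 1) =
\begin{cases}
P_{\mathrm{IRT}}(\theta_p, \boldsymbol{\xi}_i) & \text{for item positions } i \le \tau_p, \\
g_i & \text{for item positions } i > \tau_p,
\end{cases}

where \theta_p is proficiency, \boldsymbol{\xi}_i are item parameters, and g_i is the guessing probability of item i. In this sketch the switch is deterministic; the continuous HYBRID model described above relaxes exactly this restriction by allowing nondeterministic changes from solution to guessing behavior, with individual differences in the turning point interpreted as test-taking persistence.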

 

Corresponding author:
Gabriel Nagy

Leibniz Institute for Science and Mathematics Education
Olshausenstraße 62
24118 Kiel, Germany
nagy@ipn.uni-kiel.de

 


 


A Semiparametric Latent Trait Model for Response Times in Tests 

Jochen Ranger, Jörg-Tobias Kuhn

Abstract
In this paper, we propose a latent trait model for the response times in the items of a psychological test. The latent trait model consists of two components. The first component is a model for the marginal response time distributions in the single items; it is based on an approximation of the marginal response time distributions via a spline hazard model. The second component is a factor copula model that relates the marginal response time distributions to a latent trait representing the work pace of the respondents. The factor copula model is based on a normal mixture copula. The combination of the spline hazard model with the factor copula model results in a response time model of high flexibility: it subsumes, or is able to approximate, several well-known models for response times in tests. It can be used as a measurement model in order to estimate the work pace underlying the response times of a respondent. The model can be fitted to data with a two-step maximum likelihood estimator. The performance of the estimator is investigated in a simulation study. We also demonstrate the model's applicability to a real data set.

Keywords: Latent Trait Model, Response Time Model, Spline Hazard Model, Factor Copula Model
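
In generic copula notation (not the authors' exact formulation), the two components can be sketched as follows. Each marginal response-time distribution is obtained from an item-specific hazard function h_j approximated by splines,

F_j(t) = 1 - \exp\!\left(-\int_0^t h_j(s)\, ds\right),

and a one-factor copula ties the margins together through conditional independence given a latent variable (here: work pace),

F(t_1, \dots, t_J) = C\bigl(F_1(t_1), \dots, F_J(t_J)\bigr), \qquad
C(u_1, \dots, u_J) = \int \prod_{j=1}^{J} C_{j \mid \eta}(u_j \mid \eta)\, dG(\eta).

The normal mixture copula mentioned above is one particular choice for the linking copulas C_{j \mid \eta}. In copula models generally, two-step maximum likelihood estimates the marginal parameters first and the dependence parameters second.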

 

Corresponding author:
Jochen Ranger

Martin-Luther-Universität Halle-Wittenberg
Institut für Psychologie
Brandbergweg 23c
06120 Halle (Saale)
jochen.ranger@psych.uni-halle.de



 


Estimation of partially observed non-linear terms in a multilevel model 
An evaluation of the robustness of ad hoc and state-of-the-art missing data methods 

Kristian Kleinke

Abstract
Multiple imputation (MI) of outcome variables and linear predictors has received the most attention in the MI literature so far. Research regarding the imputation of incomplete non-linear terms (such as interaction terms or quadratic or even higher-order relationships), on the other hand, is still scarce. The present paper examined, by means of Monte Carlo simulation, two ad hoc MI strategies and two more theoretically sound MI solutions (i.e. substantive-model-compatible MI) regarding bias in statistical inferences in a random intercept model that includes an interaction term and a quadratic term. The distribution of the predictor variables was either normal, heavy-tailed, or skewed. Results show that point estimates and standard errors are unbiased only if the imputation model is fully compatible with the subsequent analysis model (and with the original data-generating process), i.e. only when the imputation model includes the non-linear terms as well as information regarding cluster membership. Currently available MI methods therefore need to be adjusted for situations where distributional assumptions are violated to some extent.

Keywords: missing data, multiple imputation, multilevel model, incomplete predictors
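
To make the setting concrete, a random intercept model with an interaction and a quadratic term can be written in generic notation (not the exact simulation design) as

y_{ij} = \beta_0 + \beta_1 x_{ij} + \beta_2 z_{ij} + \beta_3 x_{ij} z_{ij} + \beta_4 x_{ij}^2 + u_j + \varepsilon_{ij}, \qquad u_j \sim N(0, \tau^2), \quad \varepsilon_{ij} \sim N(0, \sigma^2).

When x_{ij} is partially observed, an ad hoc strategy might, for example, impute x_{ij} from a linear model and only afterwards compute x_{ij} z_{ij} and x_{ij}^2, whereas substantive-model-compatible MI draws the missing values from a model that reflects the non-linear terms and the cluster structure; the results summarized above indicate that only the fully compatible approach yields unbiased point estimates and standard errors.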

 

Corresponding author:
Kristian Kleinke

University of Siegen
Institute of Psychology
Adolf-Reichwein-Str. 2a
D-57068 Siegen
Kristian.Kleinke@uni-siegen.de

 


 


Psychological Test and Assessment Modeling
Volume 63 · 2021 · Issue 3

Pabst, 2021
ISSN 2190-0493 (Print)
ISSN 2190-0507 (Internet)
