
Psychological Test and Assessment Modeling



2019-1

Editorial:
Focus on new research topics in Psychological Test and Assessment Modeling
Klaus D. Kubinger
PDF of the full article

Quality control of psychological services
Gerhard Stemmler & Birgit Spinath
PDF of the full article

The impacts of characteristics of disconnected subsets on group anchoring in incomplete rater-mediated assessment networks
Stefanie A. Wind & Catanya G. Stager
PDF of the full article

Longitudinal linking of Rasch-model-scaled competence tests in large-scale assessments: A comparison and evaluation of different linking methods and anchoring designs based on two tests on mathematical competence administered in grades 5 and 7
Luise Fischer, Timo Gnambs, Theresa Rohm & Claus H. Carstensen
PDF of the full article

Item parameter recovery for the two-parameter testlet model with different estimation methods
Yong Luo & Melissa Gordon Wolf
PDF of the full article

A specialized confirmatory mixture IRT modeling approach for multidimensional tests
Minjeong Jeon
PDF of the full article
 


Quality control of psychological services
Gerhard Stemmler & Birgit Spinath

Abstract

The German Psychological Society (DGPs) has founded a Center for Scientific-Psychological Services. Its task is to further the quality control of ethical standards in research applications, of psychological assessment tasks, and of curricular standards in psychology education. Currently, the Center supports the Ethics Committee of the DGPs, the postgraduate training in forensic psychology, the quality seal for psychology degree programs, the EuroPsy certification, and the quality control of psychological testing for driving licences.

Keywords: German Psychological Society, psychological services, quality control


Prof. Dr. Gerhard Stemmler
TransMIT Geschäftsstelle
Haselbusch 4A
32805 Horn-Bad Meinberg, Germany

 


The impacts of characteristics of disconnected subsets on group anchoring in incomplete rater-mediated assessment networks
Stefanie A. Wind & Catanya G. Stager

Abstract

In operational administrations of rater-mediated performance assessments, practical constraints often result in incomplete data collection designs, in which not every rater rates every performance on every task. Unless the data collection design includes systematic links, such as raters scoring a subset of the same test-takers as other raters, it is not possible to compare test-takers, raters, and tasks among which there are no connections. In practice, many operational assessments include these disconnected subsets of assessment components – thereby limiting the comparisons that can be made between test-takers, raters, and tasks. However, when researchers use the Rasch model, they can apply group-anchoring techniques through which they can make comparisons across disconnected subsets. Although researchers and practitioners regularly use group anchoring, there has been limited methodological research related to this technique. In this study, we used simulated data to examine the impact of characteristics of disconnected subsets when group anchoring is used. Our results suggested that the characteristics of disconnected subsets impact the ordering and precision of test-taker estimates, particularly with regard to rating designs and model-data fit within disconnected subsets. We discuss the implications of our findings for research and practice related to rater-mediated assessments.
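For context, a common formulation for such rater-mediated designs is the many-facet Rasch model in its rating scale form (this specific variant is an assumption; the abstract does not state which Rasch formulation the study used):

```latex
% Many-facet Rasch model (rating scale form):
% log-odds of test-taker n receiving category k rather than k-1
% from rater j on task i
\log \frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \lambda_j - \tau_k
```

Here θ_n is test-taker ability, δ_i task difficulty, λ_j rater severity, and τ_k the threshold for category k. Group anchoring fixes the mean of one facet within each disconnected subset (e.g., constraining the rater severities in each subset to sum to zero) so that estimates from different subsets share a common origin.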

Keywords: group anchoring; sparse networks; rating designs; performance assessment; Rasch model


Stefanie A. Wind, PhD
Assistant Professor of Educational Measurement
The University of Alabama
Department of Educational Research Methodology, 
Box 870231
Tuscaloosa, AL 35487, USA

 


Longitudinal linking of Rasch-model-scaled competence tests in large-scale assessments: A comparison and evaluation of different linking methods and anchoring designs based on two tests on mathematical competence administered in grades 5 and 7
Luise Fischer, Timo Gnambs, Theresa Rohm & Claus H. Carstensen

Abstract

Measuring growth in an item response theory framework requires aligning two tests on a common scale, a process known as longitudinal linking. So far, no consensus exists regarding the appropriate method for linking longitudinal data scaled according to the Rasch model in large-scale assessments. Therefore, an empirical study was conducted within the German National Educational Panel Study to identify appropriate linking methods for the comparison of competencies across time. The study examined two anchoring designs, based either on anchor items or an anchor group, and three linking methods (mean/mean linking, fixed parameter calibration, and concurrent calibration). Two tests on mathematical competence were administered to a sample of n = 3,833 German students (48% girls) in Grades 5 and 7. An independent link sample (n = 581, 53% girls) drawn from the same population was administered both tests at the same time. The assumption of unidimensionality was confirmed; differential item functioning was examined using effect-based hypothesis tests. Anchoring designs and linking methods were compared and evaluated using diverse criteria such as link error, mean growth rate estimation, and model fit. Overall, only small differences among the linking methods and anchoring designs were found. However, mean growth was estimated to be significantly smaller in the anchor-group design.
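As a sketch of the simplest of the three methods: under the Rasch model, mean/mean linking reduces to a single shift constant estimated from the common items (a generic formulation; the sign convention and choice of reference scale depend on the design):

```latex
% Shift constant from the set A of anchor items, where
% b_i^{(1)} and b_i^{(2)} are the item difficulties estimated
% separately at the two measurement points
c = \frac{1}{|A|} \sum_{i \in A} \left( b_i^{(1)} - b_i^{(2)} \right)
```

Adding c to the second-wave person estimates places them on the first-wave scale, after which mean growth can be read off as a difference of scale means.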

Keywords: linking, item response theory, longitudinal, effect-based hypothesis testing, competences


Luise Fischer
Educational Measurement
Leibniz Institute for Educational Trajectories
Wilhelmsplatz 3
96047 Bamberg, Germany

 


Item parameter recovery for the two-parameter testlet model with different estimation methods
Yong Luo & Melissa Gordon Wolf

Abstract

A simulation study was conducted to investigate how MCMC, MMLE, and WLSMV, all implemented in Mplus, recovered the item parameters and the testlet variance parameter of the two-parameter logistic (2PL) testlet model. The manipulated factors included sample size and testlet variance magnitude, and parameter recovery was evaluated with bias, standard error, root mean square error, and relative bias. We found no statistically significant differences in parameter recovery among the three estimation methods. When both sample size and magnitude of testlet variance were small, both WLSMV and MMLE had convergence issues, which did not occur with MCMC regardless of sample size and testlet variance. A real dataset from a high-stakes test was used to demonstrate the estimation of the 2PL testlet model with the three methods.
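The model under study is the standard 2PL testlet model, which adds a random testlet effect to the 2PL response function to capture local dependence among items within a testlet:

```latex
% 2PL testlet model: gamma_{p,d(i)} is the effect of testlet d(i)
% (the testlet containing item i) for person p, with
% gamma_{p,d(i)} ~ N(0, sigma_{d(i)}^2)
P\!\left( X_{pi} = 1 \mid \theta_p \right)
  = \frac{\exp\!\left( a_i \left( \theta_p - b_i - \gamma_{p\,d(i)} \right) \right)}
         {1 + \exp\!\left( a_i \left( \theta_p - b_i - \gamma_{p\,d(i)} \right) \right)}
```

The testlet variance σ² is the second manipulated factor in the simulation; larger values indicate stronger local dependence among the items sharing a testlet.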

Keywords: testlet model, estimation, MCMC, MMLE, WLSMV


Yong Luo, PhD
National Center for Assessment
West Palm Neighborhood
King Khalid Road
Riyadh 11534, Saudi Arabia

 


A specialized confirmatory mixture IRT modeling approach for multidimensional tests
Minjeong Jeon

Abstract

Finite-mixture models are typically used in educational and psychological research to explore latent classes that may be present in the data under investigation. However, mixture models can also be applied to test or confirm researchers’ theories or hypotheses about latent classes. In this paper, we discuss a specialized confirmatory mixture IRT modeling approach for multidimensional tests with a set of pre-arranged constraints on item parameters that are devised to differentiate latent classes. Two types of multidimensional classification scenarios are discussed: (1) a single-membership case, where subjects have exactly one latent class membership across all test dimensions, and (2) a mixed-membership case, where subjects are allowed to have different latent class memberships across test dimensions. We illustrate maximum likelihood estimation of the two types of confirmatory mixture models with an empirical dataset.
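In generic form, the marginal likelihood of such a mixture IRT model combines class proportions with class-specific item parameters (a sketch only; it is the pre-arranged constraints on the class-specific parameters, not the mixture form itself, that make the approach confirmatory):

```latex
% Mixture IRT: G latent classes with proportions pi_g and
% class-specific item parameters beta_i^{(g)}, which in the
% confirmatory approach are subject to pre-arranged constraints
P(\mathbf{x}_p) = \sum_{g=1}^{G} \pi_g
  \int \prod_{i} P\!\left( x_{pi} \mid \theta, \boldsymbol{\beta}_i^{(g)} \right)
  \phi(\theta) \, d\theta
```

In the confirmatory approach described here, the constraints on the class-specific parameters are fixed in advance to encode the hypothesized classes, rather than left free for exploratory class discovery.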

Keywords: Mixture IRT modeling, confirmatory approach, multiple dimensions, single membership, mixed membership, saltus modeling


Minjeong Jeon, PhD
Department of Education
University of California
Los Angeles, 3141 Moore Hall
457 Portola Avenue
Los Angeles, CA 90024, USA

 



Psychological Test and Assessment Modeling
Volume 61 · 2019 · Issue 1

Pabst, 2019
ISSN 2190-0493 (Print)
ISSN 2190-0507 (Internet)





