
Psychological Test and Assessment Modeling



2017-4

A primer on relative importance analysis: Illustrations of its utility for psychological research
Matthias Stadler, Helena D. Cooper-Thomas & Samuel Greiff
Abstract | PDF of the full article

Deterioration and recovery in verbal recall: Repetition helps against pro-active interference
Chris Lange-Kuettner, Monika Markowska & Ridhi Kochhar
Abstract | PDF of the full article


Special Issue:
Rater effects: Advances in item response modeling of human ratings - Part I

Guest editor: Thomas Eckes

Guest Editorial
Thomas Eckes
PDF of the full article

Some IRT-based analyses for interpreting rater effects
Margaret Wu
Abstract | PDF of the full article

The impact of design decisions on measurement accuracy demonstrated using the Hierarchical Rater Model
Jodi M. Casabianca & Edward W. Wolfe
Abstract | PDF of the full article

Exploring rater errors and systematic biases using adjacent-categories Mokken models
Stefanie A. Wind & George Engelhard, Jr.
Abstract | PDF of the full article

 


A primer on relative importance analysis: Illustrations of its utility for psychological research
Matthias Stadler, Helena D. Cooper-Thomas & Samuel Greiff

Abstract

In this primer we present a hands-on introduction to relative importance analysis as a way of exploring the relative importance of predictors in regression analysis. This method is particularly useful when predictors are correlated, since it addresses issues of multicollinearity. We outline the benefits of the two major approaches to relative importance, relative weights analysis and dominance analysis, by contrasting them with correlations and multiple regression. Drawing on two previously published examples, we illustrate how relative importance analysis can augment the interpretation of results and when relative weights analysis is most appropriate. Finally, we discuss the advantages as well as the limitations of relative importance analysis on a more theoretical level. Our aim throughout is to present these analytical methods in a simple way that makes them accessible to a broad audience.

Keywords: relative importance, relative weights, dominance analysis, regression, research methods
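The relative weights approach mentioned in the abstract can be sketched in a few lines. The snippet below is an illustrative sketch of Johnson's (2000) relative weights, not the authors' own code; the function name and the use of numpy are our assumptions. Each predictor's weight is its estimated share of the model R², computed via an orthogonal transformation of the predictors that sidesteps multicollinearity.

```python
import numpy as np

def relative_weights(X, y):
    """Johnson's (2000) relative weights for the predictors in X.

    Returns each predictor's estimated share of the model R^2,
    accounting for correlations among predictors. The weights sum
    to the overall R^2 of the regression of y on X.
    """
    # Standardize predictors and criterion
    X = (X - X.mean(0)) / X.std(0)
    y = (y - y.mean()) / y.std()
    n = len(y)
    Rxx = X.T @ X / n                 # predictor intercorrelations
    Rxy = X.T @ y / n                 # predictor-criterion correlations
    # Lambda = Rxx^(1/2): correlations between X and its closest
    # orthogonal counterpart Z
    vals, vecs = np.linalg.eigh(Rxx)
    Lambda = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    beta = np.linalg.solve(Lambda, Rxy)   # regress y on the orthogonal Z
    return (Lambda ** 2) @ (beta ** 2)    # relative weights (sum = R^2)
```

Because the orthogonalized predictors are uncorrelated, squared coefficients partition R² cleanly; the Λ² matrix then maps those shares back onto the original, correlated predictors.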


Matthias Stadler, ECCS
University of Luxembourg
Maison des Sciences Humaines
11 Porte des Sciences à Esch-Belval
L-4366, Luxembourg
matthias.stadler@uni.lu



Deterioration and recovery in verbal recall: Repetition helps against pro-active interference
Chris Lange-Kuettner, Monika Markowska & Ridhi Kochhar

Abstract
The current study tests whether memory deterioration due to pro-active interference (PI) in verbal recall can be halted via block repetition, potentially leading to increased memory consolidation. We also tested whether bilinguals would be better shielded against memory deterioration than monolinguals, because they constantly need to enrich their vocabulary to compensate for their smaller lexica in either language. We tested monolinguals and balanced bilinguals with an N-Back task and a free verbal recall task. Repetition showed a significant main effect with a large effect size. In Study 1 (N=45), monolingual men showed less improvement in the repetition blocks, while bilingual men significantly doubled their word recall on each repetition. In Study 2 (N=78), monolingual women were less likely to use the repetition opportunity to improve their word score. Thus, in both studies a significant monolingual disadvantage emerged. When the two data sets were merged (N=123), repetition of the single word list had significantly increased resistance to PI, but all individual differences due to bilingualism and sex had disappeared. This supports a previous meta-analysis showing that a monolingual disadvantage does not hold in large samples with N > 100 (Paap effect).

Keywords: memory deterioration, bilingualism, free verbal recall, proactive inhibition, rehearsal, memory consolidation


Chris Lange-Kuettner, PhD
London Metropolitan University
Tower Building T6-20
166-220 Holloway Road
London N7 8DB, United Kingdom
c.langekuettner@londonmet.ac.uk



Some IRT-based analyses for interpreting rater effects
Margaret Wu

Abstract

In this paper, we present several IRT-based analyses of rater effects, including an examination of rater severity and rater discrimination. Rater severity refers to differences between raters in their tendencies to award higher or lower scores. Rater discrimination refers to the extent to which raters use the score range to separate students on the ability scale. Methodologies for estimating rater severity and rater discrimination are presented, followed by a discussion of how some measures of rater effects should be interpreted. We highlight that a rater who shows large discrepancies from other raters may in fact be the best rater.

Keywords: rater severity, central tendency, rater discrimination
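The two rater effects Wu defines can be illustrated with simple descriptive proxies. The sketch below is not the IRT estimation presented in the paper; it is a hypothetical numpy illustration of the two concepts, assuming a fully crossed design: severity as a rater's mean deviation from the grand mean, discrimination as the slope of a rater's scores on the per-examinee consensus.

```python
import numpy as np

def rater_effects(scores):
    """Descriptive proxies for two rater effects, given a fully crossed
    (n_examinees, n_raters) matrix of scores.

    severity:        a rater's mean score minus the grand mean
                     (positive = lenient, negative = severe)
    discrimination:  slope of a rater's scores regressed on the
                     per-examinee consensus (mean across raters);
                     slopes > 1 stretch the score range, < 1 compress it
                     (a central-tendency pattern)
    """
    consensus = scores.mean(axis=1)          # each examinee's average rating
    centered = consensus - consensus.mean()
    severity = scores.mean(axis=0) - consensus.mean()
    discrimination = (centered @ (scores - scores.mean(axis=0))) \
        / (centered @ centered)
    return severity, discrimination
```

Note that a rater with an unusually large slope disagrees with the panel yet may be separating examinees best, echoing the abstract's point that the most discrepant rater can be the best rater.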


Margaret Wu, PhD
Assessment Research Centre
Melbourne Graduate School of Education
The University of Melbourne
Victoria 3010, Australia
wu@edmeasurement.com.au



The impact of design decisions on measurement accuracy demonstrated using the Hierarchical Rater Model
Jodi M. Casabianca & Edward W. Wolfe

Abstract

When humans assign ratings in testing contexts, concern arises about whether rater effects impact the accuracy of the resulting measures. Those who lead scoring efforts implement several activities and utilize various designs to minimize the impact of these rater errors. This article uses the Hierarchical Rater Model (HRM) to demonstrate how the magnitude of rater errors and the numbers of ratings associated with various measurement facets (e.g., raters and items) impact the accuracy of measures. Additionally, we demonstrate how the level at which decisions are made about the measures (e.g., test taker item scores, test taker total scores, test taker classifications) impacts measurement accuracy.

Keywords: rater effects, measurement accuracy, hierarchical rater model, rating designs


Jodi M. Casabianca, PhD
Research Scientist
Educational Testing Service
660 Rosedale Road, MS T-03
Princeton, NJ 08541, USA
jcasabianca@ets.org



Exploring rater errors and systematic biases using adjacent-categories Mokken models
Stefanie A. Wind & George Engelhard, Jr.

Abstract

Adjacent-categories formulations of polytomous Mokken Scale Analysis (ac-MSA) offer insight into rating quality in the context of educational performance assessments, including information regarding individual raters’ use of rating scale categories and the degree to which student performances are ordered in the same way across raters. However, the degree to which ac-MSA indicators of rating quality correspond to specific types of rater errors and systematic biases, such as severity/leniency and response sets, has not been fully explored. The purpose of this study is to explore the degree to which ac-MSA provides diagnostic information related to rater errors and systematic biases in the context of educational performance assessments. Data from a rater-mediated writing assessment are used to explore the sensitivity of ac-MSA indices to two categories of rater errors and systematic biases: (1) rater leniency/severity; and (2) response sets (e.g., centrality). Implications are discussed in terms of research and practice related to large-scale educational performance assessments.

Keywords: Mokken scaling, rater errors, Rasch measurement theory


Stefanie A. Wind, PhD
Educational Studies in Psychology
Research Methodology, and Counseling
The University of Alabama
313C Carmichael Hall, USA
swind@ua.edu







