
Psychological Test and Assessment Modeling: Advances in Rasch modeling - New applications and directions

In the US, various stakeholders, including courts and state agencies, have adopted the Rasch model, in part because it leads to logical and transparent results. In the Rasch model, all items are weighted equally in defining the ability that is to be measured. Four papers in the new issue of "Psychological Test and Assessment Modeling" consider the Rasch model from very different angles.
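For readers who want the formal statement: in standard notation (not taken from the issue itself), the dichotomous Rasch model gives the probability of a correct response as

```latex
P(X_{vi} = 1 \mid \theta_v, \beta_i)
  = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}
```

where \theta_v is the ability of person v and \beta_i the difficulty of item i. Because a person's raw score is a sufficient statistic for \theta_v, every item contributes to the score with the same weight; this is the formal sense in which all items are "weighted equally".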

The first paper, by Hagquist and Andrich, shows that the interpretation of parameter estimates is not always as clear as one might assume. The case of artificial differential item functioning (DIF) demonstrates that parameter estimates sometimes vary as a result of purely statistical artifacts: the observed effects may depend, for example, on the distribution of the item difficulties rather than on actually existing DIF due to gender. We think a general method for identifying such statistical artifacts is essential in order to avoid misinterpretations of DIF and possibly of other item-related parameters such as difficulty, discrimination, and "guessing".
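The paper's analysis of artificial DIF is subtler than this, but a minimal simulation can illustrate the general point that apparent DIF may be a statistical artifact. The sketch below is our own illustration, not the authors' method, and all values are hypothetical: it generates data from a pure Rasch model with no DIF but a group difference in mean ability. A naive comparison of raw item means then flags every item, while a total-score-matched comparison in the style of Mantel-Haenszel does not.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 2000                                  # persons per group (hypothetical)
b = np.linspace(-2, 2, 10)                # hypothetical item difficulties

def simulate(theta):
    """Rasch responses; the same difficulties b for both groups, i.e. no DIF."""
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).astype(int)

xa = simulate(rng.normal(0.0, 1.0, n))    # reference group
xb = simulate(rng.normal(0.5, 1.0, n))    # focal group, higher mean ability

# Naive check: raw item means differ on every item, purely because the
# groups differ in ability -- an artifact, not DIF.
print("unmatched gaps:", np.round(xb.mean(0) - xa.mean(0), 3))

# Conditioning on the total score (as Mantel-Haenszel-type DIF tests do)
# removes the artifact: the matched gaps shrink toward zero.
sa, sb = xa.sum(1), xb.sum(1)
matched = []
for i in range(len(b)):
    diffs, w = [], []
    for s in range(1, len(b)):            # interior raw-score groups
        ma, mb = sa == s, sb == s
        if ma.sum() > 30 and mb.sum() > 30:
            diffs.append(xb[mb, i].mean() - xa[ma, i].mean())
            w.append(min(ma.sum(), mb.sum()))
    matched.append(np.average(diffs, weights=w))
print("matched gaps:  ", np.round(matched, 3))
```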
 
The second paper, by Salzberger, also considers the estimation of difficulty parameters, here for rating scale items. Salzberger describes different approaches to analyzing the ordering of the thresholds of a rating scale item (the thresholds represent the difficulties of the item's successive categories) and suggests an additional approach for verifying that the observed ordering is in accordance with the theory of the construct being measured. In our opinion, the presented approaches are an important means of checking the construct validity of items, and therefore of the test. Although validity has become an increasingly important issue in recent years, validity theory still offers few practical guidelines on how to check test validity; Salzberger's approach might prove a valuable contribution in this regard.
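In Andrich's rating scale formulation (standard notation, not necessarily the paper's), an item with m + 1 ordered categories is modeled as

```latex
P(X_{vi} = k \mid \theta_v)
  = \frac{\exp\!\big(k(\theta_v - \beta_i) - \sum_{j=1}^{k} \tau_j\big)}
         {\sum_{h=0}^{m} \exp\!\big(h(\theta_v - \beta_i) - \sum_{j=1}^{h} \tau_j\big)},
  \qquad k = 0, \dots, m,
```

where the thresholds \tau_1, \dots, \tau_m mark the points on the latent continuum at which adjacent categories become equally likely. If the categories work as intended, the estimated thresholds should satisfy \tau_1 < \tau_2 < \dots < \tau_m; reversed ("disordered") thresholds are the warning sign that approaches of the kind Salzberger discusses are designed to detect and interpret.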
 
In the third paper, Torres, Diakow, Freund, and Wilson propose a new model, the Latent Class Level Partial Credit Model (Latent Class L-PCM). The model supports identifying and interpreting latent classes of respondents according to empirically estimated performance levels. In educational assessment, there is an increasing desire to document a student's performance not only quantitatively, as a scale score or ranking, but also qualitatively, as a description corresponding to a specific level of performance. We think the Latent Class L-PCM can be quite useful in helping experts identify actually existing performance levels and interpret them properly, using the actual performances of the students at each level.
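For orientation, a schematic in standard notation (the exact parameterization of the Latent Class L-PCM is given in the paper itself): the partial credit model for item i with categories k = 0, ..., m_i is

```latex
P(X_{vi} = k \mid \theta_v)
  = \frac{\exp\!\big(\sum_{j=1}^{k} (\theta_v - \delta_{ij})\big)}
         {\sum_{h=0}^{m_i} \exp\!\big(\sum_{j=1}^{h} (\theta_v - \delta_{ij})\big)},
```

and a latent class extension, roughly speaking, replaces the continuous ability distribution by a small number of discrete classes g with weights \pi_g, so that marginally P(X_{vi} = k) = \sum_g \pi_g \, P(X_{vi} = k \mid g). Each class can then be read as an empirically estimated performance level.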
 
The final paper, by Wind, presents a method based on Mokken scaling that supports examining data in terms of the basic requirements for invariant measurement, which is assumed in the Rasch model. We included this paper in this special topic for two reasons: (1) we think it is essential for any application of a model to validate its compliance with the model's basic assumptions, "invariance" in the case of the Rasch model; and (2) we feel that this paper illustrates the importance of looking "outside the box", in this case the box of parametric measurement. As in the real world, it is sometimes only possible to judge certain characteristics from an outside perspective.
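Mokken scaling itself is well documented (in R, for instance, the mokken package implements it). As a rough illustration of its nonparametric core, and emphatically our own sketch rather than Wind's method, the code below computes Loevinger's scalability coefficients for dichotomous items; positive pairwise H_ij values, and by a common rule of thumb an overall H of at least 0.3, are among the basic requirements such analyses check.

```python
import numpy as np

def loevinger_h(x):
    """Loevinger's H for a binary persons-by-items matrix x: pairwise
    H_ij = cov(X_i, X_j) / (max cov given the item margins), plus overall H."""
    p = x.mean(0)
    cov = np.cov(x, rowvar=False, bias=True)      # population covariances
    k = x.shape[1]
    hmat = np.full((k, k), np.nan)
    num = den = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            cmax = min(p[i], p[j]) - p[i] * p[j]  # max cov given margins
            hmat[i, j] = hmat[j, i] = cov[i, j] / cmax
            num += cov[i, j]
            den += cmax
    return hmat, num / den

# Rasch-generated data satisfy monotone homogeneity, so all pairwise
# H_ij should be positive and the overall H comfortably high.
rng = np.random.default_rng(1)
theta = rng.normal(size=(1000, 1))
b = np.linspace(-1.5, 1.5, 8)
x = (rng.random((1000, 8)) < 1 / (1 + np.exp(-(theta - b)))).astype(int)
print(loevinger_h(x)[1])
```

If some H_ij are negative or the overall H is low, the data fail these minimal requirements before any parametric Rasch assumption is even tested, which is exactly the kind of check from an outside perspective that the paper advocates.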
