Psychological Test and Assessment Modeling


Published under Creative Commons: CC-BY-NC Licence


2015-4

The hybrid rating method: Assessment of a novel way to measure attitudes
Johann-Christoph Münscher
Abstract | PDF of the full article


Special Topic:
Missing values in large-scale assessment studies
Guest editors: Steffi Pohl & Christian Aßmann


Guest editorial
Steffi Pohl & Christian Aßmann
PDF of the full article

Commonalities and differences in IRT-based methods for nonignorable item-nonresponses
Norman Rose, Matthias von Davier & Benjamin Nagengast
Abstract | PDF of the full article

Investigating mechanisms for missing responses in competence tests
Carmen Köhler, Steffi Pohl & Claus H. Carstensen
Abstract | PDF of the full article

Nonignorable data in IRT models: Polytomous responses and response propensity models with covariates
C. A. W. Glas, J. L. Pimentel & S. M. A. Lamers
Abstract | PDF of the full article

Multiple imputation of missing categorical data using latent class models: State of the art
Davide Vidotto, Maurits C. Kaptein & Jeroen K. Vermunt
Abstract | PDF of the full article

Partitioned predictive mean matching as a multilevel imputation technique
Gerko Vink, Goran Lazendic & Stef van Buuren
Abstract | PDF of the full article

Bayesian estimation in IRT models with missing values in background variables
Christian Aßmann, Christoph Gaasch, Steffi Pohl & Claus H. Carstensen
Abstract | PDF of the full article

 


The hybrid rating method: Assessment of a novel way to measure attitudes
Johann-Christoph Münscher

Abstract

In reaction to shortcomings of common methods of measuring attitudes, most notably the five-point rating scale, this paper presents the hybrid rating method as a novel approach to measuring attitudes. The hybrid rating method is characterised by its open-ended design, in which respondents create their own rating scale to fit their individual style. The central assumption is that respondents can express their attitudes better with this approach, so that the method provides more and better information. The method is described in detail and procedures for scoring are presented. A comparison between the hybrid method and the five-point method, using an independent-measures two-sample design (α = .05; β = .1), focuses on three aspects: information quantity, information quality, and pragmatic value. In both conditions 40 respondents answered the NEO-PI-R with either the five-point method or the hybrid method and rated the process in an additional questionnaire. Analysis showed that the hybrid method yielded more information than the five-point method (t = 2.823, p = 0.003) while potentially allowing additional information to be gathered from the individual's rating style. Psychometric quality was nearly identical between the two methods (both show an average Cronbach's α of .88), and respondents rated the two methods similarly. The hybrid method thus seems capable of delivering individualised data without sacrificing psychometric quality. However, it was observed to be more demanding in both time and effort, which restricts its range of application.

Keywords: rating scale, attitudes, measurement, questionnaire, psychometrics
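The abstract reports an average Cronbach's α of .88 for both methods. As a minimal sketch of how that reliability coefficient is computed (the function name and data below are illustrative, not taken from the study), coefficient α for a respondents-by-items score matrix is:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)
```

With perfectly parallel items α equals 1; noisier, less correlated items pull it toward 0.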


Johann-Christoph Münscher
Jürgensallee 37
22609 Hamburg, Germany
Johann_Muenscher@yahoo.de



Commonalities and differences in IRT-based methods for nonignorable item nonresponses
Norman Rose, Matthias von Davier & Benjamin Nagengast

Abstract

Missing responses resulting from omitted or not-reached items are beyond researchers’ control and potentially threaten the validity of test results. Empirical evidence concerning the relationship between missingness and test takers’ performance suggests that the missing data mechanism is nonignorable and needs to be taken into account. Various IRT-based models for nonignorable item nonresponses have been proposed (Glas & Pimentel, 2008; Holman & Glas, 2005; Korobko, Glas, Bosker, & Luyten, 2008; Moustaki & Knott, 2000; O’Muircheartaigh & Moustaki, 1999; Rose, 2013; Rose, von Davier, & Xu, 2010). In this article, we adopt Rubin’s (1976) definitions of missing data mechanisms for educational and psychological measurement and consider the implications for maximum likelihood (ML) estimation in IRT models for incomplete data. Next, we derive multidimensional IRT models for nonignorable item nonresponses. We then investigate latent regression models and multiple-group IRT models for nonignorable missing responses and compare them to the multidimensional IRT models. Although these models have a great deal in common, there are important distinctions in their underlying assumptions and restrictions, with critical implications for their use in real applications. We further provide insight into how models for nonignorable item nonresponses adjust for missing responses. Finally, we offer a list of guiding questions to support the choice of an appropriate model in concrete applications.

Keywords: Nonignorable, item nonresponses, omitted and not-reached items, IRT


Norman Rose, PhD
Hector Research Institute of Education Sciences and Psychology
Europastr. 6
72072 Tübingen, Germany
norman.rose@uni-tuebingen.de



Investigating mechanisms for missing responses in competence tests
Carmen Köhler, Steffi Pohl & Claus H. Carstensen

Abstract

Examinees working on competence tests frequently leave questions unanswered. As the missing values usually violate the missing at random condition, they pose a threat to drawing correct inferences about person abilities. In order to account appropriately for missing responses in the scaling of competence data, the mechanism resulting in missing responses needs to be modeled adequately. So far, studies have mainly focused on the evaluation of different approaches for accounting for missing responses, each making assumptions about the underlying missing mechanism. A deeper understanding of how and why missing responses occur can provide valuable information on the appropriateness of these assumptions. In the current study we investigate whether the missing tendency of a person depends on the competence domain assessed, or whether it can be considered a rather person-specific trait. Furthermore, we examine how missing responses relate to ability and other personality variables. We conduct our analyses separately for not-reached and omitted items, using data from the National Educational Panel Study (NEPS). Based on an IRT approach by Holman and Glas (2005), we investigate the missing process in the competence domains of information and communication technologies, science, mathematics, and reading, which were assessed in three age cohorts (fifth-graders: N = 5,193, ninth-graders: N = 15,396, adults: N = 7,256). Results demonstrate that persons’ missing propensities may, to some extent, be regarded as person-specific. The occurrence of omissions and not-reached items mainly depends on persons’ competencies, and differs for people with a migration background and for students attending different school types, even after controlling for competencies. Our findings should be considered in approaches that aim to account for missing responses in the scaling of competence data.

Keywords: missing data, missing propensity, Item Response Theory, scaling competencies, large-scale assessment


Carmen Köhler, PhD
Otto-Friedrich-University Bamberg
Wilhelmsplatz 3
96047 Bamberg, Germany
carmen.koehler@uni-bamberg.de



Nonignorable data in IRT models: Polytomous responses and response propensity models with covariates
C. A. W. Glas, J. L. Pimentel & S. M. A. Lamers

Abstract

Missing data usually present special problems for statistical analyses, especially when the data are not missing at random, that is, when the ignorability principle defined by Rubin (1976) does not hold. Recently, a substantial number of articles have been published on model-based procedures to handle nonignorable missing data due to item nonresponse (Holman & Glas, 2005; Glas & Pimentel, 2008; Rose, von Davier & Xu, 2010; Pohl, Gräfe & Rose, 2014). In this approach, an item response theory (IRT) model for the observed data is estimated concurrently with an IRT model for the propensity of the missing data.
The present article elaborates on this approach in two directions. First, the preceding articles consider only dichotomously scored items; here it is shown that the approach works equally well for polytomously scored items. Second, it is shown that the methods can be generalized to allow for covariates in the model for the missing data. Simulation studies are presented to illustrate the efficiency of the proposed methods.

Keywords: item response theory, latent traits, missing data, nonignorable missing data, observed covariates
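The two-model setup can be made concrete with a small simulation. The sketch below is purely illustrative (the sample size, Rasch-type parameterisation, item parameters, and the 0.6 latent correlation are assumptions, not values from the article): ability and response propensity are modelled as correlated latent traits, and the correlation is exactly what makes the resulting missingness nonignorable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 20

# Ability (theta) and response propensity (xi) as correlated latent traits;
# their nonzero correlation makes the missingness nonignorable.
latent_cov = [[1.0, 0.6], [0.6, 1.0]]
theta, xi = rng.multivariate_normal([0.0, 0.0], latent_cov, size=n_persons).T

b = np.linspace(-2.0, 2.0, n_items)   # item difficulties
d = np.zeros(n_items)                 # "difficulties" of responding at all

p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))
p_respond = 1.0 / (1.0 + np.exp(-(xi[:, None] - d)))

responses = (rng.random((n_persons, n_items)) < p_correct).astype(float)
responded = rng.random((n_persons, n_items)) < p_respond
responses[~responded] = np.nan        # item nonresponse

# Signature of nonignorability: persons with more missing responses
# systematically have lower ability.
missing_rate = np.isnan(responses).mean(axis=1)
corr_missing_ability = np.corrcoef(missing_rate, theta)[0, 1]
```

Under this mechanism the per-person missingness rate is clearly negatively correlated with ability, which a complete-case or MAR-based analysis would ignore.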


C. A. W. Glas, PhD
University of Twente
Cubicus C338, The Netherlands
c.a.w.glas@utwente.nl



Multiple imputation of missing categorical data using latent class models: State of the art
Davide Vidotto, Jeroen K. Vermunt & Maurits C. Kaptein

Abstract

This paper provides an overview of recent proposals for using latent class models for the multiple imputation of missing categorical data in large-scale studies. While latent class (or finite mixture) modeling is mainly known as a clustering tool, it can also be used for density estimation, i.e., to get a good description of the lower- and higher-order associations among the variables in a dataset. For multiple imputation, the latter aspect is essential in order to be able to draw meaningful imputing values from the conditional distribution of the missing data given the observed data.
We explain the general logic underlying the use of latent class analysis for multiple imputation. Moreover, we present several variants developed within either a frequentist or a Bayesian framework, each of which overcomes certain limitations of the standard implementation. The different approaches are illustrated and compared using a real-data psychological assessment application.

Keywords: latent class models, missing data, mixture models, multiple imputation
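The conditional-draw step at the heart of latent class imputation can be sketched for binary items with a known two-class model (the class proportions and conditional probabilities below are made-up illustrative numbers; in practice they would be estimated from the observed data): first draw a class given the observed entries, then draw each missing entry from that class's conditional distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed-known 2-class model for three binary items (illustrative numbers).
class_prior = np.array([0.5, 0.5])
# p[c, j] = P(item j = 1 | class c)
p = np.array([[0.9, 0.8, 0.85],
              [0.1, 0.2, 0.15]])

def impute(obs):
    """Draw one imputation for a partially observed binary vector.

    `obs` uses 0/1 for observed entries and None for missing ones.
    """
    # Posterior class probabilities given the observed entries.
    post = class_prior.copy()
    for j, v in enumerate(obs):
        if v is not None:
            post = post * (p[:, j] if v == 1 else 1 - p[:, j])
    post = post / post.sum()
    c = rng.choice(len(post), p=post)
    # Draw each missing entry from its class-conditional distribution.
    return [v if v is not None else int(rng.random() < p[c, j])
            for j, v in enumerate(obs)]
```

Repeating the draw m times yields m completed datasets, which is the "multiple" in multiple imputation.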


Davide Vidotto
Department of Methodology and Statistics
Tilburg University, PO Box 90153
5000 LE Tilburg, The Netherlands
d.vidotto@uvt.nl



Partitioned predictive mean matching as a multilevel imputation technique
Gerko Vink, Goran Lazendic & Stef van Buuren

Abstract

Large-scale assessment data often have a multilevel structure. When dealing with missing values, this structure needs to be taken into account to prevent underestimation of the intraclass correlation. We evaluate predictive mean matching (PMM) as a multilevel imputation technique and compare it to other imputation approaches for multilevel data. We propose partitioned predictive mean matching (PPMM) as an extension of the PMM algorithm that divides a large multilevel problem into manageable parts, each of which can be solved by standard predictive mean matching. We show that PPMM can be a very effective imputation approach for large multilevel datasets and that both PPMM and PMM yield plausible inferences for continuous, ordered categorical, and even dichotomous multilevel data. We conclude that the performance of both PMM and PPMM is often comparable to that of dedicated methods for multilevel data.

Keywords: Large datasets, Multilevel data, Multiple imputation, Partitioning, Predictive mean matching
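The matching step of standard PMM, which PPMM applies within each partition, can be sketched as follows (the function and its arguments are illustrative; dedicated implementations differ in detail): regress the observed outcome on the predictors, compute predicted means for all cases, and replace each missing value with the observed value of a randomly chosen donor among the cases with the closest predicted means.

```python
import numpy as np

def pmm_impute(x, y, n_donors=5, rng=None):
    """Single predictive-mean-matching imputation of missing entries in y.

    Fits a linear regression of the observed y on x, then for each missing
    case borrows the observed y of one of the n_donors observed cases whose
    predicted mean is closest to the missing case's predicted mean.
    """
    if rng is None:
        rng = np.random.default_rng()
    x, y = np.asarray(x, float), np.array(y, float)
    obs = ~np.isnan(y)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    yhat = X @ beta
    for i in np.flatnonzero(~obs):
        dist = np.abs(yhat[obs] - yhat[i])
        donors = np.flatnonzero(obs)[np.argsort(dist)[:n_donors]]
        y[i] = y[rng.choice(donors)]
    return y
```

Because imputed values are always observed values, PMM never produces impossible entries, which is one reason it copes well with ordered categorical and dichotomous data.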


Dr. Gerko Vink
Department of Methods and Statistics
Utrecht University
Padualaan 14
3584CH Utrecht, the Netherlands
G.Vink@uu.nl



Bayesian estimation in IRT models with missing values in background variables
Christian Aßmann, Christoph Gaasch, Steffi Pohl & Claus H. Carstensen

Abstract

Large-scale assessment studies typically aim at investigating the relationship between persons' competencies and explanatory variables. Individual competencies are often estimated by explicitly including explanatory background variables in the corresponding Item Response Theory models. Since missing values in background variables inevitably occur, strategies are required to handle the uncertainty that missing values introduce into parameter estimation. We propose to adapt a Bayesian estimation strategy based on Markov chain Monte Carlo techniques, in which sampling from the posterior distribution of the parameters is enriched by sampling from the full conditional distribution of the missing values. We consider non-parametric as well as parametric approximations of the full conditional distributions of missing values, thus allowing for a flexible incorporation of metric as well as categorical background variables. We evaluate the statistical accuracy of our approach in a simulation study in which the mechanism generating the missing values is controlled. We show that the proposed Bayesian strategy allows for effective comparison of nested model specifications by gauging highest posterior density intervals of all involved model parameters. An illustration of the suggested approach uses data from the National Educational Panel Study on the mathematical competencies of fifth-grade students.

Keywords: missing values, background variables, classification and regression trees, item response theory
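The alternating scheme can be illustrated with a toy Gibbs sampler in which a univariate normal mean stands in for the full IRT model; every number here is an illustrative assumption, not a value from the article. Each sweep first draws the parameter from its full conditional given the currently completed data, then redraws the missing background values from their full conditional given the parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "background variable" with 30% of its values missing completely at random.
true_mu = 1.5
x = rng.normal(true_mu, 1.0, size=500)
missing = rng.random(500) < 0.3
x[missing] = np.nan

draws = []
x_filled = np.where(missing, np.nanmean(x), x)  # crude starting fill-in
for _ in range(2000):
    # 1) Draw mu from its full conditional (flat prior, known unit variance),
    #    treating the currently filled-in data as complete.
    mu = rng.normal(x_filled.mean(), 1.0 / np.sqrt(len(x_filled)))
    # 2) Draw the missing values from their full conditional given mu.
    x_filled[missing] = rng.normal(mu, 1.0, size=missing.sum())
    draws.append(mu)

posterior_mean = np.mean(draws[500:])  # discard burn-in
```

Averaging over the post-burn-in draws propagates the missing-data uncertainty into the posterior of mu, which is the point of enriching the sampler with the imputation step.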


Dr. Christian Aßmann
Otto-Friedrich-University Bamberg
Feldkirchenstr. 21
96045 Bamberg, Germany
christian.assmann@uni-bamberg.de


