Psychological Test and Assessment Modeling


Published under Creative Commons: CC-BY-NC Licence


2014-1

Psychological Test and Assessment Modeling, Volume 56, 2014 (1)

Who is motivated to volunteer? A latent profile analysis linking volunteer motivation to frequency of volunteering
Christian Geiser, Morris A. Okun & Caterina Grano
Abstract | PDF of the full article

The impact of group pseudo-guessing parameter differences on the detection of uniform and nonuniform DIF
W. Holmes Finch & Brian F. French
Abstract | PDF of the full article

On the ways of investigating the discriminant validity of a scale in giving special emphasis to estimation problems when investigating multitrait-multimethod matrices
Karl Schweizer
Abstract | PDF of the full article

Establishing the construct validity of conversational C-Tests using a multidimensional Rasch model
Purya Baghaei & Rüdiger Grotjahn
Abstract | PDF of the full article

The systematic variation of task characteristics facilitates the understanding of task difficulty: A cognitive diagnostic modeling approach to complex problem solving
Samuel Greiff, Katarina Krkovic & Gabriel Nagy
Abstract | PDF of the full article

Evaluating a proposed modification of the Guttman rule for determining the number of factors in an exploratory factor analysis
Russell T. Warne & Ross Larsen
Abstract | PDF of the full article

 


Who is motivated to volunteer? A latent profile analysis linking volunteer motivation to frequency of volunteering
Christian Geiser, Morris A. Okun & Caterina Grano

Abstract

There has been considerable interest in identifying the motives that spur people to volunteer. We used a person-centered approach - latent profile analysis - to examine the relationship between intrinsic and extrinsic volunteer motivation and frequency of volunteering in American (N = 589) and Italian (N = 993) college students. Six latent motivation classes were distinguished: Low Intrinsic-Low Extrinsic, Medium Intrinsic-Low Extrinsic, High Intrinsic-Low Extrinsic, High Intrinsic-High Extrinsic, High Amotivation, and a Response Set class. Students in the High Intrinsic-High Extrinsic class volunteered less frequently than students in the High Intrinsic-Low Extrinsic class, suggesting that external incentives may undermine an individual’s intrinsic motivation to volunteer. Although males were more prevalent in the High Amotivation class, gender differences in self-reported volunteering frequency were not found. Italian students reported volunteering less frequently overall and were more prevalent in the High Amotivation class.

Key words: volunteer motivation; intrinsic and extrinsic motivation; frequency of volunteering; sex differences; latent profile analysis


Christian Geiser, PhD
Department of Psychology
2810 Old Main Hill
Logan, UT 84322-2810, USA
christian.geiser@usu.edu



The impact of group pseudo-guessing parameter differences on the detection of uniform and nonuniform DIF
W. Holmes Finch & Brian F. French

Abstract

Differential item functioning (DIF) is an important aspect of item development and validity assessment. Traditionally, DIF is divided into two broad types, focusing on conditional group differences in the item difficulty (uniform DIF) and discrimination (nonuniform DIF) parameters. Relatively little attention has been given to group differences in the probability of answering an item correctly by chance. The goal of this study was to investigate the influence of such group differences on the detection of uniform and nonuniform DIF, and on the accuracy of the estimation of item difficulty and discrimination parameters. Results demonstrate that when groups differed on the pseudo-guessing parameter in a three-parameter item response theory model, Type I error rates for both uniform and nonuniform DIF were inflated, and that these inflations appear to be due to estimation bias in both the item difficulty and discrimination parameters. Implications of these results are discussed.
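For readers unfamiliar with the model, the pseudo-guessing parameter is the lower asymptote c in the three-parameter logistic (3PL) item response function. A minimal sketch, with the conventional parameter names a, b, c (not drawn from the article):

```python
import math

def p_correct(theta, a, b, c):
    """3PL item response function: probability of a correct response.

    theta: person ability
    a: item discrimination
    b: item difficulty
    c: pseudo-guessing parameter (lower asymptote)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Even a person far below the item's difficulty answers correctly
# with probability approaching c:
p_correct(-4.0, 1.5, 0.0, 0.25)  # close to 0.25
```

If two groups differ on c but the DIF procedure assumes a common guessing parameter, that misfit is absorbed by the a and b estimates, which is the bias mechanism the study examines.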

Key words: Differential Item Functioning, Guessing, Type I Error, Validity, Item Response Theory, 3 parameter logistic model


Holmes Finch, PhD
Department of Educational Psychology
TC 521, Ball State University
Muncie, IN 47306, USA

Brian F. French, PhD
Department of Educational Leadership and Counseling Psychology
Cleveland Hall
Washington State University
Pullman, WA 99164, USA
frenchb@wsu.edu



On the ways of investigating the discriminant validity of a scale in giving special emphasis to estimation problems when investigating multitrait-multimethod matrices
Karl Schweizer

Abstract

Discriminant validity is a valuable property of psychological scales that is usually investigated in the framework of the multitrait-multimethod approach. Establishing discriminant validity requires demonstrating that the scale of interest, representing a specific construct, is unrelated to scales representing other constructs. The original implementation of the multitrait-multimethod approach demands a large number of comparisons among the correlations of a multitrait-multimethod design. More recently, discriminant validity has been investigated by means of confirmatory factor models that include latent variables representing constructs and methods. The process of arriving at a decision concerning discriminant validity when investigating multitrait-multimethod data is described. Downsizing the complexity of the model and using a ridge option are proposed and applied to overcome the estimation problems that frequently obstruct confirmatory factor analysis of multitrait-multimethod data.
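A ridge option typically inflates the diagonal of the covariance or correlation matrix before estimation, which can make a near-singular matrix tractable. A minimal sketch of one common multiplicative form (the constant k is illustrative, not taken from the article):

```python
def ridge_adjust(matrix, k=0.1):
    """Multiply diagonal entries by (1 + k), leaving off-diagonal
    entries untouched; a simple ridge-style adjustment of a
    covariance/correlation matrix given as a list of lists."""
    n = len(matrix)
    return [[matrix[i][j] * (1.0 + k) if i == j else matrix[i][j]
             for j in range(n)]
            for i in range(n)]

# A highly collinear 2x2 correlation matrix becomes better conditioned:
ridge_adjust([[1.0, 0.98], [0.98, 1.0]], 0.1)
```

The price of the adjustment is that fit statistics and standard errors refer to the modified matrix, which is one reason the abstract treats it as a remedy for estimation problems rather than a default.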

Key words: discriminant validity, construct validity, construct, confirmatory factor analysis, multitrait-multimethod matrix


Karl Schweizer, PhD
Department of Psychology
Goethe University Frankfurt
Grüneburgplatz 1
60323 Frankfurt a. M., Germany
K.Schweizer@psych.uni-frankfurt.de



Establishing the construct validity of conversational C-Tests using a multidimensional Rasch model
Purya Baghaei & Rüdiger Grotjahn

Abstract

The C-Test is a variation of the cloze test in which the second half of every second word is deleted. The number of words correctly reconstructed by the test taker is considered a measure of general language proficiency. In this pilot study, the componential structure of an English C-Test consisting of two spoken-discourse passages and two written-discourse passages is investigated with the help of both unidimensional and multidimensional Rasch models. In a sample of 99 fairly advanced Iranian students of English, the data fit the multidimensional partial credit model, as defined in the multidimensional random coefficients multinomial logit model (Adams, Wilson, & Wang, 1997), better than Masters' (1982) unidimensional partial credit model. This indicates that spoken-discourse and written-discourse C-Test passages form distinct dimensions. We argue that spoken-discourse C-Test texts may tap students' listening/speaking skills better than C-Tests based solely on written-discourse texts, and that C-Tests consisting of both conversational and written-discourse passages can therefore operationalize the construct of general language proficiency more adequately than C-Tests containing only written-discourse passages. Given the small sample size of the study, the findings should be interpreted cautiously.

Key words: multidimensional random coefficients multinomial logit model, C-Test, MIRT, structure of general language proficiency


Dr. Purya Baghaei
Islamic Azad University
Faculty of Foreign Languages
Ostad Yusofi St.
91886 Mashhad, Iran
pbaghaei@mshdiau.ac.ir



The systematic variation of task characteristics facilitates the understanding of task difficulty: A cognitive diagnostic modeling approach to complex problem solving
Samuel Greiff, Katarina Krkovic & Gabriel Nagy

Abstract

Since the 1960s, when pioneering research on Item Response Theory (IRT) was published, considerable progress has been made with regard to the psychometric quality of psychological assessment tools. One recent development building upon IRT is the introduction of Cognitive Diagnostic Modeling (CDM). The major goal of introducing CDM was to develop methods that allow researchers to examine which cognitive processes are involved when a person works on a specific assessment task. More precisely, CDM enables researchers to investigate whether assumed task characteristics drive item difficulty and, thus, person ability parameters. This may, at least according to the assumption inherent in CDM, allow conclusions about the cognitive processes involved in assessment tasks. In this study, of the numerous CDMs available, the Least Square Distance Method (LSDM; Dimitrov, 2012) was applied to investigate the psychometric qualities of an assessment instrument measuring Complex Problem Solving (CPS) skills. For the purpose of the study, two task characteristics essential for mastering CPS tasks were identified ex ante: degree of connectivity, and presence of indirect effects introduced by adding eigendynamics to the task. The study examined whether and how the two hypothesized task characteristics drive item difficulty on two CPS dimensions, knowledge acquisition and knowledge application. The sample consisted of 490 German high school students who completed the computer-based CPS assessment instrument MicroDYN, in which the two task characteristics were varied systematically. Results obtained with the LSDM indicated that the two hypothesized task characteristics, degree of connectivity and introduction of indirect effects, drove item difficulty only for knowledge acquisition. Hence, other task characteristics that may determine item difficulty for knowledge application need to be investigated in future studies in order to provide a sound measurement of CPS.
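In CDM approaches, the link between items and hypothesized task characteristics is usually encoded in a Q-matrix, with one row per item and one column per attribute. A purely illustrative sketch for the two characteristics named above (the item-attribute assignments are invented, not taken from the study):

```python
# Columns: [degree of connectivity, indirect effects (eigendynamics)]
# Rows: hypothetical MicroDYN-style items; entries are illustrative only.
q_matrix = [
    [1, 0],  # item 1: high connectivity only
    [0, 1],  # item 2: eigendynamics only
    [1, 1],  # item 3: both characteristics
]

attributes = ["connectivity", "indirect effects"]

# Attributes hypothesized to drive the difficulty of item 3:
required = [name for name, flag in zip(attributes, q_matrix[2]) if flag]
# -> ['connectivity', 'indirect effects']
```

A CDM such as the LSDM then asks whether the attribute pattern in each row actually predicts the item's empirical difficulty, which is the sense in which the abstract says task characteristics "drive" item difficulty.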

Key words: item response theory; cognitive diagnostic modeling; least square distance method; complex problem solving; task characteristics


Katarina Krkovic, PhD
EMACS unit
University of Luxembourg
6, rue Richard Coudenhove Kalergi
1359 Luxembourg-Kirchberg, Luxembourg
katarina.krkovic@uni.lu



Evaluating a proposed modification of the Guttman rule for determining the number of factors in an exploratory factor analysis
Russell T. Warne & Ross Larsen

Abstract

Exploratory factor analysis (EFA) is a widely used statistical method in which researchers attempt to ascertain the number and nature of the latent factors that explain their observed variables. When conducting an EFA, researchers must choose the number of factors to retain, a critical decision with drastic consequences if made incorrectly. In this article, we examine a newly proposed method of choosing the number of factors to retain. In the new method, confidence intervals are created around each eigenvalue, and a factor is retained only if its entire confidence interval is greater than 1.0. Results show that this new method outperforms the traditional Guttman rule, but does not surpass the accuracy of Velicer's minimum average partial (MAP) or Horn's parallel analysis (PA). MAP was the most accurate method overall, although it had a tendency to underfactor in some conditions. PA was the second most accurate method, although it frequently overfactored. PA was also found to be sensitive to sample size, and MAP was found to occasionally grossly overfactor; these findings had not previously been reported in the literature.
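The two retention rules being compared can be sketched in a few lines. How the eigenvalue confidence intervals are constructed is not specified here, so the bounds are simply taken as input:

```python
def guttman_retain(eigenvalues):
    """Classical Guttman (Kaiser) rule: retain factors whose
    eigenvalue exceeds 1.0."""
    return sum(1 for ev in eigenvalues if ev > 1.0)

def modified_retain(ci_bounds):
    """Proposed modification: retain a factor only if its entire
    eigenvalue confidence interval (lower, upper) lies above 1.0,
    i.e. the lower bound exceeds 1.0."""
    return sum(1 for lo, hi in ci_bounds if lo > 1.0)

# A borderline second eigenvalue of 1.05 is retained by the classical
# rule but dropped by the modified rule when its interval straddles 1.0:
guttman_retain([2.4, 1.05, 0.8])                         # 2
modified_retain([(2.1, 2.7), (0.95, 1.15), (0.7, 0.9)])  # 1
```

This illustrates why the modified rule is more conservative than the classical one: sampling error around eigenvalues near 1.0 no longer triggers retention.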

Key words: exploratory factor analysis, simulation study, Monte Carlo study, principal components analysis


Dr. Russell T. Warne
Department of Behavioral Science
Utah Valley University
800 W. University Parkway, Mail Code 115
Orem, UT 84058, USA
rwarne@uvu.edu


