
Psychological Test and Assessment Modeling


Published under Creative Commons: CC-BY-NC Licence


2012-4

Psychological Test and Assessment Modeling, Volume 54, 2012 (4)

Screening for personality disorders: A new questionnaire and its validation using Latent Class Analysis
Julia Lange, Christian Geiser, Karl Heinz Wiedl & Henning Schöttke
Abstract | PDF of the full article

Nomothetic outcome assessment in counseling and psychotherapy: Development and preliminary psychometric analyses of the Depression/Anxiety Negative Affect Scale
Scott T. Meier
Abstract | PDF of the full article


Special topic:
Current issues in Educational and Psychological Measurement: Design, calibration, and adaptive testing - Part I
Guest Editors: Andreas Frey & Ulf Kröhne


Guest Editorial
Andreas Frey & Ulf Kröhne
PDF of the full article

Principles and procedures of considering item sequence effects in the development of calibrated item pools: Conceptual analysis and empirical illustration
Safir Yousfi & Hendryk F. Böhme
Abstract | PDF of the full article

On the importance of using balanced booklet designs in PISA
Andreas Frey & Raphael Bernhardt
Abstract | PDF of the full article

A multilevel item response model for item position effects and individual persistence
Johannes Hartig & Janine Buchholz
Abstract | PDF of the full article

Capitalization on chance in variable-length classification tests employing the Sequential Probability Ratio Test
Jeffrey M. Patton, Ying Cheng, Ke-Hai Yuan & Qi Diao
Abstract | PDF of the full article

Biased (conditional) parameter estimation of a Rasch model calibrated item pool administered according to a branched testing design
Klaus D. Kubinger, J. Steinfeld, M. Reif & T. Yanagida
Abstract | PDF of the full article

 


Screening for personality disorders: A new questionnaire and its validation using Latent Class Analysis
Julia Lange, Christian Geiser, Karl Heinz Wiedl & Henning Schöttke

Abstract
Background: We evaluated a new screening instrument for personality disorders. The Personality Disorder Screening (PDS) is a self-administered screening questionnaire that includes 12 items from the Personality Self-Portrait (Oldham & Morris, 1990).
Sampling and methods: The data of n = 966 participants recruited from the non-clinical population and from different clinical settings were analyzed using latent class analysis.
Results: A 4-class model fitted the data best. It confirmed a classification model for personality disorders proposed by Gunderson (1984) and showed high reliability and validity. One class corresponded to “healthy” individuals (40.6 %), and one class to individuals with personality disorders (17.2 %). Two additional classes represented individuals with specific personality styles. Evidence for convergent validity was found in terms of strong associations of the classification with the Structured Clinical Interview (SCID-II) for diagnosing personality disorders. The latent classes also showed theoretically expected associations with membership in different subsamples.
Conclusions: The PDS shows promise as a new instrument for identifying different classes of personality disorder severity as early as the screening stage of the diagnostic process.

Key words: personality disorders; psychological assessment; screening test; SCID-II; Latent Class Analysis
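
A hypothetical sketch of the core method may help readers unfamiliar with latent class analysis: the following Python code (not the authors' implementation; data, class count, and starting values are purely illustrative) fits an LCA with binary indicators via the EM algorithm.

    import numpy as np

    def fit_lca(X, n_classes, n_iter=200, seed=0):
        """Minimal EM for a latent class model with binary items.
        X: (n_persons, n_items) array of 0/1 responses."""
        rng = np.random.default_rng(seed)
        n_persons, n_items = X.shape
        pi = np.full(n_classes, 1.0 / n_classes)                # class proportions
        p = rng.uniform(0.25, 0.75, size=(n_classes, n_items))  # P(item = 1 | class)
        for _ in range(n_iter):
            # E-step: posterior class membership for each person
            log_lik = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
            post = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
            post /= post.sum(axis=1, keepdims=True)
            # M-step: update proportions and conditional item probabilities
            pi = post.mean(axis=0)
            p = np.clip((post.T @ X) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
        return pi, p, post

    # Illustrative usage: 966 persons and 12 binary items, mirroring the
    # dimensions reported above, but with random (meaningless) data
    X = (np.random.default_rng(1).random((966, 12)) < 0.4).astype(float)
    pi, p, post = fit_lca(X, n_classes=4)
    print(pi)  # estimated class proportions

In practice the number of classes is chosen by comparing models with different class counts on information criteria such as the BIC, which is how a 4-class solution like the one reported here would be identified.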


Julia Lange, PhD
Department of Psychology
University of Osnabrück
Knollstraße 15
49069 Osnabrück, Germany
julia.lange@uni-osnabrueck.de



Nomothetic outcome assessment in counseling and psychotherapy: Development and preliminary psychometric analyses of the Depression/Anxiety Negative Affect Scale
Scott T. Meier

Abstract

Negative affect (NA) plays a significant role in the initiation and persistence of many client problems and in their response to psychotherapy (Moses & Barlow, 2006). This report describes the development of a brief NA measure, the Depression/Anxiety Negative Affect (DANA) scale, and preliminary analyses of its psychometric properties. An initial pool of DANA items was selected on the basis of a review of relevant literature on emotion science and counseling outcomes, a review of related tests, and feedback from psychotherapists obtained in a pilot test. The DANA was evaluated in two representative clinical samples in which psychotherapists produced a total of 363 session ratings for 81 clients. DANA scores showed adequate internal consistency, evidence of convergent and discriminant validity, and sensitivity to change over the course of psychotherapy. Effect sizes (ES) of DANA scores consistently equaled or exceeded the average ES of .68 found in meta-analytic studies for scales assessing the outcomes of counseling and psychotherapy (Smith & Glass, 1977). ESs greater than 1 were found on DANA variables for clients whose therapists rated them as experiencing, rather than avoiding, NA.

Key words: sensitivity to change, negative affect, counseling and psychotherapy outcomes
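
The benchmark ES of .68 invites a concrete illustration; one common way to compute a pre-post effect size for such session ratings (a plausible variant shown for illustration, not necessarily the formula used in the study) is:

    import numpy as np

    def pre_post_effect_size(pre, post):
        """Standardized mean change, (M_pre - M_post) / SD_pre, signed so
        that a reduction in negative affect yields a positive ES. This is
        one common outcome-research variant, assumed here for illustration."""
        pre, post = np.asarray(pre, float), np.asarray(post, float)
        return (pre.mean() - post.mean()) / pre.std(ddof=1)

    # Hypothetical NA session ratings (higher = more negative affect)
    pre = [4.1, 3.8, 4.5, 4.0, 3.9]
    post = [2.9, 3.1, 3.5, 2.8, 3.0]
    print(round(pre_post_effect_size(pre, post), 2))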


Scott T. Meier
Department of Counseling, School, & Educational Psychology
409 Baldy Hall
University at Buffalo
Buffalo, NY, 14260, USA
stmeier@buffalo.edu



Principles and procedures of considering item sequence effects in the development of calibrated item pools: Conceptual analysis and empirical illustration
Safir Yousfi & Hendryk F. Böhme

Abstract

Item responses can be context-sensitive. Consequently, composing test forms flexibly from a calibrated item pool requires considering potential context effects. This paper focuses on context effects that are related to the item sequence. It is argued that sequence effects are not necessarily a violation of item response theory but that item response theory offers a powerful tool to analyze them. If sequence effects are substantial, test forms cannot be composed flexibly on the basis of a calibrated item pool, which precludes applications like computerized adaptive testing. In contrast, minor sequence effects do not thwart applications of calibrated item pools. Strategies to minimize the detrimental impact of sequence effects on item parameters are discussed and integrated into a nomenclature that addresses the major features of item calibration designs. An example of an item calibration design demonstrates how this nomenclature can guide the process of developing a calibrated item pool.

Key words: context effects, sequence effects, item calibration design, item pool development
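
One simple way to formalize a sequence effect within item response theory (a sketch in generic notation, not the authors' nomenclature) is to let an item's effective difficulty drift with its position k in the test form:

    P(X_i = 1 \mid \theta, k)
      = \frac{\exp\{\theta - \beta_i - \delta_i (k - k_i^0)\}}
             {1 + \exp\{\theta - \beta_i - \delta_i (k - k_i^0)\}}

where \beta_i is the difficulty calibrated at a reference position k_i^0 and \delta_i captures the sequence-induced drift. Flexible test assembly from the pool, including computerized adaptive testing, is warranted when \delta_i is negligible for all items; a calibration design has to be able to detect or balance out substantial \delta_i.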


Dr. Safir Yousfi
Psychological Research and Development
German Federal Employment Agency
Regensburger Strasse 104
90478 Nuremberg, Germany
safir.yousfi@arbeitsagentur.de



On the importance of using balanced booklet designs in PISA
Andreas Frey & Raphael Bernhardt

Abstract

The effect of using a balanced as compared to an unbalanced booklet design on major PISA results was examined. The responses of 39,573 students who participated in the PISA-E 2006 assessment in Germany were re-analyzed. Using an unbalanced booklet design instead of the original booklet design led to an increase in mean reading performance of about six points on the PISA scale and altered the gender gap in reading to different degrees in the 16 federal states of Germany. For students with an immigration background, reading performance was significantly higher for the unbalanced design than for the original design. For the unbalanced design, the relationship between self-reported effort while taking the test and reading performance was stronger than for the original design. The results underline the importance of using a balanced booklet design in PISA in order to avoid or minimize bias in population parameter estimates.

Key words: booklet design, testing, large-scale assessment, item response theory, Programme for International Student Assessment
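
To make 'balanced' concrete (an illustrative sketch; the actual PISA booklet designs are larger and more involved): a design is positionally balanced when every item cluster appears equally often in every booklet position, which a cyclic Latin square achieves.

    def balanced_booklets(clusters):
        """Cyclic Latin square: each cluster occupies each booklet
        position exactly once across the set of booklets."""
        n = len(clusters)
        return [[clusters[(b + pos) % n] for pos in range(n)]
                for b in range(n)]

    # Hypothetical cluster labels, not the PISA 2006 clusters
    for booklet in balanced_booklets(["C1", "C2", "C3", "C4"]):
        print(booklet)
    # Every cluster appears once per column (position), so position effects
    # average out across booklets instead of systematically penalizing the
    # clusters that happen to be placed late.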


Andreas Frey, PhD
Institute of Educational Science
Department of Research Methods in Education
Friedrich-Schiller-University Jena
Am Planetarium 4
07737 Jena, Germany
andreas.frey@uni-jena.de



A multilevel item response model for item position effects and individual persistence
Johannes Hartig & Janine Buchholz

Abstract

The paper presents a multilevel item response model for item position effects. It includes individual differences in the position effect, which we refer to as the persistence of the test-takers. The model is applied to published data from the PISA 2006 science assessment. We analyzed responses to 103 science test items from N = 64,251 students from 10 countries selected to cover a wide range of national performance levels. All effects of interest were analyzed separately for each country. A significant negative effect of item position on performance was found in all countries; it was more prominent in countries with a lower national performance level. The individual differences in persistence were relatively small in all countries, but more pronounced in countries with lower performance levels. Students’ performance level is practically uncorrelated with persistence in high-performing countries, while it is negatively correlated in low-performing countries.

Key words: item response theory, item position effects
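
In generic notation (our sketch, assuming a Rasch-type base model rather than the authors' exact specification), such a model can be written as a logistic multilevel model with a random position slope:

    \operatorname{logit} P(X_{pi} = 1) = \theta_p - \beta_i + (\gamma + \zeta_p)\,\mathrm{pos}_{pi}

where \theta_p is the ability of person p, \beta_i the difficulty of item i, \mathrm{pos}_{pi} the (suitably centered) position at which person p received item i, \gamma the average position effect, and the random slope \zeta_p the person-specific deviation from it, interpreted as persistence.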


Johannes Hartig, PhD
German Institute for International Educational Research (DIPF)
Schloßstr. 29
60486 Frankfurt am Main, Germany
hartig@dipf.de



Capitalization on chance in variable-length classification tests employing the Sequential Probability Ratio Test
Jeffrey M. Patton, Ying Cheng, Ke-Hai Yuan & Qi Diao

Abstract

The sequential probability ratio test (SPRT) is a popular termination criterion for variable-length classification tests. The SPRT is often paired with cut-based item selection in which item information is maximized at the cut point. However, items are chosen on the basis of their parameter estimates, and capitalization on chance may occur. We investigated the effects of capitalization on chance on test length and classification accuracy in several variable-length test simulations. In addition to capitalizing on large discrimination estimates, the item selection criterion chose items with difficulty estimates systematically higher or lower than their true difficulty values. This capitalization on chance had non-negligible effects on both test length and classification accuracy and induced an inverse relationship between them, though the particular effects were highly sensitive to the cut location. The results also indicate that implementing item exposure control effectively reduced the effects of capitalization on chance on testing outcomes.

Key words: sequential probability ratio test, classification testing, variable-length testing, capitalization on chance, item calibration error
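
The SPRT termination rule itself is standard Wald machinery; the following sketch assumes a 2PL response model and hypothetical item parameter estimates (exactly the quantities through which calibration error enters):

    import numpy as np

    def irt_prob(theta, a, b):
        """2PL probability of a correct response."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def sprt_decision(responses, a, b, theta0, theta1, alpha=0.05, beta=0.05):
        """Wald's SPRT for classifying an examinee as below theta0 vs.
        above theta1; responses, a, b cover the items given so far."""
        p0, p1 = irt_prob(theta0, a, b), irt_prob(theta1, a, b)
        x = np.asarray(responses, float)
        llr = np.sum(x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0)))
        lower, upper = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
        if llr <= lower:
            return "classify below cut"
        if llr >= upper:
            return "classify above cut"
        return "continue testing"

    # Hypothetical: five items answered, using *estimated* a and b
    print(sprt_decision([1, 0, 1, 1, 0], a=np.ones(5), b=np.zeros(5),
                        theta0=-0.5, theta1=0.5))

Because p0 and p1 are computed from estimated parameters, selecting items that appear maximally informative at the cut point systematically favors items with overestimated discrimination, which is the capitalization on chance investigated here.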


Jeffrey M. Patton, M.Ed.
University of Notre Dame
209 Haggar Hall
Notre Dame, IN 46556, USA
jpatton1@nd.edu



Biased (conditional) parameter estimation of a Rasch model calibrated item pool administered according to a branched testing design
Klaus D. Kubinger, J. Steinfeld, M. Reif & T. Yanagida

Abstract

With reference to Glas (1988), this paper deals with the problem of biased conditional maximum likelihood (CML) Rasch model item parameter estimation when the items of a test are administered according to a branched testing design. Specifically, the focus is on the design of the widely used intelligence test-battery AID (Adaptive Intelligence Diagnosticum; see the last edition by Kubinger, 2009). The paper illustrates, firstly, why CML estimation leads to biased item parameter estimates given the branched testing design; secondly, how large the bias is; and thirdly, how the biased item parameter estimates in turn influence ability parameter estimation and therefore also the respective percentiles and T-scores of the testees. The results support the recommendation that any branched testing design should be examined in advance, before being used for psychological consultations, as to whether or not the resulting CML-based ability parameter estimates are biased in a relevant manner.

Key words: branched testing, Rasch model, Adaptive Intelligence Diagnosticum (AID), conditional maximum likelihood (CML) estimation, marginal maximum likelihood (MML) estimation
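
For context, in standard notation not taken from the paper: under the Rasch model the raw score r_v is a sufficient statistic for the ability of testee v, so CML estimates the item parameters from the conditional likelihood

    L_C(\boldsymbol{\varepsilon}) = \prod_v \frac{\prod_i \varepsilon_i^{x_{vi}}}{\gamma_{r_v}(\boldsymbol{\varepsilon})},
    \qquad \varepsilon_i = \exp(-\beta_i),

where \gamma_r(\boldsymbol{\varepsilon}) is the elementary symmetric function of order r of the parameters of the administered items. This derivation presupposes that which items a testee receives does not depend on his or her responses; a branched design routes testees to easier or harder modules on the basis of earlier performance, violating exactly this assumption, which, following Glas (1988), is the source of the bias examined here.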


Prof. Klaus D. Kubinger, PhD
Division of Psychological Assessment and Applied Psychometrics
Faculty of Psychology
University of Vienna
Liebiggasse 5
A-1010 Vienna, Austria
klaus.kubinger@univie.ac.at


