Psychological Test and Assessment Modeling


Published under Creative Commons: CC-BY-NC Licence


2012-1

Psychological Test and Assessment Modeling, Volume 54, 2012 (1)

An analysis of the discrete-option multiple-choice item type
Neal M. Kingston, Gail C. Tiemann, Harold L. Miller Jr. & David Foster
Abstract | PDF of the full article

Teachers’ burnout is related to lowered speed and lowered quality for demanding short-term tasks
Tuulia M. Ortner
Abstract | PDF of the full article

Robustness of multidimensional analyses against local item dependence
Steffen Brandt
Abstract | PDF of the full article

The Genetics Lab: Acceptance and psychometric characteristics of a computer-based microworld assessing complex problem solving
Philipp Sonnleitner, Martin Brunner, Samuel Greiff, Joachim Funke, Ulrich Keller, Romain Martin, Cyril Hazotte, Hélène Mayer & Thibaud Latour
Abstract | PDF of the full article

Applying a construction rational to a rule based designed questionnaire using the Rasch model and LLTM
Manuel Reif
Abstract | PDF of the full article

Hypothesis testing and the error of the third kind
Dieter Rasch
Abstract | PDF of the full article

 


An analysis of the discrete-option multiple-choice item type
Neal M. Kingston, Gail C. Tiemann, Harold L. Miller Jr. & David Foster

Abstract
The discrete-option multiple-choice (DOMC) item type was developed to curtail cheating and reduce the impact of testwiseness, but to date there has been only one published study of its statistical characteristics, and that was based on a relatively small sample. This study was implemented to investigate the psychometric properties of the DOMC item type and systematically compare it with the traditional multiple-choice (MC) item type. Test forms written to measure high school-level mathematics were administered to 802 students from two large universities. Results showed that across all forms, MC items were consistently easier than DOMC items. Item discriminations between DOMC and MC items varied randomly, with neither performing consistently better than the other. Results of a confirmatory factor analysis were consistent with a single factor across the two item types.

Key words: computer-based testing, innovative item types, discrete-option multiple-choice, multiple-choice, testwiseness


Neal Kingston, PhD
Center for Educational Testing and Evaluation
Joseph R. Pearson Hall
1122 West Campus Road, Room 738
Lawrence, Kansas, 66045
nkingsto@ku.edu



Teachers’ burnout is related to lowered speed and lowered quality for demanding short-term tasks
Tuulia M. Ortner

Abstract
The present study addressed the relation between self-reported burnout experiences and objective performance in teachers. As an alternative to the distal long-term criteria that are usually used, we followed a new approach using tasks to assess samples of short-term behavior in simulated stressful situations. Tasks requiring low cognitive demand had to be solved in situations involving (a) task collision, (b) a hindrance of the scheduled course of action, or (c) awkward working conditions. Performance data and self-reported work-related experiences were obtained from 103 school teachers. Scales related to self-reported exhaustion revealed several significant medium-sized correlations with objective performance speed as well as performance quality (between r = .23 and r = .30). Teachers suffering from burnout showed lower speed when hindered in their scheduled course of action and lower quality of task effort under awkward working conditions.

Key words: Burnout, Objective Personality Test, Teacher, Stress Resistance, Performance, Exhaustion, Working Speed


Tuulia M. Ortner, PhD
Freie Universität Berlin
Department of Psychology
Division for Psychological Assessment
Habelschwerdter Allee 45
14195 Berlin, Germany
tuulia.ortner@fu-berlin.de



Robustness of multidimensional analyses against local item dependence
Steffen Brandt

Abstract
The negative impact of local item dependence (LID) on analyses using item response theory (IRT) has been investigated by many authors. Hitherto, however, these investigations have focused on unidimensional analyses. The objective of the simulation study presented here is to investigate the impact of LID on multidimensional analyses. The chosen simulation design considers tests with LID due to item bundles and compares the results of multidimensional analyses obtained with varying item bundle effect sizes, varying correlation levels for the latent traits, and different test designs. The results indicate that in multidimensional analyses LID biases the covariance estimates and that the direction of the bias interacts with the chosen test design.

Key words: test construction, item response theory, local item dependence, multidimensionality


Steffen Brandt, PhD
Ebereschenweg 28
24161 Altenholz, Germany
steffen.brandt@artofreduction.com



The Genetics Lab: Acceptance and psychometric characteristics of a computer-based microworld assessing complex problem solving
Philipp Sonnleitner, Martin Brunner, Samuel Greiff, Joachim Funke, Ulrich Keller, Romain Martin, Cyril Hazotte, Hélène Mayer & Thibaud Latour

Abstract
Computer-based problem solving scenarios or “microworlds” are contemporary assessment instruments frequently used to assess students’ complex problem solving behavior, a key aspect of today’s educational curricula and assessment frameworks. Surprisingly, almost nothing is known about their (1) acceptance or (2) psychometric characteristics in student populations. This article introduces the Genetics Lab (GL), a newly developed microworld, and addresses this lack of empirical data in two studies. Findings from Study 1, with a sample of 61 ninth graders, show that acceptance of the GL was high and that the internal consistencies of the scores obtained were satisfactory. In addition, meaningful intercorrelations between the scores supported the instrument’s construct validity. Study 2 drew on data from 79 ninth graders in differing school types. Medium to large correlations with figural and numerical reasoning scores provided further evidence for the instrument’s construct validity. In terms of external validity, substantial correlations were found between academic performance and scores on the GL, most of which were higher than those observed between academic performance and the reasoning scales administered. In sum, this research closes an important empirical gap by (1) demonstrating acceptance of the GL and (2) establishing satisfactory psychometric properties of its scores in student populations.

Key words: microworlds, complex problem solving, acceptance, computer-based testing, educational assessment


Philipp Sonnleitner, MSc
EMACS research unit
University of Luxembourg
Campus Walferdange
7201 Walferdange, Luxembourg
philipp.sonnleitner@uni.lu



Applying a construction rational to a rule based designed questionnaire using the Rasch model and LLTM
Manuel Reif

Abstract
The use of questionnaires is widespread in psychological assessment. Typically, items are constructed more or less “intuitively” and their difficulty is determined in empirical studies. In order to improve the construct validity of questionnaire items, an approach to constructing questionnaires by means of construction rationals is demonstrated. This approach has its origins in the cognitive sciences and intelligence research and is applied in the present study to construct and validate a self-report extraversion scale. To this end, a construction rational was developed as a basis for item generation, allowing the difficulty of an item to be predicted. After establishing an item pool that fit the Rasch model, the appropriateness of the developed construction rational was assessed by means of the linear logistic test model (LLTM). Item difficulty could not be fully explained by the proposed construction rational. Nevertheless, this approach is a reasonable method for constructing questionnaire items in a rational rather than intuitive manner. A further benefit is that an appropriate construction rational would enable automated item generation.

Key words: IRT, LLTM, Construction Rational, Extraversion, Questionnaire


Manuel Reif
Division of Psychological Assessment and Applied Psychometrics
Faculty of Psychology
University of Vienna
Liebiggasse 5
1010 Vienna, Austria
manuel.reif@univie.ac.at



Hypothesis testing and the error of the third kind
Dieter Rasch

Abstract
In this note it is shown that the concept of an error of the third kind (type-III error) stems from a misunderstanding of the concept of hypothesis testing in mathematical statistics. If the alternative hypothesis in a statistical testing problem states that the null hypothesis is wrong, then an error of the third kind cannot occur. This error can only be defined in more-decision-problems, i.e. problems in which we have to choose one of more than two possibilities. Further, the problem of the power of tests between a null and an alternative hypothesis is discussed. The notation of set theory is used because it allows for a concise description of this type of statistical testing problem. The notation is explained in an appendix.

Key words: Hypothesis testing, more-decision-problems, Neyman-Pearson Theory, power of a test


Dieter Rasch, PhD
University of Life Sciences Vienna
renate-rasch@t-online.de


