
Psychological Test and Assessment Modeling


Published under Creative Commons: CC-BY-NC Licence


2021-4

Note: Reducing the risk of lucky guessing as well as avoiding the contamination of speed and power in (paper-pencil) group-testing – illustrated by a new test-battery
Klaus D. Kubinger
PDF of the full article


Conditions leading to the observation of a difficulty effect and its consequence for confirmatory factor analysis
Karl Schweizer, Christine DiStefano, Stefan Troche
PDF of the full article


Rasch Joint Maximum Likelihood Estimation Algorithms and Missing Data
Adam E. Wyse
PDF of the full article


Are you Swiping, or Just Marking? Exploring the Feasibility of Psychological Testing on Mobile Devices
Marco Koch, Corina Möller, Frank M. Spinath
PDF of the full article


Chances and Psychometric Limits of Questionnaires for Field-Specific Interest: An Example from Mechanical Engineering
Lisbeth Weitensfelder, Ilona Herbst
PDF of the full article


Simulation-Based Learning of Complex Skills: Predicting Performance With Theoretically Derived Process Features
Laura Brandl, Constanze Richters, Anika Radkowitsch, Andreas Obersteiner, Martin R. Fischer, Ralf Schmidmaier, Frank Fischer, Matthias Stadler
PDF of the full article


 

Note: Reducing the risk of lucky guessing as well as avoiding the contamination of speed and power in (paper-pencil) group-testing – illustrated by a new test-battery

Klaus D. Kubinger


Abstract
This Note illustrates how two typical problems can be solved when a psychological test is administered to a group of testees simultaneously rather than only individually. Group-testing (by paper and pencil) commonly uses both items with a multiple-choice response format and time limits for working on the items. The test-battery AID-G (Intelligence Diagnosticum for Group administration; Kubinger & Hagenmüller, 2019) first shows that multiple-choice response formats which reduce the probability of lucky guessing actually work in practice. Moreover, even the use of a free-response format is occasionally manageable, though this is hardly established in other tests for group administration. Second, it shows that two IRT-based (item response theory) options are realizable in practice to prevent the measurement of “power” from being contaminated with “speed”: only the items the testee actually worked on are scored and, optionally, the completion time for a test is restricted to the time the slowest testee of the group needs to work on a defined minimum number of items. Incidentally, this test-battery also allows the application of various test versions with different levels of item difficulty when such an adaptation, testee by testee, is desirable within the group.
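
As a purely illustrative sketch of the scoring rule mentioned above (scoring only the items a testee actually worked on), the following Python snippet estimates a testee's ability under the dichotomous Rasch model from the attempted items alone; the item difficulties, the helper name rasch_ability, and the example data are assumptions for illustration, not part of the AID-G.

import numpy as np
from scipy.optimize import brentq

def rasch_ability(responses, difficulties):
    """Maximum likelihood ability estimate under the Rasch model,
    using only items with an observed response (np.nan = not attempted)."""
    x = np.asarray(responses, dtype=float)
    b = np.asarray(difficulties, dtype=float)
    attempted = ~np.isnan(x)          # unattempted items are ignored, not scored as wrong
    x, b = x[attempted], b[attempted]

    def score_equation(theta):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))   # P(solving the item | theta, b)
        return np.sum(x - p)                     # first derivative of the log-likelihood

    return brentq(score_equation, -6.0, 6.0)     # solve for theta on a plausible range

# Hypothetical testee who reached only the first five of eight items:
difficulties = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])
responses = np.array([1, 1, 0, 1, 0, np.nan, np.nan, np.nan])
print(round(rasch_ability(responses, difficulties), 2))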

Keywords: Rasch model, speed and power, multiple-choice, lucky guessing, group-testing


Klaus D. Kubinger, PhD.
Professorial Research Fellow
University of Vienna
Faculty of Psychology
Liebiggasse 5
1010 Vienna
Austria
klaus.kubinger@univie.ac.at



Conditions leading to the observation of a difficulty effect and its consequence for confirmatory factor analysis

Karl Schweizer, Christine DiStefano, and Stefan Troche


Abstract
Psychometric research has posited that a difficulty effect in the item-level input for factor analysis serves as the precondition for observing a difficulty factor. Two studies are reported that investigated and confirmed this hypothesis. First, it was demonstrated that extreme and same-sized difficulty levels of the items resulted in deviations of the input to factor analysis from the expected systematic variation. Difficulty levels, as defined by McDonald and Ahlawat (1974), that were close to the upper limit for such levels were used for this purpose. Subsequently, it was demonstrated that data showing this effect were likely to yield model misfit in structural investigations with the one-factor CFA model. According to these results, the difficulty effect is a method effect caused by the difficulty condition of the data. This condition is the source of additional systematic variation that is not accounted for by the intended latent variable. This additional variation leads to model misfit unless it is captured by a difficulty factor.
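
For readers who want to see the mechanism behind such an effect, the following minimal simulation (an assumption for illustration, not the studies' actual design) dichotomizes items generated from a single latent factor at a moderate versus an extreme common difficulty level and compares the mean inter-item correlations that would serve as input to factor analysis.

import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items, loading = 5000, 6, 0.7

# continuous item responses driven by one common factor
factor = rng.standard_normal(n_persons)
latent = loading * factor[:, None] + np.sqrt(1 - loading**2) * rng.standard_normal((n_persons, n_items))

def mean_interitem_r(threshold):
    items = (latent > threshold).astype(float)      # dichotomize at a common difficulty level
    r = np.corrcoef(items, rowvar=False)
    return r[np.triu_indices(n_items, k=1)].mean()

print("moderate difficulty (p about .50):", round(mean_interitem_r(0.0), 3))
print("extreme difficulty  (p about .07):", round(mean_interitem_r(1.5), 3))

Under these toy assumptions the extreme, same-sized difficulty level markedly attenuates the inter-item correlations, illustrating how dichotomization at extreme thresholds distorts the covariance input that a one-factor CFA model is expected to reproduce.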

Keywords: difficulty effect, difficulty factor, factor analysis, method effect


Karl Schweizer
Institute of Psychology,
Goethe University Frankfurt
Theodor-W.-Adorno-Platz 6
60323 Frankfurt a. M.
Germany
K.Schweizer@psych.uni-frankfurt.de

 

 


 

Rasch Joint Maximum Likelihood Estimation Algorithms and Missing Data

Adam E. Wyse


Abstract
This article examines two approaches for performing joint maximum likelihood estimation with the Rasch model and how these estimation algorithms may be impacted by the amount and type of missing data. The two estimation algorithms are the Newton-Raphson procedure and a proportional curve fitting algorithm. Using simulated data from two different credentialing programs, we found that the amount and type of missing data can impact the amount of error and variability observed in item and person parameters. However, the proportional curve fitting and Newton-Raphson algorithms tended to give virtually identical results. The only differences between the two algorithms occurred when missing data were created using a computerized adaptive testing algorithm and there were fewer than 50 scored item responses. In some of these cases, there were very small differences between the two algorithms, with the proportional curve fitting algorithm performing slightly better. It is suggested that in most practical applications one should expect very similar results no matter which algorithm is employed to estimate item and person parameters.
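
As a rough, self-contained sketch (not the article's implementation), the following Python code runs joint maximum likelihood estimation for the dichotomous Rasch model with alternating, damped Newton-Raphson updates in which missing responses simply drop out of the likelihood; the booklet design, sample sizes, and the rasch_jml helper are assumptions for illustration.

import numpy as np

def rasch_jml(X, n_iter=100):
    """JML for the Rasch model; np.nan marks responses that were never administered."""
    X = np.asarray(X, dtype=float)
    obs = ~np.isnan(X)                                   # indicator of scored responses
    # persons with all-correct or all-wrong observed responses have no finite estimate
    raw, n_obs = np.nansum(X, axis=1), obs.sum(axis=1)
    keep = (raw > 0) & (raw < n_obs)
    X, obs = X[keep], obs[keep]
    theta = np.zeros(X.shape[0])                         # person abilities
    beta = np.zeros(X.shape[1])                          # item difficulties
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        resid = np.where(obs, X - p, 0.0)                # missing cells contribute nothing
        info = np.where(obs, p * (1.0 - p), 0.0)
        theta += np.clip(resid.sum(axis=1) / info.sum(axis=1), -1.0, 1.0)  # damped NR step
        beta -= np.clip(resid.sum(axis=0) / info.sum(axis=0), -1.0, 1.0)
        beta -= beta.mean()                              # fix the scale of the difficulties
    return theta, beta

# Hypothetical two-booklet design with structurally missing responses:
rng = np.random.default_rng(0)
true_theta, true_beta = rng.normal(0, 1, 200), np.linspace(-1.5, 1.5, 10)
P = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_beta[None, :])))
X = (rng.random((200, 10)) < P).astype(float)
X[:100, 8:] = np.nan                                     # booklet A skips the last two items
X[100:, :2] = np.nan                                     # booklet B skips the first two items
theta_hat, beta_hat = rasch_jml(X)
print(np.round(beta_hat, 2))

Only the Newton-Raphson variant is sketched here; the proportional curve fitting algorithm examined in the article is not reproduced.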

Keywords: Rasch model, estimation algorithms, missing data, joint maximum likelihood


Adam E. Wyse, Ph.D.
1813 Chatham Ave
Arden Hills
MN 55112
adam.wyse[at]renaissance.com

 

 


 

Are you Swiping, or Just Marking? Exploring the Feasibility of Psychological Testing on Mobile Devices

Marco Koch, Corina Möller, Frank M. Spinath


Abstract
Despite the many benefits of computer-based testing, many existing computer-based tests employ response formats that could be used equally well in paper-pencil tests. In this article we explore the feasibility of psychological testing on mobile devices, an approach that combines the advantages of computer-based testing with the flexibility of paper-pencil tests. As an example, we present the Attention Swiping Task (AST) for assessing sustained attention on mobile devices in proctored or self-administered settings. N = 114 university students were tested with the AST, another test measuring sustained attention (FAIR-2), and a figural matrices test (DESIGMA) measuring participants’ intelligence (IQ) to evaluate the psychometric properties and construct validity of the AST. Results indicated that the AST had a satisfactory distribution of item difficulties (MDiff = .58, SDDiff = .36) and part-whole correlations (MPWC = .55, SDPWC = .11), and excellent reliability (rtt = .99). Moreover, test indices of the AST were highly correlated with the FAIR-2. Furthermore, there were small but significant positive correlations between AST test indices and participants’ IQ (r = .23 - .25, p = .04 - .02). These results indicate that the AST can be reliably applied for measuring sustained attention on mobile devices. Moreover, in contrast to existing tests of sustained attention, the AST can be customized easily, is applicable in (self-administered) online studies, and its source code is released freely under the GNU GPLv3 license. This also serves as a foundation for the development of further psychological tests for mobile devices.
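
For orientation, the classical item statistics reported above (item difficulty, corrected part-whole correlation, and an internal-consistency estimate of reliability) can be computed along the following lines; the simulated response matrix is only a placeholder and is unrelated to the AST data.

import numpy as np

def item_statistics(X):
    """X: persons x items matrix of dichotomous item scores (1 = solved)."""
    X = np.asarray(X, dtype=float)
    n_persons, n_items = X.shape
    difficulty = X.mean(axis=0)                     # proportion of testees solving each item
    total = X.sum(axis=1)
    part_whole = np.array([                         # item vs. total score without that item
        np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(n_items)
    ])
    alpha = n_items / (n_items - 1) * (1 - X.var(axis=0, ddof=1).sum() / total.var(ddof=1))
    return difficulty, part_whole, alpha            # alpha = Cronbach's alpha

# Placeholder data: 114 simulated testees, 20 items driven by one latent ability.
rng = np.random.default_rng(0)
ability, b = rng.normal(0, 1, 114), rng.normal(0, 1, 20)
P = 1.0 / (1.0 + np.exp(-(ability[:, None] - b[None, :])))
X = (rng.random((114, 20)) < P).astype(int)
difficulty, part_whole, alpha = item_statistics(X)
print(difficulty.mean().round(2), part_whole.mean().round(2), round(alpha, 2))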

Keywords: sustained attention, computer-based test (CBT), test development and evaluation, cognitive abilities, mobile device adoption


Marco Koch
Campus A1.3
66123 Saarbrücken
Germany
marco.koch@uni-saarland.de

 

 



Chances and Psychometric Limits of Questionnaires for Field-Specific Interest: An Example from Mechanical Engineering

Lisbeth Weitensfelder & Ilona Herbst


Abstract
While interest questionnaires based on Holland's RIASEC model face the challenge of finding an optimal congruence measure to match a person’s interests with the environment, tailor-made specific interest questionnaires avoid this problem in a pragmatic way: instead of yielding a general interest profile, only the interests needed in the particular field are assessed. That field can then be assessed in more detail, while other areas, and therefore a typological interest profile, are not covered. The article presents the construction, implementation and psychometric analyses (according to the Rasch model) of a field-specific interest questionnaire for mechanical engineering, focussing on the struggle of how to create a uni-dimensional and therefore fair measurement.

Keywords: field-specific interests; interest assessment; questionnaire design; Rasch model


Lisbeth Weitensfelder
Department of Environmental Health
Center for Public Health
Medical University of Vienna
lisbeth.weitensfelder@meduniwien.ac.at


 


Simulation-Based Learning of Complex Skills: Predicting Performance With Theoretically Derived Process Features

Laura Brandl, Constanze Richters, Anika Radkowitsch, Andreas Obersteiner, Martin R. Fischer, Ralf Schmidmaier, Frank Fischer, Matthias Stadler


Abstract
Simulation-based learning is often used to facilitate complex problem-solving skills, such as collaborative diagnostic reasoning (CDR). Simulations can be especially effective if additional instructional support is provided. However, adapting instructional support to learners’ needs remains a challenge when performance is only assessed as an outcome after using the simulation. Researchers are therefore increasingly interested in whether process data analyses can predict the outcomes of simulated learning tasks and whether such analyses allow early identification of the need for support. This study developed a random forest classification model based on theoretically derived process indicators to predict success in a simulated learning environment. The context of the simulated learning environment was medicine: internists interacted with a simulated radiologist to identify possible causes of an illness. Participants’ CDR was conceptualized via log-data, coded on a broad, domain-general level for better generalizability. Results showed a satisfactory prediction rate for CDR performance, indicated by diagnostic accuracy. The model predicted both accurate and inaccurate diagnoses and was therefore suitable for making statements about performance using only CDR process data. The findings contribute to the development of more adaptive instructional support within simulation-based learning by making it possible to predict individual learning outcomes while learners are still working in the simulation.
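
The modeling step described above can be sketched roughly as follows: a random forest classifier is trained on theoretically derived process features to predict whether the final diagnosis is accurate. The feature set, sample size, and data below are placeholders, not the study's log-data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cases = 300

# placeholder process features, e.g. frequencies and durations of evidence
# generation, evidence evaluation, and hypothesis-related collaborative activities
features = rng.normal(size=(n_cases, 4))
# placeholder outcome: 1 = accurate final diagnosis, 0 = inaccurate
diagnostic_accuracy = (features[:, 0] + 0.5 * features[:, 2]
                       + rng.normal(scale=1.0, size=n_cases) > 0).astype(int)

model = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(model, features, diagnostic_accuracy,
                         cv=5, scoring="balanced_accuracy")
print("cross-validated balanced accuracy:", scores.mean().round(2))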

Keywords: simulation-based learning, complex problem solving, learning analytics, process-based performance prediction, adaptive instructional support


Laura Brandl
Leopoldstr. 13
80802 München
Germany.
L.Brandl@psy.lmu.de

 


 


Psychological Test and Assessment Modeling
Volume 63 · 2021 · Issue 4

Pabst, 2021
ISSN 2190-0493 (Print)
ISSN 2190-0507 (Internet)
