
Psychological Test and Assessment Modeling



2017-1

The probability distribution of the response times in self-paced continuous search tasks
Ad van der Ven, Risto Hotulainen & Helena Thuneberg
Abstract | PDF of the full article



Special Issue:
Current Methodological Issues in Educational Large-Scale Assessments - Part II

Guest editors: Matthias Stadler, Samuel Greiff & Sabine Krolak-Schwerdt

Guest Editorial
Matthias Stadler, Samuel Greiff & Sabine Krolak-Schwerdt
PDF of the full article

Large-scale assessments: potentials and challenges in longitudinal designs
Jutta von Maurice, Sabine Zinn & Ilka Wolter
Abstract | PDF of the full article

Design considerations for planned missing auxiliary data in a latent regression context
Leslie Rutkowski
Abstract | PDF of the full article

Recent IRT approaches to test and correct for response styles in PISA background questionnaire data: a feasibility study
Lale Khorramdel, Matthias von Davier, Jonas P. Bertling, Richard D. Roberts & Patrick C. Kyllonen
Abstract | PDF of the full article

Using Rasch model generalizations for taking testees’ speed, in addition to their power, into account
Christine Hohensinn & Klaus D. Kubinger
Abstract | PDF of the full article

An item response theory analysis of problem-solving processes in scenario-based tasks
Zhan Shu, Yoav Bergner, Mengxiao Zhu, Jiangang Hao & Alina A. von Davier
Abstract | PDF of the full article



The probability distribution of the response times in a self-paced continuous search task
Ad van der Ven, Risto Hotulainen & Helena Thuneberg

Abstract

When psychologists began to use intelligence tests, they also used simple, overlearned tasks to determine the pattern of individual reaction times (RT). Measures of RT variation were proposed as possible indicators of intelligence. However, a fundamental question has remained partly unanswered: is there a theory that explains individual RT variation? In this paper, a theory is proposed for the response times obtained in the Attention Concentration Test. The test consists of two different conditions: a fixed condition and a random condition. For each of these two conditions, a different RT model was developed, both based on the assumption that the individual response times have an approximately shifted exponential distribution. Empirical data were obtained from two different samples (N = 362, N = 334) of Finnish students. The validity of each model was checked by computing the intercept and slope of the linear regression of the standard deviation of the stationary response times on the mean corrected for shift; in this regression analysis, the standard deviation is the dependent variable and the shift-corrected mean the independent variable. The shift parameter was estimated using the smallest reaction time. The observed intercept and slope were compared with the intercept and slope predicted by the proposed models. The model for the fixed condition of the test did not hold; the model for the random condition, however, did. The findings were interpreted in terms of the arrangement of the targets as they occurred in each bar.

Keywords: Exponential distribution, Erlang distribution, Solleveld distribution, Solleveld-Erlang distribution, continuous performance tests, attention concentration test
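The regression check described in the abstract rests on a property of the shifted exponential distribution: its standard deviation equals its mean minus the shift. A minimal simulation sketch of that check (sample size, shift, and scale are chosen arbitrarily for illustration; this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(42)

def sd_and_corrected_mean(rts):
    """For shifted-exponential response times, the standard deviation should
    approximately equal the mean corrected for shift. The shift is estimated
    by the smallest observed RT, as in the abstract."""
    shift = rts.min()
    return rts.std(ddof=1), rts.mean() - shift

# Simulate one person's response times: shift 0.3 s, exponential scale 0.5 s
rts = 0.3 + rng.exponential(scale=0.5, size=5000)
sd, corrected_mean = sd_and_corrected_mean(rts)
print(sd, corrected_mean)  # the two values should be close
```

Under the model, regressing the standard deviation on the shift-corrected mean across many such series should therefore yield an intercept near 0 and a slope near 1.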


Ad van der Ven, PhD
Learning and Development
Department of Pedagogy
Radboud University Nijmegen
Montessorilaan 3
6500 HE Nijmegen, the Netherlands
a.vanderven@pwo.ru.nl



Large-scale assessments: potentials and challenges in longitudinal designs
Jutta von Maurice, Sabine Zinn & Ilka Wolter

Abstract

The article elaborates on the benefits and challenges of implementing a longitudinal design in large-scale assessments in educational research. The focus lies on educational trajectories and competence development within the population under study, as well as the relevant processes behind them. Taking the starting cohort of ninth graders of the German National Educational Panel Study as an example, more detailed information is given on sampling and selectivity at the start of a longitudinal large-scale study, as well as on tracking and bias while keeping a panel running. Concerning instrumentation, the challenges and methods connected with measuring competence development and validly recording individual biographies are discussed.

Keywords: longitudinal research, competence development, educational trajectories, sampling, selectivity


Jutta von Maurice, PhD
Leibniz Institute for Educational Trajectories
Executive Director of Research
Wilhelmsplatz 3
96047 Bamberg, Germany, WP3/02.41
jutta.von-maurice@lifbi.de



Design considerations for planned missing auxiliary data in a latent regression context
Leslie Rutkowski

Abstract

Although variations of a multiple-matrix sampling approach have been used in large-scale assessments for the design of achievement instruments, it is only recently that item sampling has been used to extend the content coverage of the student background questionnaire. In 2012, PISA implemented a so-called 3-form design, whereby four sets of background questionnaire items were administered. This design reduced the time required to respond to each questionnaire by about 25% (30 minutes compared to 41 minutes for all questions). An open problem for future rounds and assessments is whether and how to deal with missing background data when unbiased and sufficiently precise achievement estimation is paramount. Imputing background questionnaire data prior to estimating achievement is one means of treating these data; however, concerns over a sensible imputation model and preserving the quality of achievement estimates loom large. In the current paper, I take one step back and consider a precursor to statistical solutions for planned missing data. That is, I discuss possible questionnaire designs that create a more reasonable foundation from which to impute missing background questionnaire data. Among the design features discussed, I consider splitting constructs across questionnaires, planning missingness among well-correlated constructs, and administering intensive questionnaires to a smaller subsample (the so-called "two-method" design). In each case, I consider the feasibility of the design against the backdrop of information gains and the burden of preserving multidimensional achievement distributions.

Keywords: Planned missing designs, background questionnaires, rotated questionnaires
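The rotation behind a classic 3-form design can be sketched as follows: a common block goes to everyone, while each form omits one of three rotated blocks, so those items are missing by design for about a third of the sample. Block names and contents below are hypothetical:

```python
# Hypothetical item blocks: one common block X plus three rotated blocks.
blocks = {
    "X": ["x1", "x2"],  # administered to everyone
    "A": ["a1", "a2"],
    "B": ["b1", "b2"],
    "C": ["c1", "c2"],
}

# The three forms: each contains the common block and two rotated blocks,
# so every rotated item is planned-missing on exactly one form.
forms = [("X", "A", "B"), ("X", "B", "C"), ("X", "C", "A")]

def assign_form(student_index):
    """Cycle forms across students so planned missingness is balanced."""
    form = forms[student_index % len(forms)]
    items = [item for block in form for item in blocks[block]]
    return form, items

form, items = assign_form(0)
print(form, items)
```

Because each pair of rotated blocks co-occurs on some form, all pairwise covariances among rotated items remain estimable, which is what makes imputation of the planned-missing responses feasible.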


Leslie Rutkowski, PhD
Centre for Educational Measurement at University of Oslo
Postboks 1161 Blindern
0318 OSLO, Norway
leslie.rutkowski@cemo.uio.no



Recent IRT approaches to test and correct for response styles in PISA background questionnaire data: a feasibility study
Lale Khorramdel, Matthias von Davier, Jonas P. Bertling, Richard D. Roberts & Patrick C. Kyllonen

Abstract

A relatively new item response theory (IRT) approach (Böckenholt, 2012) and its multidimensional extension (Khorramdel & von Davier, 2014; von Davier & Khorramdel, 2013) to test and correct for response styles was applied to international large-scale assessment data - the Programme for International Student Assessment 2012 field trial - for the first time. The responses of n = 17,552 students at age 15 from 63 different countries to the two personality scales of openness and perseverance (student questionnaire) were examined, and bias from an extreme response style (ERS) and a midpoint response style (MRS) was found. The aim of the study is not to report country-level results but to examine the potential of this methodology to test for and correct response style bias in an international context. It is shown that personality scales corrected for response styles can lead to more valid test scores, addressing the "paradoxical relationship" phenomenon of negative correlations between personality scales and cognitive proficiencies. ERS correlates negatively with the cognitive domains of mathematics and problem solving at the country mean level, while MRS shows positive correlations.

Keywords: Bifactor model, large-scale assessment, multidimensional item response theory (MIRT), rating scale, response style


Lale Khorramdel, PhD
Educational Testing Service
Princeton, NJ 08541, USA
lkhorramdel@ets.org



Using Rasch model generalizations for taking testees’ speed, in addition to their power, into account
Christine Hohensinn & Klaus D. Kubinger

Abstract

It is common practice in several achievement and intelligence tests to credit quick solutions with bonus points in order to gain more information about a testee's ability. However, using models of item response theory (IRT) for such approaches is rather rare. Within IRT, the main question is whether speed and power actually measure unidimensionally, that is, the same ability. In this paper, analyses were carried out on a sample of 9210 7th-grade students, participants in an optional assessment, the Informal Competence Measurement (IKM), within the programme of the Austrian Educational Standards. The following models were used: Rasch's multidimensional polytomous model as well as his unidimensional polytomous model (Rasch, 1961; see also Fischer, 1974, and Kubinger, 1989); and Fischer's speed-and-power two-step model (Fischer, 1973; see again Kubinger, 1989), which has never been applied since its introduction. The first models speed and power in a joint measurement approach, meaning that a separate ability/dimension is postulated for each of several combinations of power performance and speed performance. The unidimensional model additionally hypothesizes that the respective combinations, in other words "response categories", all refer to the same ability and differ only in a graded manner. Fischer's model considers speed and power as two completely independent abilities, to each of which the dichotomous Rasch model applies. Apart from model tests, information criteria are applied in order to reveal which model shows the best validity.

Keywords: Missing values, Rasch model, model fit, multicategorical IRT models, speed and power
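The bonus-point practice that motivates these polytomous models amounts to a graded scoring rule combining power and speed. A minimal sketch (the bonus time limit is hypothetical, not taken from the IKM):

```python
def score_response(correct, response_time, bonus_limit=20.0):
    """Polytomous scoring of one item: 0 = incorrect, 1 = correct but slow,
    2 = correct within the bonus window. These graded categories are what
    the polytomous Rasch models treat either as levels of one ability
    (unidimensional model) or as separate dimensions (multidimensional
    model), while Fischer's two-step model scores speed and power apart."""
    if not correct:
        return 0
    return 2 if response_time <= bonus_limit else 1

print([score_response(True, 12.0), score_response(True, 35.0),
       score_response(False, 8.0)])  # [2, 1, 0]
```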


Christine Hohensinn, PhD
Division of Psychological Assessment and Applied Psychometrics
Faculty of Psychology
University of Vienna
Liebiggasse 5
1010 Vienna, Austria
christine.hohensinn@univie.ac.at



An item response theory analysis of problem-solving processes in scenario-based tasks
Zhan Shu, Yoav Bergner, Mengxiao Zhu, Jiangang Hao & Alina A. von Davier

Abstract

Advances in technology result in evolving educational assessment design and implementation. New-generation assessments include innovative technology-enhanced items, such as simulations and game-like tasks that mimic an authentic learning experience. Two questions that arise with the implementation of technology-enhanced items are: (1) what data and which of their features may serve as meaningful measurement evidence, and (2) how to statistically and psychometrically characterize new data and reliably identify their features of interest. This paper focuses on one of the new data types, process data, which reflect students' process of solving a problem. A new model, a Markov-IRT model, is proposed to characterize and capture the unique features of each individual's response process during a problem-solving activity in scenario-based tasks. The structure of the model, its assumptions, the parameter space, and the estimation of the parameters are discussed in this paper. Furthermore, we illustrate the application of the Markov-IRT model and discuss its usefulness in characterizing students' response processes using an empirical example based on a scenario-based task from the NAEP-TEL assessment. Lastly, we illustrate the identification and extraction of features of students' response processes to be used as evidence for psychometric measurement.

Keywords: Markov process, IRT modeling, scenario-based task
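The Markov part of such a model starts from first-order transition probabilities estimated from logged action sequences. A minimal sketch with hypothetical action logs (the article's model additionally couples these transitions with IRT parameters, which is omitted here):

```python
from collections import Counter

# Hypothetical action logs from a scenario-based task, one list per student.
sequences = [
    ["start", "explore", "test", "submit"],
    ["start", "test", "explore", "test", "submit"],
    ["start", "explore", "explore", "submit"],
]

def transition_probs(seqs):
    """Estimate first-order Markov transition probabilities by counting
    observed action pairs and normalizing each row by its origin state."""
    pair_counts = Counter()
    state_counts = Counter()
    for seq in seqs:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            state_counts[a] += 1
    return {pair: n / state_counts[pair[0]] for pair, n in pair_counts.items()}

probs = transition_probs(sequences)
print(probs[("start", "explore")])  # 2 of the 3 sequences open with "explore"
```

Features of the estimated transition structure (e.g., how often a student tests before submitting) are the kind of process evidence the abstract proposes to feed into psychometric measurement.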


Zhan Shu, PhD
Educational Testing Service
660 Rosedale Road
Princeton, NJ 08541, USA
zshu@ets.org







