Several of the new demands and requirements stem from Large-Scale Assessments (LSAs). LSAs examine very large samples, often with the objectives of deriving sound comparisons between quite different populations, such as countries, and of drawing far-reaching inferences. The general objective of the Programme for International Student Assessment (PISA), for example, is to answer the rather general question of how well prepared students are to participate in society. The combination of examining very large samples, comparing rather different populations, and drawing far-reaching inferences creates several demanding methodological challenges.
Important methodological challenges that have not yet been sufficiently addressed concern aspects of the complex test designs used to distribute test items to participants, the handling of unwanted item context effects on both item parameter estimates and test performance, the calibration of data sets assessed with complex study designs, and the application of computerized adaptive testing (CAT) to meet specific diagnostic needs.
The special topic "Current Issues in Educational and Psychological Measurement: Design, Calibration, and Adaptive Testing" of Psychological Test and Assessment Modeling assembles a series of research papers covering these areas. The general methodological approach used in all papers is Item Response Theory (IRT). The special topic is spread over two issues of
Psychological Test and Assessment Modeling (4/2012, 1/2013).