Psychological Test and Assessment Modeling



2007-2

Contents, Volume 49, 2007, Issue 2


KLAUS D. KUBINGER, DIETER RASCH & MARIE ŠIMECKOVA
Testing a correlation coefficient’s significance: Using H0: 0 < ρ ≤ λ is preferable to H0: ρ = 0
Abstract | PDF of the full article

OLGA KUNINA, OLIVER WILHELM, MAREN FORMAZIN, KATHRIN JONKMANN & ULRICH SCHROEDERS
Extended criteria and predictors in college admission: Exploring the structure of study success and investigating the validity of domain knowledge
Abstract | PDF of the full article

JOACHIM HÄUSLER, MARKUS SOMMER & STEFAN CHROUST
Optimizing technical precision of measurement in computerized psychological assessment on Windows platforms
Abstract | PDF of the full article

HAGEN C. FLEHMIG, MICHAEL STEINBORN, ROBERT LANGNER, ANJA SCHOLZ & KARL WESTHOFF
Assessing intraindividual variability in sustained attention: reliability, relation to speed and accuracy, and practice effects
Abstract | PDF of the full article

MARCUS ROTH & PHILIPP YORCK HERZBERG
The resilient type: ‘Simply the best’ or merely an artifact of social desirability?
Abstract | PDF of the full article

RENÉ T. PROYER
Convergence of conventional and behavior-based measures: Towards a multimethod approach in the assessment of vocational interests
Abstract | PDF of the full article

 


Testing a correlation coefficient’s significance: Using H0: 0 < ρ ≤ λ is preferable to H0: ρ = 0
KLAUS D. KUBINGER, DIETER RASCH & MARIE ŠIMECKOVA

Abstract
This paper proposes an alternative approach to significance testing in correlation analysis. It argues that testing the null-hypothesis H0: ρ = 0 versus H1: ρ > 0 is not an optimal strategy, because rejecting the null-hypothesis as traditionally reported in social science papers - i.e. a significant correlation coefficient (regardless of how many asterisks it is adorned with) - very often has no practical meaning. Confirming that a population’s correlation coefficient is merely unequal to zero yields little substantive information unless the coefficient is sufficiently large, which in turn means that the correlation explains a relevant amount of variance. The alternative approach instead tests the composite null-hypothesis H0: 0 < ρ ≤ λ for some 0 < λ < 1 in place of the simple null-hypothesis H0: ρ = 0. Ideally, the value of λ is chosen as the square root of the expected relative proportion of variance explained by a linear regression, the latter being the so-called coefficient of determination, ρ². The respective test statistic is only asymptotically normally distributed; a simulation study in this paper shows that the actual type-I risk is hardly larger than the nominal risk. An SPSS syntax is given for this test, in order to fill the respective gap in the existing SPSS menu. Furthermore, it is demonstrated how to calculate the sample size according to given precision requirements. With the approach given here, no "use of asterisks" would lead researchers astray - that is, cause them to misinterpret the type-I risk or, even worse, to overestimate the importance of a study because they have misjudged the substantive relevance of a given correlation coefficient ρ > 0.

Key words: null-hypothesis testing, correlation coefficient, coefficient of determination, linear dependency, sample size determination


Klaus D. Kubinger, PhD
Psychological Assessment and Applied Psychometrics
Center of Testing and Consulting
Faculty of Psychology
University of Vienna
Liebiggasse 5
1010 Vienna
Austria, Europe
klaus.kubinger@univie.ac.at

Dieter Rasch, DSc
Institute of Statistics
University of Agriculture, Vienna
Gregor Mendel Straße 33
1180 Vienna
Austria, Europe
dieter.rasch@boku.ac.at

Marie Šimeckova
Institute of Animal Science
Praha - Uhríneves
Prátelství 815, 104 01 Praha 114
Czechia, Europe
simeckova.marie@vuzv.cz



Extended criteria and predictors in college admission: Exploring the structure of study success and investigating the validity of domain knowledge
OLGA KUNINA, OLIVER WILHELM, MAREN FORMAZIN, KATHRIN JONKMANN & ULRICH SCHROEDERS

Abstract
The utility of aptitude tests and intelligence measures in predicting success in college is one of the empirically best-supported results in ability research. However, the structure of the criterion "study success" has not yet been appropriately investigated. Moreover, it remains unclear which aspect of intelligence - fluid or crystallized intelligence - contributes most to the prediction. In three studies we investigated the dimensionality of the criterion achievements as well as the relative contributions of competing ability predictors. In the first study, the dimensionality of college grades was explored in a sample of 629 alumni. A measurement model with two correlated latent factors, distinguishing undergraduate from graduate college grades, had the best fit to the data. In the second study, a group of 179 graduate students completed a Psychology knowledge test and provided their available undergraduate college grades. A model separating a general latent factor for Psychology knowledge from a nested method factor for college grades and a second nested factor for "experimental orientation" had the best fit to the data. In the third, prospective study the predictive power of domain-specific knowledge tests in Mathematics, English, and Biology was investigated; a sample of 387 undergraduate students additionally completed a compilation of fluid intelligence tests. The results of this study indicate, as expected, that a) ability measures are incrementally predictive over school grades in predicting exam grades, and b) knowledge tests from relevant domains are incrementally predictive over fluid intelligence. The results of these studies suggest that the criteria of college admission tests deserve and warrant more attention, and that domain-specific ability indicators can contribute to the predictive validity of established admission tests.

Key words: College admission, incremental validity, knowledge tests, fluid intelligence, crystallized intelligence


Olga Kunina
IQB (Institute for Educational Progress)
Humboldt-Universität zu Berlin
Jägerstr. 10/11
10117 Berlin
Germany
olga.kunina@iqb.hu-berlin.de



Optimizing technical precision of measurement in computerized psychological assessment on Windows platforms
JOACHIM HÄUSLER, MARKUS SOMMER & STEFAN CHROUST

Abstract
Reaction times and response latencies are used to measure a variety of ability and personality traits. When reaction times to rather elementary cognitive tasks are measured, the inter-individual variance is usually small, in the sense that the central 50 percent of a norm population lie within a range of less than 100 ms. Technical measurement errors therefore have the potential to seriously affect the validity of diagnostic judgments based on such measures. The aim of this paper is thus to investigate the magnitude of possible technical errors of measurement and to suggest ways to prevent them, or at least to take them into account in the diagnostic process.
In Study I, a highly precise 'artificial respondent' was used to simulate reactions corresponding to a given percentile rank on three different tests (DG-Lokation CORPORAL, Alertness TAP-M, RT/S9 Vienna Test System) on 11 different computer systems. The result output of the tests was compared to the reaction times actually produced by the artificial respondent. The results show that there are detectable errors of measurement, depending on the hardware and software configuration of the computer system used. In the test DG-Lokation, this bias caused an offset in the test's main variable of up to 20 percentile ranks.
In Study II, a self-calibration unit that is part of the Vienna Test System (Version 6.40) was investigated using the same experimental setup. After calibration, the detected bias was reduced to a magnitude of about 1 percentile rank on all computer systems tested.
It can thus be concluded that time-critical computer-based tests typically bear the risk of technical errors of measurement. Depending on how a test is programmed, the errors arising on some computer configurations can even cause severe changes in diagnostic judgment. In contrast, self-calibration proved to be an effective tool, permitting users not only to monitor but also to ensure the precision of measurement, independent of the properties of the computer system on which they administer the test.

Key words: computerised testing, technical precision of measurement, self-calibration, reaction tests
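The timing problem the paper addresses can be made concrete by probing the granularity of the timer a test program relies on. The following sketch (a generic illustration, unrelated to the Vienna Test System's calibration unit) estimates the smallest observable tick of the system's high-resolution clock:

```python
import time

def estimate_timer_resolution(samples=10000):
    """Estimate the smallest observable tick of the high-resolution
    clock by collecting deltas between successive distinct readings."""
    deltas = []
    prev = time.perf_counter_ns()
    for _ in range(samples):
        now = time.perf_counter_ns()
        if now != prev:
            deltas.append(now - prev)
            prev = now
    return min(deltas) if deltas else None  # nanoseconds, or None

resolution_ns = estimate_timer_resolution()
```

If the resolution is coarse relative to the inter-individual spread of the measured trait (less than 100 ms for the central 50 percent, as noted above), the clock alone can shift a respondent's percentile rank, which is why external calibration against a known reference signal is needed.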


Mag. Joachim Häusler
Hyrtlstraße 45
2340 Mödling
Austria
haeusler@schuhfried.at



Assessing intraindividual variability in sustained attention: reliability, relation to speed and accuracy, and practice effects
HAGEN C. FLEHMIG, MICHAEL STEINBORN, ROBERT LANGNER, ANJA SCHOLZ & KARL WESTHOFF

Abstract
We investigated the psychometric properties of competing measures of sustained attention. 179 subjects were assessed twice within seven days with a test designed to measure sustained attention, or concentration, respectively. In addition to traditional performance indices [i.e., speed (MRT) and accuracy (E%)], we evaluated two intraindividual response time (RT) variability measures: the standard deviation (SDRT) and the coefficient of variation (CVRT). For the overall test, both indices were reliable. SDRT showed good to acceptable retest reliability for all subtests. For CVRT, retest reliability coefficients ranged from very good to unsatisfactory: while the reversed-word recognition test proved highly reliable, the mental calculation test and the arrows test were not sufficiently reliable. CVRT was only slightly correlated with MRT, whereas SDRT was highly correlated with MRT. In contrast to substantial practice gains for MRT, SDRT and E%, only CVRT proved to be stable. In conclusion, CVRT appears to be a promising index for assessing performance variability: it is reliable for the overall test, only moderately correlated with speed, and virtually unaffected by practice. However, before applying CVRT in practical assessment settings, additional research is required to elucidate the impact of task-specific factors on the reliability of this performance measure.

Key words: concentration; sustained attention; intraindividual variability; coefficient of variation; reliability; response time
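The three RT-based indices compared in the abstract are straightforward to compute from a series of response times; the following sketch (an illustration of the standard definitions, not the authors' analysis code) shows the relation CVRT = SDRT / MRT, which makes CVRT a speed-normalized variability measure:

```python
import statistics

def rt_variability(rts):
    """Compute mean RT (MRT), intraindividual standard deviation (SDRT),
    and coefficient of variation (CVRT = SDRT / MRT) from RTs in ms."""
    mrt = statistics.mean(rts)
    sdrt = statistics.stdev(rts)  # sample standard deviation
    return {"MRT": mrt, "SDRT": sdrt, "CVRT": sdrt / mrt}

# Example: three trials at 400, 500 and 600 ms
indices = rt_variability([400, 500, 600])  # MRT 500, SDRT 100, CVRT 0.2
```

Dividing by MRT is what decouples CVRT from overall speed: if practice makes a subject uniformly faster, MRT and SDRT both shrink while their ratio can stay constant, consistent with the reported stability of CVRT under practice.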


Hagen C. Flehmig
Psychologisches Institut II
Technische Universität Dresden
Zellescher Weg 10
01062 Dresden
Germany
Phone: +49-351-463-34004
hagen.flehmig@tu-dresden.de



The resilient type: ‘Simply the best’ or merely an artifact of social desirability?
MARCUS ROTH & PHILIPP YORCK HERZBERG

Abstract
Several findings within typological research give rise to the suspicion that Big-Five-based prototypes are highly influenced by social desirability (SD). Concerning the resilient type in particular, these findings suggest that this personality type may be nothing more than an artifact of a socially desirable response bias. To test this assumption, the degree of SD influence on the NEO types was compared with the degree of influence on the NEO dimensions in two studies (N1 = 326; N2 = 119). We found that the Big-Five types, in particular the resilient type, were strongly influenced by social desirability - but not to a greater degree than the NEO dimensions upon which the types are based. Thus, the SD susceptibility of the types merely reflects the SD susceptibility of the underlying dimensions. Implications for future typological prototype research are discussed in light of these results.

Key words: typology, Big Five, social desirability, resiliency


PD Dr. Marcus Roth
Universität Leipzig
Institut für Psychologie II
Seeburgstraße 14-20
04103 Leipzig
Germany
mroth@uni-leipzig.de

Dr. Philipp Yorck Herzberg
Universitätsklinikum Leipzig AöR
Abteilung für Medizinische Psychologie und Medizinische Soziologie
Philipp-Rosenthal-Straße 55
04103 Leipzig
Germany
herzberg@medizin.uni-leipzig.de



Convergence of conventional and behavior-based measures: Towards a multimethod approach in the assessment of vocational interests
RENÉ T. PROYER

Abstract
The main aim of this study was to evaluate the usefulness of different techniques for the assessment of vocational interests. In an empirical study (n = 264), a questionnaire, a nonverbal test, several objective personality tests, and a semi-projective test were administered in a single computerized session. All tests enable the assessment of vocational interests in terms of Holland's (1997) theory of vocational interests. The highest correlations with a Holland-type questionnaire were found for the questionnaire and the nonverbal test. In general, the objective personality tests were less homogeneous and showed lower correlations with the questionnaires. Nevertheless, all measures showed potential for the assessment of vocational interests. Improvements in the test material and scoring methods of the newly constructed tests are discussed, and a model for the combined use of the different assessment methods is presented. Future research directions and the role of a multimethod assessment strategy in practice are discussed.

Key words: vocational interests; RIASEC; objective personality tests; assessment of vocational interests


Dr. René T. Proyer
Section on Personality and Assessment
Department of Psychology
University of Zurich
Binzmühlestrasse 14/7
8050 Zurich
Switzerland
r.proyer@psychologie.uzh.ch







