Item analysis and evaluation in the examinations in the faculty of medicine at Ondokuz Mayis University

  • L Tomak
  • Y Bek
Keywords: Classical test theory, item analysis, item difficulty, item discrimination, item response theory, reliability

Abstract

Background: Item analysis is an effective method for evaluating multiple‑choice achievement tests. This study aimed to compare the classical and latent class models used in item analysis, and to assess their efficacy in evaluating the examinations of the medical faculty.
Materials and Methods: The achievement tests of the medical faculty were evaluated using two approaches: classical test theory and latent class models. Among the classical methods, Cronbach’s alpha, split‑half reliability, item discrimination, and item difficulty were investigated. In the latent class group, several models of item response theory (IRT) and their statistics were compared.
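The classical statistics named above have simple closed forms. As an illustrative sketch (not the authors' code), the following computes item difficulty (proportion correct), item discrimination (point‑biserial correlation of each item with the rest score), and Cronbach’s alpha on a hypothetical 0/1 scored response matrix:

```python
import numpy as np

# Hypothetical scored responses: rows = examinees, columns = items (1 = correct).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(50, 5)).astype(float)

# Item difficulty: proportion of examinees answering each item correctly.
difficulty = X.mean(axis=0)

# Item discrimination: point-biserial correlation of each item with the
# total score on the remaining items (corrected item-total correlation).
total = X.sum(axis=1)
discrimination = np.array([
    np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(X.shape[1])
])

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total).
k = X.shape[1]
alpha = (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum() / total.var(ddof=1))
```

On real examination data, difficulty values near 1 flag easy items, values near 0 flag difficult ones, and a low or negative corrected item‑total correlation flags an item that fails to discriminate.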
Results: Reliability statistics were all above 0.87. According to both the classical and the item response theory evaluations, item no. 7 was easy, item no. 45 difficult, and item no. 64 fairly difficult. In terms of item discrimination, item no. 45 had low, item no. 7 medium, and item no. 64 high discrimination. The distribution graph showed that the examinees’ ability levels were sufficient to select the correct answer.
Conclusion: In this study, the classical and latent methods yielded similar results. IRT can be considered ideal at the mathematical level, and when its assumptions are satisfied it can readily handle assessment and measurement for most types of complex problems. Classical theory is easy to understand and apply, whereas IRT is sometimes rather difficult to understand and implement.
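To make the IRT comparison concrete, a minimal sketch of the two‑parameter logistic (2PL) model is shown below; the specific parameter values are illustrative assumptions, not figures from the article. The model gives the probability of a correct response as a function of examinee ability theta, item discrimination a, and item difficulty b:

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item characteristic curve: P(correct response | ability theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At average ability (theta = 0), an easy item (b = -1) is answered
# correctly far more often than a hard item (b = 2).
easy = p_correct(0.0, a=1.0, b=-1.0)
hard = p_correct(0.0, a=1.0, b=2.0)
```

By construction, P equals 0.5 exactly when theta equals b, and a larger a makes the curve steeper around that point, which is what gives the item its discriminating power.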


Section: Articles

Journal Identifiers

eISSN: 1119-3077