Development of computerized adaptive testing to measure students' logical thinking skills in science learning

Fitri Wulandari, Samsul Hadi, Haryanto

Abstract

The advancement of information technology has changed conventional methods of testing. Paper-based testing and evaluation have declined because they take longer to process and to provide feedback. This study aims to develop a computerized adaptive test (CAT) to measure students' logical thinking skills in science learning. The CAT development process followed the waterfall model, covering four main activities: (1) analysis, (2) design, (3) implementation, and (4) testing. The test material was standardized through a series of trials and item analysis with item response theory (IRT) to obtain the item parameters and characteristics that were then stored in an item-bank database. The item selection procedure uses a fuzzy algorithm that takes item difficulty, item discrimination, and the participants' responses as input. System testing showed that each student received different test items according to their ability level, and that the difficulty of the items administered matched the items' information characteristics. Feasibility testing showed that the highest grand mean among student respondents, 4.5, was for the performance aspect, indicating that the performance test results were highly consistent. The grand mean for all aspects was above 4, indicating that the developed CAT is feasible for measuring students' logical thinking skills in science learning.
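As context for the item selection step described in the abstract, the sketch below shows how an IRT-calibrated item bank can drive adaptive selection. It is a minimal Python illustration, not the authors' implementation: the two-parameter logistic (2PL) information function is standard IRT, but the item-bank layout, the function names, the maximum-information selection rule, and the fixed-step ability update are assumptions for demonstration; the paper's actual fuzzy rule base is not reproduced here.

import math

def item_information(theta, a, b):
    # Fisher information of a 2PL item at ability theta:
    # I(theta) = a^2 * P(theta) * (1 - P(theta)),
    # where a is discrimination and b is difficulty.
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta, bank, administered):
    # Choose the not-yet-administered item that is most informative
    # at the current ability estimate (a common stand-in for the
    # paper's fuzzy selection over difficulty and discrimination).
    candidates = [item for item in bank if item["id"] not in administered]
    return max(candidates,
               key=lambda item: item_information(theta, item["a"], item["b"]))

def update_ability(theta, correct, step=0.5):
    # Crude fixed-step update; an operational CAT would use
    # maximum-likelihood or Bayesian (EAP) estimation instead.
    return theta + step if correct else theta - step

# Hypothetical three-item bank with calibrated parameters a and b.
bank = [
    {"id": 1, "a": 1.2, "b": -0.5},
    {"id": 2, "a": 0.8, "b": 0.0},
    {"id": 3, "a": 1.5, "b": 1.0},
]
theta, administered = 0.0, set()
for _ in range(len(bank)):
    item = select_next_item(theta, bank, administered)
    administered.add(item["id"])
    theta = update_ability(theta, correct=True)  # pretend every answer is correct

Under this scheme, a correct answer raises the ability estimate so the next item drawn is harder, which mirrors the behavior reported in the system test: each student receives a different item sequence matched to their ability level.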

