
On the Comparability of Paper-Based and Computer-Based English Reading Comprehension Tests: A Study of High-Stakes English Reading Assessment

Abstract
According to the government bill Prop. 2017/18:14, the Swedish National Test will be digitalised by 2022. This study addressed the transition of high-stakes English reading comprehension tests from paper-based to computer-based delivery and examined whether the new delivery mode can, from an assessment and measurement point of view, be regarded as equivalent to the traditional paper-based test.

Theory: The study was based on classical test theory and language testing theory.

Method: The empirical basis for this quasi-experimental study consisted of data from large-scale pilot studies carried out for future high-stakes English tests for year 9 in Swedish compulsory school. The database made it possible to address the question of comparability in a quasi-experimental design in which data from schools that administered the test on computers could be compared with data from schools that administered it on paper. A total of 1,275 year-9 students of English participated in the study. As the participating groups were not necessarily equal and comparable before the study, variables controlling for some of the initial differences were added. An independent-samples t-test indicated that the groups were comparable in terms of their grades in English.

Results: On test items requiring constructed responses, students in the computer-based group gave on average lengthier responses than those in the paper-based group, but the difference did not translate into better performance with higher scores. Regression analyses revealed that delivery mode had no general effect on scores. Results indicated, however, that boys' scores increased when the test was computer-based, more so on the shorter reading comprehension tasks than on the extensive reading comprehension task. The findings of this study are discussed in relation to future research.
Degree
Student essay
URI
http://hdl.handle.net/2077/58003
Collections
  • Masteruppsatser (IPKL)
View/Open
gupea_2077_58003_1.pdf (2.225Mb)
Date
2018-10-26
Author
Asp, Lena
Keywords
Computer-based test
paper-based test
score comparability
language assessment
high-stakes testing
delivery mode
Series/Report no.
HT18-2920-001-PDA699
Language
eng

DSpace software copyright © 2002-2016 DuraSpace
Theme by Atmire NV