The Usefulness of the CEFR in the Investigation of Test Versions Content Equivalence
| Year of publication | 2018 |
| --- | --- |
| Type | Article in Proceedings |
| Conference | Proceedings of the 11th Innovation in Language Learning International Conference |
| Keywords | expert judgement; CEFR descriptors; content and construct equivalence; test versions equivalence |
| Description | This paper presents one part of a PhD research project carried out within the broader framework of test version equivalence in a high-stakes testing context, specifically the Slovak upper-secondary school-leaving examination in English at CEFR level B1 (Maturita). The research project aims to investigate the extent to which the test versions used between 2012 and 2015 are equivalent and, on the basis of the results, to propose processes that could be implemented in test development in order to achieve test version equivalence. This paper focuses on the use of the CEFR as a tool for investigating content and construct equivalence, since the Maturita exam claims to be linked to the B1 level of the CEFR. A content structure analysis using expert judgement and an item-descriptor matching method were conducted, and agreement coefficients were calculated for the judges' ratings. Preliminary findings indicate that the CEFR descriptors can be problematic for describing test content and construct at a discrete, detailed level, as the descriptors differ in completeness, structure and level of specificity. The use of CEFR-based descriptive models is further complicated by the fact that the characteristics of a test item emerge from the interaction among test takers' proficiency, the design of the item, the expert judges' characteristics and their internalisation of the judgement task. The key findings of the analysis and the usefulness of the CEFR for this purpose will be discussed in the light of the whole research project, and possible further steps will be presented. |
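
The description refers to agreement coefficients calculated over the expert judges' item-descriptor matching decisions, but does not state which coefficient was used. As an illustration only, the sketch below computes Cohen's kappa, a common chance-corrected agreement measure for two judges assigning categorical labels; the descriptor labels and judge ratings are invented and do not come from the study.

```python
# Minimal sketch (not the authors' actual analysis): Cohen's kappa as one
# possible agreement coefficient for an item-descriptor matching task.
# All item ratings and descriptor labels below are hypothetical.

from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two judges' categorical ratings."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of items on which the two judges agree
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the judges rated independently at their
    # own marginal label frequencies
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two judges match 8 test items to B1 descriptor labels.
judge_1 = ["B1.1", "B1.2", "B1.1", "B1.3", "B1.2", "B1.1", "B1.3", "B1.2"]
judge_2 = ["B1.1", "B1.2", "B1.2", "B1.3", "B1.2", "B1.1", "B1.1", "B1.2"]

print(f"Cohen's kappa: {cohens_kappa(judge_1, judge_2):.2f}")  # ~0.61
```

For a panel of more than two judges, a multi-rater statistic such as Fleiss' kappa would be the analogous choice; the paper itself does not specify which of these was applied.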