Sessions / Scales / Measurement
Improving the BETs: Searching for Validity and Online Feasibility #1981
This poster outlines the development of an end-of-course test used to stream first- and second-year students and assess their CEFR levels. Considering the issues faced in establishing validity, it describes the three-stage Rasch, Excel, and text analysis process that has evolved to form the basis of our annual review and rewriting procedure. It also addresses the problems faced in further streamlining the test and adapting it from a paper format to an online format fit for COVID times.
Linguaskill, the AI Powered English Test Developed by Cambridge English #2320
Linguaskill tests the real-life language needed for an academic environment, with in-depth, accurate, individual and group reporting aligned with international standards. This means you can make confident placement and admissions decisions, and students have the skills they need for academic success and enhanced employability. Its remote proctoring and extensive learning solutions have helped learners achieve their goals amid the pandemic. Find out more about Linguaskill and these solutions in our session.
Successful Development of a Placement Test Appropriate to Context #2223
This presentation begins with the concerns about commercial placement tests that led the researchers to develop their own. The presenters demonstrate the steps of determining item types, producing and trialing items, and refining the test instrument. Design tips for making test administration run smoothly are also shared, and statistical analyses supporting validity and reliability are presented. Finally, challenges and how they were overcome will be discussed. Anyone considering developing their own placement test may benefit from this presentation.
Formulaic Language in English for Academic Purposes Textbooks #2273
In this study, I examined formulaic language that appears in English for Academic Purposes (EAP) textbooks. I classified these expressions according to their functions, such as disagreeing or asking questions. I also compared the most frequent expressions from this textbook corpus to a corpus of academic spoken English. Attendees will also hear a discussion of the larger differences between language in academic textbooks and naturally occurring language.
Benefits of Conducting Mixed-Methods L2 Writing Research: An Exemplar Study #2164
This paper reviews studies that employed mixed-methods research (MMR) designs in L2 writing research. It discusses what MMR is, how it can be a self-standing paradigm, and what makes it a distinctive paradigm. It then explores benefits and challenges of conducting MMR in L2 writing research and introduces and reflects on a recent MMR study on L2 writing conferences in a university setting.
Latest Information on the TOEFL Test #2388
The TOEFL® test is the most established academic test, having been taken by more than 35 million people worldwide. Since last year, amid the spread of COVID-19, the TOEFL® test has introduced a variety of measures, including at-home testing. This session presents the latest information, including an introduction to the new TOEFL® Essentials™ test.
Using CEFR/CV Illustrators to Navigate Meaning in a Mixed-Level CLIL Class #2221
The presenter explores results from the first cycle of an action research project that applies the illustrative scales of the CEFR/CV to explain how students in a mixed-level CLIL classroom navigate meaning from texts and lectures, especially when the materials are above their level. Although efforts are made to match materials to student abilities, mixed levels, as well as the nature of university-level material, often mean that materials are above student level.
Peer Assessments: Which Is Better, a Likert-Type or a Rubric? #2033
Students in two classes at the same Japanese university conducted peer assessments of their peers’ presentations. In one class, the students used a Likert-type scale assessment sheet with categories 1–4. In the other class, the students used a rubric assessment sheet in which qualitative definitions of the evaluative items were written at particular levels of achievement. The data were compared using a many-facet Rasch measurement computer program.
Using Technology to Assess the Interactive Skills in a Speaking Test #2146
This presentation will focus on an innovative face-to-face testing system that incorporates a variety of digital prompts to assess students utilizing a rubric based on CEFR-J can-do statements. This speaking test is designed specifically for Japanese learners of English and assesses their ability to speak and interact. Technology was used to streamline the test by using digital delivery for images and video in conjunction with an online assessment scoring input system.
Developing an Online EFL Reading Proficiency Test #2156
This presentation will discuss the development and use of a short, web-based test of lexical discrimination, phonological and orthographic skills, and vocabulary to help a university English department assign students to levels and identify students with potential reading weaknesses. Practical and theoretical issues will be discussed, and the correlation of the various parts of the test with the TOEFL ITP test and student course performance will be reported.
A CEFR Alignment Project: Instructor Adaptations and Implementation #2074
This presentation is an update on a project to align existing English communication courses with the Common European Framework of Reference for Languages (CEFR). The presenters detail the project’s progress as it moves to a practical implementation stage. In this stage, students are interviewed, while can-do statements are modified and employed in the classroom as well as introduced in the self-access center. The voices of students and teachers are included throughout.
English Proficiency Change in an EFL Program Over 20 Years #2177
This presentation will describe a longitudinal study examining the performance of a Japanese university English as a foreign language (EFL) program over a 20-year period. Time-series analyses were conducted using TOEFL ITP results for 20 student cohorts to investigate emerging English proficiency trends. The results indicated that specific institutional events, as well as larger population trends affecting Japanese universities, led to gradual shifts in the program’s student demographics, which contributed to changes in proficiency patterns.
Collective Assessment Framework for International Learning #2236
This paper aims to examine the feasibility of a collective method for evaluating the learning outcomes of English learners in intercultural virtual exchange. As intercultural exchange with multiple partner institutions requires a common ground for quality assurance of learning outcomes, we developed a common framework of reference for learning outcomes in which language skills are interconnected with other required skills. This paper will share how the framework was applied to activities in international learning.
Developing a Rating Scale for Assessing Interactional Competence #2181
Recently, there has been a considerable increase in interest in the teaching and assessment of interactional competence. However, it is key to ensure that any assessment rubric functions effectively and reliably. This presentation reports on a process of rubric development that combined quantitative analysis using many-facet Rasch measurement with qualitative feedback from raters. Based on these analyses, recommendations are given for the development of well-functioning, reliable rubrics for assessing interactional competence and speaking skills.
The Use of First-Person Pronoun “We” in Science Research Articles #2083
This presentation will discuss how and where the first-person pronoun “we” is used in research articles written in English by authors of different linguistic backgrounds. A research article corpus was created to analyze the uses of “we” in different sections of research articles. The metatextual uses of “we” were also examined through the verbs that collocate with “we.” The results show that “we” is used widely throughout science research articles.
Holistic and Analytic Ratings in Peer Assessment of Presentations #2027
This study examined student peer raters’ use of holistic rating scales in relation to analytic rating scales in peer assessments of EFL oral presentations. Japanese university students evaluated their classmates’ presentations using both holistic and analytic rating scales. Using a series of statistical analyses, including the many-facet Rasch measurement analysis, the researcher will discuss the role of analytic rating criteria in student peer raters’ holistic ratings.
Preliminary Results of a Presenting in a Foreign Language Anxiety Scale #2053
This presentation will detail the development of a Presenting in a Foreign Language Anxiety Scale (PFLAS) designed to measure the anxiety university students experience when making presentations in English. The results of an initial administration of the survey will be given, student responses to the scale and their significance will be discussed, and finally the next steps in the research process will be considered.
Examining the Equivalency of Picture-Based Speaking Tasks #2087
This study explores the equivalency of seven picture-based speaking tasks by comparing the oral performance they elicit from Japanese L2 learners. Although story length, sequential structure, and storyline complexity were controlled across tasks, performance was similar only in terms of fluency, not complexity, accuracy, or lexis. The study highlights the importance of piloting testing materials when conducting experimental research using a pretest-posttest design.