We don’t have a fraud problem, we have a language assessment literacy problem


I recently wrote a post on the Canadian government’s decision to impose a language testing requirement for applying for a post-graduate work permit. In the week following IRCC’s announcement, LinkedIn was abuzz with commentary from the international education crowd about the measures.

In the discussions of the issue on LinkedIn and elsewhere, there was no shortage of comments along the lines of:

“I disagree with the required levels, but think this measure is necessary to combat the rampant language testing fraud out there.”

“There’s so much testing fraud. I have had several students from [insert country], they have a 6.5 on the IELTS and they get here and can’t even have a conversation. They obviously cheated on the IELTS.”

I disagree completely!

Yes, there are people who are admitted to institutions and who struggle in terms of language skills once they begin their studies. (More on that below.)

However, although there is no doubt a small number of individuals here and there who engage in fraudulent behaviour, I’m here to argue that we don’t have a widespread language testing fraud problem in Canada. We have an assessment literacy problem at our institutions, with an overreliance on a single language test score to do all the heavy lifting in ensuring the readiness of students who use EAL for academic study.

Think about it from the student’s perspective.

Language proficiency requirements for admission to most Canadian institutions consist of a single standardized test score. What’s communicated by that is the message that as long as you achieve that required score, you’ll be sufficiently prepared to be successful at university, from a language perspective.

We ask students to do one thing—complete a standardized exam which tests a very narrow set of language skills in a sterile, controlled environment. And so students go ahead and prepare to do that very well.

And then they get here and sometimes we ask them to do completely different things, like participate in a class debate, make small talk in a high-stakes situation, or do public speaking—the types of complex, multi-faceted skills and tasks that aren’t tested on standardized exams and that many folks have never done before.

Then we get mad if someone can’t do it well, and call it testing fraud.

But it’s not fraud to prepare for an exam, do well on it, and then be unable to perform a completely different set of tasks successfully.

Imagine you’re preparing to get your driver’s license. You take a prep course. You practice on your Toyota Camry for your driving exam, and you pass with flying colours and get your license!

Then I ask you to get behind the wheel of an 18-wheeler and perform some manoeuvres. Parallel parking, perhaps? (I shudder to think how I’d do on that.) If you don’t do well with the 18-wheeler, that doesn’t mean you committed fraud to pass your driving test. You passed that test fair and square in your Toyota. It’s just a different test, and one that’s limited. It simply didn’t prepare you for driving a big rig!

The IELTS (or TOEFL, or the Duolingo English Test, or PTE Academic, etc.) is the driving test in a Toyota Camry, and using language in complex, real-life ways in academia is like driving an 18-wheeler.

So when someone deems this “testing fraud”, it shows a lack of understanding of the shortcomings of standardized testing. In other words, a lack of assessment literacy: knowledge of what tests can and can’t evaluate, what a test score means, the differences between tests, and what we can reasonably infer from a test score.

Angela Clark argues in an (excellent) book chapter on the issue of language testing fraud in Canada, “[…] relying on a single language proficiency test score to determine an individual’s readiness [for academic study] is problematic, and also problematic is the lack of related academic research and data to help guide admissions decision-making.”

So what can we do?

I think increasing assessment literacy, or possessing “knowledge, skills, and understanding of assessment principles and practice” (Taylor, 2009, p. 24, as cited in Clark, 2023), is key to this issue.

Institutions should do more to raise assessment literacy amongst admissions stakeholders, which means not just greater familiarity with the exams themselves, but also tapping into institutional expertise around language education, users of EAL, and the principles of language development. Opportunities abound for training and collaboration between admissions staff and English language centres and/or faculties of education or linguistics.

Clark urges that more research be done on the predictive validity (i.e., the connection between admission test scores and future academic success at university) of the major standardized tests. Predictive validity research is very complex and often inconclusive; nonetheless, research conducted by and about Canadian contexts and our institutions is a necessary part of the puzzle, complementing the research on the major standardized tests carried out by the testing companies themselves.

Institutions might also look at ways to take all of their eggs out of one basket by adding admissions criteria beyond a test score, such as interviews, portfolios, and letters of recommendation.

But we have to stop thinking that tests are the answer to everything. We also have to shift away from considering language proficiency only at admissions.

As I wrote in 2019, an institutional approach to curricular and student-service structures that support all students’ academic language and literacy development, both pre- and post-admission, is crucial. For example, required first-year courses where students can build on their existing language proficiency to develop the real-life academic skills they need to be successful in their studies.

More teaching, less standardized testing. Or at least, more teaching in addition to all the standardized testing.

In other words, understanding that few students are going to arrive able to drive an 18-wheeler right off the bat, and providing a place within the curriculum for them to transition from the Camry to the big rig.

Source cited:

Clark, A. (2023). Examining the problem of fraudulent English test scores: What can Canadian higher education institutions learn? In S. E. Eaton, J. J. Carmichael, & H. Pethrick (Eds.), Fake degrees and fraudulent credentials in higher education (Ethics and Integrity in Educational Contexts, Vol. 5). Springer. https://doi.org/10.1007/978-3-031-21796-8_9
