Abstract:
In this talk I contend that the validation process, arguably the most important process in educational measurement, has served as one of measurement’s primary weapons of racist oppression and marginalization. Cushman (2016) writes that “validity as a tenet is used to claim, gather, and justify results with so many performance and survey tools, it has now more than ever been used to routinize inequities as naturalized parts of systems of educational access…” Indeed, the measurement community largely continues to rely on approaches to validation that, instead of seeking justice for marginalized communities, treat the sociocultural identities of test takers as barriers of inferiority to be mitigated outside of the assessment design process. If we fail to recognize the ongoing realities of social oppression in our validation processes, the use of validity arguments simply becomes another racist tool, reproducing, rather than disrupting, systems of oppression. In this talk, I discuss some of the ways in which current approaches to developing literacy assessments (and the corresponding validation processes for their use) serve to marginalize, or completely erase, the sociocultural identities and realities of racially minoritized populations. I argue for a justice-oriented, antiracist approach, rooted in critical theory, that seeks to actively disrupt our current practices grounded in white supremacist hegemony. With this justice-oriented lens, the validation process asks: To what extent do the characteristics of the assessment, the assessment design/development process, and/or the inferences drawn from the assessment provide evidence of antiracism?
Bio: Dr. Jennifer Randall, formerly a public school teacher, is the Dunn Family Professor of Psychometrics and Test Development at the University of Michigan and President of the Center for Measurement Justice. Professor Randall’s work explores the ways in which traditional approaches to assessment design serve to marginalize racially and ethnically minoritized learners and how culturally sustaining and unapologetically antiracist assessment development can mitigate that harm. She advocates for dramatic changes in the Who, Why, What, and How of assessment design and implementation.
Abstract:
Since the commercialization of language testing in the 1960s, the concerns of the language testing industry have mostly, albeit unintentionally, had a strong influence over the language assessment research agenda, often by means of company research, research grants offered to independent scholars, and other incentives. As a result, considerable research attention has been devoted to the development and validation of standardized exams designed mostly for the purpose of making high-stakes decisions about academic readiness. This attention is not unexpected, since selection decisions, or decisions regarding the issuance of certificates, need to be publicly justified as theoretically viable, psychometrically sound, fair, and just measures of what the test claims to measure in order to earn the public’s trust. Given that high-stakes proficiency testing is not only important to society but can also be theoretically riveting, the attention to this research is understandable. Nonetheless, the focus on standardized proficiency exams, measuring many independent forms of behavior, has often overshadowed, and may even have unintentionally discouraged or devalued, research in other institutional contexts. Let’s take, for example, the assessment of language use in the context of a collaborative problem-solving event in a chemistry lab. In this event, “language is not a form of independent behavior in isolation from other behaviors and forms of organization” (Davies, 1990). Rather, language use, teaching, learning, and assessment are intrinsically related. Namely, team members and other artifacts provide information about the problem (teaching). The team members process that information, hold it in memory, and generate responses (socio-cognition/learning). They then use this information to generate more responses (assessment). According to Davies (1990, p. 56), in this situation, language is used in a social and historical setting, produced by and indicative of individual processes, and is contingent upon prior and developing knowledge. Assessment in this context then allows teachers to judge how team members marshal, develop, and deploy, over time, a set of integrated topical and linguistic resources in order to perform a sequence of valued, real-world tasks embedded within the mediated engagements and social-interactional practices of this particular community (Purpura, 2021). So why would we ever think the performance-based inferences and subsequent learning decisions derived from this context would be any less important than those from proficiency testing contexts? In the current talk, I would ultimately like to argue that the field of language assessment would benefit from research in a wider range of institutional contexts than we currently have. I would especially like to encourage research in contexts where L2 assessments are organized around real-world competencies (e.g., collaborative problem solving) and have a learning orientation. To illustrate these points, I will describe how integrated content and language (ICL) assessments were conceptualized and used within the context of a professional development (PD) initiative designed to strengthen integrated content and English language instruction for subject matter instructors, language teacher educators, and ESP teachers in the Algerian higher education context.
The theoretical framework underlying the 60-hour PD initiative and the assessment activities across the program was based both on a competency-based approach to education (Pérez Cañado, 2013) and on a learning-oriented approach to instruction and assessment (Purpura & Turner, 2018). The assessments were used to issue a learning-oriented certificate of completion. Graduates were encouraged to teach their courses in English and to mentor others in the development of ICL practices; these activities fully align with the Ministry of Higher Education’s 2017 English language policy reform initiatives.
References:
Davies, A. (1990). Principles of Language Testing. Cambridge, MA: Basil Blackwell.
Pérez Cañado, M. L. (2013). Competency-based Language Teaching in Higher Education. New York, NY: Springer Science+Business Media Dordrecht.
Purpura, J. E. (2021). A rationale for using a scenario-based assessment to measure competency-based, situated second and foreign language proficiency. In M. Masperi, C. Cervini, & Y. Bardière (Eds.), Évaluation des acquisitions langagières: Du formatif au certificatif. MediAzioni 32: A54-A96. http://www.mediazioni.sitlec.unibo.it
Purpura, J. E., & Turner, C. E. (2018). Using learning-oriented assessment in test development (Invited workshop). Language Testing Research Colloquium, Auckland, New Zealand.
Bio: Dr. Jim Purpura, a past president of ILTA (2007-08), has presented and published widely on grammar assessment, meaning assessment, learning-oriented language assessment, and the cognitive bases of assessment. Currently he is using a learning-oriented assessment framework to research factors contributing to “situated” L2 proficiency in the context of scenario-based second and foreign language assessment. He is particularly interested in examining how cross-linguistic differences play out in scenario-based assessments. Professor Purpura has served as Editor-in-Chief of Language Assessment Quarterly and series co-editor of both New Perspectives in Language Assessment (Routledge) and Innovations in Language Learning and Assessment at ETS (Routledge). He has served on numerous expert and advisory panels, including the TOEFL Committee of Examiners (ETS), the Association of Language Testers in Europe (ALTE) expert members group, the National Academies of Sciences, Engineering, and Medicine’s Committee on Foreign Language Assessment, and the U.S. Defense Language Testing Advisory Panel. He was a Fulbright Scholar in Italy in 2017.
This session will consist of live and video tributes from a number of Liz’s friends and colleagues, who will recount their memories of working with her in various parts of the world. The session has been organized by Vivien Berry and Charles Stansfield (who will also chair it); the running order will be as follows:
Charles Stansfield will delve back into the 1980s to recall conversations with Liz, whom he perceived then, as now, to be the most knowledgeable person in the field on the testing of writing skills. He will be followed by Claudia Harsch, who will talk about Liz’s influence on her own work and on the field of writing assessment and testing in general. Yan Jin will then talk about Liz’s work in China as a senior consultant to the College English Test (CET) and about the book she co-edited with Liz during the last few years of her life. Next, Barry O’Sullivan will discuss working with Liz and Tom Lumley at the Hong Kong Polytechnic University on the English version of the Graduate Students’ Language Proficiency Assessment (GSLPA) and other recollections from his time there. Vivien Berry (on video) will then briefly relate an anecdote from the 2002 LTRC in Hong Kong, which Liz chaired and for which she organized the banquet. Emma Bruce will recall her experience of being Liz’s last PhD student, beginning in Hong Kong and finishing in England. To conclude, Anthony Green will present the final tribute, discussing his and Liz’s work at the Centre for Research in English Language Learning and Assessment (CRELLA), University of Bedfordshire, which was Liz’s final professional appointment before her retirement in 2020.
___________________________________________________________________________________________________________________________________________
Professor Liz Hamp-Lyons is the winner of the 2022 Cambridge/ILTA Distinguished Achievement Award. This posthumous award recognizes the huge impact Professor Hamp-Lyons, who sadly passed away on March 10, 2022, made on the quality and equality of language testing and assessment.
Over the past 40 years, Liz held positions in different parts of the world, including Greece, Iran, Malaysia, the US, the UK, Australia, and Hong Kong. Until recently, she was a visiting professor at the Centre for Research in English Language Learning and Assessment (CRELLA) at the University of Bedfordshire, a world-renowned research centre in language testing and assessment.
Through her research and practice over the decades, Liz made an outstanding contribution to the field of language testing and assessment. Her research focused on writing assessment, academic language assessment, and a range of non-traditional assessments, including portfolio assessment, dynamic/interactive assessment, and learning-oriented assessment. She wrote extensively and made numerous presentations at international, national, and regional conferences. Like many language testing professionals of her time, Liz started her career as an English teacher. She furthered her education through a master’s program in English language education; her MA project was on the development of children’s writing ability. Her first book, titled Research Matters and published by Newbury House in 1984, grew out of the teaching materials she had developed for an EFL writing course. Her PhD study with Alan Davies at the University of Edinburgh was on redesigning and validating the rating scales for ELTS writing, which kindled her lifelong interest in rating writing performances. She edited the journal Assessing Writing from 2002 to 2016, nurturing it into a tier-1 international journal. According to a recent editorial (Volume 52), the journal is currently drafting terms of reference for a best paper award to be given each year in Liz’s name and in her memory. Liz also maintained her interest in English for Academic Purposes over the years. She was the founding editor of the Journal of English for Academic Purposes in 2002 and co-edited it until 2015; the journal has since set up a best paper award in recognition of her service to the field of EAP.
Over the decades, Liz provided extensive and inspirational leadership to the language testing community. She was a founding member of ILTA in 1992 and served as ILTA Vice-President (2002) and then President in 2003. She co-chaired the LTRC in San Francisco in 1990 and chaired the LTRC in Hong Kong in 2002. She also served as a member of the ILTA Nominating Committee, the Code of Ethics Committee, and the By-laws Committee. Liz remained an active member of the language testing community until very recently. She also consulted on projects in a wide range of countries and regions, such as Australia, Greece, Hong Kong, mainland China, Malaysia, Oman, Thailand, the UAE, the UK, and the US. She had a strong interest in language testing and assessment in Asian contexts. From 1999 to 2003, she was Director of the Asian Centre for Language Assessment Research at the Hong Kong Polytechnic University. Since 2005, she had been a Guest Professor in International Studies at Shanghai Jiao Tong University. She became Distinguished Professor of Education and Languages at the Open University of Hong Kong in 2016. The last book she co-edited, on writing assessment for Chinese learners of English, was published by Springer in April 2022, a month after she passed away.
Liz is also admired for her commitment to equitable assessment, her tremendous generosity, and her untiring efforts to mentor students and young scholars who, in her words, make up our future. She challenged the consequences of language assessments and believed in empowering test takers and making assessments of all kinds fairer. She was courageous in standing her ground, based on principles of fairness and a sense of the right thing to do. In this respect, she was a strong feminist voice in our field when female role models were rare, and she was a champion of the less privileged. She cared about her colleagues and students, and about their tests, questions, and research projects. Over the years, she was a frequent contributor to discussions on LTEST-L, where she always responded considerately and thoughtfully to questions, including those posed by people new to the field.
Following Liz’s death in March 2022, dozens of tributes were posted on LTEST-L by academics at all levels around the world. We quote a few of them to illustrate the esteem in which the community holds her:
- Liz was a founder and keen networker in our field, and we are all the beneficiaries of her work.
- She was such a wonderful and dedicated person, with an open mind to all her colleagues, whether beginners or established, but a sharp critic of anything from anyone that she found unfitting or lacking sufficient scientific underpinning.
- The field at large is stronger because of her insightful and expansive contributions. Her leadership was evident through her diverse roles in our Association, her authorship of innumerable publications, and her positions in institutions around the world.
- She cared deeply about language testing and helped mold the profession into what it is today.
- She was so full of life – it was always fun to spend time with her at conferences.
- She spoke her mind courageously and was just as passionate in helping people when they needed it. … We have all lost a strong voice, one to be remembered.