The Assessment of Emergent Bilinguals: Supporting English Language Learners
Product Details
ISBN-13: 9781783097265
Publisher: Multilingual Matters Ltd.
Publication date: 02/28/2017
Pages: 240
Product dimensions: 7.20(w) x 9.80(h) x 0.80(d)
Read an Excerpt
The Assessment of Emergent Bilinguals
Supporting English Language Learners
By Kate Mahoney
Multilingual Matters
Copyright © 2017 Kate Mahoney. All rights reserved.
ISBN: 978-1-78309-728-9
CHAPTER 1
A Decision-Making Process called PUMI
Key Vocabulary
Assessment or testing
Assessment lens of promise
Assessment lens of deficit
Translanguaging
PUMI (purpose, use, method and instrument)
PUMI Connection: Purpose, Use, Method, Instrument
This chapter introduces the concept of PUMI – 'purpose, use, method, instrument' – which will be used in each chapter for the remainder of the book. PUMI is a decision-making process to help stakeholders make better decisions about assessment for emergent bilingual (EB) students. The author recommends that when you are unsure about what questions to ask and what is important, you start by asking PUMI questions. What is the purpose? How will the results be used? What is the best method? What is the best instrument? Finding the answers and understanding PUMI help slow down the process of assessment and lead us to use fewer and better assessments.
This foundational textbook has three primary objectives. First, it helps teachers and administrators understand the challenges with assessment and accountability for EB students that dominate the field today. Second, it prepares teachers, administrators and leadership teams to make decisions about how to use and select appropriate assessments for EBs. Third, this book prepares educators to advocate on behalf of EBs in regard to appropriate test-use policies and practices.
It's a Continuum: Promise to Deficit
Valid assessment for EBs is a complex scientific challenge, especially in a monolingual schooling context. The various approaches to assessing EBs in the field today draw on contrasting views of assessment and bilingualism. This section briefly reviews a continuum of views, followed by guiding principles that teachers and administrators can draw on as they make practical decisions about assessing EBs.
There are many ways to view the assessment of EBs, and these can be best understood as a continuum – a wide range of approaches with extremes on either end – ranging from deficit to promise. Promising approaches to assessment highlight what the student knows and can do relative to multiple measures; on the other hand, deficit approaches highlight what the student doesn't know, usually relative to one measure. What is especially challenging is whether and how educators can look at EBs through a lens of promise within an accountability system focused on what children cannot do (Figure 1.1 and Table 1.1). Understanding this continuum will prepare the ground for better EB assessment policy, practice and advocacy.
The lens of promise is grounded in the ideas of dynamic bilingualism (García, 2009) and sociocultural assessment (Stefanakis, 1999), and is typically used in assessment courses and popular textbooks to guide educators in how to assess (and instruct) EBs within a meaningful and culturally relevant context. However, more often than not, and in contrast to a promising lens, deficit views of assessment dominate the policies and accountability system under which educators must perform. Deficit and promise are strikingly different, and educators are required to negotiate them in a public-school setting in order to maintain good evaluations of their own teaching and do what is best for students. These two views are presented here as a continuum; that is, one end of the continuum differs extremely from the other end, but points near one another on the continuum may not be that noticeably different. These two views are not a dichotomy – mutually exclusive of one another, where educators must choose one view or the other. What is most common in schools today is a mix of both deficit and promising assessment approaches; state and district accountability systems are more often connected to looking for deficits or areas where children are lacking, whereas classroom assessment is more connected to the approach of promise. Usually, teachers negotiate both views. Table 1.1 illustrates in more detail both ends of the continuum.
Assessment vs. Testing
Oftentimes in conversations that take place in schools, the words assessment and testing are mistakenly used synonymously. Assessment is a much broader concept than testing and can be thought of generally as the use of information from various sources to make decisions about a student's future instruction/schooling. A test, on the other hand, is a measuring instrument that produces information we use in assessment. A test can be thought of much like a measuring cup, a scale or a tape measure: an instrument used to take a measurement that helps us make decisions in schools. This is a very simplified comparison, offered just to point out the difference between assessment and testing; measuring flour in a measuring cup is much easier than measuring language in a child emerging as bilingual.
Support for EBs in the Measurement Community
It is important to note that stand-alone tests themselves are not bad. However, with the increase in testing and the high-stakes decisions made from test scores, most educators have become very frustrated with the way that test scores are used. The frustration is usually with the test use, not the test itself, especially for groups of students like EBs. Many times, policymakers, legislators, school boards and others use test scores for EBs in wrong – or invalid – ways. Examples from current practice include using achievement test scores to judge language ability (wrong construct), using only one test score to reclassify an EB (never use one score to make a big decision) or using scores from a test that EBs cannot read very well (not a fair measure). It is important to point out that the educational measurement community does not support these types of test-use practices and does in fact support promising practices for EBs.
The term 'measurement community' refers to measurement scientists who have studied educational measurement, including topics such as fairness and validity, for several decades. The three organizations that dominate the science and use of test scores in the US are the American Educational Research Association (AERA), the American Psychological Association (APA) and the National Council on Measurement in Education (NCME). The study of (educational) measurement is basically the practice of assigning numbers to traits like achievement, language, interest, aptitude and intelligence. Oftentimes, measurement scientists design studies to try to validate whether those numbers really represent the trait, and as a result they make recommendations about how to use test scores; however, it is up to policymakers to write good policies about how to use test scores. This is usually where 'a disconnect' occurs, leading to deficit assessment practices.
A good example of the increased attention toward EBs from the measurement community is an important publication, The New Standards for Educational and Psychological Testing (AERA, APA, NCME, 2014), where standards for fair test-use are made explicit for the first time in a separate chapter on 'Fairness in Testing'. Fairness, especially with subgroups such as 'individuals with disabilities' and examinees classified as 'limited English proficient', received increased attention from the measurement community (AERA, APA, NCME, 2014). In this important document, fairness in testing is considered a central idea: those responsible for test development are held to the standard of designing all steps of the testing process to be accessible to the widest possible range of individuals, removing barriers, such as English proficiency, that may create unfair comparisons or interpretations of test scores (AERA, APA, NCME, 2014). Details about standards for fair testing are presented, especially on topics such as validity, test design and accommodations. Unlike previous versions of these standards, testing EBs (and other groups who have been marginalized from fair test development and interpretation) in a fair way has become a central idea.
Home Language is Typically Undervalued or Ignored
Although federal, state and most district accountability systems focus primarily on the assessment of English, this book emphasizes the importance of recognizing bilingualism. A fundamental assumption of research and practice in the education of EBs is that the students' home language(s) is a resource to develop, not a problem to overcome. Promising assessment practices focus on how students really use language, in authentic ways. If the purpose is to measure content knowledge in history, then the assessment can be conducted in English, Spanish or a combination of two or more languages that is meaningful to the student and can better show what the child knows (see Figures 1.2 and 1.3 for examples of translanguaging in a content-area assessment).
Home Language Assessment
There are typically few or no formal assessments available in schools for the home language, which in this age of accountability leads to a de-emphasis of it (if it's not tested, it's not taught). Another aspect of this issue is accountability itself. The increase in accountability for measuring and showing gains in English that came with No Child Left Behind (NCLB) has caused schools – even some bilingual schools – to abandon instructional time for developing the home language because they are no longer held accountable for showing growth in it. In some cases, schools may even abandon bilingual programs altogether. Kate Menken and Cristian Solorza (2014) studied the decline of bilingual education in New York City schools. Through qualitative research in 10 city schools that had eliminated their bilingual programs in recent years and replaced them with English-only programs, they found that testing and accountability (high accountability for English, little to none for Spanish) were used as justification and created a disincentive to serve EBs through bilingual education.
Also, many teachers refrain from giving additional tests beyond those mandated; they feel that their students are already burdened with too many tests. This concern is warranted because EBs are often tested up to twice as much as non-EBs. If, for each test an EB takes, you add together the instructional time lost to testing and to teacher absence for scoring and test training, the 'cost of testing' for EBs becomes evident. Unfortunately, not assessing the home language leaves out important information that could be used to inform instruction and build on students' strengths.
Common in schools today is the overwhelming dominance of relatively inflexible assessment practices with EBs. Most assessment and accountability systems focus exclusively on English, reflecting the assumption that English is the only relevant language for instructing and assessing. Common assessment practice with EBs tends to measure each language separately, or not to assess the home language at all. For example, in the US, EBs are regularly tested in English using a mandated standardized test; on a separate day, with a separate instrument, they may also be tested using a Spanish language proficiency test (many times assessment of the home language is missing or optional). Typically, the two results are combined to represent the child's bilingualism. This is a very narrow view that critically underestimates what the child can actually do as a bilingual person.
Four Guiding Assessment Principles in this Book
The following four guiding principles – both theoretical and practical – guide the work throughout this book:
Guiding Principle 1: Assessment practice for EBs is viewed through a lens of promise. This book assumes that bilingualism is an asset and that assessment methods should highlight this asset (not point to perceived deficits). Assessment is an interactive process that should be integrated into daily routines that occur in a culturally relevant environment (Celic & Seltzer, 2011: 13; Ladson-Billings, 1994). Instruction and assessment should be student centered, involving students and peers in the act of assessment (administering and scoring) and organizing what they know. Assessment should also be multifaceted, involve multiple culturally relevant perspectives and provide authentic and meaningful feedback to improve student learning, teachers' instructional practice and educational options in the classroom. This guiding principle largely draws from the assessment ideas of Evangeline Stefanakis (1999, 2003, 2011) and the author's own practical experience.
Guiding Principle 2: Only high-quality assessments are acceptable. High-quality assessment adheres to the following five standards: (1) clear objectives, (2) focused purpose, (3) proper method, (4) sound sampling and (5) accurate assessment free of bias and distortion. To violate any of these standards places students' academic well-being in jeopardy. This guiding principle draws from the work of Rick Stiggins and Jan Chappuis (2011).
Guiding Principle 3: Validity is a unified concept. Proof that assessment results are valid for EBs should be presented before assessments are used to make important decisions. The idea of validity should include the adequacy and appropriateness of inferences and actions, including social consequences, based on test scores or other modes of assessment. This guiding principle draws from the work of Samuel Messick (1989).
Guiding Principle 4: Translanguaging during assessment is important for EB students. For students who speak English and additional languages, translanguaging as a pedagogical practice can serve to validate their home language and cultural practices, and it allows them to use their bilingualism more openly in a bilingual context. If the purpose of an assessment is to measure content (math or science, for example), then translanguaging will more validly show what students know. Ofelia García (2009) uses the term 'translanguaging' to describe the language practices of bilingual people (active) as opposed to the language of bilingual people (static). This new focus on dynamic language practices (languaging – translanguaging) represents a major shift in thinking for many researchers and practitioners in the field. This guiding principle draws from the work of Ofelia García (2009) and the practical work of the City University of New York-New York State Initiative for Emergent Bilinguals (CUNY NYSIEB) team.
Figures 1.2 and 1.3 show the use of translanguaging during social studies assessment; the purpose of these assessments was to measure social studies content. Translanguaging helps students show more fully what they really know because they can use their full linguistic repertoire, which creates more opportunity for access to content and allows assessment of a holistic picture of what they know, as opposed to the fractional picture obtained when students are limited to one language (Celic & Seltzer, 2013; García, 2009). Using translanguaging in assessment is good pedagogical practice for ENL, bilingual and mainstream teachers – everyone.
Together, these four guiding principles create a foundation for an appropriate model of assessment for EBs. Teachers and administrators can draw on these guiding principles as they make decisions about assessment. The remainder of this chapter introduces a practical and easy-to-remember decision-making process called PUMI that will help teachers adhere to the four guiding principles reviewed above.
PUMI (Purpose, Use, Method, Instrument): A Framework for Decision-Making
This section introduces the 'PUMI' framework, which teachers and administrators can use to make decisions about assessment for EBs. PUMI is an acronym for purpose, use, method and instrument. The PUMI framework is essentially a series of critical questions that educators need to ask in order to select the most appropriate method of assessment and improve the condition of assessment for EBs. Figure 1.4 represents the four major steps in the PUMI framework used throughout this book.
First, educators need to ask themselves and each other, 'What is the P: purpose of this assessment?' or, more pointedly, 'Why are we doing this?' For EBs, we often assess for the following purposes: to measure oral language, to measure achievement or to measure ELP development in content-based instruction. When answering the first question in PUMI, the answer usually begins with 'to measure ...' or 'to assess ...'. Every assessment is designed to have a purpose and to measure something, regardless of whether it is a large-scale assessment mandated by a state as part of its accountability system or a classroom assessment designed by a single teacher or a team of teachers.
To determine the purpose, articulate exactly what you are trying to measure. If you are using a commercial or state-mandated test, the purpose of the test has already been determined by the test authors. For a commercial test, the purpose is always revealed at the beginning of the technical manual (sometimes called the 'blueprint'), but most teachers would be surprised to read what the test authors say the purpose is – always check. Further, it's important to identify the conceptual or theoretical framework upon which the test was built. If possible, read in the technical manual how the test constructors define language (if it is a measure of language) and on what conceptual or theoretical framework the test is built. Beware of language tests that are vague about articulating a language theory or tests built on the judgment of a committee of experts. If you still don't know the purpose of an assessment, ask your administrator to clarify. Be cautious about moving forward with the assessment without understanding the purpose.
(Continues...)
Excerpted from The Assessment of Emergent Bilinguals by Kate Mahoney. Copyright © 2017 Kate Mahoney. Excerpted by permission of Multilingual Matters.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.