Title: The Handbook for Collaborative Common Assessments
ISBN: 9781942496878
Author: Cassandra Erkens
Genre: Educational literature
Publisher: Ingram
Team Created or Team Endorsed
The entire team must either write the assessment together or co-review and endorse the assessment that it has selected for use. This detail matters greatly. Asking teachers to give an assessment over which they have little ownership is like asking them to ride a city bus and care deeply about the road signs the bus encounters along the way. They will care deeply about the many road signs only if they are driving the bus. Moreover, if one person writes the assessment for the team and something goes wrong with the assessment process, the team generally blames the author. The entire team must take an active role in determining the assessments that it will use to monitor its instruction.
Designed or Approved in Advance of Instruction
Everyone loses when teachers retrofit assessments to the instruction that preceded the testing experience. Since instruction is the visible and immediate actionable step in the teaching and learning process, it feels natural to plan it first. However, a closer look reveals how that practice costs teachers and students time and learning opportunities. Teachers lose because they have to try to remember all the things they said during instruction and then begin the time-consuming process of prioritizing what’s important to test. Many times, this leads to inaccurate assessments, primarily because they don’t align to the standards. Instruction that wanders without a known, specific target has no chance of hitting its desired mark for teachers or their learners (Erkens et al., 2017; Hattie, 2009; Heritage, 2010, 2013; Wiliam, 2011, 2018). When teachers don’t frame the assessment road map or architecture in advance of the instruction, the instructional designs can misfire, and learners then miss critical components and interconnected concepts.
The greatest concern when teachers retrofit assessment to instruction, however, is that inaccurate assessments yield inaccurate results. In such a case, both the teacher and the learner draw conclusions based on dirty data. Dirty data contain inaccuracies, hide truths with oversimplifications, or mislead with false positives or false negatives. Such data can only lead to inaccurate feedback. When that happens, learners cannot receive the appropriate support they need to master not only what they learn but also how they learn. Conversely, when teams clarify summative assessments in advance of instruction, they often find instructional time instead of wasting it: they can strategically determine what it will take for each learner to be successful on the assessment, ensure alignment of their assessment and curricular resources, and respond more accurately and with a laser focus in their intervention efforts. While the educational literature has recognized this model—backward design—since the 1990s (Jacobs, 1997; McTighe & Ferrara, 2000; Wiggins & McTighe, 2005), it is still not a prevalent practice.
Administered in Close Proximity by All Instructors
While most teams succeed in having all students take a common assessment on the same day, that isn’t always doable, as many things (school cancellations, emergency drills, and so on) can easily interrupt the school day. If teams are to respond to learners who have not yet achieved mastery and learners who need extension, then individual teachers must give the assessment in a relatively short time frame so that they can collaboratively respond in a timely fashion.
Imagine that a team has designed an assessment task that requires students to use the school’s only computer lab, so the team members’ students take turns using it (for example, teacher A’s students use the lab and complete the task in September, teacher B’s in October, and teacher C’s in November). This is the same assessment, but it does not function as a common assessment should. The team members provide the exact same task with the same criteria and grade-level content. However, the team members are on their own for strategizing how to intervene or extend the learning for their individual classrooms. They miss the power of the collective wisdom and creativity of their peers in addressing the challenges that emerge from their individual results. In a case where teachers do not give the same assessment in the same time frame, teams can only look at the data in hindsight and then produce program-level improvements that answer the following questions.
• “Was the assessment appropriate and engaging?”
• “Were the scoring criteria accurate, consistently applied, and sufficient?”
• “Did the curriculum support the learners in accomplishing the task?”
• “Were the instructional strategies successful overall? Do we need to make any changes moving forward?”
The pace of data collection in this case cannot support instructional agility. The learners in September will not benefit from the team’s findings in November, when all the learners have finished the task.
Dependent on Teamwork
The collaborative common assessment process requires teamwork to help ensure accurate data; timely re-engagement; consistent scoring; and alignment between standards, instruction, and assessment so all students learn. Collaboration is central to the process as teams examine results, plan instructionally agile responses, analyze errors, and explore areas for program improvement.
Collaboratively Examined Results
Using a common assessment does not guarantee that it will generate common results. The notion of common data implies a high degree of inter-rater reliability, meaning student responses are scored consistently from one rater to the next. Even when using test questions that have clear right and wrong answers, teachers can generate uncommon results. For example, teachers may interpret student responses differently, or some teachers may offer partial credit for reasoning while others offer credit only for right answers. Many variables impact the scoring process, and many perceptions lead teachers to different conclusions, which can create data inconsistency from classroom to classroom. No matter the test method, teachers must practice scoring together on a consistent basis so that they can build confidence that they have inter-rater reliability and accurate data.
Instructionally Agile Responses
The purpose of using collaborative common assessments is to impact learning in positive, responsive, and immediate ways, for both students as learners and teachers as learners. When teachers analyze assessment data to inform real-time modifications within the context of the expected learning, they improve their instructional agility and maximize the assessment’s impact on learning. It seems logical that teams of high-quality instructors will have more instructional agility than an individual teacher for the following reasons.
• More accurate inferences: Teams have more reviewers to examine the results, conduct an error analysis regarding misconceptions, and collaboratively validate their inferences.
• Better targeted instructional responses: Teams have more instructors to problem solve and plan high-quality extension opportunities for those who have established mastery, as well as appropriate corrective instruction for those who have various misconceptions, errors, or gaps in their knowledge and skills.
• Increased opportunities for learners: Teams simply have more classroom teachers surrounding the learner who can provide informed interventions and skilled monitoring for continued follow-up.
This is not to suggest that teams will always develop better solutions than individual teachers might, especially if an individual teacher has reached mastery in his or her craft, knowledge, and skill. Rather, it is to suggest that educators can increase the likelihood of accuracy, consistency, and responsiveness over time if they collaboratively solve complex problems with the intention to increase their shared expertise and efficacy.