
[Figure I.2]

      Visit MarzanoResources.com/reproducibles/leadership to download a reproducible version of this form.

      Notice that the collaborative team member who initiated the quick conversation has recorded the respondent’s role (teacher), the questions asked, the code assigned to each response (adequate for question 1, unsatisfactory for question 2), and any pertinent notes from the conversation.

      At the end of the month, the team aggregates the responses, as depicted in figure I.3.

[Figure I.3]

      Visual representations of data, such as those in figure I.3, allow school leaders to quickly identify problems, take steps to mitigate their effects, and resolve unsatisfactory situations. Here, school leaders might decide to reexamine the processes in place to collect information about teachers’ opinions. Additionally, graphs like these give members of the school community a quick look at areas where the school is excelling and allow for celebrations of success.

       Quick Observations

      Like quick conversations, quick observations are made by teachers from collaborative teams. As the name implies, quick observations focus on specific events that teachers watch for. For example, for the first two leading indicators at level 1, teachers could be asked to observe recent incidents that indicate the following:

      • The school is a safe place.

      • The school is an unsafe place.

      • The school is an orderly place.

      • The school is not an orderly place.

      School leaders could also design observation prompts from their school’s lagging indicators. Quick observation data would be collected anecdotally. Table I.4 shows one collaborative team member’s anecdotal notes about incidents observed over the course of a week.

[Table I.4]

      Visit MarzanoResources.com/reproducibles/leadership to download a reproducible version of this form.

      On a regular basis, notes (such as those in table I.4) collected by collaborative team members could be compiled into a narrative summary and shared with members of the school community.

       Easy-to-Collect Quantitative Data

      In many schools, easy-to-collect quantitative data are available and can be used to monitor progress on a regular basis. Such data are typically collected by school leaders. For example, if a school leader already has in place a system that keeps track of student absences and tardies, she would aggregate these data once a month as a way of monitoring level 1 performance.

      As schools achieve higher levels of reliability, they should continue to monitor each level already achieved. Thus, a school that has achieved level 3 high reliability status will constantly monitor data for levels 1, 2, and 3 as it works on level 4. If quick data show that performance is unsatisfactory at any level, schools take steps to remedy the situation. In this way, problems are resolved before they cause significant errors in the system.

      Problem prevention is an excellent reason to constantly monitor critical factors and address errors immediately. However, it is not the only reason to monitor performance. Tracking performance using quick data allows school leaders to celebrate successes with staff members, parents, and students. Research by Edwin Locke and Gary Latham (2002) has shown that feedback—especially positive feedback—is important in keeping people motivated to achieve or maintain goals:

      For goals to be effective, people need summary feedback that reveals progress in relation to their goals. If they do not know how they are doing, it is difficult or impossible for them to adjust the level or direction of their effort or to adjust their performance strategies to match what the goal requires…. When people find they are below target, they normally increase their effort… or try a new strategy…. After people attain the goal they have been pursuing, they generally set a higher goal for themselves. (p. 708)

      School leaders can use quick data to regularly celebrate the school’s successes; congratulate students, teachers, and parents on their hard work; and motivate the school community toward continuous improvement.

      As mentioned previously, schools should collect quick data not only for the level they are currently working to achieve but also for all of the lower levels they have already attained. However, we do not advise that schools try to collect data for levels higher than the one they are currently working on, because data for higher levels often cannot be collected by schools at lower levels. This is particularly true for levels 4 and 5. For example, a visitor to a school working on level 5 (that is, a school that has already achieved high reliability status at levels 1, 2, 3, and 4) might ask a student, “What level are you at in mathematics? Science? Social studies? English language arts? What measurement topics are you working on? What is your current score on those measurement topics? What are you doing to raise your score?” The student should be able to answer most, if not all, of these questions. However, a visitor who asked a student in a school working on level 2 the same questions would likely get a blank look. Because the school is not yet working on level 5, it is impossible to collect level 5 data there.

      The hierarchical nature of our model is one of its most powerful aspects. Each level guarantees that a school is also performing at all of the lower levels. So, if a school is working on level 4 and has achieved levels 1, 2, and 3, it is guaranteed that the school has a safe and collaborative culture, effective teaching in every classroom, and a guaranteed and viable curriculum. By definition, working on level 4 means that lagging indicators for the first three levels have been met and the status of each is continually monitored. Each level supports the one above it and guarantees specific outcomes for those below it.

      The process of achieving high reliability status for a given level is fairly straightforward. The teacher and administrator leading indicator surveys (from chapters 1 through 5) are administered, and if a school wants a more comprehensive set of data, the student and parent surveys are also administered. Scores on the surveys are analyzed to determine the school’s strengths and weaknesses. The analysis process for interpreting survey results should be designed to identify those items that represent actions considered important to the effective functioning of the school and whose average scores are low.

      Items that have these characteristics are candidates for interventions—programs or practices the school will implement to shore up weak areas. Once these programs or practices are in full implementation, a school
