Feature                                                    Pages 26-29


Moving Beyond Fear of Student Feedback

One principal’s view: Teachers care about perceptions and can apply these data to improve their delivery


For years, colleges and universities have used end-of-course assessments that ask students to evaluate their instructors’ knowledge of course material and their ability to communicate that material to the students.

Lisa Oliveira, principal of King Philip Regional High School in Wrentham, Mass., talks to school staff about how to use data from the I-SAID, a formal survey instrument completed by students.
As an instructor, I found these assessments useless. So what if at the end of the course my students reported I did not deliver instruction effectively? So what if at the end of a course the students did not believe the unit was organized well or thought that the material was difficult? Sometimes, I glanced over my end-of-course survey results, and as long as my scores averaged 4 out of 5, I didn’t bother to read further.

It’s not that I didn’t care about the quality of my instruction, but end-of-course assessments did not help me improve instruction for the students I had in front of me that semester. In addition, the feedback I received lacked the details needed to create action steps for improvement in the future.

For nearly 20 years, I had been evaluated as a public school teacher and principal by the typical method of pre-observation, observation, post-observation and narrative summary. When the Massachusetts Department of Elementary and Secondary Education began revamping the state’s teacher evaluation model to include student feedback in addition to student academic growth and performance based on classroom observation, I was excited. Now students, our primary consumers, would be able to have a say in how well we support their learning during the year rather than at the end of the course.

Still, as eagerly as I viewed this prospect as the principal of King Philip Regional High School in Wrentham, Mass., I knew the teachers were frightened by the notion of students weighing in on their livelihood.

Unlike the general end-of-course assessments I experienced as a college instructor, the new Massachusetts Model System for Educator Evaluation is high stakes for teachers, and some of the teachers at my school were worried about the objectivity of the students’ feedback. Some students have bad days and personal problems that prevent them from engaging in learning. Some students detest everything about school and, despite the efforts of great teachers, they view every aspect of school in a negative light. The teachers in my school were fearful these student biases might affect their professional evaluations.

A Practical Punch
Student feedback is meaningful only if teachers are able to use it to inform their instruction. For my doctoral dissertation in 2013, I had delved into the importance of reflection on student feedback as a means to improve instruction, along with the goal of developing a tool that could elicit student feedback about teacher behaviors that had been proven to improve instructional delivery.

During my research, I had frequent conversations with teachers about best practices, the effective use of student feedback in improving instructional delivery and the benefit of reflecting on student feedback to enhance conversations with colleagues. We also talked about how student feedback should be incorporated into the new statewide teacher-evaluation system.

I combined my research on evidence-based teaching practices with insights gleaned from my conversations with teachers and colleagues to create the Individual Student Assessment of Instructional Delivery tool, also known as I-SAID. My goal was to develop a practical instrument that teachers could use to understand their students’ perceptions about instruction and learning in their classrooms, to reflect on their instructional practices and to develop their own professional learning goals. The tool also needed to meet the criteria of the Massachusetts Model System for Educator Evaluation, which gives districts the opportunity to adopt or adapt the model put forth.

One of the theoretical frameworks for Massachusetts’s educator evaluation system is Charlotte Danielson’s Framework for Teaching (see related article, page 30), so it made sense to use the same framework for I-SAID. Because learners are the target audience, Danielson says, they should have the opportunity to reflect on the delivery of instruction as it pertains to them and provide feedback to their teachers. For that feedback to be valuable, it must be specific and lend itself to helping teachers develop measurable action steps focused on improving instructional delivery.

Rating Statements
In the King Philip Regional School District, with its 2,140 students, we had been using the SMART goal format for several years. These specific, measurable, achievable, realistic and time-bound goals provide teachers with a road map for improving their instructional practices. One aspect of a student feedback tool that teachers cited as important was that it could be administered several times throughout the school year, enabling them to assess progress toward their SMART goals and adjust their practices while the students were still in their classrooms.

I took all of this into consideration as I developed the I-SAID tool. I-SAID is a Likert instrument with 31 statements, which students rate on a range from “strongly agree” to “strongly disagree.” I developed two to three statements to collect evidence on each specific element of Danielson’s Framework for Teaching. For example, the element “Expectations for Learning” under “Component 3a: Communicating with Students” includes these statements:

  • Near the end of the class, we review what we have learned to check if we met the learning goal.
  • The learning goal is posted in my classroom.
  • I know how each activity supports our learning goal.

This way, teachers could administer the complete assessment, reflect on the data and then re-administer only certain aspects of the I-SAID after making adjustments to their practice. This also mitigates the “practice effect” that sometimes happens when the same instrument is administered to the same group repeatedly.
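To illustrate how responses to such an instrument might be summarized (this is a generic sketch, not the actual I-SAID scoring method; the scale mapping and sample statements are assumptions), Likert ratings can be converted to numbers and averaged per statement so a teacher can compare results across administrations:

```python
# Hypothetical tally of Likert survey responses, averaged per statement.
# The 1-5 scale mapping and the sample statements are illustrative only,
# not the official I-SAID scoring.
from statistics import mean

SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def statement_averages(responses):
    """responses: one dict per student, mapping statement text to a rating.
    Returns the average numeric rating for each statement."""
    totals = {}
    for student in responses:
        for statement, rating in student.items():
            totals.setdefault(statement, []).append(SCALE[rating])
    return {s: round(mean(vals), 2) for s, vals in totals.items()}

sample = [
    {"The learning goal is posted in my classroom.": "agree",
     "I know how each activity supports our learning goal.": "neutral"},
    {"The learning goal is posted in my classroom.": "strongly agree",
     "I know how each activity supports our learning goal.": "agree"},
]
print(statement_averages(sample))
# → {'The learning goal is posted in my classroom.': 4.5,
#    'I know how each activity supports our learning goal.': 3.5}
```

Re-running the same tally after a follow-up administration gives a simple before-and-after comparison for a SMART goal.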

The I-SAID can be administered to students using paper and pencil or electronically, and it takes about 20 minutes to complete. Our district will allow only teachers to see the direct results of the I-SAID unless individual teachers decide to share the results with their evaluator. The evidence of use will be in the form of the SMART goals that the teacher develops based on student feedback. This will help negate many of the concerns teachers expressed about student bias, as they will decide where to focus their goals.

Tackling Fear
In early November 2013, seven high school and seven middle school teachers in the district administered the I-SAID assessment as part of a pilot study. These teachers were asked to reflect on the data results and share how that information could help them in their instruction.

All 14 teachers said the information gathered from the I-SAID would help them create SMART goals with a focus on improving their instructional delivery. Because they could administer I-SAID more than once, the teachers said they could gather data from students at several points throughout the school year, thus measuring their own progress toward meeting the goals.

The pilot teachers said the most important benefit of using a student feedback survey such as I-SAID was that the tool provides them with a common language for their conversations with their colleagues around teaching methodology from the perspective of their students. Reflecting on the results also helped inform their professional development needs and expanded their knowledge about how students see them as teachers.

Finally, with regard to the role of student feedback in teacher evaluation, the research participants and local school administrators discussed how to incorporate I-SAID into the new state-mandated evaluation system. The teachers felt most comfortable incorporating the results into their SMART goals and sharing the goals and their progress toward their goals rather than providing the actual student response data as part of their evaluation.

This made perfect sense. The teachers did not want to share specific student feedback with me unless they could be sure it was a valid representation of who they are. Conversely, the teachers felt comfortable discussing the goals they crafted after reflecting on their data. They identified strengths and weaknesses and were eager to implement changes and administer a follow-up measurement to gauge progress.

Perceptions Matter
Student feedback can be a key aspect of a solid teacher-evaluation system when teachers use the data to craft goals focused on improving their instruction. Reflecting on student feedback also helps teachers understand how their students perceive their practices and provides data to help them set targeted goals.

Even though the mandate takes effect in 2014-15, the state education agency has not yet issued guidance on the use of student feedback in the teacher evaluation process. Still, we are planning implementation and excited about the prospects. The teachers who participated in the pilot indicated that using student feedback paired with self-reflection helps them strengthen their instructional delivery, and allowing them to be in control of the data takes much of the fear out of collecting it.

Perhaps most importantly, teachers care about their students’ perceptions, and those who participated in the pilot gained a deeper understanding of those perceptions.

Reflecting on student feedback helps educators and administrators keep the focus of education where it should be — on our students.

Lisa Oliveira is principal of King Philip Regional High School in Wrentham, Mass. E-mail: oliveiral@kingphilip.org. Twitter: @LisaOliveira294

