The Use of Rubrics for Effective Feedback

Todd Hawkins

As the lead language arts editor for a publisher specializing in K–12 standards-based assessment preparation, I have focused on the development and editing of educational materials that help students in the United States improve their own editing skills.

Analysis of Scoring Guides for the Texas Assessment of Knowledge and Skills (TAKS)

My graduate research concentrates on the intersection of technical editing and education, particularly on ways to help students edit their own work as well as that of their peers.

I have been particularly concerned with the editing and revision components of standardized writing assessments for K–12 students. Although standardized testing certainly has its detractors, I am a firm believer in a system that challenges students to attain high state-mandated standards for knowledge and skills in core areas—and in a system that provides necessary resources for success to all stakeholders: students, parents, educators, administrators, and policymakers. Last fall, I conducted an analysis of scoring guides for the Texas Assessment of Knowledge and Skills (TAKS) writing tests, evaluating the role of rubrics, writing samples, and explanatory annotations as instructional tools.

Rubrics as Instructional Aids

In writing assessment, holistic rubrics are used not only for scoring but also as instructional aids. Rubrics improve learning and facilitate instruction by making expectations explicit and transparent, showing what content is important, and providing a means for feedback and self-assessment. Accordingly, the Texas Education Agency (TEA) makes its rubrics available so that stakeholders can better prepare for the TAKS. The TEA also provides additional materials supporting the rubrics: sample student compositions for each score point and explanatory annotations describing the scoring rationale for those compositions.

Each rubric contains five evaluative criteria: focus and coherence, organization, development of ideas, voice, and conventions. The explanatory annotations describe which evaluative criteria were used to reach the score point awarded on a sample and how those criteria were applied. However, several annotations do not mention every evaluative criterion from the rubric. Although holistic rubrics rely on broad analyses of the features of an entire text, this does not mean that certain evaluative criteria should be neglected; rather, it means that all criteria should be aggregated into a single score determination. Valuing one feature of the rubric more than the others weakens its usefulness by inserting personal scoring biases into the evaluation process. In other words, students are best served by feedback on all of the evaluative criteria used to reach a given score point.

Therefore, I examined the internal consistency of the instructional materials the TEA provides to stakeholders to determine whether those materials give students the greatest possible assistance. I conducted a textual analysis of the explanatory annotations to determine whether any features of the scoring rubrics were mentioned more frequently than others and, if so, which ones. I created codes based on the wording of the evaluative criteria in the rubrics and used those codes to identify discussions of the criteria in the annotations, as sketched below. I limited my data set to the scoring guides for grades 4 and 7, the only elementary and middle school grades in which writing is assessed on TAKS.
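For readers curious about the mechanics, here is a minimal sketch of how this kind of criterion coding could be automated in Python. The keyword stems, function names, and sample annotation are hypothetical illustrations of the approach, not the actual codes or data from my study, which were developed by hand from the rubrics' wording.

    import re

    # Hypothetical keyword stems standing in for codes derived from the
    # rubric wording; the codes used in the actual study are not shown here.
    CRITERIA = {
        "focus and coherence": ["focus", "coheren"],
        "organization": ["organiz", "progression", "transition"],
        "development of ideas": ["develop", "detail", "elaborat"],
        "voice": ["voice", "engag", "authentic"],
        "conventions": ["convention", "grammar", "spelling", "punctuation"],
    }

    def criteria_mentioned(annotation):
        """Return the set of criteria whose stems appear in one annotation."""
        text = annotation.lower()
        return {
            criterion
            for criterion, stems in CRITERIA.items()
            if any(re.search(r"\b" + stem, text) for stem in stems)
        }

    def mention_rates(annotations):
        """Tally the percentage of annotations that discuss each criterion."""
        counts = dict.fromkeys(CRITERIA, 0)
        for annotation in annotations:
            for criterion in criteria_mentioned(annotation):
                counts[criterion] += 1
        return {c: 100 * n / len(annotations) for c, n in counts.items()}

    # Invented example annotation: mentions focus and conventions only.
    sample = ["The essay sustains its focus, but frequent spelling errors distract the reader."]
    print(mention_rates(sample))

Run over every annotation in a scoring guide, tallies like these show at a glance which criteria the annotations discuss and which they neglect.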

Discrepancy between the Evaluative Criteria in the Rubrics and the Explanatory Annotations

The coding process revealed a discrepancy in how the evaluative criteria were discussed in the annotations. While 100 percent of the annotations analyzed addressed focus and coherence, only 73 percent mentioned conventions, and in each grade, conventions was the criterion mentioned least often. When the distribution of references was examined by administration year, voice and conventions were often the most underrepresented criteria, while focus and coherence was always the most commonly represented.

These results suggest that the TEA does not discuss the evaluative criteria in a balanced way when applying them to the sample student essays in the TAKS scoring guides, so stakeholders may not be receiving the full benefit of the annotations. For example, they may be misled into placing greater emphasis on strategies for developing voice at the expense of other criteria, or led to consider conventions relatively unimportant. Such beliefs, if they influence practice and instruction, could be disastrous: frequent errors in conventions can be enough to overwhelm an essay's strengths, negating strong voice, descriptive details, sharp focus, and coherent organization and warranting a score point of 1.

Transitional Period for Assessment in Texas

These findings come at a time of major transition for assessment in Texas. The TEA is completely overhauling the state's assessment program: the statewide English language arts standards were extensively revised in 2010, and TAKS will soon be replaced by the State of Texas Assessments of Academic Readiness (STAAR), which will first be administered in 2012. Each STAAR writing assessment will consist of two direct writing exercises representing different modes of writing, with a separate rubric for each mode. This new assessment model may therefore offer even more opportunities to take advantage of the instructional power of rubrics.

I feel blessed and honored to have received the graduate scholarship from the STC Technical Editing SIG. This award will assist me significantly as I continue to pursue my master's degree and look for new ways to offer research-based support for educators guiding the next generation of editors into the workplace.

Todd Hawkins has won the 2011 Diane Feldman STC Technical Editing Graduate Scholarship.
