In this study, the researchers examined the validity of peer assessment conducted with rubrics. The participants were 58 second- and third-year students at a major US law school. Students uploaded essays they wrote for a take-home midterm exam to a web application for peer review called Comrade and rated at least four of their peers’ essays. Students used nicknames and were therefore anonymous to one another. The syllabus prompted students to provide constructive and relevant feedback as part of their grade. The researchers tested two kinds of rubrics: domain-relevant rubrics and problem-specific rubrics. A domain-relevant rubric contextualizes general criteria within a domain or discipline and assesses how well an essay is written. A problem-specific rubric is much narrower, assessing how well an essay fits a particular assignment or answers a particular rhetorical situation. The researchers hypothesized that the problem-specific rubric would produce more reliable assessments, with “reliable” defined here as agreeing with the instructor’s summative assessment of the essay. They also expected students to find peer review based on both rubrics useful. To study peer assessment and feedback, they collected the essays, the peer ratings, the instructor’s summative assessments, author feedback about the peer assessment, and LSAT scores.
The researchers found that the reliability and validity of the two rubrics were roughly equal. Although one rubric appeared more reliable than the other in some categories, the sample size was too small to support a definitive judgment. Students found feedback from both types of rubrics useful. The researchers found no correlation between LSAT scores and midterm performance, which they called “problematic,” though they acknowledged that LSAT scores are typically compared against the grades of first-year law students, not second- and third-year students.
This study was interesting and fairly thorough in the artifacts it collected. It also provided an abundance of background information on the theories behind formative assessment, peer review, and rubric design. The study may be useful for instructors who wish to build a peer review process around rubrics, as the rubrics used in the study are provided in the article and serve as useful examples. Looking at the online application Comrade may also be worthwhile for composition instructors who want to use technology to facilitate peer review. The student workload may discourage some instructors from adopting this design in their own classrooms, however. Each essay was reviewed four times, and each student was expected to spend about two hours reviewing peers’ essays; it is unclear whether this work took place in class. Instructors are often pressed for time, so it may not be possible to fit this amount of peer review into a composition course, especially at the lower levels.
I assume you are looking at peer-revision in an online context as a topic of focus? I'll have to see what I can learn. Just a day or two ago, I got the idea to try to make the peer revision process in my basic writing class a hybrid one. I haven't liked the superficial results of in-class peer revisions, and students seemed to have gained little. I wondered if it would help to develop a process where students would read the composition in a digital format and fill in a form that is emailed back to their classmate. It would, at any rate, limit their tendency to pencil in corrections, something that always seems to be a temptation for them.
I agree that this has some interesting possibilities for classroom use. I am with Laurie in that my class peer reviews are not successful at all...even less so in the online class than in person, so mine are already using a digital format. I would like to try an anonymous peer-review exercise to see if knowing their classmates is what might be impeding their feedback potential. What I hear is that they aren't confident enough in their own abilities to critique others...I do like at least a basic rubric, as it can help guide a reader and move away from only proofreading or grammar issues. A lot to think about from the article--thanks!