

Peer Assessment in the Algorithms Course

            The motivation for this study was to gain a better understanding of the effects of peer assessment on student learning. Specifically, the study addresses the following research questions: (1) Can the skills associated with evaluation and critical judgment be taught in the computer science classroom? (2) Is there a relationship between student performance in peer assessment and performance in other assessment activities in the course? (3) To what extent does peer assessment reveal student misconceptions and unsound lines of reasoning?

            The pedagogical value of peer assessment is expressed in several models. In Bloom’s taxonomy, evaluation is the highest in the hierarchy of assessment tasks. In the Perry model, the more advanced stages of intellectual development of students are characterized by a greater reliance on standards of evidence and less reliance on appeals to authority. Thus, the instructor of the course is not the only legitimate source of knowledge. Finally, peer assessment can be seen as a way to apply Vygotsky’s concept of the zone of proximal development, which is the set of activities a student can perform, but only with the assistance of others.

            Students’ assessments of other students’ work in the algorithms course were collected and analyzed. Two or three clear deficiencies of each student solution to a problem were identified, and then each student assessment of that solution was analyzed to see if it identified those deficiencies. For each deficiency, a rating of 0, 1, 2, or 3 was assigned to each student assessment according to the following rubric:

3: the assessment recognized the error and either offered a correct explanation of why it was wrong or offered a suggestion that would lead to a correct solution

2: the assessment recognized the error and either said nothing more or provided a partially correct explanation or suggestion

1: either did not recognize the error, or recognized the error but provided an incorrect explanation or suggestion

0: did not recognize the error and showed additional evidence of not understanding the problem or fundamental concepts relevant to the error.

            Also, each deficiency in the student solution was classified as either conceptual, explanatory, or technical.
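
            As a rough sketch (not part of the article), the 0-3 rubric and the three deficiency categories could be tabulated along the following lines. The Python names and data structures here are hypothetical, chosen only to illustrate how per-category rubric scores might be aggregated; the article's actual analysis procedure is not described at this level of detail in this summary.

from dataclasses import dataclass
from enum import Enum
from statistics import mean


class DeficiencyType(Enum):
    CONCEPTUAL = "conceptual"
    EXPLANATORY = "explanatory"
    TECHNICAL = "technical"


@dataclass
class RatedAssessment:
    """One student assessment rated against one known deficiency (rubric score 0-3)."""
    deficiency: DeficiencyType
    rating: int


def mean_rating_by_type(ratings):
    """Average rubric score per deficiency category (hypothetical analysis step)."""
    by_type = {t: [] for t in DeficiencyType}
    for r in ratings:
        by_type[r.deficiency].append(r.rating)
    return {t: mean(scores) for t, scores in by_type.items() if scores}


# Hypothetical example: three rated assessments.
sample = [
    RatedAssessment(DeficiencyType.CONCEPTUAL, 3),
    RatedAssessment(DeficiencyType.CONCEPTUAL, 2),
    RatedAssessment(DeficiencyType.TECHNICAL, 1),
]
print(mean_rating_by_type(sample))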

            The main results of the study are: (1) students’ ability to assess other students’ work does not diminish as the quarter progresses; in fact, it appears to improve slightly over time, even though the problems later in the course are more complex than those in the early part; (2) there is a strong correlation between student performance in the peer assessment activity and performance in the other assessments (i.e., homework and exams) in the course; and (3) students appear to be better at identifying conceptual errors than explanatory errors, and better at identifying explanatory errors than technical errors. Note that in result (2), no causal relationship in either direction was established. Result (3) might be explained by students simply not wanting to write out the relatively tedious details needed to demonstrate that a technical error was indeed an error.
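
            To make the notion of correlation in result (2) concrete, one way to compute it from paired per-student scores is sketched below. The score lists are invented for illustration, and the article's actual statistical procedure is not specified in this summary.

from statistics import mean, pstdev


def pearson_correlation(xs, ys):
    """Pearson correlation between two paired score lists, e.g. each student's
    average peer-assessment rubric score vs. that student's homework/exam score."""
    assert len(xs) == len(ys) and len(xs) > 1
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))


# Hypothetical data: each position is one student.
peer_scores = [2.1, 1.4, 2.8, 0.9, 2.5]
exam_scores = [78.0, 65.0, 91.0, 55.0, 84.0]
print(pearson_correlation(peer_scores, exam_scores))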

            The study indicates that it is feasible to incorporate peer assessment into the algorithms course in computer science. Evaluation of other people’s solutions is an important skill all engineers should develop, and so such activities can and should be a part of a course’s design.

Author 1: Donald Chinn dchinn@u.washington.edu

Article Link: http://portal.acm.org/citation.cfm?id=1067468&coll=portal&dl=ACM&CFID=9122936&CFTOKEN=32438184


: Back to 2009 Spring/Summer Issue Vol. 4, No. 3

: Back to List of Issues

: Back to Table of Contents
