
Technology Talks: Clickers and Grading Incentive in the Large Lecture Hall

While perusing articles for an education research journal club, I read an article that struck me because of the simplicity of the study, the ease with which it could be replicated, and the cleverness of the research questions being asked. Mark James' paper on clickers inspired me to dig deeper into the clicker literature and, eventually, to conduct my own clicker study based in large part on his work (James, 2006). In his study, James examined the grading incentive attached to clicker questions in two lower-level astronomy courses: in one classroom (low stakes) clicker points were awarded for participation alone, whereas in the other (high stakes) points were awarded only for choosing the correct response to a posed clicker question. He found that pairs of students in the low-stakes classroom were more likely to have conversations in which both partners contributed equally, and were less likely to block vote (i.e., each partner choosing the identical response to a given clicker question) than their peers in the high-stakes classroom, where conversations were often dominated by a single student.

We chose to replicate this study in our large introductory astronomy courses for several reasons: both sections were taught by the same instructor (one of the authors of this study), and the material was identical in each section, as were the clicker questions asked. Because the two sections were essentially identical except for the grading incentive attached to clicker questions, we could run a tightly controlled test of the research question: how does the grading incentive of clicker questions alter student learning, behavior, and voting patterns? After asking students to sign informed consent forms (those who granted permission to be in the study were eligible for a raffle at the end of the semester), we began recording groups of four students while they discussed clicker questions during class. In each section, student groups were chosen at random to be recorded on three predetermined dates during the semester. Only groups in which all four students had signed consent forms were put into the selection pool, and as a result some groups were recorded more than once. We also collected quantitative data: overall course grades, pretest and posttest scores on a reliable, validated astronomy diagnostic test (ADT; Deming, 2002), and each student's answers to the clicker questions posed during the semester. After the first semester of the study, we kept the different grading incentives in the two sections but did not record the students.
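
For readers interested in replicating the protocol, the group selection amounts to sampling uniformly from the fully consented groups on each recording date. A minimal Python sketch follows; the data layout and function names are our illustration, not code from the study.

{{{#!python
import random

def pick_groups_to_record(groups, consented, n_record, seed=None):
    """Randomly pick groups to record on a given class day.

    groups    -- list of groups, each a tuple of four student IDs
    consented -- set of student IDs who signed informed consent forms
    n_record  -- how many groups to record that day

    Only groups in which all four members consented are eligible;
    because the draw is repeated on each date, some groups can be
    recorded more than once.
    """
    eligible = [g for g in groups if all(s in consented for s in g)]
    return random.Random(seed).sample(eligible, n_record)

# Illustrative use: three groups, one not fully consented.
consented = {"s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"}
groups = [("s1", "s2", "s3", "s4"), ("s5", "s6", "s7", "s8"),
          ("s1", "s9", "s10", "s11")]  # s9-s11 did not consent
print(pick_groups_to_record(groups, consented, n_record=1, seed=42))
}}}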

The voice recordings were transcribed, and a word count for each student was tallied. We found that students in the low-stakes classroom had more instances of asking a new question, stating an answer preference, and asking for clarification, whereas students in the high-stakes classroom spoke fewer words overall and were more likely than their counterparts to provide negative information (reasons not to choose a given answer) and to state uncertainty.
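
The word-count tally itself is straightforward. Here is a minimal Python sketch, under the assumption that each transcript is stored as (speaker, utterance) pairs; the data format and names are illustrative, not the study's actual analysis code.

{{{#!python
from collections import Counter

def word_counts(transcript):
    """Tally the words spoken by each student in a transcribed
    discussion. `transcript` is a list of (speaker, utterance)
    pairs, e.g. produced by hand-transcribing a recording.
    """
    counts = Counter()
    for speaker, utterance in transcript:
        counts[speaker] += len(utterance.split())
    return counts

# Example: a short exchange between two students in a group.
transcript = [
    ("student_A", "I think it's B because the Moon is in its first quarter."),
    ("student_B", "Why not C?"),
    ("student_A", "Because the terminator would face the other way."),
]
print(word_counts(transcript))  # Counter({'student_A': 20, 'student_B': 3})
}}}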

When we analyzed course grades, gains on the ADT, and the number of correct answers to clicker questions, an interesting pattern emerged. Differences in behavior between the high- and low-stakes classes were much more pronounced during the recorded semester of the study than during the non-recorded semester. During the first semester, block voting occurred about two-thirds of the time in the high-stakes class and less than half the time in the low-stakes class, whereas the next semester both classes block voted just over half the time. Although students in the high-stakes classroom chose the correct answer to clicker questions more often than students in the low-stakes classroom during both semesters, this did not translate into higher learning gains as measured by either the ADT or overall course grade. We concluded that the presence of the voice recorders may have altered student behavior (a possible observer effect), because students were visibly reminded several times during the semester that they were being studied! Because of this unexpected conclusion, we decided to continue the study for another academic year, and we have just finished collecting data from the 2008-2009 academic year in order to gain more insight into what role the voice recorders played in student behavior.
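
For concreteness, the two quantities behind these comparisons can be computed as in the following minimal Python sketch. The data layout is our illustration, and the Hake-style normalized gain shown is one common way to quantify pre/post diagnostic gains, not necessarily the exact measure used in the study.

{{{#!python
def block_vote_rate(group_votes):
    """Fraction of clicker questions on which every member of a
    group submitted the identical response (a "block vote").

    `group_votes` maps each question ID to the list of responses
    the group's members chose for that question.
    """
    responses = list(group_votes.values())
    blocked = sum(1 for r in responses if len(set(r)) == 1)
    return blocked / len(responses)

def normalized_gain(pre, post, max_score=100.0):
    """Hake-style normalized gain on a pre/post diagnostic:
    the fraction of possible improvement actually realized."""
    return (post - pre) / (max_score - pre)

# Illustrative data: one four-student group over three questions.
votes = {"Q1": ["A", "A", "A", "A"],   # block vote
         "Q2": ["B", "C", "B", "B"],
         "Q3": ["D", "D", "D", "D"]}   # block vote
print(block_vote_rate(votes))        # 0.666..., about 2/3
print(normalized_gain(40.0, 70.0))   # 0.5
}}}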

References:

Mark James, "The Effect of Grading Incentive on Student Discourse in Peer Instruction," American Journal of Physics, Vol. 74, 689 (2006).

Grace Deming, "Results from the Astronomy Diagnostic Test National Project," Astronomy Education Review, Vol. 1(1), 52 (2002).

Author 1: Shannon Willoughby; willoughby@physics.montana.edu
Author 2: Eric Gustafson

Article Link: http://scitation.aip.org/getpdf/servlet/GetPDFServlet?filetype=pdf&id=AJPIAS000077000002000180000001&idtype=cvips
