{{{ #!html
My study was almost an accident of discovery. While studying the results of the 1988 Advanced Placement Exam in Computer Science, I stumbled upon a strange pattern of correlations. I wasn't looking for this pattern, but once I saw it, it clearly suggested that a small group of questions on the multiple-choice section of the exam were testing some kind of aptitude that predicted success on the entire exam.
Computer science instructors have long believed that some students learn programming more quickly than others, a distinction often described as those who "get it" and those who don't. Donald Knuth has written about this phenomenon in Selected Papers on Computer Science, where he theorizes that approximately 2% of the population are adept at what he calls algorithmic thinking. Most recently the topic was raised in a paper called "The Camel Has Two Humps," which generated a great deal of discussion.
This issue is important to computer science educators in several respects. If it is true that some students have a greater aptitude for computer science, then it would be helpful to identify them as early as possible. We might design different instructional experiences for students depending upon their level of aptitude. And we might find that we can improve instruction for all students if we better understand the thought processes involved in computer science problem solving.
The data for the study came from the 1988 Advanced Placement Exam in Computer Science. The exam was offered in two formats, with 7,374 students taking the AB exam and 3,344 students taking the A exam (the A exam was a subset of the AB exam). I primarily studied Pearson correlations between the various exam items.
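As an illustration only, and not the actual analysis code from the study, an item-to-item correlation analysis of this kind might be sketched as follows in Python with NumPy; the score matrix, the item count, and the random placeholder data below are assumptions made purely for the example.
<pre>
import numpy as np

# Hypothetical score matrix: one row per student, one column per
# multiple-choice item, with 1 for a correct answer and 0 otherwise.
# The shape and the random data are placeholders, not the 1988 data.
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(7374, 50))

# Pearson correlation between every pair of items.
corr = np.corrcoef(scores, rowvar=False)

# For each item, count how many other items it correlates with at 0.2 or higher.
threshold = 0.2
high = corr >= threshold
np.fill_diagonal(high, False)
counts = high.sum(axis=1)

for item, count in enumerate(counts):
    print(f"item {item:2d}: {count} correlations at {threshold} or higher")
</pre>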
I found a particular group of five questions that had statistically significant correlations (0.2 or higher) with a large number of other test items. Correlations that high were uncommon in the data, so it was odd to find five questions that each had so many of them. These questions also correlated well with the free-response (hand-written) questions. As an example, one multiple-choice question correlated at 0.2 or higher with 23 other multiple-choice questions (nearly half) and was the most highly correlated item for four of the five free-response questions. That particular question inspired the title of my paper because it involved understanding the rather strange line of code "b := (b = false)".
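To see why that statement is strange: the exam used Pascal, where ":=" is assignment and "=" is equality comparison, so the statement assigns to b the result of testing whether b equals false, which simply negates b. The following Python translation is offered purely as an illustration of that equivalence.
<pre>
# Pascal's "b := (b = false)" compares b with false and assigns the
# result back to b; in Python terms that is "b = (b == False)", which
# behaves the same as "b = not b": the statement toggles the flag.
for b in (True, False):
    toggled = (b == False)   # literal translation of the Pascal expression
    assert toggled == (not b)
    print(b, "becomes", toggled)
</pre>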
I believe my study provides strong evidence for Knuth's notion of algorithmic thinking, but the result obviously deserves a great deal more study, and many questions remain unanswered. For example, do students possess this kind of aptitude before they ever study computer science, or does the aptitude develop with practice? If we repeat the study with another group of AP students, will we see a similar pattern? Can we develop an assessment tool that has the predictive ability of these five questions?
Author: Stuart Reges; reges@cs.washington.edu
Article Link: http://portal.acm.org/citation.cfm?id=1352135.1352147&coll=GUIDE&dl=GUIDE&CFID=35014869&CFTOKEN=56797477
}}} [https://stemedhub.org/groups/cleerhub/wiki/issue:1334 : Back to 2009 Spring/Summer Issue Vol. 4, No. 3] [[Include(issues:footer)]]