Learning Styles Across the Curriculum
Research has shown that there's a relationship between learning style and performance in specific subject areas, including computer science. Before conducting the research reported in the subject paper, we'd performed a smaller-scale study examining the relationships between student learning style and performance in an introductory computer science course. We discovered enough statistically significant results that we decided to conduct a broader study of the relationships between student learning style and performance across the required courses in a standard undergraduate computer science curriculum at the U.S. Air Force Academy (USAFA). The research question for this study was "What are the relationships, if any, between student learning style and student performance across the entire computer science curriculum?"
The data required for this study included both learning style and course performance data for each student. We used three instruments to collect the learning style data: the Felder Index of Learning Styles, the Kolb Learning Styles Inventory II '85, and the Keirsey Temperament Sorter. We administered these instruments in the introductory computer science course, which is required for all students at USAFA, from Fall 1997 through Fall 2000. Because we had a standard department process for archiving course data at the end of each semester, student performance data for each course were, with a few exceptions, available from the course archives.
The dataset for the study included computer science majors from the Classes of 2001, 2002, 2003, and 2004. We only included students who graduated with a computer science degree, yielding a group of 53 students. The independent variables for our analysis were the measures of student learning style discussed above. Specifically, the set of independent variables comprised the Felder scores for the four dimensions (Active/Reflective, Sensing/Intuitive, Visual/Verbal, and Sequential/Global), the Kolb scores for the learning cycle (Concrete Experience, Reflective Observation, Abstract Conceptualization, and Active Experimentation), and the Keirsey classifications for the four dimensions (Extravert/Introvert, Intuitor/Sensor, Thinker/Feeler, and Judger/Perceiver). Course performance data for the required computer science courses provided the response variables for our analysis. For each course, we included each student's percentage on each assessment in the course as well as their overall percentage and grade in the course. The resulting set of response variables comprised 89 variables for the 12 courses included in the study.
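To make the shape of this dataset concrete, the following Python sketch lays out one plausible organization: one row per student, twelve learning-style columns, and 89 response columns. All column names here are illustrative placeholders, not the variable names used in the study.

    import pandas as pd

    # Independent variables: 4 Felder dimensions, 4 Kolb learning-cycle
    # scores, and 4 Keirsey dimension classifications (12 in total).
    felder_cols = ["felder_active_reflective", "felder_sensing_intuitive",
                   "felder_visual_verbal", "felder_sequential_global"]
    kolb_cols = ["kolb_concrete_experience", "kolb_reflective_observation",
                 "kolb_abstract_conceptualization", "kolb_active_experimentation"]
    keirsey_cols = ["keirsey_extravert_introvert", "keirsey_intuitor_sensor",
                    "keirsey_thinker_feeler", "keirsey_judger_perceiver"]

    # Response variables: per-assessment percentages plus an overall
    # percentage and a grade for each of the 12 courses (89 columns in all);
    # only a few placeholder columns are shown here.
    response_cols = ["course1_exam1_pct", "course1_overall_pct", "course1_grade"]

    students = pd.DataFrame(columns=felder_cols + kolb_cols + keirsey_cols + response_cols)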
The Keirsey classifications for the different dimensions are dichotomous and unranked; they're therefore measured on the nominal scale. We used a combination of the Mann-Whitney and Kolmogorov-Smirnov Z tests to compare the performance distributions of the two groups for each dimension (Extraverts and Introverts, for example). The Judger/Perceiver dimension yielded 11 of the 22 statistically significant results from this part of the study; note that this represents statistically significant results in this dimension for over 12% of the response variables included in the analysis. Students who were classified as judgers performed better than students who were classified as perceivers for 8 of the response variables. The Intuitor/Sensor dimension yielded 7 of the 22 significant results from this part of the study; in all 7 cases, students who were classified as sensors performed better than students who were classified as intuitors. The Thinker/Feeler and Extravert/Introvert dimensions yielded only 3 and 1 significant results, respectively.
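For readers who want to reproduce this kind of comparison, the sketch below applies SciPy's implementations of the two tests to one hypothetical response variable. The sample values and the rule for combining the two p-values are our own illustrative assumptions, not the paper's procedure.

    from scipy.stats import ks_2samp, mannwhitneyu

    def compare_groups(group_a, group_b, alpha=0.05):
        # Compare one response variable (e.g., an exam percentage) between
        # the two classifications of a Keirsey dimension.
        _, mw_p = mannwhitneyu(group_a, group_b, alternative="two-sided")
        _, ks_p = ks_2samp(group_a, group_b)
        # Illustrative combination rule: flag the result only if both
        # nonparametric tests agree that the difference is significant.
        return {"mann_whitney_p": mw_p, "ks_p": ks_p,
                "significant": mw_p < alpha and ks_p < alpha}

    # Hypothetical exam percentages for judgers and perceivers.
    judgers = [88.0, 92.5, 79.0, 85.5, 90.0, 83.0]
    perceivers = [72.0, 81.5, 68.0, 77.0, 74.5, 70.0]
    print(compare_groups(judgers, perceivers))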
Unlike the Keirsey classifications, the Felder and Kolb independent variables are measured on the interval scale. To examine the relationship between these independent variables and student performance, we correlated each of them with each of the response variables using Pearson's correlation coefficient. We found 53 statistically significant correlations between the Felder and Kolb variables and the response variables. The most compelling results involved Kolb's measure of predilection toward Concrete Experience: we found 19 such correlations, with magnitudes ranging from 0.298 to 0.737. All of these correlations were negative, indicating that students with a stronger predilection toward Concrete Experience tended to perform more poorly on a wide variety of assessments in 5 of the 12 courses in the dataset. We also found 10 significant correlations between Felder's Sequential/Global dimension and the response variables, ranging in magnitude from 0.315 to 0.622. All of these correlations were also negative, indicating that students classified as more sequential than global tended to perform better on more than 11% of the response variables.
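The correlation screen itself is straightforward to reproduce. The sketch below, again with hypothetical column names, runs Pearson's correlation over every (learning-style score, response variable) pair and keeps the significant results; the alpha level is an assumption.

    from scipy.stats import pearsonr

    def significant_correlations(styles, performance, alpha=0.05):
        # styles and performance are pandas DataFrames with one row per
        # student; styles holds columns such as "kolb_concrete_experience",
        # and performance holds one column per response variable.
        results = []
        for style_col in styles.columns:
            for perf_col in performance.columns:
                r, p = pearsonr(styles[style_col], performance[perf_col])
                if p < alpha:
                    results.append((style_col, perf_col, round(r, 3), round(p, 4)))
        # Sort by correlation magnitude, strongest first.
        return sorted(results, key=lambda row: abs(row[2]), reverse=True)

A negative r for a column like kolb_concrete_experience would match the pattern reported above.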
We're faced with an interesting paradox as we consider the results of our analysis. As researchers, we're pleased to find numerous statistically significant results. As teachers, however, we would rather find that our teaching techniques foster an environment in which all student learning styles are addressed, so that a student's learning style doesn't have any noticeable effect on their course performance. Although we found a wide variety of statistically significant results in our analysis, we believe it's unreasonable to expect significant results across all course assessments for all dimensions of the various learning style models included in the study. Such wide-ranging results would be more indicative of an unbalanced learning environment that caters to specific learning styles to the exclusion of others than a source of meaningful insight about learning styles and student performance. As more empirical work is completed in this area, however, we may discover persistent relationships between learning style and student performance on particular computer science activities.
Although our approach is generally applicable across universities and disciplines, we caution that there are limits to the generality of our specific results. As with any dataset drawn from a single university, the computer science students at USAFA may not form a representative sample of computer science students in general. We therefore suggest that others apply similar approaches to the computer science curricula at their own schools. Some of the resulting insights would be course-specific, supporting pedagogical changes to a course as required, while others might contribute to more general insights about the relationships between learning style and student performance across typical computer science curricula.
Author 1: A.T. Chamillard; chamillard@cs.uccs.edu
Author 2: Ricky E. Sward; ricky.sward@usafa.edu
Article Link: http://portal.acm.org/citation.cfm?id=1067512