{{{ #!html

A Course-level Strategy for Continuous Improvement


Changes in US manufacturing practice over the past several decades to emphasize quality control theory have precipitated similar changes in engineering accreditation.  While not typically viewed as arising from political, economic and social influences, the ABET Engineering Criteria, with its requirement of outcomes-based assessment, has its origin in the American quality movement of the 1980s and 1990s, an economic response to global manufacturing competitiveness.

The Accreditation Board’s quality-driven movement has generated an unprecedented sensitivity to the assessment and tracking of student performance.  The resulting flurry of activity has many faculty and departments searching for and inventing models for assessment and making linkages to teaching and learning.  Among the many models are those that use various forms of skills assessment, directly apply quality control theory, use student self-assessments and/or use other tools for tracking outcomes.  Most published efforts, however, have reported their processes in descriptive and qualitative terms.

The present experiment proposes a quantitative strategy for directly connecting student performance to skill-based (ABET) outcomes and illustrates the methodology across a variety of course formats implemented by the author over a three-year period at Tennessee Technological University (TTU).

Here, a distinction is drawn between the pre-ABET Engineering Criteria assessment environment and the current environment, in which outcomes must be explicitly defined, performance tracked and assessment results used in a continuous improvement process.  Prior to the ABET Engineering Criteria (formerly called Engineering Criteria 2000), most faculty in engineering colleges designed their courses in the requirements domain, in which the instructor places a value on course “requirements” such as homework, exams, attendance and projects, and scores each accordingly, typically with a single lumped grade for each.  This creates the familiar “course breakdown,” referred to here as the “requirements breakdown.”  By the end of the course, the faculty and students know how they performed on the requirements, i.e., what grade they received for homework or exams.
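
As a point of reference, the requirements domain can be pictured as nothing more than a weighted breakdown of lumped scores.  The sketch below is illustrative only; the requirement names, weights and scores are hypothetical and not drawn from the study.

{{{
#!python
# The requirement names, weights and scores below are illustrative only.
requirements_breakdown = {
    "homework": 0.20,
    "exams": 0.50,
    "project": 0.25,
    "attendance": 0.05,
}

# One lumped score per requirement for a hypothetical student (0-1 scale).
student_scores = {
    "homework": 0.85,
    "exams": 0.78,
    "project": 0.90,
    "attendance": 1.00,
}

# The course grade is the weighted average of the lumped requirement scores;
# nothing in this record says which skill-based outcomes the points were
# earned against.
course_grade = sum(requirements_breakdown[r] * student_scores[r]
                   for r in requirements_breakdown)
print(f"Course grade: {course_grade:.2f}")
}}}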

To satisfy ABET, however, it is not enough to provide the requirements-domain performance breakdown alone.  In the ABET environment, questions such as the following must be answered: How did students perform against Criterion 3x?  What changes were made to improve student outcomes as measured against Criteria… ?  What strategy is being used to ensure continuous improvement?  These demands, and many others like them, explicitly create a new assessment environment that one might justifiably call an outcomes domain.  As a result, the traditional requirements domain must be extended into the outcomes domain, since by itself it is inadequate for answering most outcomes-based questions.

In the requirements domain, students are assessed according to what fraction of an overall problem is correct, generally irrespective of which skill-based elements of the problem are correct or incorrect.  In the outcomes domain, skill-based outcomes are identified, and elements of each requirement are mapped to specific outcomes that are independently scored.  Fortunately, this does not mean that an entirely new set of assessments must be designed or defined; rather, we must become sensitive to the outcomes that are already present in the assessments we use.  Existing textbooks and faculty files are filled with excellent assessment challenges; the elements of outcomes, however, must be identified in each and assessed accordingly.  Done correctly, this produces not only a requirements-based scorecard but also an outcomes-based performance record.
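
As a minimal sketch of this idea (not the author’s actual record-keeping instrument), the fragment below tags the elements of one existing exam problem with the outcome each exercises and scores them independently, yielding both the traditional lumped score and an outcomes-based record; all labels and point values are hypothetical.

{{{
#!python
from collections import defaultdict

# Elements of a single existing exam problem, each tagged with the outcome
# it exercises: (outcome label, points possible, points earned).
exam_problem_elements = [
    ("outcome_1_apply_math",  4, 4),
    ("outcome_3_design",      6, 3),
    ("outcome_5_communicate", 2, 2),
]

lumped_earned = 0                              # traditional requirements-domain score
outcome_record = defaultdict(lambda: [0, 0])   # outcome -> [earned, possible]

for outcome, possible, earned in exam_problem_elements:
    lumped_earned += earned
    outcome_record[outcome][0] += earned
    outcome_record[outcome][1] += possible

lumped_possible = sum(possible for _, possible, _ in exam_problem_elements)
print(f"Lumped problem score: {lumped_earned}/{lumped_possible}")
for outcome, (earned, possible) in outcome_record.items():
    print(f"{outcome}: {earned}/{possible}")
}}}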

In a case study lasting three years, the author implemented outcomes-domain record keeping in one conventional lecture-based course and tested the approach in three other learning environments: a laboratory (hands-on environment), a non-traditional technical elective (inquiry-/discovery-based environment) and a seminar (a self-learning environment).  Results from the three-year study in the lecture-based course were examined by comparing the yearly outcomes breakdown, class-average performance against course outcomes and course requirements as a function of time, and term-end class-average performance against outcomes.

When requirements are quantitatively mapped to outcomes, it is simple for the instructor to identify which outcomes are focal points and which requirements assess which outcomes, and to adjust course content accordingly if imbalances are developing.  Ordinarily, the instructor knows only what fraction of the course is exams, homework, etc., i.e., the requirements breakdown.  When using the outcomes-domain assessment strategy, the instructor also knows what fraction of the course content addresses outcome 1 or outcome 2, and what fraction of each outcome is assessed by each requirement.
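
A sketch of this bookkeeping, under the assumption that each cell of a small requirement-by-outcome table holds the course points a requirement devotes to an outcome (all names and values are hypothetical):

{{{
#!python
# Hypothetical course points that each requirement devotes to each outcome.
points = {
    "homework": {"outcome_1": 60,  "outcome_3": 20, "outcome_5": 20},
    "exams":    {"outcome_1": 120, "outcome_3": 60, "outcome_5": 20},
    "project":  {"outcome_1": 20,  "outcome_3": 50, "outcome_5": 30},
}

total_points = sum(sum(row.values()) for row in points.values())
outcomes = sorted({o for row in points.values() for o in row})

# Outcomes breakdown: what fraction of the whole course each outcome represents.
outcome_fraction = {o: sum(points[r][o] for r in points) / total_points
                    for o in outcomes}

# For each outcome, what fraction of it is assessed by each requirement.
share_by_requirement = {
    o: {r: points[r][o] / sum(points[q][o] for q in points) for r in points}
    for o in outcomes
}

print(outcome_fraction)
print(share_by_requirement["outcome_3"])
}}}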

Time-based tracking was implemented so that performance against each outcome could be evaluated at any point during the term.  This proved useful in identifying whether more or less emphasis should be placed on a particular outcome, for example if the assessment was over-testing on one or more outcomes or if student performance was poor as measured against a particular outcome.  This was nicely demonstrated in the case study, in which design was found to be de-emphasized and design scores were likewise lower than scores against other outcomes.
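
One way to picture the time-based tracking, assuming nothing more than a running sum of class-average points earned and points possible per outcome (the weeks, labels and scores below are invented for illustration):

{{{
#!python
from collections import defaultdict

# Illustrative graded events through the term:
# (week, outcome label, class-average points earned, points possible).
graded_events = [
    (2, "outcome_1", 38.0, 40),
    (2, "outcome_3", 12.0, 20),
    (5, "outcome_1", 70.5, 80),
    (5, "outcome_3", 22.0, 40),
    (9, "outcome_3", 31.0, 60),
]

running = defaultdict(lambda: [0.0, 0.0])   # outcome -> [earned, possible] so far
for week, outcome, earned, possible in sorted(graded_events):
    running[outcome][0] += earned
    running[outcome][1] += possible
    pct = 100 * running[outcome][0] / running[outcome][1]
    print(f"week {week}: {outcome} running class average = {pct:.0f}%")
}}}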

With a complete, ongoing record of both class-average and individual student performance against each course outcome, the instructor is able to engage in real-time intervention at both the course and the individual student level.  Students performing poorly against a specific outcome can be given extra help, and classroom-level changes can be made if large numbers of students are having trouble in a given outcome area.
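
The intervention triggers might be sketched as follows, assuming an ongoing record of per-student, per-outcome percentages and a cut-off chosen by the instructor (all values below, including the 70% threshold, are illustrative):

{{{
#!python
# Hypothetical per-student, per-outcome percentages and an illustrative
# intervention threshold; none of these values come from the study.
student_outcome_pct = {
    "student_A": {"outcome_1": 88, "outcome_3": 55},
    "student_B": {"outcome_1": 92, "outcome_3": 81},
    "student_C": {"outcome_1": 74, "outcome_3": 62},
}
THRESHOLD = 70

# Student-level trigger: who needs extra help, and on which outcome?
for student, scores in student_outcome_pct.items():
    for outcome, pct in scores.items():
        if pct < THRESHOLD:
            print(f"{student} is below {THRESHOLD}% on {outcome}")

# Classroom-level trigger: is a whole-class change warranted for any outcome?
outcome_labels = sorted({o for scores in student_outcome_pct.values() for o in scores})
for outcome in outcome_labels:
    avg = sum(scores[outcome] for scores in student_outcome_pct.values()) / len(student_outcome_pct)
    if avg < THRESHOLD:
        print(f"class average on {outcome} is {avg:.0f}%; consider classroom-level changes")
}}}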

The methodology was found to be particularly helpful for courses mapped to the more difficult-to-define-and-assess outcomes, e.g., performance on interdisciplinary teams or life-long learning skills.  In these contexts, authentic assessments must be identified, forcing one to consider what constitutes performance competency and likewise creating a record of genuine assessment.

Finally, the methodology becomes generally useful only when it can be related to curriculum-level continuous improvement.  Ultimately, the objective must be to integrate the course-level information into a process that is summative and probes deep, retained learning.  If strategically implemented throughout the curriculum in early, mid-curriculum and capstone courses, this or a similar methodology may have value as one part of a comprehensive evaluation system.

Author 1: Joseph Biernacki, jbiernacki@tntech.edu

}}} [https://stemedhub.org/groups/cleerhub/wiki/issue:1055 : Back to 2006 Winter Issue Vol. 2, No. 1] [[Include(issues:footer)]]