Outcomes assessment is a system for informing program-level decisions related to student learning with the best available evidence. Outcomes assessment is a systematic process for improvement, not simply a system for measurement.
Outcomes assessment involves systematic processes for
Specifically, outcomes assessment is a department's closed-loop feedback system for better achieving program-level curricular goals. Outcomes assessment relies on the information supplied by assessment measures as the basis for adjustments to the system, much like any closed-loop feedback system relies on the information supplied by sensors as the basis for adjustments to the system. For example, a temperature control system relies on information supplied by a thermocouple and a speed controller relies on information supplied by a tachometer, yet information from sensors is only part of each closed-loop control system. In the same way, outcomes assessment (the closed-loop feedback system) relies on information supplied by assessment measures (the sensors) to drive a system for improved achievement of goals.
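The closed-loop analogy can be sketched in code. The following is a toy temperature controller, not anything from the handbook; all names and constants are invented for illustration. The point is the structure of the loop: a sensor reading (the assessment measure) drives each adjustment toward the goal.

```python
# Minimal closed-loop temperature controller. The sensor reading drives each
# adjustment, just as assessment measures drive program-level changes.
# All names and constants here are illustrative.

def run_controller(setpoint, read_sensor, apply_heat, steps=10, gain=0.5):
    """Repeat the measure -> compare -> adjust cycle."""
    for _ in range(steps):
        measured = read_sensor()      # the sensor: an assessment measure
        error = setpoint - measured   # gap between the goal and the evidence
        apply_heat(gain * error)      # adjustment informed by the evidence

# Toy "plant" for demonstration: a room that warms when heated.
temperature = 15.0

def read_sensor():
    return temperature

def apply_heat(power):
    global temperature
    temperature += power  # heating moves the temperature toward the setpoint

run_controller(setpoint=20.0, read_sensor=read_sensor, apply_heat=apply_heat)
print(temperature)  # converges toward the 20.0 setpoint
```

Note that the sensor by itself improves nothing; improvement comes from the loop that acts on what the sensor reports. The same is true of assessment measures within an assessment process.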
A department conducts outcomes assessment when it devises and implements a system for:
Outcomes assessment improves student learning by increasing the effectiveness of decisions pertaining to student learning. Specifically, outcomes assessment increases the likelihood of improvement in student learning because decisions for change are not based on "educational fad(s) or on vague notions about what might be effective" (Walvoord, 2004, p. 6).
Astin (1993, p. 130) described the benefits of assessment by analogy to the benefits of a dancer using a mirror. Just as a dancer uses a mirror to "systematic(ally)...critique and correct her own performance" (Walvoord, 2004, p. 6), assessment can help a department systematically critique and correct the ways it facilitates and supports student learning.
Is assessment complicated? Not necessarily. The spirit of outcomes assessment is straightforward: making program-level decisions related to student learning based on the best available evidence. Complicated assessment processes can divert attention from this simple vision, squandering resources on activities that do not lead to improvement in student outcomes.
Doing assessment is not necessarily complicated. Pat Walsh, an ABET Evaluator Team Chair and Managing Consultant for Human Capital at IBM Business Consulting Services, summed it up in three simple questions:
Walsh's three questions are echoed in Walvoord's (2004) "three steps of assessment" (p. 3).
Whether grading is part of outcomes assessment depends on how the information is used.
Grading is an integral part of outcomes assessment if the information gained while grading is "systematically used for department-level decision making" (Walvoord, 2004, p. 6). Grading is not part of outcomes assessment if it is performed exclusively for assigning grades. Even if the information gained while grading is used for improving teaching in that course, it is not part of assessment unless the information is also used to inform program-level decision making. Grading is focused on strengths and weaknesses in each individual student's learning for use by each student. Scoring for assessment is focused on patterns of strengths and weaknesses in a group of students for use by program-level decision makers. When grading is used for assessment, a second process of identifying patterns among students is necessary.
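That "second process" of finding patterns can be sketched as a few lines of code. The criterion names, the 4-point scale, and the scores below are invented for the example; the idea is that individual rubric scores used for grading are aggregated into group-level patterns for program decision makers.

```python
# Hypothetical per-student, per-criterion rubric scores (4-point scale).
# Grading uses each row individually; assessment looks across the rows.
scores = {
    "Student A": {"analysis": 4, "design": 2, "communication": 3},
    "Student B": {"analysis": 3, "design": 2, "communication": 4},
    "Student C": {"analysis": 4, "design": 1, "communication": 3},
}

criteria = ["analysis", "design", "communication"]

# Group average per criterion: the pattern, not any one student's grade.
averages = {
    c: sum(s[c] for s in scores.values()) / len(scores) for c in criteria
}

# A low group average flags a weakness the program, not one student, must address.
weakest = min(averages, key=averages.get)
print(weakest, round(averages[weakest], 2))  # -> design 1.67
```

Here every student scored well on analysis but poorly on design, a pattern invisible in any single grade but exactly the kind of evidence program-level decisions need.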
According to Walvoord (2004), the process of grading can be part of outcomes assessment if:
Ideally, assessment is incorporated into daily practice. In that way assessment will not be perceived as an extra burden. For example, as much as possible, integrate assessment into the curriculum by using scoring rubrics for student assignments. By using scoring rubrics, faculty can gather information for program-level assessment while they do their grading. This approach minimizes faculty workload and makes assessment activities an integral, logical part of the students' education. For example, a program might
For outcomes assessment, program refers to an entire program of study for a student, typically an entire degree program, such as a B.S.E. in Civil Engineering or a Ph.D. in Chemical Engineering. A program is more than a collection of individual courses and experiences that are required for the degree; a program is the integration of all the component parts, including general education courses, cognate courses, advising, and co-curricular experiences.
General assessment literature uses 'outcomes' and 'objectives' interchangeably. However, ABET clearly defines the terms for engineering assessment:
Direct and indirect assessments are often called direct and indirect measures. (ABET uses 'measures' and 'methods'.) Direct measures of a learning outcome "reveal what students know and can do [while] indirect measures … suggest why performance was above or below expectations and what might be done to improve the processes of education" (Banta, 2004, p. 5). Using a combination of direct and indirect measures is advisable, because they offer complementary information. However, assessment plans must include direct measures in order to supply credible information for decision-making (Palomba & Banta, 1999).
The term "direct measure" focuses on the type of evidence produced. Other common terms for direct measures focus on the type of student activity that produces direct evidence (e.g., performance assessment and authentic assessment) or on the integration of such assessment with instruction (e.g., embedded assessment).
Rubrics are systematic scoring methods that use pre-determined criteria to measure student learning. They help instructors score and grade student work more objectively and consistently.
There are two types of rubrics: holistic and analytical. In a holistic rubric, the entire performance is evaluated and scored as a whole. In an analytic rubric, the performance is evaluated and scored on several distinct criteria. Analytic rubrics are common for engineering assignments.
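The structural difference between the two rubric types can be shown in a short sketch. The criteria, scale, and weights below are invented for illustration, not taken from any particular rubric.

```python
# Holistic vs. analytic scoring, schematically. Criteria and weights are
# invented for the example.

def holistic_score(overall_level: int) -> int:
    """Holistic rubric: the whole performance receives one score (e.g., 1-5)."""
    return overall_level

def analytic_score(levels: dict, weights: dict) -> float:
    """Analytic rubric: score each criterion separately, then combine
    (here, as a weighted sum)."""
    return sum(levels[c] * weights[c] for c in levels)

# A hypothetical lab report scored on three criteria.
report_levels = {"technical_content": 4, "organization": 3, "style": 2}
report_weights = {"technical_content": 0.5, "organization": 0.3, "style": 0.2}

print(holistic_score(3))                              # one overall judgment
print(analytic_score(report_levels, report_weights))  # 4*0.5 + 3*0.3 + 2*0.2
```

The analytic form is favored for engineering assignments partly because the per-criterion scores are exactly what program-level pattern analysis needs.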
Further information on why rubrics are important, more example rubrics, and how to create a rubric may be helpful.
Choosing measures is best considered part of an overall assessment plan, so this question is addressed in the handbook section on "creating assessment plans".
Accreditation Board for Engineering and Technology (ABET). (2006). 2007-2008 Criteria for Accrediting Engineering Programs. Retrieved January 5, 2007 from http://www.abet.org/forms.shtml
Astin, A. W. (1993). What Matters in College: Four Critical Years Revisited. San Francisco: Jossey-Bass.
Banta, T. W. (2004). Introduction: What are some hallmarks of effective practice in assessment? In T. W. Banta (Ed.) Hallmarks of Effective Outcomes Assessment. San Francisco: Jossey-Bass.
Palomba, C. A. & Banta, T. W. (1999). Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco: Jossey-Bass.
Walvoord, B. E. (2004). Assessment Clear and Simple. San Francisco: Jossey-Bass.