Frequently Asked Questions (FAQs)

What is outcomes assessment?

Outcomes assessment is a system for informing program-level decisions related to student learning with the best available evidence. Outcomes assessment is a systematic process for improvement, not simply a system for measurement.

Outcomes assessment involves systematic processes for

    • answering the question "How well does this program achieve its educational outcomes and, ultimately, its educational objectives?" and

    • using that information to inform decision-making.

Specifically, outcomes assessment is a department's closed-loop feedback system for better achieving program-level curricular goals. Outcomes assessment relies on the information supplied by assessment measures as the basis for adjustments to the system, much like any closed-loop feedback system relies on the information supplied by sensors as the basis for adjustments to the system. For example, a temperature control system relies on information supplied by a thermocouple and a speed controller relies on information supplied by a tachometer, yet information from sensors is only part of each closed-loop control system. In the same way, outcomes assessment (the closed-loop feedback system) relies on information supplied by assessment measures (the sensors) to drive a system for improved achievement of goals.
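
To make the feedback-loop analogy concrete, the following is a minimal sketch of a closed-loop control cycle in Python. It is purely illustrative: the names and numbers are hypothetical, and the thermostat stands in for any sensor-driven adjustment process, just as assessment measures stand in for sensors in outcomes assessment.

    # A minimal closed-loop feedback cycle, mirroring the analogy above.
    # In outcomes assessment: the "sensor" is an assessment measure, the
    # "setpoint" is a program goal, and the "adjustment" is a curricular
    # change. All names here are illustrative, not an assessment standard.

    def run_feedback_loop(setpoint, read_sensor, apply_adjustment, cycles=5):
        """Repeatedly measure, compare to the goal, and adjust."""
        for _ in range(cycles):
            measurement = read_sensor()      # e.g., thermocouple reading
            error = setpoint - measurement   # gap between goal and evidence
            apply_adjustment(error)          # e.g., change heater output

    # Toy plant: a room that drifts toward whatever the heater supplies.
    state = {"temperature": 15.0, "heater": 0.0}

    def read_sensor():
        # The room temperature slowly approaches the heater's output level.
        state["temperature"] += 0.5 * (state["heater"] - state["temperature"])
        return state["temperature"]

    def apply_adjustment(error):
        # Proportional control: push harder the farther we are from the goal.
        state["heater"] += 0.8 * error

    run_feedback_loop(setpoint=20.0, read_sensor=read_sensor,
                      apply_adjustment=apply_adjustment, cycles=10)
    print(round(state["temperature"], 1))  # approaches the 20.0 setpoint

The measurement alone changes nothing; it is the repeated compare-and-adjust cycle that drives the system toward its goal, which is the same point the analogy makes about assessment measures within outcomes assessment.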

A department conducts outcomes assessment when it devises and implements a system for:

    • "collecting credible evidence of the progress they are making in attaining their goals for student growth, development, and success and

    • using that evidence to improve their academic programs and student services continuously" (Banta, 2004, p. 1).

Evidence

For outcomes assessment, the term evidence means "indicators that will be useful for decision making", not "proof" (Walvoord, 2004, p. 2). Evidence is not simply a pile of data. At its best, evidence answers the burning questions of the department's faculty and staff. When the evidence has credibility with the people who rely on it, the result can be "'a culture of evidence' - an environment in which important decisions are based on the study of relevant data" (Banta, 2004, p. 6). For example, "decisions about curriculum, pedagogy, staffing, advising, and student support [can be based] upon the best possible data about student learning and the factors that affect it" (Walvoord, 2004, p. 2).

What are the benefits of outcomes assessment?

Outcomes assessment improves student learning by increasing the effectiveness of decisions pertaining to student learning. Specifically, outcomes assessment increases the likelihood of improvement in student learning because decisions for change are not based on "educational fad(s) or on vague notions about what might be effective" (Walvoord, 2004, p. 6).

Astin (1993, p. 130) described the benefits of assessment by analogy to the benefits of a dancer using a mirror. Just as a dancer uses a mirror to "systematic(ally)...critique and correct her own performance" (Walvoord, 2004, p. 6), assessment can help a department systematically critique and correct the ways it facilitates and supports student learning.

What does "doing assessment" look like?

Is assessment complicated? Not necessarily. The spirit of outcomes assessment is straightforward - making program-level decisions related to student learning based on the best available evidence. Complicated assessment processes can divert attention from this simple vision, squandering resources on activities that do not lead to improvement in student outcomes.

Doing assessment is not necessarily complicated. Pat Walsh, an ABET Evaluator Team Chair and Managing Consultant for Human Capital at IBM Business Consulting Services, summed it up in three simple questions:

"Do you have goals?"

"Do you measure 'em?" and

"Do you improve [based on your measurements]?"

Walsh's three questions are echoed in Walvoord's (2004) "three steps of assessment" (p. 3).

We're already grading. Isn't that assessment?

It depends on how the information is used.

Grading is an integral part of outcomes assessment if the information gained while grading is "systematically used for department-level decision making" (Walvoord, 2004, p. 6). Grading is not part of outcomes assessment if it is performed exclusively for assigning grades. Even if the information gained while grading is used for improving teaching in that course, it is not part of assessment unless the information is also used to inform program-level decision making.

Grading focuses on strengths and weaknesses in each individual student's learning, for use by that student. Scoring for assessment focuses on patterns of strengths and weaknesses across a group of students, for use by program-level decision makers. When grading is used for assessment, a second process of identifying patterns among students is therefore necessary.
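
As a sketch of that second process, the following Python fragment (with hypothetical scores and criterion names, assuming each student was already scored per criterion during grading) aggregates individual results to expose group-level patterns:

    # A sketch of the "second process": turning individual grades into
    # group-level patterns for program decisions. Data and names are
    # hypothetical; assume each student was scored 1-4 on each criterion.

    from statistics import mean

    scores = {
        "analysis":      [4, 3, 4, 2, 3, 4],
        "design":        [3, 3, 4, 3, 3, 3],
        "communication": [2, 1, 2, 3, 2, 2],
    }

    # Grading asks: how did *this student* do overall?
    # Assessment scoring asks: where is the *group* strong or weak?
    for criterion, group in sorted(scores.items(), key=lambda kv: mean(kv[1])):
        print(f"{criterion:<15} class mean = {mean(group):.2f}")

    # The output flags "communication" as the pattern of weakness, which
    # is the kind of evidence a program-level decision could act on.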

According to Walvoord (2004), the process of grading can be part of outcomes assessment if:

    • the student work (or performance) to be graded actually measures one or more intended learning outcomes,

    • the evaluation criteria are written "in sufficient detail to identify students' strengths and weaknesses" (p. 14), and

    • information about patterns in student strengths and weaknesses is used systematically to inform program-level decisions.

How will faculty and students have time for assessment?

Ideally, assessment is incorporated into daily practice so that it is not perceived as an extra burden. As much as possible, integrate assessment into the curriculum by using scoring rubrics for student assignments; faculty can then gather information for program-level assessment while they grade. This approach minimizes faculty workload and makes assessment activities an integral, logical part of the students' education (one way to track such embedded measures is sketched after the list below). For example, a program might

    • design internship evaluations so that they provide useful information about student performance on key learning outcomes,

    • incorporate senior projects or exit exams into a capstone course, or

    • pre-test students in an introductory course.
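
One lightweight way to keep such embedded measures organized is an outcome-to-measure coverage map. The sketch below uses hypothetical outcomes and measure names; it simply checks that every outcome is covered by at least one measure.

    # Hypothetical outcome-to-measure coverage map for an embedded-
    # assessment plan. Outcomes and measure names are illustrative only.

    coverage = {
        "design a system to meet needs": ["capstone project rubric",
                                          "internship evaluation"],
        "communicate effectively":       ["capstone presentation rubric"],
        "apply math and science":        [],  # not yet measured anywhere
    }

    for outcome, measures in coverage.items():
        status = ", ".join(measures) if measures else "NO MEASURE - gap in plan"
        print(f"{outcome}: {status}")

Reviewing such a map periodically shows at a glance where the plan gathers evidence as part of normal coursework and where a gap remains.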

What is a program?

For outcomes assessment, program refers to an entire program of study for a student, typically an entire degree program, such as a B.S.E. in Civil Engineering or a Ph.D. in Chemical Engineering. A program is more than a collection of individual courses and experiences that are required for the degree; a program is the integration of all the component parts, including general education courses, cognate courses, advising, and co-curricular experiences.

What is the difference between outcomes and objectives?

General assessment literature uses 'outcomes' and 'objectives' interchangeably. However, ABET clearly defines the terms for engineering assessment:

    • "Program educational objectives are broad statements that describe the career and professional accomplishments that the program is preparing graduates to achieve" (ABET, 2006, p. 1).

    • "Program outcomes are statements that describe what students are expected to know and be able to do by the time of graduation. These relate to the skills, knowledge, and behaviors that students acquire in their matriculation through the program" (ABET, 2006, p. 1).

What is the difference between direct and indirect assessment?

Direct and indirect assessments are often called direct and indirect measures. (ABET uses 'measures' and 'methods'.) Direct measures of a learning outcome "reveal what students know and can do [while] indirect measures … suggest why performance was above or below expectations and what might be done to improve the processes of education" (Banta, 2004, p. 5). Using a combination of direct and indirect measures is advisable, because they offer complementary information. However, assessment plans must include direct measures in order to supply credible information for decision-making (Palomba & Banta, 1999).

Direct measures

A direct measure of a learning outcome allows faculty to directly observe a student's demonstration of the knowledge, skills, abilities, and values that are relevant to the learning outcome (Palomba & Banta, 1999). Examples of direct measures are projects, papers, open-ended exam questions, presentations, performances, and portfolios. Note that most of these examples are already common components of engineering courses. If such assignments are crafted properly, grading them can be an integral part of outcomes assessment.

The term "direct measure" focuses on the type of evidence produced. Other common terms for direct measures focus on the type of student activity that produces direct evidence (e.g., performance assessment and authentic assessment) or on the integration of such assessment with instruction (e.g., embedded assessment).

Indirect measures

By definition, an indirect measure of a learning outcome requires faculty to infer a student's knowledge, skills, abilities, and values from a measure that does not reveal any direct evidence of the learning outcome (Palomba & Banta, 1999). Examples of indirect measures are self-assessments, surveys, exit interviews, and focus groups, which gather perceptions of, opinions about, or reflections on learning rather than direct demonstrations of the results of learning.

How can I use rubrics to score and grade student work?

Rubrics are systematic scoring methods that use pre-determined criteria to measure student learning for scoring and grading. They help instructors assess student work more objectively and consistently.

There are two types of rubrics: holistic and analytic. In a holistic rubric, the entire performance is evaluated and scored as a whole. In an analytic rubric, the performance is evaluated and scored on several distinct criteria. Analytic rubrics are common for engineering assignments.
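
As an illustration, here is a minimal analytic rubric sketched in Python. The criteria, weights, and level descriptions are hypothetical; the point is that each criterion is scored separately and the criterion scores can then be combined, whereas a holistic rubric would assign a single overall rating.

    # A minimal analytic rubric: each criterion is scored separately
    # against pre-determined levels. Criteria, weights, and levels here
    # are hypothetical examples, not a prescribed rubric.

    rubric = {
        "technical accuracy": {"weight": 0.5,
                               "levels": {1: "major errors",
                                          2: "minor errors",
                                          3: "correct",
                                          4: "correct and insightful"}},
        "organization":       {"weight": 0.3,
                               "levels": {1: "unstructured",
                                          2: "partly structured",
                                          3: "clear",
                                          4: "clear and compelling"}},
        "style":              {"weight": 0.2,
                               "levels": {1: "hard to follow",
                                          2: "readable",
                                          3: "polished",
                                          4: "exemplary"}},
    }

    def score_submission(ratings):
        """Combine per-criterion ratings (1-4) into a weighted score.

        A holistic rubric would instead assign one overall rating."""
        return sum(rubric[c]["weight"] * r for c, r in ratings.items())

    print(score_submission({"technical accuracy": 3,
                            "organization": 4,
                            "style": 2}))  # 0.5*3 + 0.3*4 + 0.2*2 = 3.1

Because the per-criterion ratings are recorded before being combined into a grade, the same rubric also yields the criterion-level data that program-level assessment needs.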

Further information is available on why rubrics are important, example rubrics, and how to create a rubric.

How can we choose appropriate measures for a specific outcome?

Choosing measures is best considered part of an overall assessment plan, so this question is addressed in the handbook section on "creating assessment plans".

References

Accreditation Board for Engineering and Technology (ABET). (2006). 2007-2008 Criteria for Accrediting Engineering Programs. Retrieved January 5, 2007, from http://www.abet.org/forms.shtml

Astin, A. W. (1993). What Matters in College: Four Critical Years Revisited. San Francisco: Jossey-Bass.

Banta, T. W. (2004). Introduction: What are some hallmarks of effective practice in assessment? In T. W. Banta (Ed.), Hallmarks of Effective Outcomes Assessment. San Francisco: Jossey-Bass.

Palomba, C. A. & Banta, T. W. (1999). Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco: Jossey-Bass.

Walvoord, B. E. (2004). Assessment Clear and Simple. San Francisco: Jossey-Bass.