Creating Assessment Plans

Purpose for Creating an Assessment Plan

The purpose of outcomes assessment is to improve student learning. Outcomes assessment achieves improvement by systematically informing program-level decisions related to student learning with the best available evidence. A program's assessment plan defines the program's unique system for informing learning-related decisions with evidence. Specifically, an assessment plan defines systematic processes for

    • answering the question "How well does this program achieve its educational outcomes and, ultimately, its educational objectives?" and

    • using that information to inform decision-making for improvement.

Thus, outcomes assessment involves three components: developing goals, collecting credible evidence, and using the evidence for improvement. An outcomes assessment plan must involve all three of these activities. Note that an outcomes assessment plan defines a systematic process for improvement, not simply a system for measurement.

An analogy to engineering practice

An outcomes assessment plan defines a department's closed-loop feedback system for better achieving program-level curricular goals. Outcomes assessment relies on the information supplied by assessment measures as the basis for adjustments to the system, much like any closed-loop feedback system relies on the information supplied by sensors as the basis for adjustments to the system. For example, a temperature control system relies on information supplied by a thermocouple and a speed controller relies on information supplied by a tachometer, yet information from sensors is only part of each closed-loop control system. In the same way, outcomes assessment (the closed-loop feedback system) relies on information supplied by assessment measures (the sensors) to drive a system for improved achievement of goals.
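
To make the analogy concrete, the following sketch (in Python, with hypothetical names and values, not drawn from any assessment tool) shows a minimal closed-loop temperature controller: a sensor reading is compared against a setpoint, and the gap drives a corrective adjustment, just as assessment evidence is compared against program goals and the gap drives curricular improvement.

    # Minimal closed-loop feedback sketch (hypothetical names and values).
    # Analogy: setpoint ~ program goals, sensor reading ~ assessment evidence,
    # adjustment ~ corrective actions taken by the program.

    def read_sensor(current_temp, heater_power):
        """Stand-in for a thermocouple: temperature drifts toward the heater power."""
        return current_temp + 0.1 * (heater_power - current_temp)

    def control_loop(setpoint=70.0, steps=100):
        temp = 20.0          # current measured state of the system
        heater_power = 0.0   # the "corrective action" we are free to adjust
        gain = 0.8           # how strongly we respond to the gap
        for _ in range(steps):
            temp = read_sensor(temp, heater_power)
            error = setpoint - temp        # evaluate: gap between goal and evidence
            heater_power += gain * error   # act: adjust the system based on the gap
        return temp

    print(control_loop())  # settles near the setpoint (70.0) after repeated cycles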

Steps in Creating an Assessment Plan

The following guidelines draw on the ideas of Banta (2004), Walvoord (2004), Rogers (2004), Rogers & Sando (1996), and Miller & Olds (1999). The three components of an assessment plan are plans for developing goals, collecting credible evidence, and using the evidence for improvement. A plan is a written document that assigns responsibilities and sets schedules for action, specifying when and how often each action will be taken. Note that an assessment plan will only lead to improvement if it is faithfully implemented. Thus, an effective assessment plan is feasible, both to launch initially and to sustain on an ongoing basis.

Developing Goals

Analogous to engineers developing specifications for the outputs of a system

1. Develop program objectives

"Program educational objectives are broad statements that describe the career and professional accomplishments that the program is preparing graduates to achieve" (ABET, 2004, p. 1). Develop the objectives with input from all of the program's constituencies and set a schedule for reviewing and updating the objectives. Be sure that the objectives fully reflect the mission of the entire institution, not simply your undergraduate program.

2. Develop program-level learning outcomes

"Program outcomes are statements that describe what students are expected to know and be able to do by the time of graduation. These relate to the skills, knowledge, and behaviors that students acquire in their matriculation through the program" (ABET, 2006, p. 1). Outcomes should be derived from the objectives. For engineering programs, ABET has specified (in Engineering Criteria, Criterion 3, a-k) a minimum set of learning outcomes that an engineering program must have for accreditation purposes. Adapt these to your program's objectives. Develop the outcomes with input from all constituencies for the program and set a schedule for reviewing and updating your program's outcomes.

3. Develop measurable performance criteria for each outcome

A performance criterion is a specific statement that describes a measurable aspect of performance that is required to meet the corresponding outcome. Each performance criterion must also specifically describe an acceptable level of measurable performance. For performance criteria that are not directly assessable, indirect indicators of the performance can be identified. There should be a limited number of performance criteria for each outcome. Set a schedule for reviewing and updating the performance criteria.

Developing measurable performance criteria is a critical step, yet because it is so time consuming, many programs neglect it. Examples of translating ABET's outcomes "a-k" into measurable sub-skills (or attributes) can spur faculty creativity in writing their own. Each measurable sub-skill is stated with action verbs, and sub-skills are organized by the level of students' mental functioning they require. For instance, a criterion for an oral-communication outcome might state that students deliver a technical presentation rated "proficient" or better on a departmental rubric.

4. Align the curriculum and supporting practices (such as advising) with the learning outcomes

Make a table or matrix showing all the learning outcomes on one axis and all the required learning experiences in the program (courses, advising, co-ops, etc.) on the other axis. In the cells, note where the skills for each outcome are taught. An example matrix may be helpful.
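
For programs that prefer to keep the matrix in electronic form, here is a minimal sketch (in Python; the course names and outcome labels are hypothetical) that records where each outcome is addressed and flags any outcome not covered by a required learning experience.

    # Hypothetical curriculum map: learning outcomes vs. required learning experiences.
    curriculum_map = {
        "Outcome (a): apply math, science, and engineering": ["ENGR 101", "ENGR 250"],
        "Outcome (b): design and conduct experiments":       ["ENGR 215 Lab"],
        "Outcome (g): communicate effectively":              ["ENGR 101", "Senior Design", "Advising"],
        "Outcome (h): understand global and societal impact": [],  # no coverage yet
    }

    # Report coverage and flag gaps so the curriculum or advising can be realigned.
    for outcome, experiences in curriculum_map.items():
        if experiences:
            print(f"{outcome}: covered in {', '.join(experiences)}")
        else:
            print(f"{outcome}: NOT COVERED -- adjust the curriculum or supporting practices")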

Be sure to devise specific learning experiences that will achieve each measurable performance criterion, each learning outcome, and, ultimately, each objective. For example, adjust the curriculum and advising to help students achieve the performances you are aiming for.

Collecting Credible Evidence

Analogous to engineers analyzing measurements of critical dimensions in manufacturing processes

5. Specify assessment methods for measuring each performance criterion

5.1 Collecting evidence is about answering questions, in particular the sub-questions that help answer the central question "How well does this program achieve its educational outcomes and, ultimately, its educational objectives?" Specify your program's questions, then specify corresponding assessment methods that will provide program-level decision-makers with information about how well each performance criterion is being met. Two sections of this handbook support faculty with this step in their assessment planning: conducting direct assessments and conducting indirect assessments. These handbook sections

    • describe the types of learning that each assessment method can measure

    • offer ideas for how to implement each assessment method in an engineering course or program

    • offer examples of each assessment method

    • demonstrate how such an assessment can be scored to inform program-level decisions.

Direct assessments (or direct measures) of a learning outcome "reveal what students know and can do [while] indirect measures … suggest why performance was above or below expectations and what might be done to improve the processes of education" (Banta, 2004, p. 5). Using a combination of direct and indirect measures is advisable, because they offer complementary information. However, assessment plans must include direct measures in order to supply credible information for decision-making (Palomba & Banta, 1999).

The central issues to keep in mind while selecting assessments parallel the central issues of research (Pike, 2002). Pike notes the following considerations:

    • Asking good questions about student learning, questions that

        • Are interesting and important

        • Are tightly linked to the mission and objectives of the institution

        • Are about the processes for facilitating student learning in the program

        • Are about the outcomes of student learning in the program

    • Identifying appropriate approaches (or research designs) for answering your questions. Resolving these design issues requires some knowledge of social science research methods. For example:

        • For "how much" questions (such as level of satisfaction or amount of change) quantitative methods (such as surveys and exams) may be most appropriate.

        • For "how" questions (such as how students' experiences affect their learning outcomes), qualitative methods (such as interviews and focus groups) may be most appropriate.

        • Comparison groups (such as college-wide data at the University of Michigan College of Engineering) may be a feasible alternative to experimental research designs with random assignment of participants. Refer to Pike (2002) for more issues and further references.

    • Using appropriate measures for managing measurement error. Pike's three guiding questions for managing measurement error are:

        • Does the measure address the question being asked? (For example, to determine if students have attained oral presentation skills for technical information, does the assessment involve students presenting technical information orally?)

        • Are the scores sensitive to students' experience in the educational program? ("Ideally, an assessment measure will be strongly related to educational experiences and unrelated to noneducational factors" (Pike, 2002, p. 140) such as gender, ethnicity, entering ability, and how the assessment was administered.)

        • Is the measure reliable? (See Pike (2002), pp. 139-140, for details.)

    • Selecting representative participants

        • How many participants (e.g., students, alumni, or employers) should be selected? (A rough sample-size sketch follows this list.)

        • How should participants be selected?
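
As one illustration of the participant-selection question above, the following Python snippet applies the standard sample-size formula for estimating a proportion, with a finite population correction. The population size, margin of error, and confidence level shown are hypothetical choices, not recommendations.

    # Rough sample-size estimate for a survey, using the standard formula for a
    # proportion (z^2 * p * (1 - p) / e^2) with a finite population correction.
    # All numbers below are illustrative only.
    import math

    def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
        """Respondents needed for the given margin of error at ~95% confidence (z = 1.96)."""
        n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
        return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite population correction

    print(sample_size(240))  # e.g., a graduating class of 240 students -> about 148 respondents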

5.2 Establish a feasible schedule for conducting the assessments and assign responsibilities for administering them.

5.3 Establish a feasible schedule for analyzing and reporting on each assessment method. Assign responsibilities.

5.4 Select or develop the specified assessments (surveys, scoring rubrics, focus group questions, etc.).
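
As a small illustration of step 5.4, a scoring rubric can be represented as a set of performance levels, and the resulting scores aggregated into the percentage of students meeting a target level, which is the kind of figure the evaluation step below consumes. The rubric levels, target, and scores here are hypothetical.

    # Hypothetical 4-level scoring rubric for one performance criterion.
    rubric_levels = {1: "beginning", 2: "developing", 3: "proficient", 4: "exemplary"}
    target_level = 3  # "proficient" or better counts as meeting the criterion

    student_scores = [4, 3, 2, 3, 1, 4, 3, 2]  # illustrative rubric scores from one assignment

    meeting = sum(1 for score in student_scores if score >= target_level)
    fraction_meeting = meeting / len(student_scores)
    print(f"{fraction_meeting:.0%} of students scored '{rubric_levels[target_level]}' or better")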

6. Specify evaluation methods

Evaluation is using the evidence from the assessment measures to determine how well the goals are met. Establish a process for evaluation:

    • Who will assemble the various assessment reports? On what schedule (when and how often)?

    • Who will use the assessment data to evaluate, criterion-by-criterion, how well the program is achieving the learning outcomes? On what schedule?

    • How will the data be used to inform hypotheses for the causes of student weaknesses?

    • In what format will the evaluation be reported? For example, will criteria that have been met be listed as successes to celebrate, and criteria that have not been met as opportunities for improvement?
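
A minimal sketch of such an evaluation report follows (in Python, with hypothetical criteria, targets, and observed results); it simply sorts each performance criterion into successes or opportunities for improvement by comparing observed results against targets.

    # Hypothetical evaluation data: each criterion has a target (e.g., the fraction of
    # students expected to meet it) and an observed result from the assessment reports.
    results = {
        "Outcome (g), criterion 1 (oral presentation rubric)": {"target": 0.75, "observed": 0.82},
        "Outcome (b), criterion 2 (lab report data analysis)": {"target": 0.75, "observed": 0.61},
    }

    successes, opportunities = [], []
    for criterion, data in results.items():
        if data["observed"] >= data["target"]:
            successes.append(criterion)
        else:
            opportunities.append(criterion)

    print("Successes to celebrate:", successes)
    print("Opportunities for improvement:", opportunities)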

Using the Evidence for Improvement

Analogous to engineers using evidence of gaps between actual products and specifications to take corrective actions for improving the manufacturing process

7. Determine feedback channels for improvement

7.1 Determine how assessment information will inform program-level decisions pertaining to student learning.

    • Who will receive the information? How will they know what the information is for?

    • In what format will each audience receive it? Is that an appropriate format for that audience?

    • On what schedule will they receive it? Is that schedule frequent enough and timely for decisions pertaining to student learning?

    • How will recipients of the data know how to interpret it?

    • How will the presenters know the information well enough to field questions?

    • How will these decision processes be documented?

7.2 Determine how action will be taken in response to assessment information.

    • When the evidence indicates that a criterion or objective was achieved because of the program, will that be celebrated as a success?

    • When the evidence indicates that there is an opportunity for improvement,

        • How will responsibility be established for devising corrective action?

        • How will a schedule be determined for devising the corrective action?

        • Who will determine if the corrective action plan is feasible and should be implemented?

        • Who will plan how to assess if the corrective action has been effective?

    • When a corrective action plan is adopted,

        • Who will be responsible for its implementation? On what schedule?

        • Who will determine if the corrective action did, indeed, lead to improvement?

    • How will this be documented?

7.3 Establish procedures for improving the assessment plan itself.

    • Who will review the program goals and the procedures for outcomes assessment? On what schedule?

    • What types of evidence will they use (e.g., utility, feasibility, propriety, and accuracy)?

    • Who will receive the feedback?

    • Who will generate and implement the corrective actions?

    • How will this be documented?

Other Viewpoints on Creating an Assessment Plan

This concludes a basic outline, tailored for engineering programs, of the steps for creating an assessment plan. The following websites offer other viewpoints on program-level assessment plans:

Example Assessment Plans in Engineering Programs

Assessment plan specifying educational objectives, learning outcomes, and assessments for each outcome.

Assessment plan including (in the terminology above) objectives, outcomes, and performance criteria, teaching approaches for each criterion, assessments for each criterion, and feedback channels. Note that this plan's terminology differs from ABET's terminology and from the terminology used above.

Example process for creating an assessment plan (New Jersey Institute of Technology)

Assistance for Creating an Assessment Plan

Best Assessment Processes Symposium

A working symposium for educators in engineering, engineering technology, or computer science to learn about the best assessment methods and how to apply them for program improvement and for accreditation.

A Workshop for Engineering Educators

Developing Assessment Plans that Work.

References

Accreditation Board for Engineering and Technology (ABET). (2006). 2007-2008 Criteria for Accrediting Engineering Programs. Retrieved January 7, 2007 from http://www.abet.org/forms.shtml

Banta, T. W. (2004). Introduction: What are some hallmarks of effective practice in assessment? In T. W. Banta (Ed.) Hallmarks of Effective Outcomes Assessment. San Francisco: Jossey-Bass.

Miller, R. L. & Olds, B. M. (1999). Program Assessment and Evaluation Matrix. Retrieved May 5, 2005 from http://www.mines.edu/fs_home/rlmiller/matrix.pdf

Palomba, C. A. & Banta, T. W. (1999). Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco: Jossey-Bass.

Pike, G. R. (2002). Measurement issues in outcomes assessment. In T. W. Banta & Associates, Building a Scholarship of Assessment (pp. 131-147). San Francisco: Jossey-Bass.

Rogers, G. M. (2004). How are we doing?: Assessment Tips with Gloria Rogers. Communications Link. Baltimore: ABET, Spring 2004, pp. 4-5.

Rogers, G. M. & Sando, J. K. (1996). Stepping Ahead: An Assessment Plan Development Guide. Terre Haute, IN: Rose-Hulman Institute of Technology.

Walvoord, B. E. (2004). Assessment Clear and Simple. San Francisco: Jossey-Bass.