Assessment is the systematic collection and analysis of information
to improve student learning and program viability.
According to noted author Thomas A. Angelo, assessment
involves "...making our expectations explicit
and public; setting appropriate criteria and high
expectations for learning quality; systematically
gathering, analyzing, and interpreting evidence to
determine how well performance matches those expectations
and standards; and using the resulting information
to document, explain, and improve performance..." (Reassessing [and Defining] Assessment, AAHE Bulletin, November 1995, Volume 48, Number 3).
An assessment measure is a data source or tool used to
indicate outcome attainment. While it is desirable
to use multiple assessment measures over different
points in time, each outcome must have at least one
assessment measure. Assessment measures for programmatic
outcomes may include Productivity Reports (PAS), Factbook
publication, OIE survey data (e.g., ACT Student
Opinion Survey; Graduate, Employer, and Transfer Student
Surveys), and other routine data reports posted on
the OIE webpage (e.g., headcounts, FTES, graduates).
Assessment measures for student learning outcomes
may include direct and/or indirect measures.
“Baby steps” is a metaphor used to describe the assessment
reporting process. As with a child's growth, assessment
is a continuous process and a gradual progression from
one stage of development to the next.
The “continuous phase” represents the third year a program
embarks on the SOARR process (2007-08). Consistent with the “baby steps” approach to implementation, programs are required
to continue documenting the 8 report elements described in the “Planning Phase.” Additionally, program stakeholders are to (1) describe the implementation of any enhancement strategies; and (2) review
and revise (if necessary) the programmatic and student
learning outcomes and assessment measures.
A direct assessment measure is a data source or tool
used to indicate the attainment of a student learning
outcome by directly observing student demonstration
of their knowledge or skill. Examples of direct measures
include: capstone course evaluations; classroom tests (teacher-generated, standardized, industry certification tests,
oral exams, pop quizzes, and pre-/post-testing); competency-based
measures such as performance appraisals, internships,
simulations, and role-playing; external reports such
as judging of portfolios by industry professionals;
and other direct measures such as teacher observations,
class participation, research projects, thesis evaluations,
portfolios, case studies, and reflection papers.
An enhancement is a planned activity or strategy aimed
at improving the degree to which an outcome is attained.
The articulation and implementation of these strategies
are required for each outcome where a minimum standard
has not been achieved, but are optional in cases where
the minimum standard has been met.
The “implementation phase” represents the second year
a program embarks on the Program Review and Outcomes
Assessment reporting process (2006-07). Consistent with the
“baby steps” approach to implementation, programs
are required to establish minimum standards, collect
and analyze data, report highlights, and identify
enhancement strategies (if a minimum standard is not met).
An indirect measure differs from a direct measure in
that an indirect measure indicates one's opinion of, or
the perceived attainment of, a student learning outcome.
A direct measure assesses students’ demonstrated understanding
or skill application, while an indirect measure assesses
the perceived level of understanding or skill. Examples
of indirect measures include: self-reported data such
as survey/perception data from graduates, students,
parents, and employers; exit interviews; and curriculum
and syllabus analysis.
M & M Rule:
When writing intended outcomes and identifying assessment
measures, the data collection process must be Manageable
yet produce Meaningful results.
The “planning phase” represents the first year a program
embarks on the SOARR process (2005-06). Consistent with the “baby steps” approach to implementation, programs are required to:
(1) complete a curriculum map that lists all student learning outcomes and the courses in which they are taught;
(2) describe the planning processes and activities that led to the currently taught curriculum;
(3) perform trend analysis of programmatic data such as new students, retention rates, and graduates;
(4) target at least three student learning outcomes deemed critical to the program;
(5) perform trend analysis of enrollment and course success rates for each of the targeted learning outcomes;
(6) identify direct, course-embedded measures to be collected and tracked each semester;
(7) describe the curricular and program enhancements that resulted from the processes, activities, and analyses described in the report; and
(8) develop an action plan with strategies to address areas for improvement.
A programmatic outcome is a goal that is non-student
in nature and indicates a program’s viability and
effectiveness. These outcomes may involve faculty
productivity data (e.g., utilized and generated FTEF,
total course credit hours); enrollment growth (e.g.,
headcounts, FTEs); grade distribution data; graduate,
new student, and retention rates; and perceptual survey data.
SMART is an acronym that stands for: Specific, Measurable,
Achievable, Relevant, and Timely.
Coined by the Center for Performance Assessment, the
acronym represents a guiding standard for writing intended outcomes.
A student learning outcome is an essential knowledge
or skill that students demonstrate as a result of
completing a program or course of study. Furthermore,
it can be viewed as a critical knowledge or skill
demanded by business, industry, or four-year institutions
of higher education.