Module Eleven: Assessment Results

The third step in the Assessment Process is to gather the assessment results, and the fourth is to analyze and interpret them. Because these two steps are closely intertwined, we will discuss them together.

Gather Assessment Results
Now you are ready to administer your assessment. Using the methods and instruments you identified, you should be able to collect quantitative and qualitative data from the various areas. During the assessment process, try to standardize conditions as much as possible and maintain quality control to guide your data collection.

Key definitions for gathering assessment results:

Cultural appropriateness – Ensuring that data is not based on stereotypes or assumptions about culture

Interrater reliability – A measure of rater consistency. Measures collected by different raters should be consistent

Mean – The arithmetic average of all scores

Median – The midpoint of all scores

Mode – The most frequently occurring score

Quality control – Verifying the accuracy of data

Reliability – A measure of consistency. With a reliable instrument, you should get similar results each time you use it

Validity – The degree to which the instrument or assessment measured what it was intended to measure

If you are not comfortable with statistical analysis, ask for help. For your data to be valuable, you want to draw justifiable conclusions. Provide quantifiable data indicating the number of students or papers assessed. Does the number (n) represent a sample of all students or only students in the major?
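The descriptive statistics defined above (mean, median, mode) and a simple interrater check can be sketched in a few lines of Python. The scores and rater values below are hypothetical rubric data, and percent agreement is only one rough indicator of interrater reliability.

```python
from statistics import mean, median, mode

# Hypothetical rubric scores (1-4 scale) for n = 10 student papers.
scores = [3, 2, 4, 3, 3, 2, 4, 3, 1, 3]

print(f"n = {len(scores)}")          # report the sample size alongside results
print(f"mean = {mean(scores):.2f}")  # arithmetic average of all scores
print(f"median = {median(scores)}")  # midpoint of the ordered scores
print(f"mode = {mode(scores)}")      # most frequently occurring score

# A rough interrater reliability check: percent agreement between two raters
# scoring the same five papers (illustrative only; more robust measures such
# as Cohen's kappa correct for chance agreement).
rater_a = [3, 2, 4, 3, 3]
rater_b = [3, 2, 3, 3, 4]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"percent agreement = {agreement:.0%}")
```

Reporting n together with the summary statistics lets readers judge whether the sample supports the conclusions you draw.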

Analyze and Interpret Results
After you have collected data, you will want to analyze your results to determine whether your students met the criteria for success. Assessment results should include sufficient detail and analysis and should identify strengths and weaknesses of student performance.
Consider these steps as you move through this part of the process.
Step One

Review your outcomes and your criteria for success. Did students miss, meet, or exceed expectations?

Step Two

Focus only on the data related to the learning outcome. Consider the response rate: are the sample size and type sufficient for analysis and interpretation?

Step Three

Review previous assessment cycles for longitudinal evidence. What were your findings and recommendations? If you made changes, were they successful on subsequent implementation?

Step Four

Interpret and summarize the data. Consider these questions from J. Spurlin (2008):

  • Does the students’ performance show that they meet your metric for the outcome?
  • Is the data consistent? (typically within 3–5%)
  • Does the percentage of the data increase or decrease over time? (typically an increase or decrease of more than 3–5% is considered “change”)
  • How does this evidence compare to all … programs at your institution, if available? (typically more than a 5–8% difference is considered “different”)
  • How do the modifications or changes to the program or curriculum since the last assessment cycle impact the interpretation of this data?
  • Is part or all of the outcome met? Is one dimension of the outcome an area of strength or concern?
  • Is there a positive or negative trend over the past few years?
  • Does one assessment method provide better-quality data? How would you compare the results from the senior design course with assessment within a junior-level course?
  • Are there areas where students’ performance is adequate but where you would like to see a higher level of performance?
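The rules of thumb in the questions above can be sketched as a small screening script. The data, the comparison figure, and the exact cutoffs chosen (5 percentage points for "change," 8 for "different") are all assumptions for illustration; pick thresholds appropriate to your program.

```python
# Assumed cutoffs based on the rules of thumb above (hypothetical choices
# within the quoted 3-5% and 5-8% ranges).
CHANGE_THRESHOLD = 5.0      # year-over-year shift counted as "change"
DIFFERENCE_THRESHOLD = 8.0  # gap vs. other programs counted as "different"

# Hypothetical percent of students meeting the outcome, by assessment cycle.
pct_meeting = {2019: 72.0, 2020: 74.0, 2021: 81.0}
institution_avg = 70.0  # hypothetical institution-wide comparison figure

# Flag year-over-year movement as "change" or "consistent".
years = sorted(pct_meeting)
for prev, curr in zip(years, years[1:]):
    delta = pct_meeting[curr] - pct_meeting[prev]
    label = "change" if abs(delta) > CHANGE_THRESHOLD else "consistent"
    print(f"{prev}->{curr}: {delta:+.1f} points ({label})")

# Compare the most recent cycle against the institution-wide figure.
gap = pct_meeting[years[-1]] - institution_avg
label = "different" if abs(gap) > DIFFERENCE_THRESHOLD else "comparable"
print(f"vs. institution: {gap:+.1f} points ({label})")
```

A script like this only flags candidates for discussion; interpreting whether a flagged shift reflects curricular changes, sampling, or measurement noise remains a judgment call for the program faculty.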

Source: Adapted from copyrighted material: Spurlin, J. (2008). Assessment methods used in undergraduate program assessment. In J. Spurlin, S. Rajala, & J. Lavelle (Eds.), Designing Better Engineering Education Through Assessment: A Practical Resource for Faculty and Department Chairs on Using Assessment and ABET Criteria to Improve Student Learning. Sterling, VA: Stylus Publishing.
