Thursday, July 10, 2008

Assessing student learning for curriculum revision

I am currently working on a project where we are gathering data assessing student learning from a number of sources: graduating seniors, alumni, and current students' internship supervisors. There are many challenging aspects of collecting these data--self-report satisfaction data are suspect (i.e., happiness doesn't equal learning), retrospective data collection can be problematic because of the time that passes between experience and recall, and supervisors' assessments can be very uneven, with some grading hard and others grading too easily. Further, the open- and closed-ended questions in a survey instrument allow us to ask our major questions, but they leave little room for more nuanced feedback or unexpected findings.

That said, we hope that the combination of methods helps to counteract the weaknesses of each individual method. Current students can tell us what they think about the courses while they are fresh in their minds, while alumni have more perspective on what they got and what was missing. Supervisors can provide a more objective evaluation of the students' capabilities, strengths, and weaknesses.


I am energized and yet somewhat frustrated by the process. My energy comes from learning from the data. It is especially exciting to me when we start to see patterns across the different data sources. For example, criticisms of one aspect of the curriculum expressed by current students are shared by supervisors and alumni. These critiques provide a basis and an outline for curriculum and pedagogical revisions--which is a lot better than relying solely on my own gut instincts, consensus among faculty on the curriculum committee, and/or anecdotal reports from a few complaining students.


The data become even more important when they highlight an issue we had never considered a problem--and even suggest a solution.





There is a part of me that looks forward to meeting with the curriculum committee in the fall and sharing the data. Some of the issues I have noted in the past are now supported by the data, and I hope these findings will lead to some real changes. (I am reminded of a button I used to have that read, "We have charts and graphs to back us up, so fuck off!") Yet, perhaps because I am a qualitative researcher at heart, I still find the survey method a little unsatisfying.

With that in mind, I would point readers to the interview method used by Michael Arnzen from Pedablog. He poses a set series of open-ended questions to graduates, asking them to evaluate their learning and the teaching processes they experienced in their classes. Even better, the answers get posted to a blog, where other students, alumni, and faculty can comment. The student who answered the initial questions can also respond to the comments, creating an interactive feedback loop that lets the respondent clarify or revise their answers upon reflection and moves the conversation forward.



Perhaps some blending of the two methods--surveys and focus groups or interviews--would really offer the best insights into what the curriculum does well and where it falls short.

Okay, so I acknowledge that this makes me an academic geek. I readily admit it. Hopefully, it helps our program grow and improve.

1 comment:

Doctor Pion said...

Most interesting. I don't get over here often enough, so it's good that your blog about DD's question du jour gave me a chance to read this.

I like that you are looking at alumni, and your outline of potential flaws in the process gave me a lot to think about concerning my plans for doing some of this next year.

The idea of open-ended responses, rather than quantifiable survey foils, appeals to me. I want to know what we could have done to help them be better prepared for what came next, and what they would tell a freshman that they wished they had done.