Professors: Course evaluations necessary, but flawed

Across departments at the University of Minnesota, professors say course evaluations aren’t as useful as they could be and have little impact on their classrooms.

Raju Chaduvula

Professors are supposed to use course evaluation results to see how they measure up to students’ needs, but some say the information isn’t useful.

While the required course evaluations administered each semester at the University of Minnesota are intended to help professors and administrators better understand what students need, some say the information is shallow and does not have a significant impact on the way a course is taught in the future.

Some professors ask students to fill out mid-semester evaluations and use feedback to adjust the course before the end of the semester.

Cosette Creamer, a law professor and assistant professor of political science in her first semester at the University, said when she taught at Harvard College, she administered her own mid-semester reviews to “see what was working and what was not … to make the course more adaptive to the students.”

The impact of evaluations is dampened when faculty don’t see them until the end of the semester, after classes are done, said Carlson School of Management senior lecturer Rand Park.

“By that time I’m already preparing for the next semester and it’s sometimes a little too late,” he said.

Park said he receives more information about his teaching by talking with students directly.

Professors distribute the evaluations during the last week of class, and the results are tallied and sent to professors and department heads, said Stephanie Klein, assistant director for the Office of Measurement Services.

The evaluation results are made public to help students make course decisions, Klein said.

The evaluations mostly benefit future students deciding whether to take a course, rather than the professor, Creamer said.

Professors receive the metric data, in which students rate a professor’s performance from one to five, as well as students’ written answers, she said.

“It’s all about metrics and it does not seem very useful to many of us,” said Timothy Brennan, professor of Cultural Studies and Comparative Literature.

Gary Oehlert, a statistics professor and associate dean for undergraduate education in the College of Liberal Arts, said he finds the written answers much more useful than the metrics, but because the questions are kept generic to fit all departments, they rarely offer good insights.

Park said that while course evaluations are flawed, their consistency is necessary so they can be applied across the University.

“More customized evaluations might not make it more applicable for cross-departmental information,” Park said.

Department heads use the evaluations to determine which faculty members should receive promotions or tenure, Klein said.

The departments get the professors’ metric scores as well as the written comments, Oehlert said.

“[Course evaluations] provide some information, but very limited information,” he said, adding that they are always taken with a grain of salt.