In the Friday, May 16 news story “Senate changes romance policy,” reporter Tom Lopez summarizes remarks I made to the University Senate at its Thursday meeting regarding the proposal to form a committee to develop scales that would facilitate student selection of future courses and instructors. Because of space and time limits, my remarks to the Senate needed to be brief, and the summary of them in the Daily was briefer still. Via this letter I hope to elaborate and clarify. The evaluation and encouragement of good teaching are important issues at the University of Minnesota, and their discussion presents an opportunity for leadership.
Student evaluations of teaching constitute important feedback for the teacher, the courses and the evaluators. Students are entitled to the best information available to aid their course selection. However, in focusing only upon selecting new scales to be administered concurrently (data generation) rather than aiding students in interpreting information “correctly” (data analysis), the proposal is likely to have unintended side effects similar to those of the present system. That system of course evaluations (which the proposal resembles) has, in my opinion, contributed to grade inflation, the “dumbing down” of courses and exams, game-playing (e.g., grading midterms more leniently than finals), exits from the University of those concerned with theory over application, and a general emphasis on form over substance.
The problem is not with the student ratings per se, but the way in which they are too often summarized and used. The Senate proposal is far too modest and does not seem to deal at all with implementation issues. Let me suggest what they might involve, assuming student evaluations were to be put on the Web.
1. Illustrative student comments might be reported to elaborate the numerical scales. This would require editing by those with no vested interest in the outcome, and only those statements judged most descriptive and substantive would be reported. Extraneous comments (say, about faculty dress or mannerisms) might be downplayed.
2. The scale values might be reported graphically, with the median or mode and the range shown. The median and mode are less affected by extreme views than is the mean, and a range is easier for those not versed in statistics (i.e., most students and faculty) to understand. Students who provided only “patterned responses” (e.g., ones or sevens to every question, or a zigzag) could have their data omitted. The range would reveal the extent of divergent views in the evaluations.
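For readers curious how such screening and summarizing could work in practice, here is a minimal sketch. The seven-point scale, the response data and the function names are my own illustrative assumptions, not part of the Senate proposal:

```python
from statistics import median, mode

def is_patterned(ratings):
    """Flag response sets that are a single repeated value (e.g., all
    sevens) or a strict zigzag -- both suggest inattentive responding."""
    if len(set(ratings)) == 1:
        return True
    diffs = [b - a for a, b in zip(ratings, ratings[1:])]
    # A zigzag's consecutive differences are nonzero and alternate in sign.
    return all(d != 0 for d in diffs) and all(
        d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:])
    )

def summarize(responses):
    """Drop patterned responses, then report the median, mode and range
    for each question on a hypothetical 1-7 scale."""
    kept = [r for r in responses if not is_patterned(r)]
    by_question = list(zip(*kept))  # transpose: one tuple per question
    return [
        {"median": median(q), "mode": mode(q), "range": (min(q), max(q))}
        for q in by_question
    ]

# Invented example: four students answering four questions.
responses = [
    [5, 6, 6, 7],   # substantive response -- kept
    [7, 7, 7, 7],   # all sevens -- omitted as patterned
    [1, 7, 1, 7],   # zigzag -- omitted as patterned
    [4, 4, 5, 6],   # substantive response -- kept
]
print(summarize(responses))
```

Reporting the range alongside the median, as in this sketch, preserves the divergence of views that a single average would conceal.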
3. Instructors might be given a week to respond to the student data that is to be reported, and both the student evaluations and comments and the instructor’s response could then be published together. Some instructors may accept critical comments and make changes. The students who benefit most from feedback are those who will take future courses, and this information might predict the future more accurately. Whether and when the instructor expects to teach the course again might also be reported.
4. Numeric results might also be reported for the professor and course over time (to indicate possible patterns of change) or be compared with other courses the professor has taught. Results for one professor might be compared with those of other professors who have recently taught a similar course. Some courses are simply less exciting than others, and students would be aided by comparison of ratings with specific rather than general norms.
5. Student evaluators might be asked more about what they expected from a course and about what they liked and what disappointed them. Evaluation results might usefully be cross-tabulated with student grade expectations (there is ample evidence that the two are positively correlated). Students take courses for a variety of reasons and come into a course with different levels of preparation and interest in the subject matter, and their comments are more usefully interpreted in light of such context.
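The cross-tabulation this suggestion describes can be sketched briefly. The records, grade categories and ratings below are invented purely for illustration:

```python
from collections import defaultdict
from statistics import median

# Hypothetical evaluation records: an overall rating (1-7) paired with
# the grade the student expects to receive in the course.
records = [
    {"expected_grade": "A", "rating": 6},
    {"expected_grade": "A", "rating": 7},
    {"expected_grade": "B", "rating": 5},
    {"expected_grade": "B", "rating": 4},
    {"expected_grade": "C", "rating": 3},
]

def crosstab(records):
    """Group ratings by expected grade and report each group's median,
    so readers can see whether ratings track grade expectations."""
    groups = defaultdict(list)
    for r in records:
        groups[r["expected_grade"]].append(r["rating"])
    return {grade: median(vals) for grade, vals in sorted(groups.items())}

print(crosstab(records))  # {'A': 6.5, 'B': 4.5, 'C': 3}
```

If ratings rise with expected grades, as in this invented data, readers would know to discount some of the difference between instructors.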
6. Boilerplate commentary might be added to each Web page discussing possible limitations of the information reported (e.g., the past not being a perfect predictor of the future, or students not being best placed to judge whether a course contains the most up-to-date material or how useful it will prove in later life). The reader might be educated about the meaning of the various statistics reported, and reminded that faculty contribute in many ways besides teaching to the purposes and reputation of the University, and hence to the value of their degrees.
They also contribute to teaching in many ways besides the specific course being evaluated (e.g., through creation of new courses and development of teaching materials). Given the low cost and speed of the Web, it may someday even be possible to ask leading professors (possibly in collaboration with other universities) to evaluate syllabi, readings and exams for completeness and rigor, or to ask alumni to report on the role specific courses and professors played in their lives and careers.
While these suggestions need to be discussed and debated, it is clear to me that the present mechanical system of teaching evaluation is seriously flawed. Students can play important roles not just in providing raw data but in aiding its dissemination and interpretation. They can help in collecting data from other sources with different perspectives and in educating their peers to improve interpretation in a world that is not black and white. Students are not the only source of information about teaching performance; they are merely the least costly and most politically acceptable. Effort must be made to integrate different sources, overcoming the deficiencies of any single source, to assure the validity of the process.
The suggestions above clearly require more effort, involvement and cost than does the present system. I dream of a day when teaching evaluation is deemed sufficiently important to deserve professional student attention and involvement, perhaps as an ongoing activity like The Minnesota Daily.
Such effort is important not only because student decision-making could be aided but because faculty and administrative decision-making would be as well. Students and faculty may even be brought closer together. Initiatives in this area have the power to bring much positive visibility to the University and to contribute to the continued greatness of this institution.
Allan D. Shocker is a professor of marketing in the Carlson School of Management.