The Future of Teacher Evaluations in Higher Ed

by Staff Writers

With dozens of exams and papers to grade, next semester's courses to plan, and projects to wrap up, the end of the semester can be stressful enough for teachers without having to worry about handing out and getting back the results of student evaluations. While they may be taken less than seriously by students who hastily fill them out so they can get on to summer or winter vacation, these evaluations can play a big role in the success of professors in their academic careers, as the results are often used to help make decisions about who's promoted, who's given tenure, and even who's let go. With so much riding on these metrics, it makes sense that schools would want to get them right, but unfortunately, they very often don't.

Yet change may just be on the horizon. In both the K-12 sector and higher education, evaluations are seeing a lot of attention as schools, colleges, and universities work to ensure they're getting useful and meaningful data on their professors and the courses they offer. While revolutionary changes may not have reshaped the system just yet, at least not at every school, there are indications that the future of evaluations may be quite different from the type of system that exists today, giving hope to those who see the current one as outdated and unfair.

The Evaluation System

Student evaluations of teaching have a long history in higher education, with colleges using some form of the system going back as far as the 1920s. While the evaluation system might not be perfect, it does come from a good place, with the goal of making sure that courses and professors offer a high-quality educational experience for students.

Evaluations vary from institution to institution, but generally, student evaluations of teaching, or SETs, consist of a series of questions on topics like an instructor's communication skills, organization, enthusiasm, flexibility, knowledge of the subject, clarity, course difficulty, and fairness of grading. Students can give professors a numerical rating on each of these and in many cases can also provide feedback and comments on a separate form.

Since evaluations aren't public at many colleges and universities, it's hard to tell just how well professors rate in the eyes of their students as a whole, but studies have shown that students aren't always kind or especially fair in their ratings of professors. Luckily, student evaluations aren't the only ones that play a role in helping faculty improve and in making critical decisions on tenure and promotions: peer evaluations often carry more weight and can sometimes be more useful to professors looking to improve their professional competence.

That doesn't mean that student feedback doesn't have a place, however. Even Stanley Fish, a University of Illinois at Chicago professor emeritus infamous for throwing his evaluations in the trash each semester, admits they can be useful but only when well formulated and focused solely on measurable elements of pedagogy. This caveat, however, is at the heart of the issues that many have with student evaluations as they exist today.

The Problem With Evaluations

While the media may have given more attention to the battle over evaluation reform at the K-12 level, the same debate rages on in higher education as well. Serious criticisms have been made of the way professors are evaluated by students, a subject that has become all the more important as budget cuts have forced layoffs, the phasing out of tenure, and other issues that make academia more competitive than ever before.

It is this competition that may have helped spawn some of the negative trends happening in higher education today. "The main problem with evaluations is that they measure satisfaction, not learning," says Louisiana State University accounting professor D. Larry Crumbley. He worries that this kind of evaluation system, which doesn't exist in any other profession, is leading to the degradation of higher education as a whole.

"Professors believe that by being easy and giving higher grades they'll get higher evaluations," Crumbley says. "Since 1960, there has been a ton of grade inflation, and there has also been coursework deflation, as many professors make courses easier to compete with other professors who are also becoming easier and easier." In essence, the more weight student evaluations hold, the more willing professors are to cater to the desires of students.

Evaluations, Crumbley believes, should be based on administrator class visits, peer reviews, and even learning outcomes. He isn't alone. In 2006, the Spellings Commission called for higher education to develop readily comparable ways of measuring student learning. Colleges have been extremely reluctant to make these kinds of changes, despite studies showing that SET scores are more closely related to the grades instructors assign than to actual learning outcomes.

Evaluations have also been criticized because of their timing. "The biggest problem that I have seen with evaluations is that students do them at the very, very end of the course, often in five minutes or less. They really do not care because the course is over," says former nursing professor Carmen Kosicek. "If there are going to be changes made due to their input, it won't affect them unless they retake the course. It would be better, in my opinion, to offer evaluations of the class multiple times so that student input could positively affect the students giving it."

Course evaluations given at the end of a course not only make it impossible for teachers to change a course as they go; many professors also may not receive the results until it's already too late to make changes to their syllabus, textbook selection, or other factors for the next semester. It's not unheard of for evaluation systems to take months to return results to professors, making it nearly impossible to actually use feedback to improve the quality of education in any kind of reasonable time frame.

The content of evaluations may also pose challenges to their usefulness in evaluating faculty. Numerical scales give students little chance to offer individualized feedback about their experiences in the course, and the questions themselves may offer little insight into elements that research has demonstrated actually best reflect teacher effectiveness. Laura Bowman, a professor at Walden University, thinks evaluations need to be more focused on these elements. "The best evaluation I ever saw was one in which all the learning outcomes were listed and students ranked how much they learned in each area."

Perhaps most troubling is that students don't always evaluate professors on their effectiveness alone. Studies have found that students often use factors like teacher attractiveness and personality and their own performance in the course to make their assessments. This can make it hard on faculty who don't always have avenues for recourse if students choose to be dishonest or focus on non-academic factors when filling out SETs.

Change on the Horizon

Michael Hansen, in a paper released earlier this year, called for widespread reform of teacher evaluations at the K-12 level. He thinks evaluation systems, like other technologies, will have to evolve to accommodate shifts in the modern classroom, especially with hybrid classrooms and online education becoming more and more common. In this case, what is true for K-12 education is also true for higher education. If evaluations are to remain meaningful, they're going to have to adapt to how educators need to use them today.

These changes are already starting to happen. Helping many schools implement them is the IDEA Center, a research-focused non-profit. In 1975, the IDEA Center developed the Student Ratings Instrument, which is still the most widely used evaluation tool on the market.

While the SRI hasn't changed radically in the past few decades, the IDEA Center still works to be at the cutting edge of understanding what works and what doesn't in evaluations. They act as consultants to colleges to help them improve evaluation systems and are actively engaged in research on a variety of topics related to improving teaching, learning, and leadership, the results of which are released on their website and are free for colleges to use.

What are most colleges doing wrong? The organization's Senior Research Officer, Steve Benton, says that many colleges are putting too much emphasis on student feedback. "Student ratings are only one indicator of teaching effectiveness," he says. "Peer evaluations, student products, and innovative practices in the classroom should play a larger role in how faculty are evaluated."

The IDEA Center has discovered that individualized evaluations provide the best feedback on actual student learning outcomes. Using this system, each evaluation is tailored to a professor's personal teaching objectives and students determine how successfully a professor has met the goals he or she has set. When complete, this system generates a report that offers not only a quantitative analysis of teaching effectiveness but also tips and tools for improving.

While this system has proven effective, Benton believes there are still big changes on the horizon for evaluations. Data and analytics, which make it easier than ever for professors to track how much time students spend reading, studying, completing assignments, taking notes, what notes they take, and other critical information about student learning, may just play a critical role in the evaluation of the future.

Changes to evaluation systems don't always have to be drastic to be effective. At Miami University, the Center for the Enhancement of Learning, Teaching, and University Assessment is simply working to modernize and refine its existing system to better serve the needs of the university and its students. According to the center's director, Cecelia Shore, a big first step was to add a set of six university-wide questions to the evaluation form, allowing the school to look at common metrics across the university, which they think will help make the promotion and tenure system more fair.

When the new system goes live to all classes in the fall of 2013 (it's currently in pilot), professors and departments will also have the chance to add their own questions to evaluations, giving a more personal and hopefully more useful picture of factors that are important to particular areas of study. For example, those that rely on laboratory work, studio sessions, or other specific forms of pedagogy can ask questions related to these elements.

The biggest change, however, has been in making the evaluations digital. This gives students more time to fill them out and allows faculty to get instant feedback from students once they've submitted their final grades. However, a digital system does come with some drawbacks, Shore admits. Because students aren't required to fill out evaluations in class, response rates have become a major concern, with an overall rate of just 60%-70%. Shore believes, however, that the benefits outweigh the negatives. "The system we have, once we get it going and get the kinks worked out, is going to be good in a number of ways. It gives immediate feedback, gives the promotion and tenure process a common platform, and leaves room for faculty to emphasize the aspects of the course and their teaching that are unique and special to them."

Yet some in higher education favor limiting the importance of student input altogether, relying instead on academic peers to evaluate teaching effectiveness. Bowman says that it's these experts in education who can offer professors the best feedback on ways they need to improve. "Professors who have education, training, and experience in teaching should be visiting colleagues' classes and completing evaluations," she says. "I have done this at several schools and it is a better way to learn where instructional practice can be improved upon." While many schools already have this type of peer review, in an era when universities are shifting to a more business-focused, consumer model, it's unlikely student feedback will ever go by the wayside.

What Educators Need to Know About Evaluations

As the year winds down, you may find yourself again facing the prospect of student evaluations, which, despite the often hasty way they're completed by students, can actually be meaningful to your career and how you craft your courses. If you want to improve the responses students have to your courses and the quality of what they have to say, it can be useful to make a few changes.

One of the critical aspects of getting better and often more useful evaluations is to be specific. Shore says this is one of the most common problems educators encounter with student evaluations, as students can often see questions quite differently than professors. The best method, she thinks, is to build questions that are both clear and specific, so that there isn't room for interpretation by students, something that's much more common than you might think.

A study by Carol Lauer in 2012 backs this up, with students having vastly different interpretations of the phrase "not organized" on an evaluation form. While 30% of faculty took this to mean not following or changing the syllabus, 17% of students thought it meant not being prepared, 15% thought it meant not having a plan for the day, and 13% thought it meant student work was returned slowly.

Research also suggests that professors who are more animated (using hand gestures, modulating their voices, and walking while they talk) or who are more entertaining tend to get significantly better evaluations, no matter what they're actually saying. Students want to feel engaged and excited by the coursework; getting them there isn't always easy, but it is possible.

While you can't make changes this semester, there are things you can do in the future to help you craft better courses and get important feedback from students. Mary Clement, in an article for Faculty Focus, advises a few changes that can make students feel more excited about and engaged in your course:

  • First, you can strive to make course material relevant to modern students. This can mean incorporating elements from pop culture and tech or just explaining why students need to know this today. This can motivate students and often leads to better course evaluations.
  • You should also be explicit about what students need to do to succeed in the course and how they'll be evaluated. Let them know what you want and how you expect them to perform in class. It can also help to make it easy for students to figure out their grades throughout the semester so they can judge whether they need to ask for help or work harder.
  • Finally, keep in mind that you don't have to (and probably shouldn't) wait until the end of the semester to get feedback. While the official course evaluations are the ones that count, you can get feedback throughout the semester on your own. Your students will feel valued and you may find ways to improve your courses earlier in the school year, giving you time to adjust and hopefully earning you better evaluations when it counts.

No one wants to earn the scathing, sometimes cruel comments that students can dole out, and most teachers want to be considered good at what they do. Getting there will take time, effort, and yes, maybe a little pandering to students' needs. In the end, however, both you and your students should benefit: you earn better evaluations and they get a better, more fulfilling educational experience.

The evaluation system that exists on most college campuses isn't perfect, or in some cases even desirable, but things are changing as schools look to get more from the data they collect and professors fight for the right to be evaluated fairly and on elements that truly matter. Like most things in higher education, change in evaluations is moving slowly, but educators can take solace in knowing that evaluations are, slowly but surely, getting better.