President's Message

Program Rankings—the Search for the Academic Assessment Holy Grail!

I was embroiled in a heated debate earlier this week about four-year program rankings. In short, ICHRIE Head Office was asked to provide an industry partner with a list of the world's top 100 hospitality management programs. Put simply, we all ran for cover as, in the opinion of most, no reliable ranking existed. With this in mind, I thought I would take the opportunity to generate a little discussion about this issue, or, as I like to put it, the eternal search for the academic assessment Holy Grail! I begin with a question to which I don't really have an answer: why is it that, after all these years of discussing academic quality and its influence on the decision making of prospective students and employers, we still have no nationally or universally accepted measure of quality that can be used to rank two- and four-year programs within the United States and globally?

From even a brief review of the literature, it is clear that many papers have been written on this theme, both within and beyond our discipline. Researchers have settled on a variety of indicators to rank programs, and there have been clear winners and losers depending upon your perspective. For example, Gould and Bojanic (2002) explored the ranking of undergraduate hospitality programs as perceived by industry recruiters. They concluded that program rankings would be more beneficial if used as ratings, tailored more to "specific attributes of the program being rated" and developed "with other industry stakeholders (e.g. academics, hospitality executives, etc.)." Another study, by Severt, Tesone, Bottorf and Carpenter (2002), addressed the same issue by analyzing the scholarly productivity of school faculty. This approach enabled them to produce a world ranking of the top 100 hospitality and tourism programs. A later paper by Assante, Huffman and Harp (2007), addressing the "Conceptualization of Quality Indicators for U.S. Based Four-Year Undergraduate Hospitality Management Programs," supports Gould and Bojanic's earlier conclusion that other stakeholder groups (namely students/alumni and industry) and a host of program-specific quality indicators need to be considered here. Their work points to five conceptual themes: students and alumni, curriculum, faculty, industry, and facilities.

From a broader search of the literature, it appears that this theme has been explored to a greater extent across the wider world of higher education. Upon review, there is much we can learn if we are ever to develop a truly objective and comprehensive measure of program quality that can be used to rank and compare programs globally. For example, O'Neil, Bensimon, Diamond and Moore (1999), in their paper "Designing and Implementing an Academic Scorecard," speak to the need for "designing metrics that are simple, practical and conducive to organizational learning," that is, informed decision making. They propose a series of metrics that address the quality of students, faculty and programs, as well as the nature and efficiency of school operations. Dill and Soo (2005) take this a step further. Reporting on a UNESCO/CEPES conference on higher education indicators, the authors spoke to the need for an "emerging international consensus on the measurement of academic quality" and, God forbid, the "need for an appropriate role for public policy in the development of university ranking systems." They go on to report on a cross-national analysis of university ranking systems covering Australia, Canada, the United Kingdom and the United States. While their paper points to the importance of research productivity at the institutional level, they also highlight the importance of faculty terminal degrees, faculty/student ratios, student entry data, student diversity, on-time graduation rates, teaching quality, job placement and, ultimately, student/alumni satisfaction.

As academics we are all aware of the importance of academic quality and how it drives enrollment, development success, program reputation and student placement. Why, then, can't we put our collective heads together and come up with an approach to program assessment and ranking that serves us all well? For too long we have sat in our ivory towers and hallowed halls and offered 1,001 excuses as to why such a system will not work. Given the fiscal challenges we all now face, surely it is time to slay this dragon and develop and agree upon objective quality indicators that can produce informed, impartial and reliable program rankings that are accepted globally? Unlike Dill and Soo, I am not advocating government involvement in this endeavor; rather, I believe the academy is best placed to take action here and, in the interests of impartiality, perhaps ICHRIE should take a lead role?

To this point we have dabbled with accreditation (ACPHA), but for a variety of reasons not all schools have chosen to engage. We have also talked about the possibility of developing a national exit exam for graduating seniors of two- and four-year programs. That said, no real action has been taken and no serious or acceptable system has been put forward. As the above review has shown, this is not rocket science. There is a range of objective and acceptable indicators that can be used. To those already listed I would add job placement and industry perceptions, on-time graduation, graduation rate and exit exam performance. Surely there is a way we can develop an approach based upon such indicators that speaks to this issue more comprehensively and reliably? In closing, I invite your input on the matter.