Higher Education Research & Development, Vol. 26, No. 1, March 2007

University mission and identity for a post post-public era

Simon Marginson*
University of Melbourne, Australia

The paper reflects on the implications of two influential albeit contrary movements affecting research universities in Australia (and many other nations): global rankings, which normalize the comprehensive science-based research university; and the policy emphasis on diversification. It critiques the global rankings developed by the Times Higher Education Supplement and the Shanghai Jiao Tong University, which tend to shift control over the definition and purpose of university activity to the rankings agencies themselves. Taken together, rankings and diversity pose for university executives and governing bodies the question of university mission and strategy. It is knowledge creation that distinguishes universities. To go to the root of mission and strategy is to revisit the academic fundamentals, focusing on the mixed public/private character of the core academic products of universities and on the internal ethical regimes essential to sustain free scholarship and research.

Introduction

This paper focuses on two transformations that are remaking research universities1 in Australia and in many other countries.2 The two sets of changes are in some tension with each other but also generate each other. The first transformation has been set in train by global research rankings, which by defining the terms of global competition are pushing universities into the same mould and ordering them more firmly in a single hierarchy. Rankings in the form of league tables are posed as the ultimate performance indicator. They have attracted much government and public attention and transfixed the higher education sector itself (Usher & Savino, 2006).
The second transformation, which is less developed, is the gathering momentum in favour of specialization of mission and identity. Specialization or diversity has received a growing emphasis in federal policy (Bishop, 2006) and is manifest particularly in the University of Melbourne's (2006) strategic shift to graduate professional programs, which has international antecedents but is novel and distinctive in the Australian context. The focus on specialization works in almost the opposite manner to university rankings. It emphasizes horizontal rather than vertical forms of difference, and foregrounds internally generated strategy rather than determination from outside the institutions. At best, specialization calls on universities to be themselves and allows them to be different, posing matters of mission and identity that are central to the responsibilities of university executives and governing bodies. Each of these two transformations pushes universities to extremes that compel them to turn towards the other. University rankings and university specialization function almost as necessary opposites, patterning the sector with a post-public mixture of determination and freedom. Rankings generate the desire to defy the positioning power of the league table by shaping a distinctive strategy and new terms of comparison. Yet diversity is difficult to imagine, originality is risky in the attempt, and rankings provide a more secure (albeit path-dependent) guide to action. At the same time, both forms of transformation are consistent with the shift in responsibility from government to institution that has been the main trend in policy for the past two decades. In this abstention of government, key issues are being obscured, particularly the public resources that universities need to be globally competitive and thereby move up the rankings, or to invest in building their own missions on a long-term basis. Nevertheless, in an age of self-regulating persons and institutions, it is inevitable that mission and identity will be largely locally determined, even while the resource conditions enabling mission continue to be partly public, particularly in basic academically directed research.

*Centre for the Study of Higher Education, University of Melbourne, Victoria 3010, Australia.
We are moving into a post post-public era in which the momentum towards deregulation and corporatization will be balanced by a renewed concern about public purpose and conditions, often with universities themselves defining the public interest. This era will be framed by markers of market competition, such as university rankings, and also by self-regulating specialist missions that are publicly responsible and accountable. Research universities cannot evade the challenges posed by global rankings and specialization, but they can interpret and adapt those trends in such a manner as to enhance their control over their own destiny. Here the challenge is to take the question of mission and identity beneath the level of new branding or marketing strategy, to the core academic activities wherein lies the distinctive character of universities and the potential for fundamental improvements. The paper provides a critique of the two main systems of global university rankings, developed by the Times Higher Education Supplement and the Shanghai Jiao Tong University, and reviews their effects. It then takes up the question of mission and identity, arguing that what makes universities distinctive is their products, and particularly those products derived from their role in relation to knowledge. This has a number of strategic implications for university executives and governing bodies, particularly concerning the mixed public and private character of those products and the internal ethical regimes that are necessary to enable free knowledge creation.
Global university rankings

Rankings by the Times Higher Education Supplement

In October 2006 the Times Higher Education Supplement released its annual World university rankings for the third time (THES, 2006). Australian universities again outperformed those of every other nation except the USA and the UK, with seven universities in the top 100: Australian National University (ANU), Melbourne, Sydney, Monash, New South Wales, Queensland and Macquarie. There were six more in the top 200: Adelaide, Western Australia, RMIT, Curtin, Queensland UT and Wollongong. On the other hand, of these 13 Australian universities, all but three (ANU, Sydney and Queensland) saw their position fall from that achieved in the 2005 league table. For example, Monash dropped from 33 to 38, Adelaide from 80 to 105, WA from 80 to 111, RMIT from 82 to 146, Curtin from 101 to 156 and QUT from 118 to 192. La Trobe, Newcastle, South Australia and Tasmania were shifted out of the THES top 200 altogether (see Table 1). Meanwhile the British universities improved their position. In the reputational survey that makes up 40% of the THES index, based on academic peer review, Cambridge and Oxford passed Harvard, though Harvard is much richer and its professors are cited at more than three times the rate of those from Oxbridge.
Table 1. Australian universities in the Times Higher Education Supplement ranking of the top 200, 2004-2006

University                                  2004    2005    2006
Australian National University (ANU)
University of Melbourne
University of Sydney                        40      =38     =35
Monash University                                   33      38
University of New South Wales
University of Queensland
Macquarie University                                        =82
University of Adelaide                      56      =80     =105
University of Western Australia             96      =80     =111
Royal Melbourne Institute of Technology             82      146
Curtin University of Technology             76      =101    =156
Queensland University of Technology                 118     =192
University of Wollongong                                    196
La Trobe University                                         -
University of Newcastle                             =127    -
University of South Australia                       =154    -
University of Tasmania                      161     =166    -

Note: - indicates ranking position outside the top 200. Source: THES, 2006 and predecessors.
How did this all happen? Why is the THES index so volatile? Do universities change this much from year to year? How did the University of Western Australia and RMIT come to perform so much worse in 2006, and how did Cambridge and Oxford suddenly improve their peer reputation? Why do Australian universities continue to do relatively well despite the deteriorating state of public funding of higher education in Australia, as noted in Education at a glance by the Organization for Economic Cooperation and Development (OECD, 2005)? The answers lie in the criteria used to determine the THES rankings. The THES places a high value on institutional reputation as such, and on the level of internationalization. Its rankings appear to have been designed to service the market in cross-border degrees in which UK universities (like Australian universities) are highly active. As well as the 40% comprised by the reputational survey of academics, another 10% is determined by a survey of global employers. The reputational surveys are open to variation year by year and this generates most of the volatility of the index. There are two internationalization indicators: the proportion of students who are international (5%) and the proportion of staff who are international (5%). Another 20% is determined by the student-staff ratio, a quantity measure that is treated as a proxy for teaching quality. The remaining 20% is comprised by research citation performance.

Problems of the THES methodology

Methodologically, the THES index is open to criticism. It is not specified who is surveyed or what questions are asked. The student internationalization indicator rewards volume building, not the quality of student demand or programs. Teaching quality cannot be adequately assessed using student-staff ratios. Research plays a lesser role in the index: the THES rankings reward a university's marketing division better than its researchers. More seriously, the THES index is open to manipulation.
By changing the recipients of the two surveys, or even the way the results are dealt with, the results can be shifted. For example, in 2006 the results of the three annual surveys of academic peers, with partly different recipients, were combined in a non-transparent manner. Along with the internationalization indicator, the role of the surveys explains the relatively strong performances by universities in the UK and Australia. The top Australian universities do extraordinarily well here. In the peer review survey in 2006 ANU is ranked seventh in the world, ahead of Yale, Columbia, Chicago, Caltech, the whole of Western Europe, the much bigger research universities at Toronto and Tokyo, and Imperial College London, which is third in the UK after Cambridge and Oxford. Melbourne is ranked eighth in the world on reputation alone, equal with Yale. For those who have some sense of the respective resources, activities and power in the higher education world of all of these universities, the lofty placement of the Australian institutions does not make sense. No doubt ANU and Melbourne are good, but they are not that good. Another way of assessing the THES inflation of Australia's performance is to compare the results for Australia with the results for Canada. Canada has a similar higher education system to Australia, in many respects, but is 50% larger and has
better funding overall, including much stronger public funding, and both higher participation rates and stronger research performance. The only area where Australia is clearly better than Canada is recruiting high volumes of international students. But Canada has not sought to do this. Unlike Australia, it has not cut back public funding to drive entrepreneurship, so it is stronger in basic research and subsidizes higher participation rates. Yet Canada has just three universities in the THES top 100. Australia has seven, and its leading institutions are ahead of those from Canada. Perhaps a calculation that at bottom was designed to strengthen British universities in the international market, vis-à-vis the USA and Europe, inadvertently picked up the Australian universities as well. Or perhaps the THES always intended to boost the Australian universities. Despite some decline in the Australian position, the outcomes of the THES rankings continue to provide useful data for Australian university marketing departments. But executive leaders and governing bodies in Australia believe the rankings of their universities at their peril. There is a danger that the THES inflation of the performance of Australian universities will induce complacency. It would also be unwise to believe these rankings for another reason: they are not universally regarded as credible. In relation to Australia, there are sources of information other than the THES rankings, and the messages from those other sources can be different. In relation to universities from Australia and many other countries, the sudden annual changes in THES position suggest that there is no necessary relationship between changes in real institutional performance and changes in the ranking. These sudden changes of position can create substantial difficulties and even injustices for institutions.
In 2004 the oldest public university in Malaysia, the University of Malaya, was ranked by the THES at number 89 in the world. The newspapers in Kuala Lumpur celebrated. The university's Vice-Chancellor ordered large banners declaring the achievement, 'UM, a world's top 100 university', placed on the edge of the campus facing the main freeway out of town, where they were visible to every international visitor arriving in and leaving the country. But in 2005 the THES changed the definition of domestic Chinese and Indian students at the university from international to national. The results of the reputational surveys also changed. The University of Malaya dropped 80 places to number 169, though the university itself had scarcely altered from 2004. The Vice-Chancellor was pilloried in the Malaysian media. When his position came up for renewal by the government in March 2006, he was replaced.3 This might make a good news story. But it is not conducive to good policy, leadership and management. If a sharp decline in the THES ranking can occur without a necessary decline in real performance, likewise there is no reason to assume that genuine improvement will be rewarded in the rankings. This is not a competition that creates positive incentives. It looks more like a random process or, worse, like a rigged game. It is hard to see how the Times exercise adds value either for governing bodies and institutional managers within higher education, or for external parties such as prospective students and their parents, governments, industry and the professions that need accurate data about the higher education sector.
The Shanghai Jiao Tong University rankings

The other principal system of global rankings is provided by the Shanghai Jiao Tong University Institute of Higher Education (SJTUIHE, 2006). It is very different. There are no reputational surveys or proxy teaching indicators. The index is entirely derived from different aspects of research performance. The major part of the index is determined by publication and citation in the sciences, social sciences and humanities: 20% citation in leading journals; 20% articles in Science and Nature; and 20% the number of Thomson/ISI HiCi researchers on the basis of citation (ISI, 2006). Another 30% is determined by the winners of Nobel Prizes in the sciences and economics and Fields Medals in mathematics, in relation to their training (10%) and their current employment (20%). The remaining 10% is determined by dividing the total derived from the above data by the number of staff. At the elite level, Shanghai research performance is dominated by the English-speaking nations, with 71% of the world's top 100 research universities, and particularly by the United States, which has 17 of the top 20. Australia has just two universities in the Jiao Tong top 100, compared to the seven in the THES table: ANU at 54 and Melbourne at 78. Sydney, Queensland and Western Australia are in the Shanghai top 150; the University of Western Australia benefited from the award of the 2005 Nobel Prize in Medicine to one of its research professors. The University of NSW is in the top 200; Macquarie, Monash and Adelaide are in the top 300; La Trobe and Newcastle are in the top 400; and Flinders, James Cook, Murdoch, Tasmania and New England are in the top 500 research universities. The other 24 Australian universities miss out altogether.
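The mechanics of both indexes reduce to the same arithmetic: a weighted sum of normalized indicator scores, with the league-table ordering following from the choice of weights. The sketch below uses the weightings reported above for the THES and Shanghai Jiao Tong indexes; the two universities and all their indicator scores are invented purely for illustration, to show how the same institutions can be reordered simply by reweighting, which is the nub of the methodological critique.

```python
# THES weights as reported in the text (fractions of the composite).
THES_WEIGHTS = {
    "peer_review": 0.40,          # reputational survey of academics
    "employer_survey": 0.10,      # survey of global employers
    "intl_students": 0.05,        # proportion of international students
    "intl_staff": 0.05,           # proportion of international staff
    "student_staff_ratio": 0.20,  # proxy for teaching quality
    "citations": 0.20,            # research citation performance
}

# Shanghai Jiao Tong weights as reported in the text.
SJTU_WEIGHTS = {
    "citations_leading_journals": 0.20,
    "science_nature_articles": 0.20,
    "hici_researchers": 0.20,
    "nobel_fields_alumni": 0.10,   # prize winners by training
    "nobel_fields_staff": 0.20,    # prize winners by current employment
    "per_capita_performance": 0.10,
}

def composite_score(indicator_scores, weights):
    """Weighted sum of indicator scores, each assumed normalized to 0-100."""
    return sum(weights[k] * indicator_scores[k] for k in weights)

def rank(universities, weights):
    """Return university names ordered from highest to lowest composite."""
    return sorted(universities,
                  key=lambda u: composite_score(universities[u], weights),
                  reverse=True)

# Two hypothetical institutions: one strong on reputation and international
# recruitment, one strong on research citation. Scores are invented.
universities = {
    "Univ A": {"peer_review": 90, "employer_survey": 80, "intl_students": 95,
               "intl_staff": 90, "student_staff_ratio": 60, "citations": 50},
    "Univ B": {"peer_review": 60, "employer_survey": 60, "intl_students": 30,
               "intl_staff": 40, "student_staff_ratio": 70, "citations": 95},
}

print(rank(universities, THES_WEIGHTS))       # reputation-heavy: Univ A first
print(rank(universities, {"citations": 1.0})) # citation-only: Univ B first
```

The same indicator scores produce opposite orderings under the two weightings, without either institution changing at all; volatility in the published tables can therefore reflect methodology rather than performance.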
The Shanghai index is more soundly based than that of the THES, in that it measures only real outputs and not reputation, its methods are transparent, and it creates a positive relationship between improved research performance relative to others and a higher ranking. The Nobel Prize criterion is more controversial than the other constituents of the index. These prizes are submission-based, so scientific merit is not the only determining factor; there is potential for politicking to enter decisions. More generally, the SJTU rankings favour universities large and comprehensive enough to amass strong research performance over a broad range of fields, while carrying few research-inactive staff. They also favour universities very strong in the sciences; universities from English-language nations, because English is the language of research (non-English-language work is published less and cited less); and universities from the US system, as Americans tend to cite Americans. A massive 3614 of the Thomson/ISI HiCi researchers are in the USA. This compares with 224 in Germany, 221 in Japan, 162 in Canada, 138 in France, 101 in Australia, 94 in Switzerland, 55 in Sweden, 20 in China and none in Indonesia. Harvard and its affiliated institutes have 168 HiCi researchers, more than the whole of France or Canada. Stanford has 132 HiCi researchers, more than all the Swiss or Australian universities together; UC Berkeley has 82 and MIT 74. There are 42 at the University of Cambridge in the UK. Of the Australian institutions, ANU is the leader with 25 HiCi researchers, followed by Melbourne with 10 (ISI, 2006), while WA has seven, Sydney and New
South Wales six, and Macquarie and Newcastle three each. Competition around the world for current HiCi researchers, and for the likely next generation, has now been sharply intensified because of the Shanghai index. Shanghai Jiao Tong provides a better basis for global competition than does the THES. But the Shanghai outcomes are confined to research and associated publication. The Newsweek rankings (not rankings in their own right, but a scissors-and-tape combination of part of the Shanghai rankings and part of the Times ranking, with the addition of data on library holdings) are also largely grounded in research and publication (Newsweek).

Global rankings and diversity

These global rankings systems, both those of the THES and those of the Shanghai Jiao Tong, have accumulated significant authority in worldwide higher education in a short time, particularly the annual outcomes of the Jiao Tong rankings. But it is important to consider the logic and effect of the rankings, particularly in relation to diversity and to university control over mission and identity. First, rankings tend to work against diversity, specialist mission and strategies of innovation, except within narrow bounds. These two rankings systems tend to norm world higher education as a single global market of essentially similar comprehensive research universities able to be arranged in a league table for comparative purposes. Knowledge flows on a global basis and increasingly so do students and staff. Universities share a common global network constituted by relations that are both cooperative and competitive. The question at issue is the terms on which that global competition is conducted. The Shanghai and THES rankings both tend to boost the position of the comprehensive research university vis-à-vis other models of higher education (though the reputation indicator allows the THES to include whatever heterogeneity it sees fit).
The THES also favours universities with many international students. All other kinds of institution are disadvantaged. Both rankings systems tend to downgrade smaller specialists and institutions whose work is primarily local or vocational. This means they downgrade the high-quality technical institutes in Finland and Switzerland, the German technical universities and many Australian institutions. There is no reason to assume that intensified competition in itself will generate greater specialization unless the terms of contest and the incentives point in that direction. In the absence of policy moves to shore up diversity by other means, a focus on global research rankings triggers the evolution of national systems that are both more unitary and steeper in their vertical differentiation. On the national scale it is possible to interpolate compensating policy moves to secure greater diversity. However, no such policy moves can be made on the global scale, because outside the framework of the Bologna process in Europe there is no policy space in which such moves can be executed. Multilateral negotiation in higher education is scarcely developed yet, except within Europe. This means that unless the global rankings systems are changed, they inexorably push higher education worldwide towards a one size fits
all approach in which comprehensive research-intensive science universities will be dominant. And national policy moves in one country cannot fully overcome the homogenizing effects of global rankings (except in the USA, where the nation itself is globally dominant). Second, rankings become an end in themselves, without regard to exactly what they measure or whether they contribute to institutional and system improvement. The data-gathering process covers only a small fraction of university activities. So league tables are only ever partial in their coverage, and it is highly simplistic to treat them as summative, let alone treat the results as representative across all possible categories of university activity. But league tables are normally treated as summative. The desire for rank ordering overrules all else. Often institutions are rank ordered even where differences in the data are not statistically significant. Rankings generate and recycle reputation, mostly protecting established reputations. In the process, rankings divert our attention from many of the central purposes of higher education. No ranking or quality assessment system has been able to generate comparative data based on measures of the value added during the educational process, and few comparisons focus on teaching and learning at all, though such data would be useful for prospective students. Instead, there are various proxies for teaching quality such as quantity resource indicators, student selectivity and research performance; but empirical research suggests that 'the correlation between research productivity and undergraduate instruction is very small and teaching and research appear to be more or less independent activities' (Dill & Soo, 2005, p. 507). Nor is there any necessary correlation between inherited reputation and teaching quality. Reputation-based rankings imply that students' only concern is the status of their degrees, not what they learn.
Reputation-based rankings favour universities that are already well known, regardless of merit. One study of rankings found that, in the case of a particular survey, one-third of respondents knew almost nothing about the institutions concerned, except for their own (which tended to figure rather well). In the absence of soundly based knowledge, well-known university brands generate halo effects. The classic example is the American survey of students that found Princeton law school ranked in the top ten law schools in the country. But Princeton did not have a law school (Frank & Cook, 1995, p. 149).

Global rankings and university mission

These effects of rankings must be of concern, particularly the consequences for diversity. But the logic of rankings cuts still deeper into the idea of a university than suggested so far. When rankings are not challenged and interpreted, they start to determine mission and identity. Externally generated rankings, rather than universities themselves, come to shape the purposes, outputs and values of higher education and define it to, and for, the world at large. Each ranking system norms particular notions of higher education and its benefits. In the Jiao Tong universe, higher education is scientific research. It is not teaching or community building or solutions to local or global problems. In the THES universe, higher education is
primarily about reputation for its own sake, and about international marketing. It is not about teaching and is only marginally about research. To accept these ranking systems as they are is to accept holus bolus these definitions of higher education and its purposes. At the same time, rankings feed the illusion that higher education, in Australia and across the world, is a level playing-field in which universities are like individual business firms that stand alone, their position solely determined by their singular mission, strategy and effort. Universities now have more scope than ever for strategy making, especially global strategy, and a better capacity to pilot their own course. But the analogy with business firms has limits. Universities lack shareholders and have a broader set of constituents than business firms. They have multiple bottom lines, they are looser coalitions of activity, they are less flexible than business firms in methods of production, and their core outputs are more fixed. And they do not stand alone on a level playing-field. For better or worse, universities are tied to their history, to their local context, to national resourcing (especially in relation to basic research) and to the capacity of their communities to provide financial and in-kind support. They are closely affected by the level of government, philanthropic and tuition revenues available to them. It is difficult indeed to become Harvard without Harvard's federal research grants and endowments, not to mention the military, economic and technological knowledge-power of the United States. Thus the normative power of league tables, with their illusion that this is a competition that anyone can win, threatens to rob universities of control over their own identity even while imposing on them expectations that few of them can expect to meet.
Strategic response to rankings

How might university executives and governing bodies respond to the gravitational pull of global university rankings on strategies and priorities, and to the potential displacement of control over mission and identity? One move is to critically examine the assumptions at the base of rankings, as this paper has done. But by itself the technical critique has limited impact. Rankings in the form of league tables will not disappear, even if they are exposed as nonsense; and the Shanghai Jiao Tong rankings are not nonsense, but contain informative data. A more fundamental strategy in response to rankings is to regain and re-ground the identity and mission of institutions, both within each individual institution and jointly across the sector. Other developments, longer in the making than global rankings, point to the same strategic conclusion. Australian higher education has seen two decades of transfer from public funding to mixed funding and the narrowing of government policy objectives in the sector; the corporatization and self-management of institutions and the half-resolved tensions between academics and managers; a massively expanded set of functions, sites and activities; more media attention and a more potent set of accountability requirements; and the streamlining of governing bodies and their assumption of a more prudential and supervisory role. Australian universities have become overly focused on revenues and market share as ends in
themselves, though these are only means to the real ends, which are the benefits that they produce. This, again, suggests that a renewed emphasis on identity, mission and product would be timely. In sum, universities will not secure better public support in the form of public investment by raising more of their own money, or by becoming more efficient or more transparently accountable. These objectives are rightly expected of universities, but any social organization can do the same. Arguably, it is only the conduct of activities that are unique to universities, and that enable their distinctive social contributions, that can create a strong rationale for public support. If Australian research universities are to trigger a public thinking about higher education different to the banal league table view of the sector, this will be based on what they do and why they do it.

Identity and mission

What makes universities socially distinctive is that they are self-reproducing, knowledge-forming organizations. They are defined by the binary between the known and the unknown. No other social or economic institution is defined primarily by this binary, although a growing number take it into their operation. This is also where universities are different to other educational institutions, such as schools. This binary between the known and the unknown lies at the heart of the roles of universities in research and critical scholarship, in teaching and professional training inflected by research and scholarship, and in knowledge transfer in all forms. The knowledge-forming functions of universities also impart to them a critical spirit, an incessant modernism and a rather secular temper, qualities that can bother conservative governments. They also explain why universities tend to be more global in temper and linkages than most of society. Knowledge slips readily across borders and forms in worldwide networks.
The central importance of knowledge has two main implications for the inner life of universities. This inner life is not simply something that university executives and governing bodies must tolerate as a kind of background noise. The inner life of universities constitutes the conditions that make the work of universities possible. It is a key responsibility of executive leadership to guarantee these conditions, and a key responsibility of the governing body to superintend their provision and guarantee. First, because knowledge is predominantly a public good, universities necessarily find themselves producing a mix of public and private goods (Marginson, 2007). They produce important private goods through both teaching and research, but they never become solely commodity producers, or they cease to be universities. There is an ever-growing number of commercial applications of knowledge, and universities themselves capture some of these. Nevertheless, all over the world, the great bulk of the new knowledge produced by universities enters the free or subsidized public domain, through teaching and especially through research publications. This is because, as is often remarked, knowledge is a classic public good in the economic sense (Stiglitz, 1999). Public goods (including services) are goods
that are non-rivalrous and non-excludable. Goods are non-rivalrous when they can be consumed by any number of people without being depleted, for example, knowledge of a mathematical theorem. Goods are non-excludable when the benefits cannot be confined to individual buyers, for example, law and order or social tolerance. Goods with neither quality are classified as private goods. Knowledge, especially basic research, is an almost pure public good, more clearly so than teaching, which has mixed qualities. Public and part-public goods tend to be under-provided in economic markets. Yet such goods are central to the workings of advanced economies, societies and polities. Thus an immense array of information and knowledge generated in higher education, notably the outcomes of basic research, is accessible and subject to nominal charges well below its use value and below its costs of production. This means that regardless of the extent to which some of their activities are commercialized (and a $9.8 billion export industry in Australia (AFR, 2006) is not a small business), universities will continue to be partly dependent on public and philanthropic private sources of funds, and partly judged according to their success in fulfilling the broader collective good. Advancing the broader collective good remains at the core of what universities do and is crucial to the public support that they receive. Though universities are often imagined as if they were stand-alone firms whose bottom line is just themselves, there is always more to them than that. Second, the knowledge-forming role of universities has implications for their internal life. Because of (and despite) this definition, universities have multiple and diverse activities, connections, obligations and stakeholders.
Under conditions of organizational freedom, their personnel pursue a broad range of objectives and agendas, with varying values and ethical regimes, a range that is particularly broad when all of the academic disciplines and professions are taken into account. Universities must support (and ideally encourage) this plurality, except that they must exclude values and ethical regimes that would undermine, or are otherwise inconsistent with, universities as knowledge-forming organizations. However, the values practised by individuals, or by units responsible for teaching, research or institutional marketing (values that are sometimes mutually contradictory), do not necessarily embody the values of the institution qua institution. Only a small number of purposes and ethical regimes are common across the whole institution: those that sustain universities as self-reproducing knowledge-forming organizations. Broader agreement is not just impossible, it is undesirable. If a university sought to be a community of the good, in which all staff and students were committed to an all-embracing universal set of values spanning the full range of human activities, internal plurality, discussion and debate would be constrained. The university would be inhibited in the pursuit of edgy, critical, innovative thought. It would be unable to function fully as a knowledge-forming organization. Alternatively, amid the tensions engendered by different and competing claims about what constitutes the good, the university would fracture and fly apart. It would no longer be self-reproducing.
Two ethical domains essential to the university

These common purposes and ethical regimes, the meta-institutional purposes and ethical regimes, constitute the contemporary idea of a university. Here the relevant ethical regimes can be reduced to two domains that are consistent with, and productive of, the distinctive social character of universities:

- The domain of communicative association: this notion owes something to longstanding notions of liberal conduct and civil behaviour, and something to the insights of Jürgen Habermas into communicability and the public sphere. This domain includes the right to speak, and the conduct of dialogue on the basis of honesty and mutual respect; and intra-institutional and inter-institutional relationships grounded in justice, solidarity, compassion, cosmopolitan tolerance and empathy for the other.
- The domain of secular intellectual practices: this domain includes support for, and freedom for and of, the practices integral to productive intellectual activity, including curiosity, inquiry, observation, reasoning, explanation, criticizing and imagining.

The domain of communicative association provides conditions necessary for the domain of secular intellectual practices. Arguably, it is in this second domain, in which new knowledge is formed, that the ultimate essence of the contemporary idea of a university is found. In forming knowledge, scholars and researchers remember what they know, and they think of something new. Then they systematize that something new. This something new, the thing that scholars and researchers seek, emerges in a zone vectored by criticism and imagining. In the absence of this zone, universities lose their driving force and their ultimate modern rationale. That zone is by no means in a healthy state in every Australian institution. Fostering that zone has always been an important responsibility of executive leaders at both the university-wide and the disciplinary level.
Correspondingly, overseeing and monitoring the conduct of such fostering ought to be seen as an important responsibility of governing bodies. Arguably, within universities the core organizational objective should be to protect and enhance the domain of intellectual practices, located as it is in the different fields of inquiry. From the broader viewpoint of policy, this suggests that research universities will maximize their social, economic and cultural contribution to the extent that human association within and between them is free, open and inclusive, and able to accommodate difference not just on the national scale but also on the worldwide scale; and to the extent that academic practices are free, independent and robust. Numerous examples show that commercially controlled research, to the extent that it constrains the flow of knowledge or distorts the truth, inhibits universities. The former Harvard President, Derek Bok, devoted part of his book Universities in the marketplace (2003) to demonstrating this. This does not mean that there should be no commercial funding of research, or no hiring out of university facilities to commercial interests (and it certainly doesn't mean there should be no commercially relevant research: much research is in that category). But
commercially controlled activities should be firmly separated from the mainstream research terrain governed by academic freedom.

Conclusion

Research universities are complex institutions with many stakeholders and outputs. Fortunately, there are certain things that only universities can do. This paper has argued that, in the face of the challenge of global university rankings to university control over mission and identity, and the challenge of policy-makers and market pressures to diversify from one another by establishing a distinctive trajectory and oeuvre, the strategic response should be to focus sharply on the core purposes and activities distinctive to research universities as organizations. At the same time, it is important not to lose sight of the fact that fulfilment of these core academic purposes depends on certain enabling conditions. Up to now the discussion of conditions has focused mostly on resources, efficiencies and accountabilities. The argument of this paper is that it is also necessary to consider the ethical settings in which institutions operate, particularly the conditions sustaining communicative association and intellectual freedom, because those conditions are essential to core mission and identity. It is in the combination of identity and mission with enabling conditions that answers to the question about institutional specialization and strategy are found. Specialization, in essence, is constituted by one or another mix and quality of products: teaching, research and scholarship, and knowledge transfer. Specialization does not lie in the manner in which the organizational hierarchy is arranged, or in the decision to mount a commercial company, enter a new market or launch a new venture providing often peripheral services. It goes to fundamentals such as the coursework programs, the methods of teaching and the modes and foci of scholarship and research.
That is why (to make one specific reference) the Melbourne model is important: not because it changes the program structures (these are only a means to an end), but because it changes what is taught and the formative effects of learning at every level, from undergraduate through graduate professional programs to doctoral programs. In relation to university rankings, the foregoing analysis suggests that it is in the interests of research universities, jointly and severally, to engage in debate about the development of better rankings systems, rather than simply accepting the present systems by celebrating success and ignoring failure, or, what is worse from a tactical viewpoint, complaining about the methods of ranking only when the outcome is unfavourable. It is particularly crucial that universities successful in the rankings forgo the marketing benefits and become engaged in strategies for improvement. What should be the objective of such strategies? Arguably, what is needed is clean rankings that are transparent, free of self-interest and methodologically solid. As far as possible, the focus should be shifted from holistic institution-wide reputational rankings and towards university comparisons that provide more extensive and more valid data about the core academic activities themselves. League table positions of whole
institutions do not actually tell students, government and the public much about what universities do. This suggests that it would be better to rank institutions on their various functions taken separately: the different aspects of research and teaching, the different disciplines, locations and service functions. Ranking methods should generate information relevant to different stakeholders and provide data that are internationally accessible and comparative. Because quality is in the eye of the beholder, users should be able to interrogate the data on institutional performance using their own chosen criteria. In terms of ownership, it is important that institutions are involved and committed to maximum openness. Institutions operating on a broad basis (preferably not just nationally but internationally) should establish an independent agent to collect, process and analyse data. A system of rankings that meets these requirements is the one developed by the Centre for Higher Education Development (CHE, 2006) in Germany. This system includes data on all higher education institutions in Germany; The Netherlands and Belgium (Flanders) are preparing to join, and some Nordic institutions are showing interest. The chief virtue of the CHE rankings is that they dispense with holistic rank ordering and provide a range of data on specific services. The CHE data, which are drawn from student and academic surveys that are updated regularly, are provided via an interactive web-enabled database that permits each student or member of the public to examine and rank identified programs or services, based on their chosen criteria and their chosen weighting between the criteria. I have used it and it works. The Shanghai Jiao Tong research rankings provide rigorous, independent, useful data on research performance, while the THES rankings do not provide useful data and should be abandoned.
The Jiao Tong research rankings are useful only if their interpretation is limited to their role in relation to research. The message that this is not a general university ranking must be emphasized if it is to cut through the public debate. The Jiao Tong research rankings would be less open to this misunderstanding if they were confined to rankings within each of the separate disciplines, and if the institution-wide composite rankings were no longer published. They would also be more useful if the Nobel indicators were taken out, as these data are less sound than the other metrics.

Acknowledgement

This paper was first delivered as a keynote address to the 6th Annual National Conference on University Governance, Old Parliament House, Canberra, Australia, October 10-11. Thank you to Deryck Schreuder, who suggested the themes; to Sharon Parry, who encouraged the author to think it might be worth publishing; and to the two anonymous reviewers.

Notes

1. The focus is on research universities because it is among the research universities that questions of global rankings and institutional trajectory are posed most acutely at present. There is
another discussion to be had, which focuses on the consequences of rankings for different kinds of university. The diversity of the Australian system is discussed elsewhere: see Marginson and Considine (2000) and Marginson (2006).
2. The discussion here is centred on Australia, but the issues receive global treatment in Marginson and Van Der Wende (forthcoming).
3. This anecdote is derived from as yet unpublished research on the global perspectives and strategies of the University of Malaya that was conducted in February. The story has been confirmed from several sources.

References

Australian Financial Review. (2006). Education a huge, and growing, economic boon. Australian Financial Review, 9 October, p. 1.
Bishop, the Hon. J., Commonwealth Minister for Education, Training and Science. (2006). Speech to Curtin Institute Public Forum. Retrieved October 14, 2006, from Ministers/Media/Bishop/2006/07/B asp
Bok, D. (2003). Universities in the marketplace: The commercialization of higher education. Princeton, NJ: Princeton University Press.
Centre for Higher Education Development (CHE). (2006). Study and research in Germany: University rankings, published in association with Die Zeit. Retrieved October 8, 2006.
Dill, D., & Soo, M. (2005). Academic quality, league tables, and public policy: A cross-national analysis of university rankings. Higher Education, 49.
Frank, R., & Cook, P. (1995). The winner-take-all society. New York: The Free Press.
Institute for Scientific Information, Thomson-ISI. (2006). Data on highly cited researchers: ISIHighlyCited.com. Retrieved April 10, 2006.
Marginson, S. (2006). Dynamics of national and global competition in higher education. Higher Education, 52.
Marginson, S. (2007). The public/private division in higher education: A global revision. Higher Education, 53.
Marginson, S., & Considine, M. (2000). The enterprise university: Power, governance and reinvention in Australia.
Cambridge: Cambridge University Press.
Marginson, S., & Van Der Wende, M. (forthcoming). Higher education and globalization. Paris: OECD.
Newsweek. (2006). The world's most global universities. Newsweek, August.
Organization for Economic Cooperation and Development (OECD). (2005). Education at a glance. Paris: OECD.
Shanghai Jiao Tong University Institute of Higher Education (SJTUIHE). (2006). Academic ranking of world universities. Retrieved September 6, 2006.
Stiglitz, J. (1999). Knowledge as a global public good. In I. Kaul, I. Grunberg & M. Stern (Eds.), Global public goods: International cooperation in the 21st century. New York: Oxford University Press.
Times Higher Education Supplement (THES). (2006). World university rankings. Times Higher Education Supplement. Retrieved November 30, 2006.
University of Melbourne. (2006). The Melbourne model. Retrieved October 8, 2006, from melbournemodel.unimelb.edu.au/
Usher, A., & Savino, M. (2006). A world of difference: A global survey of university league tables. Retrieved April 2, 2006.