Linda Manning, Ph.D. Department of Economics, U. of Ottawa RESEARCH PERFORMANCE MANAGEMENT IN ACADEME




ASAC 2005, Toronto, Ontario, May 28-31, 2005

Linda Manning, Ph.D., Department of Economics, U. of Ottawa
Jacques Barrette, Ph.D., School of Management, U. of Ottawa

RESEARCH PERFORMANCE MANAGEMENT IN ACADEME

The challenges of research performance management in academia are explored for schools of business whose researchers come from diverse cultures and origins, and who speak and publish in different languages, disciplines, and methodologies. A method for assessing research productivity and quality that combines quantitative and qualitative approaches is proposed, and a case study demonstrates how the method can be applied.

Introduction

Competition for student enrolments among business schools in Canada is increasingly intense, and research quality and productivity are important indicators of a school's quality and of its ability to attract high-quality students (Tracy and Waldfogel, 1997) and faculty (Borokhovich et al., 1995). Business school accrediting agencies (e.g., AACSB and EQUIS) rely on measures of research quality as a component of their assessment of the quality of a business program. Academic papers publish comparisons of the research output of departments or faculties (e.g., Erkut, 2002), and annual rankings that include measurements of research output are computed and reported by the popular press.[2] The quality of the journals in which scholars publish affects a school's reputation for research (Erkut, 2002; Brown and Huefner, 1994; Kirkpatrick and Locke, 1992). Consequently, there is increased pressure to publish in high-tier journals as a competitive strategy (Theoharakis and Hirst, 2002), and some business schools have even implemented incentive reward programs to improve the quantity and/or quality of publications. Assessment of the research performance of individual researchers is also an important component of hiring, tenure, and promotion decisions, as well as funding decisions (Meho and Sonnenwald, 2000).
[2] Examples are Business Week, U.S. News and World Report, Financial Times, Maclean's, and Canadian Business.

A common measure of a researcher's productivity is the number of publications in refereed journals (Brown and Huefner, 1994; Kirkpatrick and Locke, 1992), and research quality is measured in terms of journal quality (Theoharakis and Hirst, 2002), as it is assumed that publications in higher-tier journals make a significantly greater contribution (Carter, 2002). Journal quality is usually represented by some kind of journal ranking mechanism, but no ranking system or method of ranking is generally accepted across business schools. The disciplinary diversity inherent in any business school faculty makes it a challenge to

develop a system that is considered fair and inclusive, and this is an even greater challenge for business schools with a multicultural faculty whose researchers publish not only in diverse disciplines but also in a number of languages, countries, and methodologies. Though business schools worldwide face the challenge of assessing research quality to meet internal and external objectives, no published journal rankings that take this diversity into account were found.

The purpose of this article is twofold: to describe the challenges and propose a solution, grounded in the literature, for ranking management journals in a multicultural, multilingual, and multidisciplinary environment; and to demonstrate, with a case study, how the proposed method can be applied. The School of Management at the University of Ottawa developed a research incentive program to promote publication in higher-tier journals. Monetary awards provide the incentive, and a systematic ranking methodology was developed to rank journals.

Research Incentive Programs

As in any performance management system, a research incentive program must also be transparent and perceived as objective, just, and fair by the members of the faculty, particularly when monetary rewards are tied to performance assessment. Without the perception of equity, researchers are likely to respond perversely to such a program (Gosselin and Murphy, 1994). There are a number of conditions for the successful implementation of an individual reward system. The academic environment satisfies the conditions suggested by Thériault and St-Onge (2000): (a) incumbents have some discretionary latitude and control over their own work; (b) the environment is one in which competition can affect performance; (c) there are differences in output from one person to another; (d) the workers are a competitive asset for the organization; and (e) individual output is identifiable and measurable.
Journal ranking can be an effective measure of individual output, but the most appropriate measure of journal quality is difficult to determine. Ranking methods exist that are objective (quantitative), such as the number of times the journal is cited and the article acceptance rate, and subjective (qualitative), such as the reputation of the editor and the rigor of the peer-review process (Jones, 2003). However, neither approach by itself is sufficient for accurately assessing journal quality, and multiple criteria are needed (Carter, 2002).

Quantitative Measures of Journal Quality

The perception that measures of quality are objective is essential to ensure acceptance by the community of researchers. Quantitative criteria are perceived as fair because decisions are understandable and consistent over time and across decision makers. The most common quantitative approach to evaluating journal quality is citation analysis (Mabry and Sharplin, 1985; Vokurka, 1996), using data from the Social Sciences Citation Index (SSCI) and the Science Citation Index (SCI), published annually by the Institute for Scientific Information. Together these indexes cover over 7,000 journals in more than 150 scientific disciplines and 50 social science disciplines. Citation information is available in the form of total cites (the total number of citations to articles in the journal for the current year), the immediacy index (the average number of times current articles in a journal are cited during the year they were published), cited half-life (the

number of years, going back from the current year, that account for 50% of the total citations received by the journal in the current year), and the impact factor (the average number of times articles published in the journal in the two previous years were cited in a given year), reported to three decimal places. Although it has been criticized in all disciplines, the impact factor tends to be the preferred indicator (Anseel, Duyck, De Baene and Brysbaert, 2004), especially when comparing journals in the same field (Zhou, Ma and Turban, 2001; Carter, 2002), and it is increasingly used by funding agencies and search committees (Jones, 2003). While citation analysis claims to provide researchers with an effective indicator for assessing the relative quality of journals (Meho and Sonnenwald, 2000), there are many limitations to consider when using it to rank business-related journals. Only three business-related categories are included in the SSCI (business and finance, management, and marketing); some of the most highly regarded journals in management are not included in the database (Vokurka, 1996); and only one business-related category is included in the SCI (operations research and management science). Citation-based ranking understates the relative importance of new journals in a field and tends to ignore journals of interest only to a niche area (Holsapple, Johnson, Manakyan and Tanner, 1995; Marx, Schier and Wanitschek, 2001; Vokurka, 1996), and many of the most relevant journals in management are not included (Carter, 2002). Citation databases cover mainly English-language journals published in the United States, are not comprehensive in coverage, and many have technical problems (Meho and Sonnenwald, 2000; Marx, Schier and Wanitschek, 2001).
English-language journals score higher than those in other languages, American journals tend to have higher impact factors than European journals, and the most prestigious journals in different specialist areas may have very different impact factors. Finally, Neuberger and Counsell (2002) identified limitations of the journal impact factor that are particularly relevant to the challenge of ranking journals to assess productivity in a multicultural, multilingual, multidisciplinary environment.

Qualitative Measures of Journal Quality

The qualitative approach, referred to in the literature as perception analysis or peer ranking, is a method whereby opinions are solicited from experts such as deans, department heads, renowned practitioners, and/or academic staff members. A number of models have been used for soliciting and consolidating subjective opinions (Zhou, Ma and Turban, 2001), some using global criteria (e.g., journal prestige) and others using multiple quality metrics (e.g., journal familiarity and readership). Some researchers have surveyed experts, requesting a single numerical value (e.g., 1-5), and calculated an index to determine a rank (Benjamin and Brenner, 1974). Others have used a magnitude estimation procedure to calculate a geometric mean and a ratio-scale quality ranking (Hull and Wright, 1990). Some surveys limit their sample to associate and/or full professors, to respondents from leading schools, or to schools that grant Ph.D.s. Opinion surveys tend to avoid the difficulties of citation studies (Brown and Huefner, 1994) and can accommodate the perspectives of individuals with different research interests or from different geographic locations. However, perception analysis has also been subject to a number of criticisms. Respondents may lack sufficient knowledge on which to base judgments.
Since the ranking of journals can affect one's academic standing, perceptual ranking surveys have been accused of suffering from inherent respondent biases such as self-serving and predisposition bias (Theoharakis and Hirst, 2002). Perceptual surveys have relied primarily on US sampling frames (Carter, 2002). Peers in a different cognitive domain may evaluate a given scientific contribution quite differently, as their evaluation can be influenced by their level of knowledge and their research biases (Meho and Sonnenwald, 2000). As academics become more specialized in their interests, their ability to evaluate research and their familiarity with the quality of journals outside their specialty are likely to deteriorate (Brown and Huefner, 1994).

It seems that neither the quantitative nor the qualitative approach by itself is best for evaluating journal quality, and that multiple criteria are needed for an accurate assessment (Carter, 2002). In a study of methods for evaluating individual senior scholars, Meho and Sonnenwald (2000) found that quantitative methods (citation ranking and citation content analysis) and qualitative methods (book reviews and peer ranking) perform similarly. This is reassuring because it suggests that using qualitative methods is valid when quantitative methods are absent. The major challenges in combining quantitative and qualitative methods are how to deal with incomplete subjective or objective information and how to transform the evaluations into a single journal rank (Zhou, Ma and Turban, 2001). We present here a case study that demonstrates how these challenges can be met as part of an incentive program to improve the productivity and quality of research in a Canadian business school.

A CASE STUDY: The School of Management at the University of Ottawa

A search of business programs revealed two research incentive programs in business schools that are highly diverse in culture, language, and methodology: Hautes Études Commerciales (HEC) in Montréal, and École Supérieure des Sciences Économiques et Commerciales (ESSEC) in France.
Each incentive program rewards research activities besides refereed journal publications, and at the time they were found (2001), both award systems were based on journal ranks determined by citation indices.[3]

[3] We note that the HEC program has changed significantly, and HEC no longer uses journal rank as the basis of its research incentive program. However, at the time the School of Management's program was developed, the HEC ranked journal list was a relevant source of data for meeting the objective of the School's incentive program, which is to promote publication in high-tier scholarly journals.

At the School of Management at the University of Ottawa, a research incentive program was motivated by a study conducted by one of the authors to compare the quantity and quality of publications at the School with those of non-Ph.D.-granting business schools ranked in the Financial Times Top 100 Business Schools. Using HEC's 2002 list of ranked journals, the study found that the annual publication rate per faculty member was on par with the average of the top 100 schools, reflecting the faculty's commitment to research despite a relatively high teaching load (5 per year) and increasingly large class sizes. While the numbers of publications were comparable, however, the study revealed that the quality of the journals was lower than in the comparison group, and that researchers in the comparison group published in more journals ranked A and B than did those in the School of Management. To stimulate productivity, the School of Management developed a 3-year monetary award program in 2003 to reward high-quality output, with the goal of increasing the number of scholarly, refereed publications in B journals. To operationalize the program, a list of

ranked journals was developed to determine who would receive the award. It is important that the goal be reachable and that all researchers have access to the rewards. The researchers at the School are a diverse group from Asia, India, Europe, and North America, publishing and teaching in different languages, working in a wide range of disciplines, and conducting research with different (i.e., quantitative and qualitative) methodologies. The attributes of the School that make it a unique and powerful learning environment for students and a site for rich, diverse research also create challenges for developing a transparent reward system that values research in different disciplines, methodologies, and languages. It was decided that the approach to ranking journals would combine quantitative and qualitative measures.

The Rewards Policy and the Journal Ranking System

The reward system satisfies many of the criteria stipulated by Thériault and St-Onge (2000) to ensure an equitable procedure. Transparency is assured by open communication of the policy and the evaluation procedure, as well as by publication of the list of journals and the justification for their ranking. The process is standardized and applied uniformly. The ranked journal list is fixed from one year to the next, and there is an appeal process to revise the rank of a journal from year to year. In addition, throughout the development of the program, the process included the participation of the researchers and their representatives on the Research Committee. The oversight group for the development of the program and the journal ranking was the Research Committee of the School of Management, comprising the Director of Research (chair), one faculty representative from each Section of the School, and the School's administration officer. The Committee prepared a project plan comprising the following steps.
In Step 1, a guiding principle, with a list of criteria, was developed by the Research Committee and approved by the Dean to direct the development of the program and the journal ranking. The program must: (a) be transparent; (b) be based on faculty input and feedback; (c) be accessible to all tenure-track and tenured faculty members; (d) address the diversity of culture, language, discipline, and methodology of the faculty; (e) be based on relevant, external, and timely sources of ranking input and information; and (f) have a measurable impact on research quality.

Step 2 involved an extensive literature review. The research on journal ranking and research award programs was explored in order to identify other research incentive programs, published journal rankings and their methodologies, and published statistics on journal ranks. The ranked journal lists used by HEC (Montréal) and ESSEC (France) were obtained, along with a number of published lists (e.g., BRD97, NL94, and NL99, discussed in Harzing (2004)). In addition, faculty members were invited to provide published lists from articles in refereed journals. All lists obtained were compiled into a single table.[4]

[4] Since the inception of the School's program, the authors have discovered a list from the French National Committee for Scientific Research that includes journal rankings for economics and management; it will be applied in the 2005 revision of the School's journal list.

Step 3 was the identification of the ranks and the ranking method. Many classifications of rank are in use, such as the A-D scale used by HEC and the 0-3 scale used by ESSEC. It was decided to use an A/B/C designation, avoiding D as a rank because of its negative perception, and instead to add two further categories: NotABC, for scholarly journals that are not peer-reviewed, and Practitioner, for non-scholarly journals (even if peer-reviewed). Only journals in the A, B, or C ranks are eligible for the reward program. The determination of rank was conducted in several stages. In stage 1, using the table of journal ranks that had been compiled, the ranks were compared and the highest of the ranks was assigned to each journal. For example, if a journal were ranked C by ESSEC but B by HEC, the journal was assigned a rank of B by the School.

From these lists an initial compilation of ranks was created, and a call for journals and ranks was sent out as part of Step 4. The table of journals, their ranks, and a column presenting the highest of the ranks from all lists was distributed to the faculty of the School of Management. Faculty were asked to review the table and make suggestions for changes and additions. All appeals of a rank, and all proposed ranks for journals without published ranks, required documented justification. In keeping with the aim of recognizing high-quality research irrespective of publication location, language, methodology, or discipline, the appeal mechanism uses a combination of quantitative and qualitative measures.

Quantitative. Any refereed publication on journal ranking that provides a list of journal ranks may be used to justify a proposed rank for a new journal or to change the rank of a journal already on the list.
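The stage 1 "highest rank wins" merge described above can be sketched in a few lines of code. This is an illustrative sketch, not the School's actual implementation; the journal names and source lists are invented for the example.

```python
# Stage 1 merge: for each journal, keep the highest rank found across
# all source lists (A > B > C). Journal names are illustrative only.

RANK_ORDER = {"A": 3, "B": 2, "C": 1}

def merge_ranks(rank_lists):
    """Assign each journal the highest rank it receives in any source list."""
    merged = {}
    for source in rank_lists:
        for journal, rank in source.items():
            current = merged.get(journal)
            if current is None or RANK_ORDER[rank] > RANK_ORDER[current]:
                merged[journal] = rank
    return merged

# Example from the text: ranked C by ESSEC but B by HEC -> assigned B.
hec = {"Journal X": "B", "Journal Y": "A"}
essec = {"Journal X": "C", "Journal Z": "B"}
print(merge_ranks([hec, essec]))
# {'Journal X': 'B', 'Journal Y': 'A', 'Journal Z': 'B'}
```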
Researcher submissions in support of appeals came from many domains, such as operations management (Vokurka, 1996), small enterprise (Ratnatunga and Roman, 1997), management (Johnson and Podsakoff, 1994), finance (McNulty and Boekeloo, 1999; Oltheten, Theoharakis and Travlos, 2003), economics (Baltagi, 2003; Laband and Piette, 1994), marketing (Theoharakis and Hirst, 2002), accounting (Brown and Huefner, 1994; Hull and Wright, 1990), and information systems (Forgionne and Kohli, 2000). In some cases published ranks were not available but the journal was listed in a Citation Index, so a threshold was established that could be used to justify a journal's rank. Using the School's list of ranked journals (as it stood at Step 4), averages were calculated for A, B, and C journals on four of the measures reported in the Citation Indices (total cites, impact factor, immediacy index, and number of articles). Table 1 presents these averages, taken from the 2001 JCR Social Sciences and Science editions. If it could be demonstrated that any two of a journal's measures exceeded the averages for a rank, the journal was assigned that rank. For instance, if a journal were listed in the SSCI with an impact factor above 0.810 and an immediacy index above 0.142, the journal would be assigned a rank of B.
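The two-of-four threshold test can be sketched as follows. This is a hedged illustration of the rule as described above, using the Social Science Edition averages from Table 1; the sample journal data is invented, and the choice to check ranks from A downward is an assumption about how the "any two measures exceed the average" rule resolves to a single rank.

```python
# Threshold test: a journal earns a rank if at least two of its four
# citation measures exceed the averages for that rank (Table 1, Social
# Science Edition, 2001). Checking A first, then B, then C, yields the
# highest justifiable rank. Sample journal values are illustrative.

AVERAGES = {
    "A": {"total_cites": 2148, "impact_factor": 1.518, "immediacy": 0.247, "articles": 50},
    "B": {"total_cites": 835,  "impact_factor": 0.810, "immediacy": 0.142, "articles": 46},
    "C": {"total_cites": 442,  "impact_factor": 0.615, "immediacy": 0.097, "articles": 42},
}

def assign_rank(journal):
    """Return the highest rank for which >= 2 measures exceed the averages."""
    for rank in ("A", "B", "C"):
        above = sum(journal[m] > AVERAGES[rank][m] for m in AVERAGES[rank])
        if above >= 2:
            return rank
    return None  # citation data does not justify any rank

# The example from the text: impact factor above 0.810 and immediacy
# index above 0.142 justify a rank of B.
journal = {"total_cites": 600, "impact_factor": 0.9, "immediacy": 0.2, "articles": 40}
print(assign_rank(journal))  # B
```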

Table 1
Citation Measures of A, B, and C Journals on Draft of Journal List, 2003

Social Science Edition Averages (2001)

                     A        B        C
Total Cites          2148     835      442
Impact Factor        1.518    0.810    0.615
Immediacy Index      0.247    0.142    0.097
Articles             50       46       42

Science Edition Averages (2001)

                     A        B        C
Total Cites          11980    1825     745
Impact Factor        2.930    1.111    0.528
Immediacy Index      0.727    0.190    0.930
Articles             113      72       58

Qualitative. Very often, because journals are published outside the U.S. or in a language other than English, they do not appear in published ranks or in the JCR lists. In such cases, the researcher may request an external review of the journal and suggest four names of reviewers. The Research Committee chooses a maximum of two of those names and adds at least two others. A request for review is sent to the four external reviewers, along with the rationale for the ranks and a description of the criteria for A, B, and C journals (Table 2). In addition, a questionnaire is used to query journal editors on editorial-dependent data (Forgionne and Kohli, 2000) such as acceptance rates, readership, editorial policy, and editorial board.[5] In a very small number of cases, when external reviewers failed to respond, the Research Committee made the final decision. Members of the Research Committee reviewed a dossier prepared on these journals, comprising two randomly selected articles from a recent issue of the journal, the results of a questionnaire sent to the journal editor, and information available from the journal's web pages, Cabell's directory of publishing opportunities (2000), and Citation Index reports. Using this information and the criteria listed above, the Research Committee discussed the merits of each journal, each member bringing to the table their own expertise as well as that of colleagues in their Section. If the dossier is deemed insufficient by the Research Committee, the journal is automatically assigned a rank of C, as long as it is scholarly and peer-reviewed.
The final step involves final approval: all submitted documentation is reviewed by the Research Committee, and the final draft of the annual A/B/C list is distributed to the faculty before the final list is presented to the Dean.[6] At the end of the year, researchers report their annual publication output (published or accepted articles in refereed journals), which is examined by the Committee, and awards are made accordingly. Table 2 presents the breakdown of awards by journal rank. Researchers who publish more than one peer-reviewed journal article are eligible for a maximum of $10,000 in any single year.

[5] A copy of the questionnaire is available upon request.
[6] The research award program and the journal list will soon be available on the School's website.
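The annual award calculation can be sketched as follows. This is an illustrative reading of the policy, not its official implementation: the per-rank dollar amounts follow Table 2, the $10,000 cap is applied across all of a researcher's articles in a year, and treating the C-rank award (which Table 2 grants as research assistant time) as a dollar-valued amount is a simplifying assumption.

```python
# Sketch of the annual award: sum the per-article awards (Table 2) for a
# researcher's eligible A/B/C publications, capped at $10,000 per year.
# Representing the C award (research assistant time) as $3,000 cash is a
# simplification for illustration.

AWARD = {"A": 8000, "B": 6000, "C": 3000}
ANNUAL_CAP = 10000

def annual_award(article_ranks):
    """Total award for a year's eligible articles, up to the annual cap."""
    total = sum(AWARD[r] for r in article_ranks if r in AWARD)
    return min(total, ANNUAL_CAP)

print(annual_award(["A", "B"]))  # 10000 ($14,000 capped at the maximum)
print(annual_award(["C"]))       # 3000
```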

Table 2
Criteria for A, B, and C Journals

Rank  Criteria                                                           Award
A     Academic journals internationally recognized as among the best     $8,000
      in the discipline; reviewers are among the top researchers in
      the discipline.
B     Academic journals internationally distributed and recognized as    $6,000
      high quality, including high-quality journals in specialized
      areas; reviewers are influential and recognized in the
      discipline.
C     Academic journals that are blind peer-reviewed, with articles      $3,000 of research
      generally of lower quality than those in Group B, and journals     assistant time
      recognized nationally or less recognized internationally.

Discussion and Conclusions

This paper presents an original methodology for the development and implementation of a performance reward program, based on journal rank, designed to raise the quality of research in a school of management. A unique aspect of the proposed method is that research performance can be evaluated while taking into account differences in the methodologies used by researchers (quantitative/qualitative) and in the language in which articles are published (French/English). There are several additional practical implications of the system presented here. First, on the strategic side, the system is aligned with the institution's strategic objective of higher-quality publications for accreditation and internal monitoring, and as such the monetary award gives a clear signal as to what the organization values. Second, the use of both quantitative and qualitative methods, together with the engagement and active consultation of members of the School, promotes individual equity in evaluation decisions. The sense of fairness and flexibility is enhanced because the system accommodates additions and modifications in response to the needs of the institution and of researchers.
Third, the awards and the public recognition of those who receive them give researchers better information and motivation for planning their research and for targeting the journals most likely to be highly regarded in tenure and promotion decisions. Individual accomplishment is recognized, which motivates greater effort to improve (Thériault and St-Onge, 2000). The system may also help retain the best researchers, since they are assured of supplements to their remuneration that are directly tied to their performance, making salaries more competitive. Future research must determine whether the incentive program succeeds in increasing researchers' productivity and the quality of their publications. First, we must explore the extent to which researchers actually perceive the system as equitable. It will be important to ensure that any changes to the incentive program, and especially to the journal ranks, are communicated effectively to the members of the School. If the program is perceived as fair, the system should motivate researchers to raise the quality of their research and to target higher-tier journals. In the medium and long term, this should translate into an increase in the School's ratio of publications in A and B journals.

Incentive programs are popular in public and private organizations but remain uncommon in academia. In business schools, however, their numbers are likely to increase rapidly because of fierce competition for students and faculty. The consequences of a poorly designed program that does not respond to the needs and objectives of the academic environment can be particularly damaging, so it is important that these programs fit the cultural context in which they are implemented. A system such as the one presented in this paper, which takes into consideration diversity of culture, language, methodology, and discipline, is a step toward meeting the challenges inherent in using a performance management system in academia. Our future research will allow us to confirm whether the program meets its objective of increasing the productivity and the quality of publications in the School.

References

Anseel, F., Duyck, W., De Baene, W., and Brysbaert, M. (2004). Journal impact factors and self-citations: Implications for psychology. American Psychologist, 59(1): 49-51.

Baltagi, B. H. (2003). Worldwide institutional and individual rankings in econometrics over the period 1989-1999: An update. Econometric Theory, 19: 165-224.

Benjamin, J. J. and Brenner, V. C. (1974). Perceptions of journal quality. The Accounting Review, 49(2): 360-362.

Borokhovich, K. A., Bricker, R. J., Brunarski, K. R., and Simkins, B. J. (1995). Finance research productivity and influence. Journal of Finance, 50: 1691-1717.

Brown, L. D. and Huefner, R. J. (1994). The familiarity with and perceived quality of accounting journals: Views of senior accounting faculty in leading U.S. MBA programs. Contemporary Accounting Research, 11(1): 223-250.

Cabell's Directory of Publishing Opportunities (2000). 8th edition. Beaumont, TX: Cabell Publishing Co.

Carter, C. R. (2002). Assessing logistics and transportation journals: Alternative perspectives. Transportation Journal, 42(2): 39-50.

Forgionne, G. A. and Kohli, R. (2000). A multiple criteria assessment of decision technology system journal quality. Information & Management, 38: 421-435.

Gosselin, A. and Murphy, K. R. (1994). L'échec de l'évaluation de la performance. Gestion, revue internationale de gestion, 19(3): 17-28.

Harzing, A.-W. (2004). Journal Quality List. http://www.harzing.com/resources.htm#jql

Holsapple, C. W., Johnson, L. E., and Manakyan, H. (1995). An empirical assessment and categorization of journals relevant to DSS research. Decision Support Systems, 14(4): 359-367.

Howard, T. and Nikolai, L. (1983). Attitude measurement and perceptions of accounting faculty publication outlets. The Accounting Review (October): 765-776.

Hull, R. P. and Wright, G. B. (1990). Faculty perceptions of journal quality: An update. Accounting Horizons, 4(1): 77-98.

Johnson, L. J. and Podsakoff, P. M. (1994). Journal influence in the field of management: An analysis using Salancik's index in a dependency network. Academy of Management Journal, 37(5): 1392-1407.

Jones, A. W. (2003). Impact factors of forensic science and toxicology journals: What do the numbers really mean? Forensic Science International, 133: 1-8.

Kirkpatrick, S. A. and Locke, E. A. (1992). The development of measures of faculty scholarship. Group and Organization Management, 17: 5-23.

Laband, D. N. and Piette, M. J. (1994). The relative impacts of economics journals: 1970-1990. Journal of Economic Literature, 32(2): 640-666.

Mabry, R. H. and Sharplin, A. D. (1985). The relative importance of journals used in finance research. Journal of Financial Research, 8(4): 287-296.

Marx, W., Schier, H., and Wanitschek, M. (2001). Citation analysis using online databases: Feasibilities and shortcomings. Scientometrics, 52(1): 59-82.

McNulty, J. E. and Boekeloo, J. (1999). Two approaches to measuring journal quality: Application to finance journals. Journal of Economics and Finance, 23(1): 30-38.

Meho, L. I. and Sonnenwald, D. H. (2000). Citation ranking versus peer evaluation of senior faculty research performance: A case study of Kurdish scholarship. Journal of the American Society for Information Science, 51(2): 123-138.

Neuberger, J. and Counsell, C. (2002). Impact factors: Uses and abuses. European Journal of Gastroenterology and Hepatology, 14(3): 209-211.

Oltheten, E., Travlos, N., and Theoharakis, V. (forthcoming). Faculty perceptions and readership patterns of finance journals: A global view. Journal of Financial and Quantitative Analysis.

Ratnatunga, J. and Romano, C. (1997). A citation classics analysis of articles in contemporary small enterprise research. Journal of Business Venturing, 12: 197-212.

Theoharakis, V. and Hirst, A. (2002). Perceptual differences of marketing journals: A worldwide perspective. Marketing Letters, 13(4): 389-402.

Thériault, R. and St-Onge, S. (2000). Gestion de la rémunération: Théorie et pratique. Gaëtan Morin Éditeur.

Tracy, J. and Waldfogel, J. (1997). The best business schools: A market-based approach. Journal of Business, 70(1): 1-31.

Vokurka, R. J. (1996). The relative importance of journals used in operations management research: A citation analysis. Journal of Operations Management, 14: 345-355.

Zhou, D., Ma, J., and Turban, E. (2001). Journal quality assessment: An integrated subjective and objective approach. IEEE Transactions on Engineering Management, 48(4): 479-490.