XU CHUNYAN, WANG JICHENG: Think tank rankings are not a talent show

By Xu Chunyan and Wang Jicheng / 03-21-2014 / Chinese Social Sciences Today

In the summer of 2012 we published an article introducing the ranking system, process and impact of the Global Go To Think Tank Index (GGTTI) report produced by Professor James G. McGann of the University of Pennsylvania. In 2013 we published another article cautioning readers not to be misled by McGann’s GGTTI report. In January 2014, McGann released the GGTTI report for the seventh year running. Looking at the new report, however, we could not help but notice that some of the inherent flaws in its methodology, which we had discussed face to face with McGann in Beijing last year, have still not been overcome: a low return rate on survey samples, opaque sample distribution, and a supposedly objective, quantitative methodology that is in practice replaced by subjective judgments. Other problems identified by researchers from the World Bank also persist, including an ambiguous definition of a think tank, a discrepancy between the stated methodology and the actual survey logic, and easily avoidable contradictions caused by sloppy investigation and examination.
 
Despite the many problems with McGann’s method, a trend of following and copying it has emerged in China. Some institutes have launched a Chinese version of the “ranking of think tanks” based on his framework, which has inevitably inherited the same problems: ambiguous concepts, vague categorizations, rough surveys and a lack of standardization in the reports. On the whole, this situation reflects a tendency towards impulsiveness in domestic research.
 
An ambiguous definition
One unsolved problem with the 2013 GGTTI report is that it has an ambiguous concept of what constitutes a think tank, and still treats certain donor organizations (for instance the Overseas Development Institute) and international political organizations (for instance Amnesty International) as think tanks. What is more, institutions of widely differing types are lumped together in the rankings, including government-affiliated research departments, for-profit consulting firms, non-profit think tanks, non-governmental organizations, and research centers in universities. Worse still, entire universities are included in the rankings directly as think tanks; for instance, the Chinese National Defense University is listed among the “Top Defense and National Security Think Tanks”. Even more absurdly, in some cases a think tank appears in the same ranking as its own subordinate institute, with the latter treated as an independent think tank as well (for example the Chinese Academy of Social Sciences and its subordinate, the Institute of World Economics and Politics).
 
This confusing situation is now also replicated in Chinese think tank reports. For example, a category called “university think tanks” lists Peking University, Tsinghua University and other universities as candidates for think tank ranking, conceptually regarding universities and colleges as think tanks.
 
Dubious evaluation methods and criteria
The 2013 GGTTI report has made significant improvements in the nomination and selection process compared with the 2006 report. With regard to the evaluation method, however, the 2013 GGTTI still appears to make false claims in order to impress. In the survey e-mails, McGann claims that scores are assigned according to 25 different indexes, but it would not have been feasible for his research team, comprising only himself and 10 research interns, to gather data on all 25 indexes for every think tank, and it would have been even less possible for the nearly 2,000 experts in charge of the evaluations to fully understand and correctly apply these 25 indexes when assigning scores. Despite the advocacy of objective and quantitative evaluation criteria, the evaluation method actually used is subjective and impression-based. Moreover, the return rate for the survey is low, and information on the nationality, profession and position of the respondents is not transparent.
 
The same kind of problem now occurs with Chinese think tank rankings. Although as many as 16 indexes for quantitative evaluation, falling into four categories, are listed, the process of collecting data for most of them is too subjective. For example, the index of the “number of times and level of leadership instructions received for a think tank’s research results”, used for evaluating a think tank’s key policy impact, requires solid groundwork and extensive personal connections. In practice, however, instead of objective investigation and data collection for these 16 indexes, a questionnaire survey is used to obtain subjective evaluations from the respondents, and the scores are then calculated on the basis of subjective impressions. The announced evaluation criteria and the method actually used are therefore far apart.
 
Internationally, many scholars have highlighted the lack of standardization in McGann’s GGTTI report. A standardized investigation and research report should release statistics on both the designated and the actual distribution of the survey samples (by area, country, gender, profession and position, for instance), so that readers can judge how representative the samples are. For research results that rely on nomination by expert panels, it is also necessary to provide data on the regional distribution of the panelists. Judging from the response rate that can be deduced from the overly general data provided in the GGTTI reports over the years, the return rate of the questionnaires for the 2012 rankings was only 14.9%, with only 1,950 returned out of the 13,000 distributed. This is lower than the designated rate, which damages the representativeness of the research results and in turn seriously distorts the rankings.
 
The fragmentary data in McGann’s 2013 GGTTI report shows that the first round of the survey involved 6,826 institutes and “tens of thousands” of journalists, donors and policy makers, yet gives no definite figures for the number of surveys distributed or returned. The same is true for the second round. Finally, the report notes that the two rounds together received nominations and rankings from 1,950 peer institutes and experts. If we assume that the second round was distributed to 6,826 recipients, the minimum reached in the first round, then the calculated return rate is only about 14.3%, lower than in 2012. This suggests that in the seventh year of McGann’s GGTTI report, actual participation and the return rate are declining.
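To make the deduction above explicit, the short Python sketch below reproduces the arithmetic. It assumes, as the text does, that the second round reached the same 6,826 recipients as the minimum reported for the first round; since the report publishes no exact distribution totals, the figures are illustrative rather than definitive.

# Illustrative calculation of the 2013 GGTTI return rate discussed above.
# The number of surveys distributed is not published; 6,826 recipients per
# round is the assumption stated in the text, so the result is an estimate.
returned = 1950                               # nominations/rankings received across both rounds
assumed_per_round = 6826                      # first-round minimum, assumed for the second round as well
assumed_distributed = 2 * assumed_per_round   # 13,652 in total
return_rate = returned / assumed_distributed
print(f"Estimated 2013 return rate: {return_rate:.1%}")   # ~14.3%, below the 14.9% cited for 2012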
 
Lack of standards
A standardized research report in the social sciences should at least include basic information on the investigation process and on the designated and actual distribution of the samples. Simply releasing eye-catching ranking results is equivalent to putting on a talent show to entertain the public rather than conducting serious research, and is indicative of an impulsive approach that seeks quick success in research work.
Once again we see Chinese institutes follow suit, failing to report the distribution data for those asked to nominate think tanks, as well as information such as the size of the designated sample, the actual return rate, and the regional, professional and positional distribution of the survey respondents.
 
Understanding differences
McGann’s GGTTI report has become a mechanism of intelligence gathering for the U.S., allowing it to collect data on international think tanks and to familiarize itself with the channels that influence policy-making around the world, because the program effectively conducts a thorough investigation of global think tanks by enticing them to voluntarily submit data and information in order to be included in the rankings. Internationally there are many critiques of McGann’s think tank ranking, and an alternative ranking is being conceived to reduce the chances of the public being misled by McGann’s report. Work of this kind is also being conducted by domestic research institutes, but caution is needed to avoid repeating the mistakes mentioned above.
 
In order to evaluate the influence of think tanks, we need to develop a deep understanding of the substantial differences between the ways in which Chinese and Western think tanks exert their influence, gain insight into the distinctive categories of Chinese think tanks, and design an index system for think tank evaluation that is in line with China’s actual conditions. It is necessary to examine this index system quantitatively, impartially and independently against a representative sample of think tanks. It is also important to overcome the impulsive approach that seeks quick success, and to standardize the reporting of research processes and sample data, so as to avoid turning the ranking of think tanks into a talent show.
 
Xu Chunyan is from the School of Labour and Human Resources at Renmin University of China, and Wang Jicheng is from the Enterprise Research Institute under the Development Research Center of the State Council (DRC) of the People’s Republic of China.
 
The Chinese version appeared in Chinese Social Sciences Today, No. 568, March 7, 2014
Translated by Du Mei
The Chinese link:
http://www.csstoday.net/xueshuzixun/guoneixinwen/88130.html