Dual evaluation needed for reviewing scholarly journals
It is crucial to establish a dual evaluation system that combines qualitative analysis of content and editing with categorical analysis that roughly ranks journals, rather than a rigid rating derived from quantitative calculation.
In academia, a paper’s academic value is often judged based on the journal where it is published. In this way, academic evaluation is somewhat of an extension of the evaluation of journals. The rapid development of modern science and technology—especially information technology—has brought new trends in academic journal editing and publishing while standards of academic evaluation are likewise changing.
No more worship of IF
We have seen that the impact factor (IF), once considered a crucial metric, has gradually fallen out of favor in recent years and no longer carries the influence on journal evaluation that it once had. The concept of the impact factor was first proposed in 1955 in the journal Science. After it was adopted in Thomson Reuters' Journal Citation Reports, it became widely used in academia. In China, the impact factor is regarded as a key indicator for ranking journals and measuring the academic achievements of scholars.
The use of the impact factor to evaluate journals is controversial. The architects of the index have admitted that even an article published in a top journal stands a great chance of never being cited, so using the impact factor to evaluate journals or scholars' achievements is questionable. They have also acknowledged that the Science Citation Index (SCI) database is merely an assessment tool, rather than a rigorous standard for evaluating a scholar's scientific research capacity.
In July 2016, Thomson Reuters announced the sale of its Intellectual Property and Science business to private equity funds for $3.55 billion in cash. Its portfolio includes Web of Science, Thomson CompuMark, Thomson Innovation, MarkMonitor, Thomson Reuters Cortellis and Thomson IP Manager. Academics were shocked to find that the SCI was included as well.
The American Society for Microbiology announced on its official website that it no longer supports the IF, promising not to promote the index to researchers and to remove IF figures from the websites of all of its journals. The society argues that the IF is a distorted assessment system that undermines science and impedes the exchange of scientific research.
In China, the prestigious Chinese Social Sciences Citation Index (CSSCI) removed six university journals from its list in January 2017, triggering heated debate. Some scholars argued that Chinese academics are obsessed with publishing in journals with a higher IF, which has caused a serious outflow of Chinese academic resources and research findings. They have called for an end to the overemphasis on the journal impact factor and for the reconstruction of the academic journal evaluation system.
Digital journals on the rise
With the integrated development of new media, the digitization of journals has become the norm. In February 2015, the American Association of Journals rolled out a 360-degree media plan with an integrated data warehouse that provides a holistic view of how the target audience interacts with brands within a certain medium. It also stipulates that a publication can come in five forms: print, online, mobile, video and social media. Increasingly, international academic journals are presented as web pages or in hypertext markup language (HTML), allowing them to carry more diverse content.
In the future, digital journals will play the leading role, while paper-based versions may fade from the stage of history and be preserved only in museums. As technology advances to allow the long-term preservation of digital documents, printed journals might become obsolete.
At the same time, given the modern flood of information usable in scientific research, printed versions have obvious limitations, prompting more scientists and scholars to turn to online journals and the powerful features of large-scale integrated retrieval platforms.
This is precisely why Google Scholar, Primo, Summon, Baidu and other search engines have become the retrieval systems of choice for academic research. Cross-publisher and cross-library integrated retrieval platforms, with their massive holdings and flexible, convenient retrieval, are arguably the best choice for finding academic papers and sharing research resources. Small standalone journals and publishers face an existential threat, and joining large network platforms has become necessary for their survival and development.
Evaluation tools in short supply
For a long time, the proper approach to evaluating academic journals, especially those in the humanities and social sciences, has been a thorny subject. Evaluation institutions differ widely on criteria. In fact, there are only two basic indicators of the quality of an academic journal: content and editing. The former includes theoretical significance, application value, and contribution to and influence on society. The latter concerns fluency of language, accuracy of punctuation, precision of citation and adherence to writing norms.
A journal is a series of publications. An evaluation conducted at a given moment summarizes only that moment. Judging the content and editing of a journal at a single point in time draws on too small a sample to reflect its overall status, making the result unreliable.
Regardless of editing quality, it is quite difficult to compare journals across disciplines, both for experts and for the staff of evaluating institutions. After all, no one is an authority in all subjects.
In reality, even for papers of the same discipline, comparisons of academic value and social influence are not easy, let alone papers from different disciplines and different periods. There is a methodological bottleneck, which is a major problem facing appraisal agencies.
Categorical analysis
The evaluation of papers' significance, value, social influence and contribution falls under the category of qualitative analysis. At present, some scholars prefer quantitative analysis to qualitative, striving to give the public a clear conclusion expressed in numbers. This appears to offer an accurate answer, but it is essentially an illusion, and it forces the humanities and social sciences into the mold of natural-science thinking and methods. Improper use of quantitative analysis not only undermines objectivity but can also be very harmful.
Under the right conditions, quantitative tools can support qualitative analysis and yield a relatively clear conclusion. But that is not to say that quantitative analysis can replace qualitative research on every occasion. Many phenomena in the humanities and social sciences, economics included, have specific backgrounds, connotations and forms, so such complex structures and logic are hard to fit into a coordinate system with a scale of 10 or 100.
In basic qualitative analysis, the evaluation of academic journals can be divided into three levels: the first is the description of phenomena, which must be truthful; the second is logical analysis, which must be objective; the third is value judgment, which must be fair.
The first two are more important than value judgment, and the mindset behind the judgment matters more than the method. The more subjective the review becomes, the more professionalism it requires. This complex process of human cognition is difficult to depict accurately with any quantitative model, measurement score or equivalence value; even a seemingly objective result rarely stands up to scrutiny and practice.
Human beings need to maintain a rational view of the limits of their own capacities. In my opinion, the evaluation of academic journals cannot be described mathematically, nor studied and expressed through natural science. Well-known journals may publish poor-quality articles, while obscure journals may publish first-class pieces. A journal that publishes low-quality papers today may publish high-quality ones tomorrow.
Therefore, it is crucial to establish a dual evaluation system that allows qualitative analysis of content and editing as well as categorical analysis to roughly rank the journals, rather than a rigid rating. That is the logical approach to academic journal evaluation.
Li Jinhua is a research fellow at the Chinese Academy of Social Sciences.