Essay: Terry Mart
Citation numbers and the Impact Factor of journals are often used to evaluate the quality and the importance of research. Both quantities have some shortcomings, and people using these indicators should know when and when not to use them.
Measuring the quality of a scientific paper is a difficult and arduous endeavor. One of the best-known measures is the Citation Number, the total number of times a paper has been cited in the subsequent literature. A related but more controversial measure is the Impact Factor, an attempt to rate the quality of the journal in which a paper was published. Compiled by the Institute for Scientific Information, the Impact Factor is the average number of citations received by papers published in a particular journal over the past two years. Publishers often use the Impact Factors of their journals for advertising and for evaluating their editorial strategies. The following examples illustrate the shortcomings of these two measures.
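In symbols, the definition above can be written out as follows. This is a sketch of the standard two-year formula used by the Institute for Scientific Information; the year offsets and the symbols are my own notation, not taken from the text.

```latex
% Two-year Impact Factor of journal J, evaluated in year Y
% (standard ISI convention; notation is illustrative)
\[
  \mathrm{IF}_J(Y) \;=\;
  \frac{C_Y\!\left(\text{items published in } Y-1 \text{ and } Y-2\right)}
       {N_{Y-1} + N_{Y-2}}
\]
% C_Y(...) = citations received in year Y by articles that
%            journal J published in the two preceding years
% N_y      = number of citable items journal J published in year y
```

The two-year window in the denominator is precisely the period that, as discussed below, is too short to capture papers whose citations arrive a decade or more after publication.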
In 2004, the SPIRES group at the Stanford Linear Accelerator Center announced the "Topcite Olympics," evaluating the citations received by more than 500,000 nuclear and particle physics papers published between 1950 and 2004 and listed in their database. Based on the nationality of the institutions with which the authors were affiliated, the SPIRES group awarded "medals" to individual countries for each top-cited paper. Perhaps surprisingly, countries like Brazil, Colombia, India, Mexico, Portugal, Spain, and Taiwan made the list by winning a "1000+ medal" each (see table). Do these countries really host some of the best scientists? Nobody knows. The "secret" of these countries' success is their participation in large international collaborations. While Spain and Portugal received their honors because their scientists work on the international Supernova Cosmology Project, the other five countries benefited from their scientists' involvement in the DZero experiment at Fermilab. The United States earned ninety-eight "1000+ medals," of which eight were due to large experimental collaborations.
The SPIRES group also released a list of top-cited authors, based on the total number of citations to papers in the database at that time. The method failed to detect the original paper of Abdus Salam, who received a share of the 1979 Nobel Prize for what is now known as the Glashow-Salam-Weinberg electroweak theory. While the papers by Sheldon Glashow and Steven Weinberg had more than 2000 and 5000 citations, respectively, Salam's paper had no citation count because he published his results in the proceedings of a conference. Fortunately, the Nobel committee neither looked at the Citation Number nor consulted the Impact Factor; instead, they read the papers.
Associating the quality of a research paper with the Impact Factor of a journal at the time of publication has its own drawbacks. Gary Walter argued in The Medical Journal of Australia that the quality of published material cannot be constrained by time: the two-year period set by the Institute for Scientific Information for calculating the Impact Factor is too short. A paper by Steven Weinberg, which appeared in a Festschrift honoring Julian Schwinger and was eventually published in the journal Physica in 1979, is a good example. Before 1990, the paper had received only 153 citations (see graph). Five years later the number had increased to 608, and today it is above 1500. How is that possible? About ten years after the paper was published, it found application in nuclear physics, dramatically changing the number of citations. Another example is a paper by Oskar Klein, published in Zeitschrift für Physik in 1926. Along with Theodor Kaluza, Klein proposed the unification of Einstein's gravity with Maxwell's electromagnetism through the introduction of a fifth dimension. Although the idea quickly faded with the rise of quantum mechanics, interest grew again in the 1980s (see graph). Interest then declined until 1998, when a paper suggested that extra dimensions could be large enough to be detectable.
Both cases show that an article published today in a rather insignificant journal could be an important investment in the future. But citations received many years later do not affect the Impact Factor associated with the publication date. Unfortunately, the Impact Factor is often misused as a proxy for assessing the quality of individual papers, for scrutinizing a scientist's accomplishments before granting research funds or academic promotions, and in some circumstances for evaluating the performance of entire research institutes.
The merits of the Impact Factor are becoming even more questionable. Frank Gannon, in a European Molecular Biology Organization report, predicted that, with advances in information technology, future scientists will no longer care much about where an article is published, since quick access will matter more than a high-impact journal. The phenomenon is already familiar in particle physics. Although several fully electronic journals offer relatively fast editorial processes, some physicists prefer the arXiv preprint server at Cornell University. I believe that the arXiv lets readers be the real judges of its papers. Unimportant papers are automatically neglected, while important ones survive with many citations. Such a mechanism allows for a very efficient, quick, and democratic system, and authors do not have to fight with anonymous referees. However, I admit that many physicists still prefer refereeing to enhance the quality of papers.
A shortcoming of the Citation Number is that it is meaningful only in research fields with large numbers of scientists, and those fields are not necessarily the most important research areas. Some argue that important research topics attract many scientists and therefore generate many papers. In my opinion, however, "research importance" is sometimes confused with "research trends." For instance, research in fusion theory and technology, which could solve future energy problems, receives less attention than the search for the Higgs particle or the development of string theory. The Citation Number does not account for the number of people working in a research area, nor for the fact that this number may change over time.
Although citation numbers are not perfect, they still provide guidance for measuring the quality and significance of research papers and for identifying trends in research. The most important aspect of using citation measures, however, is knowing when they do not apply. Fortunately, there is a sure way to evaluate the importance of a research article: read the paper.
Terry Mart is head of the theoretical division in Departemen Fisika, Universitas Indonesia, Depok, Indonesia.