A further step forward in measuring journals’ scientific prestige: The SJR2 indicator
Highlights
- A new size-independent indicator of scientific publication prestige, SJR2, is proposed.
- SJR2 takes into account both the prestige of the citing publication and its thematic closeness to the cited one.
- The method of computation of SJR2 is described.
- Results of SJR2 are compared with those of a three-year Journal Impact Factor, JIF(3y), and SNIP.
Introduction
The scientific community accepts that not all scientific documents, nor all journals, have the same value. Rather than each researcher assigning a subjective value to each journal, there has long been strong interest in objective valuation procedures. In this regard, it is widely accepted that, despite differing citer motivations (Brooks, 1985), citations constitute recognition of foregoing work (Moed, 2005).
Among the first generation of journal metrics based on citation counts is the Impact Factor, which has been used extensively for more than 40 years to evaluate the impact of scholarly journals (Garfield, 2006). Nevertheless, different research fields have different yearly average citation rates (Lundberg, 2007), and indicators of this type are almost always lower in Engineering, the Social Sciences, and the Humanities (Guerrero-Bote et al., 2007, Lancho-Barrantes et al., 2010a, Lancho-Barrantes et al., 2010b).
Since neither all documents nor all journals have the same value, a second generation of indicators emerged with the idea of assigning them different weights. Rather than popularity, what these indicators set out to measure was prestige in the sense of Bonacich (1987): the most prestigious journal is the one most cited by journals that are themselves of high prestige. The first proposal along these lines in the field of Information Science was put forward by Pinski and Narin (1976), with a metric they called "Journal Influence". With the arrival of the PageRank algorithm (Page, Brin, Motwani, & Winograd, 1998), developed by the creators of Google, other metrics have appeared, such as the Invariant Method for the Measurement of Intellectual Influence (Palacios-Huerta & Volij, 2004), Journal Status (Bollen, Rodríguez, & van de Sompel, 2006), the Eigenfactor (Bergstrom, 2007), and the SCImago Journal Rank (González-Pereira, Guerrero-Bote, & Moya-Anegón, 2010).
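The core of these PageRank-style prestige metrics can be illustrated with a short power-iteration sketch. The citation matrix and damping value below are invented toy numbers, not data from the paper; this is a generic sketch of the underlying algorithm, not any particular indicator's formula.

```python
# Minimal PageRank-style power iteration on a toy 3-journal citation
# network. Prestige flows along citation links: a journal is prestigious
# if it is cited by prestigious journals.
import numpy as np

# C[i, j] = citations from journal i to journal j (toy counts)
C = np.array([[0, 3, 1],
              [2, 0, 4],
              [1, 1, 0]], dtype=float)

# Row-normalize into a stochastic transition matrix.
P = C / C.sum(axis=1, keepdims=True)

d = 0.85                    # damping factor, as in Page et al. (1998)
n = P.shape[0]
r = np.full(n, 1.0 / n)     # start from a uniform score vector
for _ in range(100):
    # Each journal keeps a baseline (1 - d)/n and receives the rest
    # from the journals that cite it.
    r = (1 - d) / n + d * (P.T @ r)
```

Because `P` is row-stochastic, the scores keep summing to one throughout the iteration, and `r` converges to the fixed point of the update rule.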
Despite the progress represented by this second generation of indicators, they have some features that make them ill-suited as journal metrics:
- The scores obtained by scientific journals typically represent their prestige, or their average prestige per document, but a score only makes sense in comparison with the scores of other journals.
- The scores are normalized to sum to a fixed quantity (usually unity). As the number of journals increases, the scores therefore tend to decrease, which can lead to sets of indicators that all decline with time. This characteristic complicates the study of the temporal evolution of scientific journals.
- Different scientific areas have different citation habits, which these indices do not take into account, so the values obtained in different areas are not comparable (Lancho-Barrantes et al., 2010b). Added to this, there is no consensus on the classification of scientific journals into different areas (Janssens, Zhang, Moor, & Glänzel, 2009).
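The dilution effect described in the second point can be seen with a toy calculation (the journal counts and raw scores below are invented for illustration): when scores are forced to sum to unity, merely adding journals shrinks every individual score, even if nothing about the original journals changes.

```python
# Toy illustration of sum-to-one normalization diluting scores
# as the journal set grows (invented numbers, not from the paper).
def normalized_scores(raw):
    """Scale raw scores so that they sum to 1."""
    total = sum(raw)
    return [x / total for x in raw]

small = normalized_scores([5, 5, 5, 5])  # 4 identical journals
large = normalized_scores([5] * 8)       # same journals plus 4 more
print(small[0], large[0])                # 0.25 0.125
```

The first journal's raw value never changed, yet its normalized score halves once the set doubles, which is why time series of such indicators drift downward as databases grow.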
In the sciences, it has always been accepted that peer review in a field should be by experts in that same field (Kostoff, 1997). In this same sense, it seems logical to give more weight to citations from journals of the same or similar fields, since, although all researchers may use some given scientific study, they do not all have the same capacity to evaluate it. Even the weighting itself may not be comparable between different fields. Given this context, in a process of continuing improvement to find journal metrics that are more precise and more useful, the SJR2 indicator was designed to weight the citations according to the prestige of the citing journal, also taking into account the thematic closeness of the citing and the cited journals. The procedure does not depend on any arbitrary classification of scientific journals, but uses an objective informetric method based on cocitation. It also avoids the dependency on the size of the set of journals, and endows the score with a meaning that other indicators of prestige do not have.
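The thematic-closeness weight based on cocitation can be sketched as the cosine between two journals' cocitation profiles. The profile vectors below are invented toy counts, not data from the study.

```python
# Cosine similarity between two journals' cocitation profiles:
# each vector counts how often a journal is cocited with journals
# A..D (toy counts). A cosine near 1 indicates thematic closeness.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

citing_profile = [10, 0, 3, 1]
cited_profile = [8, 1, 4, 0]
w = cosine(citing_profile, cited_profile)  # close to 1: similar fields
```

A citation from a journal with `w` near 1 would count more heavily than one from a thematically distant journal, without requiring any predefined subject classification.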
In the following sections, we shall describe the methodological aspects of the development of the SJR2 indicator, and the results obtained with its implementation on Elsevier's Scopus database, for which the data were obtained from the Scimago Journal and Country Rank website, an open access scientometric directory with almost 19,000 scientific journals and other types of publication (2009).
Section snippets
Data
We used Scopus as the data source for the development of the SJR2 indicator because it is the database that best represents the overall structure of world science. Scopus is the world's largest scientific database if one considers the period 2000–2011. It covers most of the journals included in the Thomson Reuters Web of Science (WoS) and more (Leydesdorff et al., 2010, Moya-Anegón et al., 2007). Also, despite its only relatively recent launch in 2004, there are already various studies of its
Method
The SJR2 indicator, as also the SJR indicator (González-Pereira et al., 2010), is computed over a journal citation network in which the nodes represent the active source journals, and the directed links between the nodes, the citation relationships among those journals. The main differences with respect to SJR are:
- The SJR2 prestige of the citing journal is distributed among the cited journals proportionally both to the citations from the former to the latter (in the three-year citation window)
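Schematically, this distribution rule can be sketched as follows. The citation counts and cosine values are toy numbers, and the full SJR2 computation involves further terms (e.g. damping and size corrections) not reproduced in this sketch.

```python
# Schematic sketch: each citing journal's prestige is shared among
# the journals it cites, in proportion to citation counts weighted
# by the cosine between cocitation profiles. Toy data throughout.
import numpy as np

C = np.array([[0, 4, 1],         # C[i, j]: citations from i to j
              [3, 0, 2],
              [2, 2, 0]], dtype=float)
S = np.array([[1.0, 0.9, 0.2],   # S[i, j]: cosine between the
              [0.9, 1.0, 0.3],   # cocitation profiles of i and j
              [0.2, 0.3, 1.0]])

W = C * S                              # cosine-weighted citation links
W = W / W.sum(axis=1, keepdims=True)   # each row distributes prestige

p = np.full(3, 1.0 / 3)                # initial prestige per journal
for _ in range(200):
    p = W.T @ p                        # prestige flows along weighted links
```

Because each row of `W` sums to one, total prestige is conserved across iterations, and the vector `p` converges to a stationary distribution of prestige over the journals.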
Statistical characterization
As in González-Pereira et al. (2010), in this section we shall present a statistical characterization of the SJR2 indicator in order to contrast its capacity to depict what could be termed “average prestige” with journals’ citedness per document and the SNIP indicator. The study was performed for the year 2008 since its data can be considered stable. The data were downloaded from the Scimago Journal and Country Rank database (http://www.scimagojr.com) on 20 October 2011. It needs to be noted
Conclusions
Going beyond journal prestige metrics that weight each citation by the prestige of the citing journal, the present SJR2 indicator solves the problem of the tendency for prestige scores to decrease over time through the use of stochastic matrices. It endows the resulting scores with meaning, and uses the cosine between the cocitation profiles of the citing and cited journals to weight the thematic relationship between the two.
The problem of the tendency for the
Acknowledgments
This work was financed by the Junta de Extremadura – Consejería de Educación, Ciencia & Tecnología and the Fondo Social Europeo as part of Research Group grant GR10019, and by the Plan Nacional de Investigación Científica, Desarrollo e Innovación Tecnológica 2008–2011 and the Fondo Europeo de Desarrollo Regional (FEDER) as part of research projects TIN2008-06514-C02-01 and TIN2008-06514-C02-02.
References (28)
- Bar-Ilan, J. (2008). Which h-index? – A comparison of WoS, Scopus and Google Scholar. Scientometrics.
- Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News.
- Bollen, J., Rodríguez, M. A., & van de Sompel, H. (2006). Journal status. Scientometrics.
- Bonacich, P. (1987). Power and centrality: A family of measures. American Journal of Sociology.
- Brooks, T. A. (1985). Private acts and public objects: An investigation of citer motivations. Journal of the American Society for Information Science.
- Garfield, E. (2006). The history and meaning of the journal impact factor. Journal of the American Medical Association.
- González-Pereira, B., Guerrero-Bote, V. P., & Moya-Anegón, F. (2010). A new approach to the metric of journals' scientific prestige: The SJR indicator. Journal of Informetrics.
- Jacsó, P. Péter's digital reference shelf.
- Janssens, F., Zhang, L., De Moor, B., & Glänzel, W. (2009). Hybrid clustering for validation and improvement of subject-classification schemes. Information Processing and Management.
- Kostoff, R. N. (1997). The principles and practices of peer review. Science and Engineering Ethics.
- Lancho-Barrantes, B. S., Guerrero-Bote, V. P., & Moya-Anegón, F. (2010). The iceberg hypothesis: Import–export of knowledge between scientific subject categories. Scientometrics.
- Lundberg, J. (2007). Lifting the crown – Citation z-score. Journal of Informetrics.
- Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics.
- Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: Theory with application to the literature of physics. Information Processing and Management.