Abstract
Accurate measurement of institutional research productivity should account for the real contribution of the research staff to the output produced in collaboration with other organizations. In the framework of bibliometric measurement, this implies accounting both for the number of co-authors and for each individual's real contribution to scientific publications. Common practice in the life sciences is to indicate such contribution through the order of author names in the byline. In this work, we measure the distortion introduced to university-level bibliometric productivity rankings when the number of co-authors or their position in the byline is ignored. The field of observation consists of all Italian universities active in the life sciences (Biology and Medicine). The analysis is based on the research output of the university staff over the period 2004–2008. Based on the results, we recommend against the use of bibliometric indicators that ignore the number of co-authors and the real contribution of each author to research outputs.


Notes
http://www.cun.it/media/100033/area6.pdf, last accessed on Jan. 23, 2013.
The complete list is accessible at http://attiministeriali.miur.it/UserFiles/115.htm, last accessed on Jan. 23, 2013.
Mathematics and computer sciences; physics; chemistry; earth sciences; biology; medicine; agricultural and veterinary sciences; civil engineering; industrial and information engineering.
The weighting values for both this indicator and the WFI indicator below were assigned based on the results of interviews with top Italian professors in the life sciences. The values could be changed to suit different practices in other national contexts.
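The position-dependent weighting described above can be sketched as follows. Note that the actual weighting values used in the study were elicited from interviews with senior Italian life-science professors and are not reproduced here; the scheme below (extra weight for the first and last authors, a common convention in the life sciences) uses purely illustrative values.

```python
def contribution_weights(n_authors, first_last_bonus=2.0):
    """Illustrative position-dependent contribution weights for a byline.

    The boost given to the first and last positions (`first_last_bonus`)
    is a hypothetical value, not the one used in the paper.
    Weights are normalized so that each publication counts as 1 in total.
    """
    if n_authors == 1:
        return [1.0]
    base = [1.0] * n_authors
    base[0] *= first_last_bonus   # first author: boosted share
    base[-1] *= first_last_bonus  # last (senior) author: boosted share
    total = sum(base)
    return [w / total for w in base]

# Example: a five-author paper — first and last authors each receive
# twice the share of a middle author.
print(contribution_weights(5))  # [2/7, 1/7, 1/7, 1/7, 2/7]
```

Summing each author's weighted shares over all of their publications then yields a fractional output count that reflects both the number of co-authors and byline position.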
We standardize citations by the median because, as frequently observed in the literature (Lundberg 2007), citation distributions are highly skewed in almost all disciplines, which justifies scaling with respect to the median rather than the average. However, we note that there is not yet agreement among bibliometricians on the most efficient scaling factor.
http://cercauniversita.cineca.it/php5/docenti/cerca.php. Last accessed on Jan. 23, 2013.
www.orp.researchvalue.it. Last accessed on Jan. 23, 2013.
To ensure the representativeness of publications as a proxy of research output, the analysis excludes those SDSs (MED/02, MED/43 and MED/47) where fewer than 50 % of researchers produced a WoS-indexed publication over the period 2004–2008. Further, we exclude MED/48, since the research staff of this SDS is distributed in only seven universities.
References
Abramo, G., D’Angelo, C. A., & Di Costa, F. (2008). Assessment of sectoral aggregation distortion in research productivity measurements. Research Evaluation, 17(2), 111–121.
Abramo, G., D’Angelo, C. A., & Rosati, F. (2013). The importance of accounting for the number of co-authors and their order when assessing research performance at the individual level in the life sciences. Journal of Informetrics, 7(1), 198–208.
Aksnes, D. W., Schneider, J. W., & Gunnarsson, M. (2012). Ranking national research systems by citation indicators. A comparative analysis using whole and fractionalised counting methods. Journal of Informetrics, 6(1), 36–43.
Batista, P. D., Campiteli, M. G., Kinouchi, O., & Martinez, A. S. (2006). Is it possible to compare researchers with different scientific interests? Scientometrics, 68(1), 179–189.
Bhandari, M., Einhorn, T. A., Swiontkowski, M. F., & Heckman, J. D. (2003). Who did what? (Mis)perceptions about authors’ contributions to scientific articles based on order of authorship. Journal of Bone and Joint Surgery, 85(8), 1605–1609.
Carbone, V. (2011). Fractional counting of authorship to quantify scientific research output. arXiv, 1106.0114.
D’Angelo, C. A., Giuffrida, C., & Abramo, G. (2011). A heuristic approach to author name disambiguation in large-scale bibliometric databases. Journal of the American Society for Information Science and Technology, 62(2), 257–269.
Egghe, L. (2006). Theory and practise of the g-index. Scientometrics, 69(1), 131–152.
Egghe, L. (2008). Mathematical theory of the h- and g-index in case of fractional counting of authorship. Journal of the American Society for Information Science and Technology, 59(10), 1608–1616.
Gauffriau, M., Larsen, P. O., Maye, I., Roulin-Perriard, A., & von Ins, M. (2008). Comparisons of results of publication counting using different methods. Scientometrics, 77(1), 147–176.
He, B., Ding, Y., & Yan, E. J. (2012). Mining patterns of author orders in scientific publications. Journal of Informetrics, 6(3), 359–367.
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569–16572.
Laurance, W. F. (2006). Second thoughts on who goes where in author lists. Nature, 442(7098), 26.
Lundberg, J. (2007). Lifting the crown-citation z-score. Journal of Informetrics, 1(2), 145–154.
Verhagen, J. V., Wallace, K. J., Collins, S. C., & Thomas, T. R. (2003). QUAD system offers fair shares to all authors. Nature, 426(6967), 602.
Wan, J. K., Hua, P. H., & Rousseau, R. (2008). The pure h-index: calculating an author’s h-index by taking co-authors into account. COLLNET. Retrieved from http://eprints.rclis.org/bitstream/10760/10376/1/pure_h.pdf. Accessed Jan. 23, 2013.
Cite this article
Abramo, G., D’Angelo, C.A. & Rosati, F. Measuring institutional research productivity for the life sciences: the importance of accounting for the order of authors in the byline. Scientometrics 97, 779–795 (2013). https://doi.org/10.1007/s11192-013-1013-9