
An assessment into the characteristics of award winning papers at CHI


Abstract

The overall readability of CHI publications is not known. In addition, little is understood about which lexical or demographic characteristics are unique to award-winning papers at CHI and whether these papers differ significantly from non-award-winning papers. We therefore carry out an assessment of readability metrics, together with a meta-analysis, of 382 full papers and 54 notes from the 2014, 2015, 2016 and 2017 editions of CHI. Our results show that notes exhibited no significant trends. Award-winning full papers, on the other hand, had lower readability than non-award-winning full papers. The type of research contribution played an important role: award-winning full papers were significantly more likely to present a theoretical contribution than non-award-winning full papers, and full papers that presented an artifact as their contribution were more readable than other full papers. Our demographic analysis of authors indicated that neither author experience nor region of affiliation was associated with the likelihood of a full paper being awarded. Author experience did not affect the overall readability of full papers, although the region of affiliation did have a significant influence on it. In conclusion, we speculate on possible explanations for our results by linking them to prior work in readability analysis.
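To make the readability analysis concrete, the sketch below shows how a standard score such as Flesch Reading Ease can be computed from a paper's plain text. It is an illustration only, with a deliberately crude syllable counter, and is not the authors' analysis pipeline (their actual code appears as an appendix to the published article).

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count vowel groups, drop one for a silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    avg_sentence_len = len(words) / max(len(sentences), 1)
    avg_syllables_per_word = syllables / max(len(words), 1)
    return 206.835 - 1.015 * avg_sentence_len - 84.6 * avg_syllables_per_word

if __name__ == "__main__":
    sample = ("Award winning full papers were shown to have lower readability "
              "as compared to non award winning full papers.")
    print(round(flesch_reading_ease(sample), 1))
```

On the Flesch scale, higher scores indicate easier text; scores of roughly 30 or below correspond to prose that is difficult to read, which is typical of scientific writing.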



Author information


Corresponding author

Correspondence to Omar Mubin.

Appendix: Python code


[The Python code listing is reproduced as a series of figures in the published article.]
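Since the listing itself is only available as images, the sketch below gives a rough indication of what such a pipeline can look like: batch-extracting text from a folder of paper PDFs and scoring each one. It assumes the older PyPDF2 (pre-2.0) PdfFileReader API and the third-party textstat package, and the chi_papers folder name is purely illustrative; none of this is taken from the original listing.

```python
# Speculative sketch of a PDF readability pipeline, not the authors' appendix code.
# Assumes PyPDF2 < 2.0 (PdfFileReader) and the textstat package.
import os
from PyPDF2 import PdfFileReader
import textstat

def extract_text(pdf_path):
    """Concatenate the extracted text of every page in one PDF."""
    with open(pdf_path, "rb") as fh:
        reader = PdfFileReader(fh)
        return " ".join(reader.getPage(i).extractText()
                        for i in range(reader.getNumPages()))

def score_folder(folder):
    """Map each PDF filename in a folder to its Flesch Reading Ease score."""
    scores = {}
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(".pdf"):
            text = extract_text(os.path.join(folder, name))
            scores[name] = textstat.flesch_reading_ease(text)
    return scores

if __name__ == "__main__":
    for paper, score in score_folder("chi_papers").items():  # hypothetical folder
        print(f"{paper}: {score:.1f}")
```

In practice, text extracted from two-column conference PDFs is noisy (hyphenation, headers, references), so any real pipeline would also need a cleaning step before scoring.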

About this article


Cite this article

Mubin, O., Tejlavwala, D., Arsalan, M. et al. An assessment into the characteristics of award winning papers at CHI. Scientometrics 116, 1181–1201 (2018). https://doi.org/10.1007/s11192-018-2778-7

