Abstract
Graduate education is a critical period for fostering graduate students' awareness of the importance of responsible conduct of research and their knowledge and skills in doing good science. However, Taiwan lacks a standard curriculum and assessment framework for graduate students. The aim of this study was to develop a literacy-based research integrity (RI) assessment framework covering five core RI areas: (1) basic concepts in RI, (2) RI considerations in the research procedure, (3) research ethics and research subject protection, (4) publication and authorship, and (5) conflict of interest. The five areas were derived from a comprehensive review of the major topics covered in existing research integrity education and training programs and were rated by RI experts as having adequate content validity. Test items on the five core areas were developed across three literacy levels: remembering and understanding, applying and analyzing, and evaluating and creating. A total of 7,087 graduate-level trainees took an 18-unit RI course covering the five RI areas. Upon finishing the course, trainees completed a computer-based RI assessment randomly selected from 26 RI testing booklets. The test items followed mastery-oriented assessment principles to promote trainees' learning of RI through adaptive assessment feedback. Results showed that the items in the RI assessment had adequate discrimination and a low difficulty level. The RI assessment can therefore measure a wide range of trainees' RI literacy and provides the most information for identifying trainees in need of further instruction or alternative training. The low guessing parameters also indicated that the online RI assessment had appropriate control of test exposure and cheating prevention.
Higher education authorities can use this framework to assess graduate students' RI literacy against a standard curriculum and to prepare students for conversations about the responsible conduct of research as part of RI culture-building.
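The discrimination, difficulty, and guessing parameters reported above correspond to the three-parameter logistic (3PL) item response model, and "providing the most information" refers to Fisher item information, which peaks for examinees whose ability is near an item's difficulty. As a minimal illustrative sketch (not the authors' analysis code, which used R), the 3PL response and information functions can be written as:

```python
import math

def p_correct(theta, a, b, c):
    """3PL probability of a correct response at ability theta,
    with discrimination a, difficulty b, and guessing parameter c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_correct(theta, a, b, c)
    q = 1 - p
    return (a ** 2) * (q / p) * ((p - c) / (1 - c)) ** 2

# An easy, discriminating item (low b, high a, small c) is most
# informative for examinees whose ability is near its difficulty b,
# so it best distinguishes trainees needing further instruction.
a, b, c = 1.5, -1.0, 0.1  # hypothetical item parameters
print(item_information(-1.0, a, b, c) > item_information(2.0, a, b, c))
```

The comparison prints `True`: the item yields far more information near its difficulty (theta = -1.0) than for a high-ability examinee (theta = 2.0), which is why low-difficulty items are well suited to flagging trainees who need more training.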


Availability of Data and Materials
The datasets supporting the conclusions of this article are available from the corresponding author on request.
Funding
Funding was provided by the Ministry of Education (Grant: Education and Implementation Mechanism of Research Ethics in Taiwan's Higher Education) and partially by the National Science and Technology Council, R.O.C. (Grant Nos. NSTC 109-2745-V-009-001-MY2, 110-2511-H-A49-008-MY4, and 110-2525-H-007-001-MY4).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors have no conflict of interest to declare.
Ethical Standard
This study is exempt from research ethics review under 45 CFR 46.104(d)(1): it uses testing data collected within normal educational requirements, exclusively for assessment, management, or improvement purposes, in accordance with the ethics review guidelines of the authors' institution.
Informed Consent
Before proceeding to the online RI assessment, trainees provided informed consent by agreeing to an online statement allowing the researchers to use their de-linked testing data exclusively for further research, including the assessment, management, or improvement of RI instruction.
Cite this article
Chou, C., Lee, YH. The Development of a Literacy-Based Research Integrity Assessment Framework for Graduate Students in Taiwan. Sci Eng Ethics 28, 66 (2022). https://doi.org/10.1007/s11948-022-00401-5