Abstract
The Visual World Paradigm (VWP) is used to study online spoken language processing and produces time-series data. These data present challenges for analysis: they require significant preprocessing and are inherently nonlinear. Here, we discuss VWPre, a new tool for data preprocessing, and generalized additive mixed modeling (GAMM), a relatively new approach to nonlinear time-series analysis (using mgcv and itsadug), all of which are available in R. An example application of GAMM to preprocessed data illustrates how this approach addresses issues inherent to other methods, allowing researchers to more fully understand and interpret VWP data.
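To make the modeling step concrete, the following minimal R sketch shows how a GAMM might be fit and inspected with mgcv and itsadug. It assumes a hypothetical preprocessed data frame dat (e.g., as produced with VWPre) containing an empirical-logit gaze measure elogit, a Time variable, a two-level factor Condition, and a Subject identifier; these column names and factor levels are illustrative placeholders, not the authors' actual variables or required package names.

## Minimal illustrative sketch (assumed data frame `dat`; names are placeholders)
library(mgcv)     # fits generalized additive mixed models
library(itsadug)  # visualization and diagnostics for GAMMs

# Nonlinear time-course model: one smooth of Time per condition,
# plus factor smooths as nonlinear by-subject random effects.
m1 <- bam(elogit ~ Condition +
            s(Time, by = Condition) +
            s(Time, Subject, bs = "fs", m = 1),
          data = dat)

summary(m1)

# Estimated time courses per condition and the estimated difference curve
# (condition levels "A" and "B" are hypothetical).
plot_smooth(m1, view = "Time", plot_all = "Condition")
plot_diff(m1, view = "Time", comp = list(Condition = c("A", "B")))

# Check residual autocorrelation, a common concern with time-series data.
acf_resid(m1)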