Abstract
To process information in online education systems more efficiently, this study designs a multi-modal information processing method for an online education system serving college English courses. The method handles three kinds of data in the system: video, image/text, and audio. First, through structured processing, the video data stream is segmented into units with a defined logical structure, from which visual features and visually invariant features are extracted. A multi-branch convolutional neural network is designed to extract text features, and a convolutional neural network is used to extract audio features from the system. Finally, a functional model of multi-modal information fusion is designed to fuse the information across modalities. Experimental results show that the method achieves high data-fusion efficiency and timeliness.
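The pipeline the abstract describes, per-modality feature extraction by (multi-branch) convolutional networks followed by a fusion step, can be sketched in highly simplified form. This is an illustrative toy, not the paper's implementation: the 1-D convolution branches stand in for the CNN extractors, and a weighted late fusion stands in for the fusion model; all function names and weights are assumptions.

```python
import numpy as np

def conv1d_features(signal, kernel):
    """Toy 1-D convolution followed by global max pooling,
    standing in for one CNN feature-extraction branch."""
    n, k = len(signal), len(kernel)
    out = np.array([np.dot(signal[i:i + k], kernel) for i in range(n - k + 1)])
    return out.max()

def branch_features(signal, kernels):
    """Multi-branch extraction: one scalar feature per kernel (branch)."""
    return np.array([conv1d_features(signal, k) for k in kernels])

def fuse(features, weights):
    """Weighted late fusion of per-modality feature vectors.
    Weights are normalized so they sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stacked = np.stack([np.asarray(f, dtype=float) for f in features])
    return w @ stacked

# Illustrative usage with random stand-ins for video, text, and audio signals.
rng = np.random.default_rng(0)
video, text, audio = (rng.standard_normal(64) for _ in range(3))
kernels = [np.ones(3) / 3,                 # smoothing branch
           np.array([1.0, 0.0, -1.0]),     # gradient branch
           np.array([-1.0, 2.0, -1.0])]    # edge/contrast branch
feats = [branch_features(m, kernels) for m in (video, text, audio)]
fused = fuse(feats, [0.4, 0.3, 0.3])       # hypothetical modality weights
```

In a real system each branch would be a trained convolutional network and the fusion weights would be learned rather than fixed, but the structure, parallel per-modality extractors feeding one fusion function, is the same.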
© 2022 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
Feng, B., Wang, L. (2022). Multimodal Information Processing Method of College English Course Online Education System. In: Fu, W., Sun, G. (eds) e-Learning, e-Education, and Online Training. eLEOT 2022. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 453. Springer, Cham. https://doi.org/10.1007/978-3-031-21161-4_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-21160-7
Online ISBN: 978-3-031-21161-4
eBook Packages: Computer Science (R0)