Abstract
Most text categorization methods use a common representation based on the bag-of-words model. Using this representation for learning involves a preprocessing step comprising tasks such as stopword removal and stemming, and the output of this step directly influences the quality of the learning task. This work compares different methods of preprocessing textual inputs for LASSO logistic regression and LDA topic modeling in terms of mean squared error (MSE). Logistic regression and topic modeling are used to predict a binary position, or stance, from textual data extracted from two public consultations of the European Commission. Texts are preprocessed and then input into LASSO and topic modeling to explain or cluster the documents’ positions. For LASSO, stemming with POS tagging is on average a better method than lemmatization and stemming without POS tagging. Moreover, tf-idf on average performs better than counts of distinct terms, and deleting terms that appear only once reduces prediction errors. For LDA topic modeling, stemming gives a slightly lower MSE in most cases, but no significant difference between stemming and lemmatization was found.
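The preprocessing pipeline described above (tokenization, stopword removal, stemming, and deletion of terms occurring only once) can be sketched as follows. This is a minimal illustration using the standard library only: the stopword list and suffix-stripping rules are hypothetical stand-ins for the full resources the paper relies on (e.g. a Porter-style stemmer or a WordNet lemmatizer), not the authors' actual implementation.

```python
import re

# Hypothetical stopword list; a real pipeline would use a full list
# (e.g. the NLTK English stopwords).
STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "are"}

def tokenize(text):
    # Lowercase and keep purely alphabetic tokens.
    return re.findall(r"[a-z]+", text.lower())

def crude_stem(token):
    # Naive suffix stripping, standing in for a real stemmer such as Porter's.
    for suffix in ("ization", "ations", "ation", "ings", "ing", "ies", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text, drop_hapax=True):
    # Stopword removal followed by stemming.
    tokens = [crude_stem(t) for t in tokenize(text) if t not in STOPWORDS]
    if drop_hapax:
        # Delete terms that appear only once, which the abstract reports
        # reduces prediction errors for LASSO.
        counts = {}
        for t in tokens:
            counts[t] = counts.get(t, 0) + 1
        tokens = [t for t in tokens if counts[t] > 1]
    return tokens
```

For example, `preprocess("The regulation and the regulations of consultations")` maps both "regulation" and "regulations" to the same stem and drops the one-off "consultations", showing how stemming merges inflected variants into a single feature before the document-term matrix is built.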
Notes
- 1.
Consultations of the European Commission: https://ec.europa.eu/info/consultations_en.
- 6.
Tf-idf is a statistical measure of how important a term is to a document within a collection, computed as \(\mathrm{tf\text{-}idf}_{t,d} = tf_{t,d} \times idf_{t}\), where \(idf_{t} = \log\left(\frac{n_{\text{documents}}}{df_{t}}\right)\) and \(df_{t}\) is the number of documents containing term \(t\).
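The tf-idf definition in this note can be computed directly. The sketch below follows the formula above term by term (raw term frequency, document frequency, and \(idf_t = \log(n_{\text{documents}}/df_t)\)); the natural logarithm is an assumption, as the note does not specify a base, and the example corpus is invented for illustration.

```python
import math

def tf_idf(term, doc_tokens, corpus):
    # tf_{t,d}: raw count of the term in the document.
    tf = doc_tokens.count(term)
    # df_t: number of documents in the corpus containing the term.
    df = sum(1 for d in corpus if term in d)
    # idf_t = log(n_documents / df_t); assumes the term occurs
    # in at least one document, otherwise df is zero.
    idf = math.log(len(corpus) / df)
    return tf * idf

# Toy corpus of three tokenized documents.
docs = [["tax", "policy", "tax"], ["policy", "reform"], ["reform", "tax"]]
# "tax" occurs twice in docs[0] and in 2 of the 3 documents,
# so its score is 2 * log(3/2).
score = tf_idf("tax", docs[0], docs)
```

A term appearing in every document gets \(idf_t = \log(1) = 0\), so tf-idf down-weights ubiquitous terms relative to raw counts, which is consistent with the abstract's finding that tf-idf outperforms plain counts of distinct terms.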
Cite this paper
Mimouni, N., Yeung, T.YC. (2023). Text Preprocessing for Shrinkage Regression and Topic Modeling to Analyse EU Public Consultation Data. In: Gelbukh, A. (eds) Computational Linguistics and Intelligent Text Processing. CICLing 2019. Lecture Notes in Computer Science, vol 13451. Springer, Cham. https://doi.org/10.1007/978-3-031-24337-0_8