Abstract
The paper describes an algorithm for computing the longest common substring of two strings α₁ and α₂ with one mismatch in O(|α₁||α₂|) time and O(|α₁|) additional space. The algorithm scans the symbols of α₂ strictly sequentially, starting from the first symbol. The RAM model of computation is used.
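The abstract states only the complexity bounds; to make the problem concrete, the following is a minimal sketch of a simple diagonal-scanning baseline that attains the same O(|α₁||α₂|) running time (with O(1) extra space). It is not the algorithm of the paper, and unlike the paper's algorithm it does not read the second string strictly sequentially; the function name lcs_one_mismatch and all identifiers are illustrative.

```python
def lcs_one_mismatch(s: str, t: str) -> int:
    """Length of the longest common substring of s and t allowing at most
    one mismatching position (a simple O(|s||t|)-time baseline, not the
    paper's algorithm)."""
    best = 0
    n, m = len(s), len(t)
    # Each diagonal is determined by the offset d = j - i between aligned positions.
    for d in range(-(n - 1), m):
        i = max(0, -d)      # start position in s on this diagonal
        j = i + d           # corresponding start position in t
        start = 0           # start of the current window along the diagonal
        mismatch = -1       # position (along the diagonal) of the single allowed mismatch
        k = 0               # current position along the diagonal
        while i + k < n and j + k < m:
            if s[i + k] != t[j + k]:
                if mismatch >= start:
                    # A second mismatch appeared: shrink the window so that
                    # only the new mismatch remains inside it.
                    start = mismatch + 1
                mismatch = k
            best = max(best, k - start + 1)
            k += 1
    return best
```

For example, lcs_one_mismatch("abcde", "abzde") returns 5, since the two strings agree everywhere except in a single position.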
Additional information
Original Russian Text © M.A. Babenko, T.A. Starikovskaya, 2011, published in Problemy Peredachi Informatsii, 2011, Vol. 47, No. 1, pp. 33–39.
Supported in part by the Russian Foundation for Basic Research, project no. 09-01-00709-a.
About this article
Cite this article
Babenko, M.A., Starikovskaya, T.A. Computing the longest common substring with one mismatch. Probl Inf Transm 47, 28–33 (2011). https://doi.org/10.1134/S0032946011010030