Weak Relation Enforcement for Kinematic-Informed Long-Term Stock Prediction with Artificial Neural Networks

  • Conference paper
  • First Online:
Intelligent Computing (SAI 2024)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 1018)

Included in the following conference series:

  • 230 Accesses

Abstract

We propose weak loss-function enforcement of the velocity relations between time-series points in Kinematic-Informed Neural Networks (KINN) for long-term stock prediction. Problems of series volatility, Out-of-Distribution (OOD) test data, and outliers in the training data are addressed by having the Artificial Neural Network (ANN) learn not only future point predictions but also the velocity relations between points, thereby avoiding unrealistic, spurious predictions. The presented loss function penalizes not only errors between predictions and supervised label data, but also errors between the next-point prediction and the previous point plus the predicted velocity. The loss function is tested on multiple popular and exotic auto-regressive (AR) ANN architectures over roughly fifteen years of Dow Jones index data, and demonstrates statistically meaningful improvement for normalization-sensitive activation functions that are prone to spurious behaviour under OOD conditions. The results show that such an architecture addresses the normalization issue in auto-regressive models, which breaks the data topology, by weakly enforcing preservation of data neighbourhood proximity (relations) during the ANN transformation.
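The abstract describes a loss that combines a standard supervised term with a weak kinematic (velocity) penalty. The following is a minimal sketch of such a combined loss, assuming the network emits both a next-point prediction and a velocity prediction; the function name, arguments, and weighting factor are illustrative assumptions, not taken from the paper.

import numpy as np

def kinn_loss(y_pred, v_pred, y_true, y_prev, lam=1.0):
    """Sketch of a weakly enforced kinematic loss (illustrative, not the paper's code).

    y_pred : predicted next points of the series
    v_pred : predicted velocities (point-to-point increments)
    y_true : supervised labels for the next points
    y_prev : the preceding observed points
    lam    : weight of the weak velocity-relation penalty
    """
    # Standard supervised term: prediction vs. label.
    data_term = np.mean((y_pred - y_true) ** 2)
    # Weak kinematic term: the next-point prediction should stay close to
    # the previous point advanced by the predicted velocity.
    kinematic_term = np.mean((y_pred - (y_prev + v_pred)) ** 2)
    return data_term + lam * kinematic_term

In this reading, a small lam only weakly ties consecutive points together, which is consistent with the paper's framing of "weak" enforcement of the velocity relation rather than a hard kinematic constraint.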

Author information

Corresponding author

Correspondence to Stanislav Selitskiy.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Selitskiy, S. (2024). Weak Relation Enforcement for Kinematic-Informed Long-Term Stock Prediction with Artificial Neural Networks. In: Arai, K. (eds) Intelligent Computing. SAI 2024. Lecture Notes in Networks and Systems, vol 1018. Springer, Cham. https://doi.org/10.1007/978-3-031-62269-4_18
