
Nearly optimal stationary policies in negative dynamic programming

Mathematical Methods of Operations Research

Abstract.

This work concerns controlled Markov chains with a denumerable state space and a discrete time parameter. The reward function is assumed to be ≤ 0, and the performance of a control policy is measured by the expected total-reward criterion. Within this context, sufficient conditions are given under which the existence of a stationary policy that is ε-optimal at every state is guaranteed.
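
To make the criterion concrete, the following is a brief sketch in standard notation for negative dynamic programming; the symbols V, R, x_t, a_t and the exact formulation are conventional assumptions, not taken from the paper itself. Under a policy π and initial state x, the expected total reward is

\[
V(\pi, x) = \mathbb{E}_x^{\pi}\!\left[\sum_{t=0}^{\infty} R(x_t, a_t)\right],
% since R <= 0, the partial sums are nonincreasing, so the expectation
% is well defined, possibly equal to -infinity
\]

and a stationary policy f is ε-optimal at every state when

\[
V(f, x) \;\ge\; V^{*}(x) - \varepsilon \quad \text{for every state } x,
\qquad \text{where } V^{*}(x) = \sup_{\pi} V(\pi, x).
\]

Because R ≤ 0, every value V(π, x) is ≤ 0; this sign restriction is the defining feature of the negative model of dynamic programming.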



Additional information

Manuscript received: December 1997/final version received: December 1998



Cite this article

Cavazos-Cadena, R., Montes-De-Oca, R. Nearly optimal stationary policies in negative dynamic programming. Mathematical Methods of OR 49, 441–456 (1999). https://doi.org/10.1007/s001860050060
