{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,5]],"date-time":"2024-08-05T04:38:22Z","timestamp":1722832702413},"reference-count":35,"publisher":"World Scientific Pub Co Pte Lt","issue":"08","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Int. J. Artif. Intell. Tools"],"published-print":{"date-parts":[[2018,12]]},"abstract":" In problems involving control of financial processes, it is usually complicated to quantify exactly the state variables. It could be expensive to acquire the exact value of a given state, even if it may be physically possible to do so. In such cases it may be interesting to support the decision-making process on inaccurate information pertaining to the system state. In addition, for modeling real-world application, it is necessary to compute the values of the parameters of the environment (transition probabilities and observation probabilities) and the reward functions, which are typically, hand-tuned by experts in the field until it has acquired a satisfactory value. This results in an undesired process. <\/jats:p> To address these shortcomings, this paper provides a new Reinforcement Learning (RL) framework for computing the mean-variance customer portfolio with transaction costs in controllable Partially Observable Markov Decision Processes (POMDPs). The solution is restricted to finite state, action, observation sets and average reward problems. For solving this problem, a controller\/actor-critic architecture is proposed, which balance the difficult tasks of exploitation and exploration of the environment. The architecture consists of three modules: controller, fast-tracked portfolio learning and an actor-critic module. Each module involves the design of a convergent Temporal Difference (TD) learning algorithm. We employ three different learning rules to estimate the real values of: (a) the transition matrices [Formula: see text], (b) the rewards [Formula: see text] and (c) the resources destined for carrying out a promotion [Formula: see text]. We present a proof for the estimated transition matrix rule [Formula: see text] and showing that it converges when t \u2192 \u221e. For solving the optimization programming problem we extend the c-variable method for partially observable Markov chains. The c-variable is conceptualized as joint strategy given by the product of the control policy, the observation kernel Q(y|s) and the stationary distribution vector. A major advantage of this procedure is that it can be implemented efficiently for real settings in controllable POMDP. A numerical example illustrates the results of the proposed method. <\/jats:p>","DOI":"10.1142\/s0218213018500343","type":"journal-article","created":{"date-parts":[[2018,12,20]],"date-time":"2018-12-20T22:07:29Z","timestamp":1545343649000},"page":"1850034","source":"Crossref","is-referenced-by-count":10,"title":["A Reinforcement Learning Approach for Solving the Mean Variance Customer Portfolio in Partially Observable Models"],"prefix":"10.1142","volume":"27","author":[{"given":"Erick","family":"Asiain","sequence":"first","affiliation":[{"name":"Department of Control Automatics, Center for Research and Advanced Studies Av. IPN 2508, Col. 
San Pedro Zacatenco, Mexico City, 07360, Mexico"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-5918-4671","authenticated-orcid":false,"given":"Julio B.","family":"Clempner","sequence":"additional","affiliation":[{"name":"Escuela Superior de F\u00edsica y Matem\u00e1ticas, Instituto Polit\u00e9cnico Nacional Building 9, Av. Instituto Polit\u00e9cnico Nacional San Pedro Zacatenco, 07738, Gustavo A. Madero, Mexico City, Mexico"},{"name":"School of Physics and Mathematics, National Polytechnic Institute, Mexico"}]},{"given":"Alexander S.","family":"Poznyak","sequence":"additional","affiliation":[{"name":"Department of Automatic Control, Center for Research and Advanced Studies Av. IPN 2508, Col. San Pedro Zacatenco, Mexico City, 07360, Mexico"}]}],"member":"219","published-online":{"date-parts":[[2018,12,20]]},"reference":[{"key":"p_1","doi-asserted-by":"publisher","DOI":"10.1287\/mnsc.28.1.1"},{"key":"p_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF02204836"},{"key":"p_4","doi-asserted-by":"publisher","DOI":"10.1287\/moor.12.3.441"},{"key":"p_5","first-page":"541","volume":"199","author":"Madani O.","journal-title":"Florida, USA"},{"key":"p_7","doi-asserted-by":"publisher","DOI":"10.1287\/opre.21.5.1071"},{"key":"p_8","doi-asserted-by":"publisher","DOI":"10.1287\/opre.26.2.282"},{"key":"p_9","doi-asserted-by":"publisher","DOI":"10.1080\/00207727908941584"},{"key":"p_10","doi-asserted-by":"publisher","DOI":"10.1016\/0377-2217(80)90211-8"},{"key":"p_13","doi-asserted-by":"publisher","DOI":"10.1287\/opre.41.3.583"},{"key":"p_14","first-page":"1023","volume":"199","author":"Cassandra A. R.","journal-title":"USA"},{"key":"p_15","first-page":"54","volume":"199","author":"Cassandra A. R.","journal-title":"Rhode Island, USA"},{"key":"p_16","first-page":"146","volume":"200","author":"Feng Z.","journal-title":"Canada"},{"key":"p_17","doi-asserted-by":"publisher","DOI":"10.1287\/opre.1090.0697"},{"key":"p_18","doi-asserted-by":"publisher","DOI":"10.1023\/A:1021830128811"},{"key":"p_19","first-page":"2399","volume":"200","author":"Spaan M.","journal-title":"USA"},{"key":"p_20","doi-asserted-by":"publisher","DOI":"10.1109\/JSAC.2007.070409"},{"key":"p_21","first-page":"199","volume":"200","author":"Perkins T. 
H.","journal-title":"Canada"},{"key":"p_22","first-page":"1","volume":"200","author":"Poupart P.","journal-title":"Florida, USA"},{"key":"p_23","first-page":"697","volume":"200","author":"Poupart P.","journal-title":"NY, USA"},{"key":"p_26","first-page":"3","volume":"200","author":"Aberdeen D.","journal-title":"CA, USA"},{"key":"p_27","first-page":"774","volume":"201","author":"Peters J.","journal-title":"Heidelberg"},{"key":"p_28","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2007.11.026"},{"key":"p_30","first-page":"97","volume":"201","author":"Gheshlaghi-Azar M.","journal-title":"Czech Republic"},{"key":"p_34","doi-asserted-by":"publisher","DOI":"10.1142\/S0218213017600065"},{"key":"p_35","doi-asserted-by":"publisher","DOI":"10.1146\/annurev.financial.050808.114428"},{"key":"p_37","doi-asserted-by":"publisher","DOI":"10.1007\/s10690-007-9050-0"},{"key":"p_38","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-9965.2006.00300.x"},{"key":"p_39","doi-asserted-by":"publisher","DOI":"10.1017\/S002210901000044X"},{"key":"p_40","doi-asserted-by":"publisher","DOI":"10.1007\/s00186-010-0301-x"},{"key":"p_41","doi-asserted-by":"publisher","DOI":"10.1007\/s00780-010-0129-5"},{"key":"p_42","doi-asserted-by":"publisher","DOI":"10.1007\/s00780-011-0153-0"},{"key":"p_44","doi-asserted-by":"publisher","DOI":"10.1007\/s11518-014-5260-y"},{"key":"p_46","doi-asserted-by":"publisher","DOI":"10.2307\/2975974"},{"key":"p_48","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2015.02.018"},{"key":"p_49","doi-asserted-by":"publisher","DOI":"10.1109\/TLA.2015.7273806"}],"container-title":["International Journal on Artificial Intelligence Tools"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.worldscientific.com\/doi\/pdf\/10.1142\/S0218213018500343","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2019,8,6]],"date-time":"2019-08-06T17:14:06Z","timestamp":1565111646000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.worldscientific.com\/doi\/abs\/10.1142\/S0218213018500343"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2018,12]]},"references-count":35,"journal-issue":{"issue":"08","published-online":{"date-parts":[[2018,12,20]]},"published-print":{"date-parts":[[2018,12]]}},"alternative-id":["10.1142\/S0218213018500343"],"URL":"https:\/\/doi.org\/10.1142\/s0218213018500343","relation":{},"ISSN":["0218-2130","1793-6349"],"issn-type":[{"value":"0218-2130","type":"print"},{"value":"1793-6349","type":"electronic"}],"subject":[],"published":{"date-parts":[[2018,12]]}}}