Bi-personal stochastic transient Markov games with stopping times and total reward criterion
Kybernetika, Tome 57 (2021) no. 1, pp. 1-14
This article was harvested from the Czech Digital Mathematics Library.


The article is devoted to a class of two-person (players 1 and 2), zero-sum Markov games evolving in discrete time on transient Markov reward chains. At each decision epoch the second player can stop the system by paying a terminal reward to the first player. If the system is not stopped, the first player selects a decision and two things happen: the Markov chain moves to the next state according to the known transition law, and the second player pays a reward to the first player. The first player (resp. the second player) tries to maximize (resp. minimize) the total expected reward (resp. cost). Observe that if the second player is a dummy, the problem reduces to finding an optimal policy of a transient Markov reward chain. The contraction properties of the transient model make it possible to apply the Banach fixed point theorem and establish a Nash equilibrium. The obtained results are illustrated on two numerical examples.
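The fixed-point approach sketched in the abstract can be illustrated with a small value-iteration scheme: the game's Shapley operator takes the minimum (player 2's stopping choice) of the terminal reward and the maximized continuation value, and on a transient chain this operator is a sup-norm contraction. The states, rewards, and substochastic transition law below are illustrative assumptions, not data from the paper.

```python
# A minimal value-iteration sketch for a zero-sum Markov stopping game on a
# transient chain. All numerical data here are made up for illustration.
import numpy as np

# p[a][i][j]: transition probability from state i to j under action a of
# player 1. Rows sum to < 1: the chain is transient (mass is absorbed).
p = np.array([
    [[0.3, 0.4],   # action 0
     [0.2, 0.5]],
    [[0.5, 0.1],   # action 1
     [0.4, 0.3]],
])
r = np.array([[1.0, 2.0],   # r[a][i]: stage reward paid to player 1
              [1.5, 0.5]])
s = np.array([4.0, 3.0])    # terminal reward if player 2 stops in state i

def bellman(v):
    """One application of the game's Shapley operator."""
    cont = (r + p @ v).max(axis=0)   # player 1 maximizes over actions
    return np.minimum(s, cont)       # player 2 stops whenever it is cheaper

# Since every row sum of p is at most 0.7, the operator is a contraction
# with modulus 0.7, so iteration converges to the unique fixed point,
# i.e. the value of the game (Banach fixed point theorem).
v = np.zeros(2)
for _ in range(200):
    v_new = bellman(v)
    if np.abs(v_new - v).max() < 1e-10:
        break
    v = v_new
print(v)
```

In this toy instance the stopping option binds in state 1 (the continuation value would exceed the terminal reward 3.0), while in state 0 player 2 prefers to let the game run; if `s` were set very large, player 2 would never stop and the scheme would reduce to ordinary value iteration for a transient Markov reward chain, matching the "dummy second player" remark in the abstract.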
DOI : 10.14736/kyb-2021-1-0001
Classification : 91A05, 91A50
Keywords: two-person Markov games; stopping times; stopping times in transient Markov decision chains; transient and communicating Markov chains
@article{10_14736_kyb_2021_1_0001,
     author = {Mart{\'\i}nez-Cort\'es, Victor Manuel},
     title = {Bi-personal stochastic transient {Markov} games with stopping times and total reward criterion},
     journal = {Kybernetika},
     pages = {1--14},
     year = {2021},
     volume = {57},
     number = {1},
     doi = {10.14736/kyb-2021-1-0001},
     mrnumber = {4231853},
     zbl = {07396252},
     language = {en},
     url = {http://geodesic.mathdoc.fr/articles/10.14736/kyb-2021-1-0001/}
}
TY  - JOUR
AU  - Martínez-Cortés, Victor Manuel
TI  - Bi-personal stochastic transient Markov games with stopping times and total reward criterion
JO  - Kybernetika
PY  - 2021
SP  - 1
EP  - 14
VL  - 57
IS  - 1
UR  - http://geodesic.mathdoc.fr/articles/10.14736/kyb-2021-1-0001/
DO  - 10.14736/kyb-2021-1-0001
LA  - en
ID  - 10_14736_kyb_2021_1_0001
ER  - 
%0 Journal Article
%A Martínez-Cortés, Victor Manuel
%T Bi-personal stochastic transient Markov games with stopping times and total reward criterion
%J Kybernetika
%D 2021
%P 1-14
%V 57
%N 1
%U http://geodesic.mathdoc.fr/articles/10.14736/kyb-2021-1-0001/
%R 10.14736/kyb-2021-1-0001
%G en
%F 10_14736_kyb_2021_1_0001
Martínez-Cortés, Victor Manuel. Bi-personal stochastic transient Markov games with stopping times and total reward criterion. Kybernetika, Tome 57 (2021) no. 1, pp. 1-14. doi: 10.14736/kyb-2021-1-0001

[1] Ash, R. B.: Real Analysis and Probability. Academic Press, New York 1972. | MR

[2] Cavazos-Cadena, R., Hernández-Hernández, D.: Nash equilibria in a class of Markov stopping games. Kybernetika 48 (2012), 1027-1044. | MR

[3] Cavazos-Cadena, R., Montes-de-Oca, R.: Nearly optimal policies in risk-sensitive positive dynamic programming on discrete spaces. Math. Methods Oper. Res. 27 (2000), 137-167. | DOI | MR

[4] Filar, J. A., Vrieze, O. J.: Competitive Markov Decision Processes. Springer Verlag, Berlin 1996. | DOI | MR

[5] Granas, A., Dugundji, J.: Fixed Point Theory. Springer-Verlag, New York 2003. | MR

[6] Hinderer, K.: Foundations of Non-stationary Dynamic Programming with Discrete Time Parameter. Springer-Verlag, Berlin 1970. | DOI | MR

[7] Howard, R. A., Matheson, J.: Risk-sensitive Markov decision processes. Management Sci. 18 (1972), 356-369. | DOI | MR

[8] Kolokoltsov, V. N., Malafayev, O. A.: Understanding Game Theory. World Scientific, Singapore 2010. | DOI | MR

[9] Nash, J.: Equilibrium points in n-person games. Proc. National Acad. Sci. United States of America 36 (1950), 48-49. | DOI | MR

[10] Puterman, M. L.: Markov Decision Processes - Discrete Stochastic Dynamic Programming. Wiley, New York 1994. | DOI | MR

[11] Raghavan, T. E. S., Tijs, S. H., Vrieze, O. J.: On stochastic games with additive reward and transition structure. J. Optim. Theory Appl. 47 (1985), 451-464. | DOI | MR

[12] Ross, S.: Introduction to Probability Models. Ninth edition. Elsevier 2007. | MR

[13] Shapley, L. S.: Stochastic games. Proc. National Acad. Sci. United States of America 39 (1953), 1095-1100. | DOI | MR | Zbl

[14] Shiryaev, A.: Optimal Stopping Rules. Springer, New York 1978. | MR | Zbl

[15] Sladký, K., Martínez-Cortés, V. M.: Risk-sensitive optimality in Markov games. In: Proc. 35th International Conference Mathematical Methods in Economics 2017 (P. Pražák, ed.). Univ. Hradec Králové 2017, pp. 684-689.

[16] Thomas, L. C.: Connectedness conditions used in finite state Markov decision processes. J. Math. Anal. Appl. 68 (1979), 548-556. | DOI | MR

[17] Thomas, L. C.: Connectedness conditions for denumerable state Markov decision processes. In: Recent Developments in Markov Decision Processes (R. Hartley, L. C. Thomas and D. J. White, eds.), Academic Press, New York 1980, pp. 181-204. | MR

[18] Thuijsman, F.: Optimality and Equilibria in Stochastic Games. Mathematical Centre Tracts, Amsterdam 1992. | MR

[19] van der Wal, J.: Discounted Markov games: successive approximations and stopping times. Int. J. Game Theory 6 (1977), 11-22. | DOI | MR

[20] van der Wal, J.: Stochastic Dynamic Programming. Mathematical Centre Tracts, Amsterdam 1981. | MR

[21] Vrieze, O. J.: Stochastic Games with Finite State and Action Spaces. Mathematical Centre Tracts, Amsterdam 1987. | MR

[22] Zachrisson, L.: Markov games. In: Advances in Game Theory (M. Dresher, L. S. Shapley and A. W. Tucker, eds.), Princeton University Press 1964. | DOI | MR | Zbl

[23] Zijm, W. H. M.: Nonnegative Matrices in Dynamic Programming. Mathematisch Centrum, Amsterdam 1983. | MR
