Discrete-Time Stochastic Control and Dynamic Potential Games
There are several techniques for studying noncooperative dynamic games, such as dynamic programming and the maximum principle (also called the Lagrange method). It turns out, however, that one way to characterize dynamic potential games requires analyzing inverse optimal control problems, and it is here that the Euler equation approach comes in, because it is particularly well-suited to solving inverse problems. Despite the importance of dynamic potential games, no systematic study of them has been available. This monograph is the first attempt to provide a systematic, self-contained presentation of stochastic dynamic potential games.
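As a rough illustration of the kind of condition the Euler equation approach yields (a generic deterministic sketch, not the book's own formulation), consider a discrete-time problem in which a player chooses a state path $\{x_t\}$ to maximize a discounted sum of one-period rewards:

```latex
% Sketch: maximize  \sum_{t=0}^{\infty} \beta^{t}\, u(x_t, x_{t+1}),
% with discount factor 0 < \beta < 1 and x_0 given.
% An interior optimal path satisfies the Euler equation
%
%   u_y(x_{t-1}, x_t) + \beta\, u_x(x_t, x_{t+1}) = 0,  \quad t = 1, 2, \ldots
%
% where u_x and u_y denote partial derivatives of u(x, y) with respect to
% its first and second arguments.
\max_{\{x_t\}} \sum_{t=0}^{\infty} \beta^{t} u(x_t, x_{t+1})
\qquad\Longrightarrow\qquad
u_y(x_{t-1}, x_t) + \beta\, u_x(x_t, x_{t+1}) = 0 .
```

In an inverse problem one starts from a given system of such first-order conditions (e.g., the players' equilibrium conditions) and asks whether a single objective exists whose Euler equation reproduces them, which is why this approach lends itself to identifying potential games.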
- Presents a systematic, comprehensive, self-contained analysis of dynamic potential games, which appears for the first time in book form
- Reader-friendly, at a graduate student level
- Substantial number of examples and applications, mainly from mathematical economics