The complexity of large population multi-agent competitive and cooperative stochastic dynamic systems, such as occur in communication, environmental, and alternative energy systems, makes centralized control infeasible and classical game theoretic solutions intractable. To confront these problems, and inspired by statistical mechanics, ε-Nash Mean Field stochastic control (aka Nash Certainty Equivalence (NCE) control) was developed in the work of M.Y. Huang, R. Malhamé and the speaker (2003, 2006, 2007), and independently in that of J.-M. Lasry and P.-L. Lions (2006, 2007).
The central idea is that in large population stochastic dynamic games there exist individual agent feedback strategies which collectively yield a Nash equilibrium with respect to the behaviour of the mass of the other agents, a behaviour which, counter-intuitively, is pre-computable.
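In a generic notation (chosen here for illustration, not taken from the abstract), the ε-Nash property for an N-agent game with individual cost functionals J_i may be stated as:

```latex
% Each agent i, using the mean-field best-response strategy u_i^*,
% can gain at most \epsilon_N by unilaterally deviating:
J_i^N\big(u_i^*,\, u_{-i}^*\big)
  \;\le\; \inf_{u_i}\, J_i^N\big(u_i,\, u_{-i}^*\big) \;+\; \epsilon_N,
\qquad 1 \le i \le N,
\qquad \epsilon_N \to 0 \ \text{as} \ N \to \infty.
```

Here u_{-i}^* denotes the strategies of all agents other than i; the strategies u_i^* are computed against the pre-computed mean field rather than against the other agents' states.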
The Mean Field Game (MFG) equations consist of a family of (i) Hamilton-Jacobi-Bellman equations which give the Nash value of the game and the best response strategy for each agent, together with (ii) a corresponding family of McKean-Vlasov (MV) Fokker-Planck-Kolmogorov (FPK) equations which generate the probability distribution of the states of the population (i.e. the mean field).
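In one standard form of the theory (the notation below is generic and not drawn from the abstract), the coupled HJB-FPK system for a value function u and population density m reads:

```latex
% Backward HJB equation for the Nash value u, coupled to the
% forward FPK (Kolmogorov) equation for the mean field m:
\begin{aligned}
-\partial_t u \;-\; \nu \Delta u \;+\; H(x, \nabla u) &= F(x, m(t)),
  & u(T, x) &= G\big(x, m(T)\big), \\[2pt]
\partial_t m \;-\; \nu \Delta m
  \;-\; \operatorname{div}\!\big( m \, \nabla_p H(x, \nabla u) \big) &= 0,
  & m(0) &= m_0 .
\end{aligned}
```

The HJB equation is solved backward in time and yields the best response drift −∇_p H(x, ∇u); the FPK equation propagates the resulting population distribution forward in time, and the two are coupled through m and ∇u.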
A distinctive feature of the mixed agent system MFG theory (introduced by M. Huang in the LQG case (2010)), where there are Major (non-asymptotically negligible) and Minor agents, is that the presence of the Major agent causes the system's mean field to become stochastic. Consequently, when Minor agents have only partial (i.e. noisy) observations of the Major agent, they must recursively estimate both the Major agent's state and the system's mean field. Results for this situation in the LQG case will be presented in this talk.
Work with Arman Kizilkale.