Although valued for enabling collaboration and fostering coalitional behavior among participants, game theory's application to networking systems is not without challenges.
Distributed Strategic Learning for Wireless Engineers illuminates the promise of learning in dynamic games as a tool for analyzing network evolution and underlines the potential pitfalls and difficulties likely to be encountered. Establishing the link between several theories, this book demonstrates what is needed to learn strategic interaction in wireless networks under uncertainty, randomness, and time delays.
It addresses questions such as:

- How much information is enough for effective distributed decision making?
- Is having more information always useful in terms of system performance?
- What are the individual learning performance bounds under outdated and imperfect measurements?
- What are the possible dynamics and outcomes if the players adopt different learning patterns?
- If convergence occurs, what is the convergence time of heterogeneous learning?
- What are the issues of hybrid learning?
- How can one develop fast and efficient learning schemes in scenarios where some players have more information than others?
- What is the impact of risk-sensitivity in strategic learning systems?
- How can one construct learning schemes in a dynamic environment in which a player does not observe a numerical value of its own payoff but only a signal of it?
- How can one learn "unstable" equilibria and global optima in a fully distributed manner?

The book provides an explicit description of how players attempt to learn over time about the game and about the behavior of others.
It focuses on finite and infinite systems in which the interplay among the individual adjustments undertaken by the different players generates a variety of learning dynamics, including heterogeneous learning, risk-sensitive learning, and hybrid dynamics.