Abstract
Norms and conventions enable coordination in populations of agents by establishing patterns of behaviour, which can emerge as agents interact with their environment and each other. Previous research on norm emergence typically considers pairwise interactions, where agents' rewards are endogenously determined. In many real-life domains, however, individuals do not interact with one another directly, but with their environment, and the resources associated with actions are often congested. Agents' rewards are therefore exogenously determined as a function of others' actions and the environment. In this paper, we propose a framework to represent this setting by: (i) introducing congested actions; and (ii) adding a central authority that is able to manipulate agents' rewards. Agents are heterogeneous in terms of their reward functions, and learn over time, enabling norms to emerge. We illustrate the framework using transport modality choice as a simple scenario, and investigate the effect of representative manipulations on the emergent norms.
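To make the setting concrete, the following is a minimal sketch, not the authors' implementation: it assumes three transport modes as congested actions, a linear congestion cost with per-agent sensitivities as the heterogeneous reward functions, a simple stateless Q-learning update, and a central-authority manipulation expressed as a per-mode bonus or penalty. All names, parameters, and functional forms are illustrative assumptions.

```python
"""Sketch of convention emergence with congested actions and a
reward-manipulating central authority (illustrative assumptions only)."""
import random
from collections import Counter

MODES = ["Car", "Bus", "Bike"]          # congested actions (assumed)
N_AGENTS, ROUNDS = 100, 2000
ALPHA, EPSILON = 0.1, 0.1               # learning rate / exploration rate

class Agent:
    def __init__(self):
        # heterogeneous reward functions: per-mode base utility and
        # congestion sensitivity, drawn at random (assumption)
        self.base = {m: random.uniform(0.5, 1.0) for m in MODES}
        self.sens = {m: random.uniform(0.5, 2.0) for m in MODES}
        self.q = {m: 0.0 for m in MODES}

    def choose(self):
        if random.random() < EPSILON:
            return random.choice(MODES)
        return max(self.q, key=self.q.get)

    def reward(self, mode, congestion, manipulation):
        # exogenous reward: base utility minus congestion cost, plus any
        # incentive or penalty imposed by the central authority
        return (self.base[mode]
                - self.sens[mode] * congestion[mode]
                + manipulation.get(mode, 0.0))

    def update(self, mode, r):
        # stateless Q-learning (bandit-style) value update
        self.q[mode] += ALPHA * (r - self.q[mode])

agents = [Agent() for _ in range(N_AGENTS)]
manipulation = {"Bus": 0.3, "Car": -0.3}  # example manipulation (assumption)

for _ in range(ROUNDS):
    choices = [a.choose() for a in agents]
    counts = Counter(choices)
    congestion = {m: counts[m] / N_AGENTS for m in MODES}  # share per mode
    for a, m in zip(agents, choices):
        a.update(m, a.reward(m, congestion, manipulation))

print("Emergent mode shares:", {m: counts[m] / N_AGENTS for m in MODES})
```

Under such a sketch, the dominant mode share after learning can be read as the emergent norm, and varying the `manipulation` dictionary lets one probe how central-authority interventions shift that norm.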
Notes
1. Note that such equilibria are not necessarily Nash equilibria.
2. See, for example, http://content.tfl.gov.uk/tfl-active-recovery-toolkit.pdf.
3. A similar manipulation (with similar effect) is decreasing the sensitivity of \(g_3\) towards Car.