
Approaching (super)human intent recognition in stag hunt with the Naïve Utility Calculus generative model

  • Original Paper
  • Published in Computational and Mathematical Organization Theory

Abstract

The human ability to use social and behavioral cues to infer one another's intentions, recognize motivations, and predict future actions is central to human social life. This ability represents a facet of human cognition that artificial intelligence has yet to fully mimic and master. Artificial agents with greater social intelligence have wide-ranging applications, from enabling collaboration in human–AI teams to more accurately modelling human behavior in complex systems. Here, we show that the Naïve Utility Calculus generative model competes with leading models in intent recognition and action prediction when observing stag hunt, a simple multiplayer game in which agents must infer each other's intentions to maximize rewards. Moreover, we show that the model is the first with the capacity to out-compete human observers in intent recognition after the first round of observation. We conclude with a discussion of the implications for the Naïve Utility Calculus and for similar generative models in general.
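The core computation the abstract alludes to can be illustrated with a small worked example. The sketch below is not the authors' implementation: it assumes a repeated two-player stag hunt, a hunter modeled as softmax-rational over reward-minus-cost utilities (the Naïve Utility Calculus assumption), and an observer that inverts this model with Bayes' rule to infer the hunter's intent from its round-by-round choices. All numeric values (REWARD, COST, BELIEF_PARTNER_COOPERATES, BETA) are illustrative assumptions rather than parameters from the paper.

```python
"""Minimal sketch of Naive Utility Calculus-style intent inference in a repeated
stag hunt. All numeric values (rewards, costs, beliefs, the rationality
parameter BETA) are illustrative assumptions, not parameters from the paper."""
import numpy as np

REWARD = {"stag": 4.0, "hare": 3.0}   # reward if the chosen hunt succeeds
COST = {"stag": 1.0, "hare": 0.5}     # effort cost of attempting each hunt
BETA = 2.0                            # softmax rationality of the observed hunter

# Under each hypothesized intent, the hunter is modeled as holding a different
# (assumed) belief about how likely the partner is to cooperate on the stag.
BELIEF_PARTNER_COOPERATES = {"cooperative": 0.9, "lone": 0.2}


def choice_probs(intent: str) -> dict:
    """P(hunter chooses stag/hare this round | intent), via the Naive Utility
    Calculus assumption: expected utility = expected reward minus cost, then softmax."""
    p_coop = BELIEF_PARTNER_COOPERATES[intent]
    eu = {
        "stag": p_coop * REWARD["stag"] - COST["stag"],  # stag pays off only with help
        "hare": REWARD["hare"] - COST["hare"],           # hare succeeds alone
    }
    z = sum(np.exp(BETA * u) for u in eu.values())
    return {choice: float(np.exp(BETA * u) / z) for choice, u in eu.items()}


def infer_intent(observed_choices, prior_coop: float = 0.5) -> float:
    """Posterior probability that the hunter has cooperative (stag-seeking) intent,
    updated by Bayes' rule after each observed round."""
    post = {"cooperative": prior_coop, "lone": 1.0 - prior_coop}
    for choice in observed_choices:
        for intent in post:
            post[intent] *= choice_probs(intent)[choice]
        total = sum(post.values())
        post = {k: v / total for k, v in post.items()}
    return post["cooperative"]


if __name__ == "__main__":
    print(round(infer_intent(["stag"]), 3))          # one cooperative round suffices
    print(round(infer_intent(["stag", "stag"]), 3))  # further rounds sharpen the estimate
    print(round(infer_intent(["hare"]), 3))          # a hare round shifts belief the other way
```

Under these assumed values, a single observed stag choice already pushes the posterior probability of cooperative intent above 0.99, which is the kind of one-round inference the abstract refers to.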




Acknowledgements

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0036. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA.

Author information


Corresponding author

Correspondence to Lux Miranda.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Miranda, L., Garibay, O.O. Approaching (super)human intent recognition in stag hunt with the Naïve Utility Calculus generative model. Comput Math Organ Theory 29, 434–447 (2023). https://doi.org/10.1007/s10588-022-09367-y

