Collective activity and autonomous characters: trust-based decision-making system
Lucile Callebert, Domitille Lourdeaux, Jean-Paul Barthès

Sorbonne universités, Université de technologie de Compiègne, CNRS, Heudiasyc UMR 7253, CS 60 319, 60 203 Compiègne, France

30 April 2017

When working in teams, people make mistakes. To train someone in a collaborative virtual environment to adapt to teammates who behave non-optimally, we propose (1) an extension of the ACTIVITY-Description Language, together with constraint-propagation mechanisms that facilitate agents’ reasoning; and (2) an agent model in which each agent is described along three dimensions (integrity, benevolence, ability) corresponding to the MDS trust model. Moreover, each agent has different personal and collective goals and holds beliefs about others’ integrity, benevolence, and abilities. This agent model is associated with a decision-making system that allows agents to adopt human-like behaviors. In particular, agents take others into account and can reason on their beliefs about others, both when choosing which goal (collective or individual) to focus on and when selecting a task. We conducted a preliminary evaluation in which participants assessed the behaviors produced by our system.
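The agent model summarized above can be sketched in code. The sketch below is purely illustrative, not the authors' implementation: all names (`TrustBelief`, `Agent`, `choose_goal`) and the simple averaging and weighting rules are assumptions. It only shows the general idea of trust beliefs along the three MDS dimensions influencing the choice between a collective and a personal goal.

```python
from dataclasses import dataclass, field

@dataclass
class TrustBelief:
    """Belief about a teammate along the three MDS dimensions, each in [0, 1]."""
    integrity: float    # believed adherence to acceptable principles
    benevolence: float  # believed goodwill toward the trustor
    ability: float      # believed competence for the task at hand

    def overall(self) -> float:
        # One simple aggregation; the paper's actual combination may differ.
        return (self.integrity + self.benevolence + self.ability) / 3


@dataclass
class Agent:
    name: str
    collective_goal_value: float  # utility of pursuing the team goal
    personal_goal_value: float    # utility of pursuing an own goal
    beliefs: dict = field(default_factory=dict)  # teammate name -> TrustBelief

    def team_trust(self) -> float:
        # Average trust in known teammates; no beliefs means no trust.
        if not self.beliefs:
            return 0.0
        return sum(b.overall() for b in self.beliefs.values()) / len(self.beliefs)

    def choose_goal(self) -> str:
        # Weight the collective goal by trust in teammates: when trust is
        # low, the agent falls back on its personal goal.
        if self.collective_goal_value * self.team_trust() >= self.personal_goal_value:
            return "collective"
        return "personal"


agent = Agent("alice", collective_goal_value=0.9, personal_goal_value=0.4,
              beliefs={"bob": TrustBelief(0.8, 0.7, 0.9)})
print(agent.choose_goal())
```

With the high trust values above, the weighted collective utility (0.9 × 0.8 = 0.72) exceeds the personal utility (0.4), so the agent commits to the collective goal; degrading the beliefs makes it switch to its personal goal.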


multi-agent systems, decision-making, trust, collective activity

1. Introduction
2. Theoretical foundations
3. Related work
4. Description and processing of the collective activity
5. Decision-making engine
6. Preliminary evaluation
7. Conclusion and perspectives

This work was carried out as part of a PhD thesis funded by the Direction Générale de l'Armement (DGA) and the Labex MS2T, itself funded by the French government through the Investissements d’avenir program managed by the Agence Nationale de la Recherche (ANR, reference ANR-11-IDEX-0004-02).


Anderson C., Franks N. R. (2003). Teamwork in animals, robots, and humans. Advances in the Study of Behavior, vol. 33, p. 1–48.

Barot C. (2014). Scénarisation d’environnements virtuels. Thèse de doctorat – Université de technologie de Compiègne.

Callebert L., Lourdeaux D., Barthès J.-P. (2016a). Moteur décisionnel reposant sur un modèle de confiance pour des agents autonomes. In Rencontre des jeunes chercheurs en intelligence artificielle.

Callebert L., Lourdeaux D., Barthès J.-P. (2016b). A trust-based decision-making approach applied to agents in collaborative environments. In International conference on agents and artificial intelligence, p. 287–295.

Cao Y. U., Fukunaga A. S., Kahng A. (1997). Cooperative mobile robotics: Antecedents and directions. Autonomous robots, vol. 4, no 1, p. 7–27.

Castaldo S. (2002). Meanings of trust: a meta-analysis of trust definitions. In Proceedings of the European Academy of Management Conference, Stockholm, published online.

Castelfranchi C., Falcone R. (2010). Trust theory: A socio-cognitive and computational model (vol. 18). John Wiley & Sons.

Chaudhuri A., Sopher B., Strand P. (2002). Cooperation in social dilemmas, trust and reciprocity. Journal of Economic Psychology, vol. 23, no 2, p. 231–249.

Cohen P. R., Levesque H. J. (1990). Intention is choice with commitment. Artificial intelligence, vol. 42, no 2-3, p. 213–261.

Erdem F. (2003). Optimal trust and teamwork: From groupthink to teamthink. Work Study, vol. 52, no 5, p. 229–233.

Gambetta D. (2000). Can we trust trust? Trust: Making and breaking cooperative relations, vol. 13, p. 213–237.

Gerbaud S. (2008). Contribution à la formation en réalité virtuelle. Thèse de doctorat – INSA de Rennes.

Giampapa J. A., Sycara K. (2002). Team-oriented agent coordination in the RETSINA multiagent system. Rapport technique. Pittsburgh, PA, Robotics Institute.

Gill H., Boies K., Finegan J. E., McNally J. (2005). Antecedents of trust: Establishing a boundary condition for the relation between propensity to trust and intention to trust. Journal of business and psychology, vol. 19, no 3, p. 287–302.

Grosz B. J., Kraus S. (1996). Collaborative plans for complex group action. Artificial Intelligence, vol. 86, no 2, p. 269–357.

Herzig A., Lorini E., Hübner J. F., Vercouter L. (2010). A logic of trust and reputation. Logic Journal of IGPL, vol. 18, no 1, p. 214–244.

Hill Jr R. W., Gratch J., Marsella S., Rickel J., Swartout W. R., Traum D. R. (2003). Virtual humans in the mission rehearsal exercise system. KI, vol. 17, no 4, p. 5.

Hübner J. F., Sichman J. S., Boissier O. (2002). Spécification structurelle, fonctionnelle et déontique d’organisations dans les SMA. In JFSMA, p. 205–216.

Hübner J. F., Sichman J. S., Boissier O. (2007). Developing organised multiagent systems using the MOISE+ model: Programming issues at the system and agent levels. International Journal of Agent-Oriented Software Engineering, vol. 1, no 3-4, p. 370–395.

Jarvenpaa S. L., Knoll K., Leidner D. E. (1998). Is anybody out there? Antecedents of trust in global virtual teams. Journal of management information systems, vol. 14, no 4, p. 29–64.

Jennings N. R. (1995). Controlling cooperative problem solving in industrial multi-agent systems using joint intentions. Artificial intelligence, vol. 75, no 2, p. 195–240.

Kerr N. L. (1983). Motivation losses in small groups: A social dilemma analysis. Journal of Personality and Social Psychology, vol. 45, no 4, p. 819.

Kuhn H. W. (1955). The Hungarian Method for the assignment problem. Naval Research Logistics Quarterly, vol. 2, p. 83–97.

Lorini E., Demolombe R. (2008). From binary trust to graded trust in information sources: a logical perspective. In Trust in agent societies, p. 205–225. Springer.

Marsella S. C., Pynadath D. V., Read S. J. (2004). PsychSim: Agent-based modeling of social interactions and influence. In Proceedings of the international conference on cognitive modeling, vol. 36, p. 243–248.

Marsh S., Briggs P. (2009). Examining trust, forgiveness and regret as computational concepts. In Computing with social trust, p. 9–43. Springer.

Mayer R. C., Davis J. H., Schoorman F. D. (1995). An integrative model of organizational trust. Academy of management review, vol. 20, no 3, p. 709–734.

McKnight D. H., Chervany N. L. (2001). Trust and distrust definitions: One bite at a time. In Trust in cyber-societies, p. 27–54. Springer.

Mishra A. K. (1996). Organizational responses to crisis. Trust in Organizations. Frontiers of theory and research, p. 261–287.

Palanski M. E., Kahai S. S., Yammarino F. J. (2011). Team virtues and performance: An examination of transparency, behavioral integrity, and trust. Journal of Business Ethics, vol. 99, no 2, p. 201–216.

Pinyol I., Sabater-Mir J. (2013). Computational trust and reputation models for open multiagent systems: a review. Artificial Intelligence Review, vol. 40, no 1, p. 1–25.

Pinyol I., Sabater-Mir J., Dellunde P., Paolucci M. (2012). Reputation-based decisions for logic-based cognitive agents. Autonomous Agents and Multi-Agent Systems, vol. 24, no 1, p. 175–216.

Pizzi D., Cavazza M. (2007). Affective storytelling based on characters’ feelings. In Proceedings of the AAAI fall symposium on intelligent narrative technologies, p. 111–118.

Porteous J., Charles F., Cavazza M. (2013). Networking: using character relationships for interactive narrative generation. In Proceedings of the 2013 international conference on autonomous agents and multi-agent systems, p. 595–602.

Rousseau D. M., Sitkin S. B., Burt R. S., Camerer C. (1998). Not so different after all: A cross-discipline view of trust. Academy of management review, vol. 23, no 3, p. 393–404.

Tambe M. (1997). Towards flexible teamwork. Journal of Artificial Intelligence Research, vol. 7, p. 83–124.

Traum D., Swartout W., Marsella S., Gratch J. (2005). Fight, flight, or negotiate: Believable strategies for conversing under crisis. In Intelligent virtual agents, p. 52–64.

Wong P. S., Cheung S. O., Ho P. K. (2005). Contractor as trust initiator in construction partnering—prisoner’s dilemma perspective. Journal of Construction Engineering and Management, vol. 131, no 10, p. 1045–1053.