Collective ethics in multiagent systems

Nicolas Cointe Grégory Bonnet Olivier Boissier 

Université de Lyon, MINES Saint-Étienne, CNRS, Laboratoire Hubert Curien UMR 5516, F-42023 Saint-Étienne, France

Équipe Modèle Agent Décision, département Intelligence Artificielle et Algorithmique, GREYC CNRS UMR 6072, Normandie Université, F-14032 Caen, France

Corresponding author emails:
nicolas.cointe@mines-stetienne.fr; olivier.boissier@mines-stetienne.fr; gregory.bonnet@unicaen.fr
Pages: 71-96 | DOI: https://doi.org/10.3166/RIA.31.71-96

OPEN ACCESS

Abstract: 

The increasing current and future presence of multi-agent systems in various areas raises many questions about the ethics of their decisions. Indeed, the decisions these systems must make sometimes require the consideration of ethical concepts, both as individual entities and as members of an organization. This problem, often treated as a design issue from an agent-centered perspective, is addressed in this paper from a collective point of view and as the result of reasoning. This position paper discusses the concepts to be modeled within an agent, a set of open issues, and a state of the art for addressing this question.

Keywords: 

multi-agent systems, collective ethics, dilemmas.

1. Introduction
2. Philosophical and technological framework
3. Analysis framework
4. Individual ethics in interaction
5. Individual ethics and collective ethics
6. Conclusion and perspectives
Acknowledgments

This work was carried out within the EthicAa project (reference ANR-13-CORD-0006).
