What Transparency for Machine Learning Algorithms?

Maël Pégny, Issam Ibnouhsein

Université Paris 1 Panthéon-Sorbonne, IHPST, 13 rue du Four, 75006 Paris, France

Quantmetry, 52 rue d’Anjou, 75008 Paris, France

Corresponding Author Email: maelpegny@gmail.com; iibnouhsein@quantmetry.com

Pages: 447-478

DOI: https://doi.org/10.3166/ria.32.447-478

OPEN ACCESS

Abstract: 

The concept of "algorithmic transparency" has recently become central to public and scientific debate. In light of the proliferation of uses of the term "transparency", we distinguish two fundamental families of uses of the concept: a descriptive family, concerning the intrinsic epistemic properties of programs, foremost among which are intelligibility and explicability, and a prescriptive family, concerning the normative properties of their uses, foremost among which are loyalty and fairness. Because one must understand an algorithm in order to explain it or audit it, intelligibility comes logically first in the philosophical study of transparency. To better identify the challenges that intelligibility raises for the public use of algorithms, we introduce a distinction between the intelligibility of the procedure and the intelligibility of the outputs. Finally, we apply this distinction to the case of machine learning.

Keywords: 

transparency, intelligibility, machine learning

1. Introduction
2. The Transparency of Algorithms
3. Problems of Intelligibility
4. The Intelligibility of Machine Learning
5. Conclusion
Acknowledgements
References

Abdollahi B., Nasraoui O. (2016). Explainable Restricted Boltzmann Machines for Collaborative Filtering. In Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning.

Assemblée Nationale et Sénat. (2016). Loi 2016-1321 du 7 Octobre 2016 pour une République numérique. Journal Officiel de la République Française, vol. 0235. 

Breiman L. (2001, October). Random forests. Machine Learning, vol. 45, no 1, p. 5–32.

Caldini C. (n.d.). Google est-il antisémite? Retrieved from https://www.francetvinfo.fr/societe/justice/google-est-il-antisemite_90113.html

CERNA. (2017). Éthique de la recherche en apprentissage machine. Technical report.

Condry N. (2016). Meaningful Models: Utilizing Conceptual Structure to Improve Machine Learning Interpretability. In Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning. 

Dhurandhar A., Iyengar V., Luss R., Shanmugam K. (2017). A Formal Framework to Characterize Interpretability of Procedures. In Proceedings of the 2017 ICML Workshop on Human Interpretability in Machine Learning.

Doshi-Velez F., Kim B. (2017, February). Towards A Rigorous Science of Interpretable Machine Learning. ArXiv e-prints.

Egele M., Scholte T., Kirda E., Kruegel C. (2012). A survey on automated dynamic malware-analysis techniques and tools. ACM Comput. Surv., vol. 44, no 2.

Executive Office of the President. (2014). Big Data: Seizing Opportunities, Preserving Values. Technical report.

Goodman B., Flaxman S. (2016). EU regulations on algorithmic decision-making and a "right to explanation". In Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning.

Ribeiro M. T., Singh S., Guestrin C. (2016). Introduction to Local Interpretable Model-Agnostic Explanations (LIME). Retrieved from https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime

Gunning D. (n.d.). Explainable Artificial Intelligence (XAI). Retrieved from https://www.darpa.mil/program/explainable-artificial-intelligence

Hara S., Hayashi K. (2016). Making Tree Ensembles Interpretable. In Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning.

Hoffbeck J. P., Landgrebe D. A. (1996, July). Covariance matrix estimation and classification with limited training data. IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no 7, p. 763–767.

Hornik K., Stinchcombe M. B., White H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, vol. 2, p. 359–366.

INRIA. (2017, February). TransAlgo : évaluer la responsabilité et la transparence des systèmes algorithmiques. Retrieved from https://www.inria.fr/actualite/actualites-inria/transalgo

Kendall A., Gal Y. (2017, March). What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? ArXiv e-prints.

Knight W. (2017). The Dark Secret at the Heart of AI. MIT Technology Review, vol. 120, no 3.

Krause J., Perer A., Bertini E. (2016). Using Visual Analytics to Interpret Predictive Machine Learning Models. In Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning.

Lee J., Bahri Y., Novak R., Schoenholz S. S., Pennington J., Sohl-Dickstein J. (2017). Deep Neural Networks as Gaussian Processes. ArXiv e-prints. 

Lipton Z. C. (2016). The Mythos of Model Interpretability. In Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning.

Maaten L. van der, Hinton G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, vol. 9, p. 2579–2605. 

Parlement Européen. (2016). Règlement Général sur la Protection des Données. Règlement UE 2016/679 du Parlement Européen et du Conseil du 27 Avril 2016. 

National Transportation Safety Board, Office of Public Affairs. (2017). Driver Errors, Overreliance on Automation, Lack of Safeguards, Led to Fatal Tesla Crash. Retrieved from https://www.ntsb.gov/news/press-releases/Pages/PR20170912.aspx

Ribeiro M. T., Singh S., Guestrin C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, p. 1135–1144. New York, NY, USA: ACM.

Seiller T. (n.d.). Why Complexity Theorists Should Care About Philosophy. In ANR-DFG "Beyond Logic" Conference, Cerisy-la-Salle. Retrieved from https://www.seiller.org/documents/why-cerisy.pdf

Senge R., Bösner S., Dembczynski K., Haasenritter J., Hirsch O., Donner-Banzhoff N. et al. (2014). Reliable classification: Learning classifiers that distinguish aleatoric and epistemic uncertainty. Information Sciences, vol. 255, p. 16–29.

Barocas S., Hardt M. (2017). Fairness in Machine Learning. NIPS 2017 Tutorial. Retrieved from http://mrtz.org/nips17/#/

Vincent J. (2018). Google 'fixed' its racist algorithm by removing gorillas from its image-labeling tech. Retrieved from https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai

Wang H., Yeung D.-Y. (2016, April). Towards Bayesian Deep Learning: A Survey. ArXiv e-prints.

Wehenkel L. (1996). On uncertainty measures used for decision tree induction. In IPMU-96, Information Processing and Management of Uncertainty in Knowledge-Based Systems, p. 6.

Weller A. (2017). Challenges for Transparency. In Proceedings of the 2017 ICML Workshop on Human Interpretability in Machine Learning.