Bridge: New challenge for artificial intelligence

Véronique Ventos, Olivier Teytaud

LRI, INRIA, Univ Paris-Sud, CNRS, Université Paris-Saclay, France




The game of bridge is very complex, both for humans and for computer bridge programs. While AlphaGo recently reached the level of a Go grandmaster, this is not yet the case in bridge, where robots remain far below the best human players. The main difference between the two games is that bridge is a partially observable game. An expert bridge program could therefore perform tasks that are different from, and complementary to, those handled by AlphaGo. The first part of this article presents various aspects of bridge, some thoughts on the numerous challenges they raise, and a state of the art of the computer bridge community. In the second part, we present our work on adapting to bridge a recent method for boosting game AIs by searching for a random seed that performs better than the others on a particular game. The bridge AI Wbridge5, developed by Yves Costel, was boosted with the best seed found in these experiments and won the World Computer-Bridge Championship in September 2016.
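The seed-boosting idea described above can be sketched in a few lines: a stochastic game AI is run with its random generator fixed to each candidate seed, every seed's empirical win rate against a baseline opponent is estimated over many games, and the best-scoring seed is retained. The sketch below, assuming a hypothetical `play_game` function in place of a real bridge engine, illustrates only the selection loop, not the actual Wbridge5 experiments.

```python
import random

def play_game(seed, rng):
    """Hypothetical stand-in for one game between the seed-fixed AI and a
    baseline opponent; returns True if the seeded AI wins.  The seed merely
    shifts a win probability here, mimicking seeds of unequal strength."""
    win_prob = 0.45 + 0.10 * ((seed * 2654435761) % 100) / 100.0
    return rng.random() < win_prob

def best_seed(candidate_seeds, games_per_seed=200, rng=None):
    """Estimate each candidate seed's win rate over repeated games and
    return the seed with the highest empirical win rate, plus all scores."""
    rng = rng or random.Random(0)
    scores = {}
    for seed in candidate_seeds:
        wins = sum(play_game(seed, rng) for _ in range(games_per_seed))
        scores[seed] = wins / games_per_seed
    return max(scores, key=scores.get), scores

seed, scores = best_seed(range(16))
```

In practice the cited work uses smarter budget allocation (e.g. bandit-style or sequential-halving schemes) rather than a uniform number of games per seed, since most of the evaluation budget is wasted on clearly inferior seeds.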


computer bridge, machine learning, Monte Carlo, AI boosting

1. Introduction
2. The game of bridge
3. Computer bridge
4. Challenges for bridge programs
5. Boosting a bridge AI
6. Perspectives and ongoing work
7. Conclusion

We would like to thank Yves Costel, who carried out the experiments and with whom we greatly enjoy collaborating. Many thanks also to the bridge players Jean-Baptiste Fantun, Jean-Christophe Quantin and Jean-Pierre Desmoulins for all the constructive discussions we had on this subject. Finally, this article is dedicated to Daniel Kayser, a French pioneer of artificial intelligence, who was my friend and my thesis advisor, and whose free spirit will remain forever engraved in our memories.

