Perspectives in random matrices and large networks

Gilles Wainrib
École Normale Supérieure, Département d'Informatique, 45 rue d'Ulm, 75005 Paris, France

Romain Couillet
CentraleSupélec, 91192 Gif-sur-Yvette, France

Corresponding Author Email: gilles.wainrib@ens.fr
Pages: 351-376
DOI: https://doi.org/10.3166/TS.33.351-376
Received: 26 March 2015
Accepted: 2 March 2016
Abstract: 

In this article, several research perspectives in random matrix theory applied to graph theory at large are discussed. Specific focus is placed on the spectral analysis of the adjacency and Laplacian matrices of large-dimensional graphs for community detection in networks, and of kernel random matrices for clustering in large datasets, along with applications to neural networks.
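To make the community-detection setting concrete, the minimal sketch below (an illustrative example, not code from the article; the graph size n and the edge probabilities p_in, p_out are arbitrary choices) draws a two-community stochastic block model and recovers the communities from the sign pattern of the second adjacency eigenvector, which separates from the bulk spectrum as an isolated "spike" when the communities are detectable.

```python
# Minimal sketch: community detection from the adjacency spectrum of a
# two-community stochastic block model (SBM). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n = 400                      # number of nodes (n/2 per community)
p_in, p_out = 0.10, 0.02     # intra- / inter-community edge probabilities

# Ground-truth labels: first half in community +1, second half in -1.
labels = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])

# Edge probability matrix, and a symmetric adjacency matrix drawn from it.
P = np.where(np.equal.outer(labels, labels), p_in, p_out)
upper = np.triu(rng.random((n, n)) < P, k=1)
A = (upper | upper.T).astype(float)

# The leading eigenvector of A mainly tracks node degrees; the community
# structure sits in the eigenvector of the second-largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
v2 = eigvecs[:, -2]                    # second-largest eigenvalue's eigenvector

estimate = np.sign(v2)
accuracy = max(np.mean(estimate == labels), np.mean(estimate == -labels))
print(f"community recovery accuracy: {accuracy:.2f}")
```

With these parameters the informative eigenvalue (of order n(p_in - p_out)/2) lies well outside the bulk of the spectrum, so the sign of v2 recovers the two communities with high accuracy; as the graph becomes sparser this separation degrades, which is one of the regimes the article's perspectives address.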

Keywords: 

random matrix theory, spectral clustering, neural networks, graphs, random graphs.

Extended Abstract
1. Introduction
2. The Spectrum of a Graph
3. Random Graphs
4. Kernel Methods and Classification
5. Neural Networks
6. Conclusion
Acknowledgements
References
