Matthieu Wyart

W. H. Miller Professor

Research Interests: Wyart’s research encompasses fields such as the architecture of allosteric materials, granular and suspension flows, and the glass and yielding transitions. More recently, a central focus has been deep learning, in particular data structure and generative models.

Education: PhD, SPEC, CEA Saclay, Paris

Matthieu Wyart studied physics, mathematics and economics at the Ecole Polytechnique in Paris, where he obtained his degree in physics in 2001 and, the following year, the Diploma of Advanced Studies in Theoretical Physics with highest honors at the Ecole Normale Supérieure, Paris. In 2006 he obtained a doctoral degree in Theoretical Physics and Finance at SPEC, CEA Saclay, Paris, with a thesis on electronic markets. He then moved to the United States, to Harvard, Janelia Farm, and Princeton, before joining New York University in 2010 as Assistant Professor, where he was promoted to Associate Professor in 2014. In July 2015 he was appointed Associate Professor of Theoretical Physics in the School of Basic Sciences at EPFL, and he was promoted to Full Professor in 2023. He joined JHU in November 2024.

Physics has been successful in explaining how disorder can lead to new emergent phenomena, including the localisation of waves, the pinning of a driven elastic manifold, the spin-glass transition and the jamming of granular materials. Yet we lack a general description of how the microscopic kinetic rule chosen to explore the complex energy landscapes of such systems affects their properties. Such a description requires a detailed knowledge of the landscape geometry, which is currently missing. I study this question in different contexts, including the glass transition, whereby a liquid becomes an amorphous solid under cooling, and the nucleation of ruptures or earthquakes, which is affected both by disorder and by positive feedback mechanisms such as velocity weakening.

My main current research focus is the theory of deep learning. Deep learning algorithms are responsible for a revolution in AI, yet why they work is not understood, leading to challenges both in improving these methods and in interpreting their results. Specifically, training deep nets corresponds to a descent in a 'loss' landscape similar to the complex energy landscapes found in physics. Why is the performance of deep learning greatly improved by adding stochasticity to the training procedure? More generally, why can deep learning perform so well on complex tasks with very limited data, and how does this depend on the symmetries and invariances of the task? We study in particular how the hierarchical and combinatorial structure of data, such as images or text, affects learnability. This research is thus interdisciplinary and connects statistical physics to computer science and linguistics.
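
To make the descent picture concrete, here is a minimal, purely illustrative sketch (not taken from the profile above) contrasting plain gradient descent with stochastic gradient descent on a toy one-dimensional 'loss landscape'. The landscape, learning rate and noise model are arbitrary choices for illustration only, not a description of Wyart's methods.

```python
# Illustrative sketch: gradient descent vs. stochastic gradient descent (SGD)
# on a toy rugged loss landscape. All functions and parameters are assumptions
# chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # A broad parabola with small ripples, mimicking a rugged landscape.
    return 0.5 * w**2 + 0.2 * np.cos(8 * w)

def grad(w):
    # Exact derivative of the toy loss above.
    return w - 1.6 * np.sin(8 * w)

def gradient_descent(w0, lr=0.05, steps=200):
    # Deterministic descent: can get stuck in a local ripple.
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def sgd(w0, lr=0.05, steps=200, noise=0.5):
    # Stochasticity modeled as additive gradient noise,
    # mimicking the effect of mini-batch sampling.
    w = w0
    for _ in range(steps):
        w -= lr * (grad(w) + noise * rng.standard_normal())
    return w

w0 = 2.0
w_gd = gradient_descent(w0)
w_sgd = sgd(w0)
print(f"GD  ends at w = {w_gd:.3f}, loss = {loss(w_gd):.3f}")
print(f"SGD ends at w = {w_sgd:.3f}, loss = {loss(w_sgd):.3f}")
```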