Despite remarkable advances in artificial intelligence, the fundamental principles underlying learning and intelligent systems have yet to be identified. What makes our world and its data inherently learnable? How do natural or artificial brains learn? Physicists are well positioned to address these questions. They seek fundamental understanding and construct effective models without being bound by the strictures of mathematical rigor or the need for state-of-the-art engineering performance. This mindset, recognized by the 2024 Nobel Prize in Physics awarded to AI pioneers, is needed to uncover the fundamental principles of learning. Our group advances this research frontier.
Research Focus and Synergies
Professors Brice Ménard, Matthieu Wyart, Soledad Villar, and Jared Kaplan, together with their research teams, are developing theoretical foundations for artificial intelligence. Their work addresses central questions in the physics of learning:
- How is learned information encoded in neural representations?
- Do neural networks exhibit universal properties? Can we construct a thermodynamic theory of learning?
- What determines how performance scales with model size and computational resources?
- What properties of data make it learnable?
These questions are essential for constructing a comprehensive, unifying theory of neural learning and computation. The answers will illuminate how artificial systems learn and may also reveal fundamental principles that govern learning in biological brains.
The group is expanding significantly over the coming years, with several new faculty appointments along with their graduate students and postdoctoral researchers. This growth reflects the recognition that physics-based approaches to understanding learning represent a vital frontier in scientific inquiry.
Research in the physics of learning is inherently interdisciplinary. Our group members collaborate extensively with colleagues across the departments of cognitive science, neuroscience, computer science, and applied mathematics & statistics.
Additional faculty members incorporating AI methods into their research programs include Alex Szalay, Ben Wandelt, Petar Maksimovic, Yi Li, and Tyrel McQueen.
Join Our Research Community
PhD Program in the Physics of Learning
This program, created in 2024, prepares graduate students to become leaders in AI research in academia or industry. It offers advanced courses in machine learning and statistical physics. Students have the opportunity to work on semester-long research projects before focusing on a specific topic for their PhD research.
The first graduate students working on the physics of learning started at Johns Hopkins in 2025. We are now recruiting the next cohort. If you are completing your undergraduate degree and are passionate about understanding the fundamental principles of learning and intelligence, we encourage you to apply. We welcome applications from candidates with backgrounds in physics, mathematics, computer science, neuroscience, or related fields.