Machine Learning Theory
Our ML Theory stream provides the mathematical foundations that underpin both classical and quantum learning. We study the statistical limits of learning algorithms, the geometry of loss landscapes, and the representational capacity of modern model families. This stream serves as the theoretical backbone of the club, ensuring that our quantum-focused work is grounded in rigorous learning-theoretic principles.
Key Topics
- PAC learning and VC dimension
- Rademacher and Gaussian complexity
- Generalization bounds for neural networks
- Implicit regularization and overparameterization
- Optimization landscapes and saddle-point methods
- Information-theoretic lower bounds
- Kernel methods and reproducing kernel Hilbert spaces (RKHS)
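Several of these topics lend themselves to quick numerical experiments. As one illustrative sketch (not part of any specific club project), the empirical Rademacher complexity of the unit-norm linear class F = {x ↦ ⟨w, x⟩ : ‖w‖₂ ≤ 1} can be estimated by Monte Carlo, since for this class the supremum over F has a closed form: sup over ‖w‖ ≤ 1 of (1/n) Σᵢ σᵢ⟨w, xᵢ⟩ equals (1/n)‖Σᵢ σᵢxᵢ‖₂.

```python
import numpy as np

def empirical_rademacher_linear(X, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of
    the unit-norm linear class on the sample X (rows are data points).

    Uses the closed form of the supremum for this class:
        sup_{||w||<=1} (1/n) sum_i sigma_i <w, x_i> = (1/n) ||sum_i sigma_i x_i||_2
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Draw n_draws independent Rademacher sign vectors sigma in {-1, +1}^n.
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
    # Each row of (sigma @ X) is sum_i sigma_i x_i for one sign draw.
    sums = sigma @ X
    return np.linalg.norm(sums, axis=1).mean() / n

# Sanity check against the classical bound max_i ||x_i||_2 / sqrt(n),
# which the estimate should fall below.
X = np.random.default_rng(1).normal(size=(200, 5))
est = empirical_rademacher_linear(X)
bound = np.linalg.norm(X, axis=1).max() / np.sqrt(len(X))
```

The closed form makes the inner supremum exact, so the only approximation error comes from the finite number of sign draws; the comparison against the max-norm bound gives a cheap correctness check.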
Open Questions
- How do double-descent phenomena in overparameterized models connect to quantum circuit depth?
- Can PAC-learning bounds be tightened for structured hypothesis classes arising in physics?
- What role does algorithmic stability play in generalization for stochastic gradient methods on non-convex objectives?