The Statistical Learning Theory working group will meet on Wednesdays. We will have readings, presentations, and discussions on topics including (but not limited to) statistical learning theory, nonparametric estimation and inference, deep learning, functional data analysis, topological data analysis, and algebraic statistics. The aim of the group is to read and discuss important papers on one particular topic of interest for a semester or two.
|Time|Wednesday 2:15 - 3:45 PM|
This semester we will focus on stochastic optimization. The papers are grouped by theme below.
Sampling and Gradient Flow:
- Francis Bach’s tutorial on gradient flows
- The Variational Formulation of the Fokker-Planck Equation
- Convergence of Langevin MCMC in KL-divergence
- Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem
- Sampling Can Be Faster Than Optimization
- Stein Variational Gradient Descent as Gradient Flow
- SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence
- Maximum Mean Discrepancy Gradient Flow
- A Non-Asymptotic Analysis for Stein Variational Gradient Descent
- Stochastic Particle-Optimization Sampling and the Non-Asymptotic Convergence Theory
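To fix ideas before the readings, here is a minimal sketch of one Stein Variational Gradient Descent update, the particle method analyzed in several of the papers above. The RBF kernel, fixed bandwidth, and step size below are illustrative choices, not the schemes used in any particular paper.

```python
import numpy as np

def svgd_step(particles, grad_log_pi, step=0.1, bandwidth=1.0):
    """One SVGD update with an RBF kernel. Each particle x_i moves along the
    empirical Stein direction
        phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) grad log pi(x_j)
                                   + grad_{x_j} k(x_j, x_i) ],
    an attractive (kernel-weighted score) term plus a repulsive term."""
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]      # (n, n, d), x_i - x_j
    sq_dists = (diffs ** 2).sum(-1)                            # (n, n)
    K = np.exp(-sq_dists / (2 * bandwidth ** 2))               # kernel matrix
    grads = np.stack([grad_log_pi(x) for x in particles])      # (n, d) scores
    # attraction: K @ grads; repulsion: sum_j grad_{x_j} k(x_j, x_i)
    phi = (K @ grads + (K[..., None] * diffs).sum(axis=1) / bandwidth ** 2) / n
    return particles + step * phi

# Illustrative run: drive particles toward a standard Gaussian (score = -x).
rng = np.random.default_rng(0)
particles = rng.normal(3.0, 1.0, size=(50, 1))
for _ in range(1000):
    particles = svgd_step(particles, lambda x: -x, step=0.05)
```

The repulsive term is what distinguishes SVGD from plain gradient ascent on log pi: it keeps the particles spread out so they approximate the target distribution rather than collapsing onto its mode.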
Langevin Monte Carlo:
- Theoretical guarantees for approximate sampling from smooth and log-concave densities
- Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm
- High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm
- Analysis of Langevin Monte Carlo via convex optimization
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
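The papers in this list all analyze variants of the same basic scheme, the Unadjusted Langevin Algorithm: a Euler discretization of the Langevin diffusion, x_{k+1} = x_k + h * grad log pi(x_k) + sqrt(2h) * xi_k with xi_k standard Gaussian. A minimal sketch (step size and target are illustrative, not from any specific paper):

```python
import numpy as np

def ula(grad_log_pi, x0, step=0.01, n_iters=5000, rng=None):
    """Unadjusted Langevin Algorithm:
        x_{k+1} = x_k + step * grad log pi(x_k) + sqrt(2 * step) * N(0, I).
    Returns the full trajectory of iterates (no Metropolis correction)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    traj = np.empty((n_iters, x.size))
    for k in range(n_iters):
        x = x + step * grad_log_pi(x) + np.sqrt(2 * step) * rng.standard_normal(x.size)
        traj[k] = x
    return traj

# Illustrative target: standard Gaussian, so grad log pi(x) = -x.
samples = ula(lambda x: -x, x0=np.array([5.0]), step=0.05, n_iters=20000, rng=0)
```

Because the discretization is not Metropolis-adjusted, the chain's stationary distribution is biased by an amount controlled by the step size; quantifying that bias (in total variation, Wasserstein, or KL) is exactly what the non-asymptotic analyses above do.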
Additional background references: an overview of gradient flows by Filippo Santambrogio, and Yurii Nesterov's introductory lectures on convex optimization as well as his more exhaustive lectures on convex optimization.
The webpage and resources for Fall 2019 can be found here.
The schedule is available on the STAG Google Calendar.