All Abstracts
Stephane Boucheron: A poor man's Wilks phenomenon
Gilles Blanchard: Resampling-based confidence regions in high dimension from a non-asymptotic point of view
Albert Cohen: Matching vs. basis pursuit for approximation and learning: a comparison
Ingrid Daubechies: Convergence results and counterexamples for AdaBoost and related algorithms
Nira Dyn: Two algorithms for adaptive approximation of bivariate functions by piecewise linear polynomials on triangulations
Maya Gupta: Functional Bregman Divergence, Bayesian Estimation of Distributions, and Completely Lazy Classifiers
Lee Jones: Finite sample minimax estimation, fusion in machine learning, and overcoming the curse of dimensionality
Dominique Picard: A 'Frame-work' in Learning Theory
Vladimir Koltchinskii: Sparse Recovery Problems in Learning Theory
Tomaso Poggio: Learning: neuroscience and engineering applications
Christoph Schwab: Elliptic PDEs with random field input -- numerical analysis of forward solvers and of goal oriented input learning
Steve Smale: Vision and learning
Ingo Steinwart: Approximation Theoretical Questions for Support Vector Machines
Vladimir Temlyakov: Universality and Lebesgue inequalities in approximation and estimation
Alessandro Verri: Regularization Algorithms for Learning
Patrick Wolfe: The Nystrom Extension and Spectral Methods in Learning: A New Algorithm for Low-Rank Approximation of Quadratic Forms
Ding-Xuan Zhou: Learnability of Gaussians with Flexible Variances