Published

Concept algebra for (score-based) text-controlled generative models

This paper concerns the structure of learned representations in text-guided generative models, focusing on score-based models. A key property of such models is that they can compose disparate concepts in a 'disentangled' manner. This suggests these …
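As a point of reference for the kind of composition at issue, the sketch below shows the standard additive combination of per-concept conditional scores used in compositional sampling from text-conditioned diffusion models. It is a generic illustration with placeholder score functions, not the paper's concept-algebra construction.

def composed_score(score_uncond, cond_scores, weights, x, t):
    """Additively combine per-concept conditional score estimates.

    score_uncond(x, t): unconditional score estimate at state x, time t.
    cond_scores:        list of per-concept conditional score functions.
    weights:            guidance weight for each concept.
    """
    s0 = score_uncond(x, t)
    return s0 + sum(w * (s_c(x, t) - s0) for w, s_c in zip(weights, cond_scores))

# Toy check with 1-d Gaussian scores (the score of N(mu, 1) at x is mu - x).
uncond = lambda x, t: -x
concept_a = lambda x, t: 1.0 - x
concept_b = lambda x, t: -2.0 - x
print(composed_score(uncond, [concept_a, concept_b], [1.0, 1.0], x=0.0, t=0.0))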

Relaxing the i.i.d. assumption: Adaptively minimax optimal regret via root-entropic regularization

We consider prediction with expert advice when data are generated from distributions varying arbitrarily within an unknown constraint set. This semi-adversarial setting includes (at the extremes) the classical i.i.d. setting, when the unknown …
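For background on the expert-advice setting described above, here is a minimal sketch of the classical exponentially weighted average (Hedge) forecaster with a fixed learning rate; the adaptive root-entropic regularization studied in the paper is not reproduced here, and the synthetic losses and tuning below are illustrative assumptions.

import numpy as np

def hedge(losses, eta):
    """Exponentially weighted average (Hedge) forecaster.

    losses: array of shape (T, K); losses[t, k] is expert k's loss in round t, in [0, 1].
    eta:    fixed learning rate.
    Returns the forecaster's cumulative loss.
    """
    T, K = losses.shape
    log_w = np.zeros(K)                 # log-weights, uniform at the start
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                    # current distribution over experts
        total += p @ losses[t]
        log_w -= eta * losses[t]        # multiplicative-weights update
    return total

# Illustrative run: K = 2 experts, T = 1000 rounds of synthetic losses,
# with the classical tuning eta = sqrt(8 ln K / T) for losses in [0, 1].
rng = np.random.default_rng(0)
L = rng.uniform(size=(1000, 2))
regret = hedge(L, eta=np.sqrt(8 * np.log(2) / 1000)) - L.sum(axis=0).min()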

Approximations of Geometrically Ergodic Reversible Markov Chains

A common tool in the practice of Markov Chain Monte Carlo is to use approximating transition kernels to speed up computation when the desired kernel is slow to evaluate or intractable. A limited set of quantitative tools exists to assess the relative …
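To make the objects concrete, the following is a minimal random-walk Metropolis sketch in which an expensive log-density is replaced by a cheap surrogate, so the resulting chain approximates the desired one; the target, surrogate, and tuning constants are illustrative assumptions rather than constructions from the paper.

import numpy as np

def rw_metropolis(log_target, x0, n_steps, step=0.5, rng=None):
    """Random-walk Metropolis targeting (approximately) exp(log_target)."""
    rng = rng or np.random.default_rng(0)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# "Exact" target: standard normal.  Cheap surrogate: slightly inflated variance,
# standing in for an intractable or slow-to-evaluate log-density.
exact_log_target = lambda x: -0.5 * x ** 2
approx_log_target = lambda x: -0.5 * x ** 2 / 1.1

exact_chain = rw_metropolis(exact_log_target, 0.0, 10_000)
approx_chain = rw_metropolis(approx_log_target, 0.0, 10_000)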

Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers

In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors

We propose to study the generalization error of a learned predictor ĥ in terms of that of a surrogate (potentially randomized) predictor that is coupled to ĥ and designed to trade empirical risk for control of generalization error. In the case …
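The general shape of such a surrogate argument can be recorded as a one-line telescoping identity; the notation below (population risk $R$, empirical risk $\widehat R$, randomized surrogate $H$ coupled to $\hat h$) is illustrative and only indicates where the coupling terms and the surrogate's generalization error enter.

\[
R(\hat h) - \widehat R(\hat h)
  = \bigl(R(\hat h) - \mathbb{E}[R(H)]\bigr)
  + \bigl(\mathbb{E}[R(H)] - \mathbb{E}[\widehat R(H)]\bigr)
  + \bigl(\mathbb{E}[\widehat R(H)] - \widehat R(\hat h)\bigr).
\]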

Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy, Iterative Algorithms

The information-theoretic framework of Russo and Zou (2016) and Xu and Raginsky (2017) provides bounds on the generalization error of a learning algorithm in terms of the mutual information between the algorithm's output and the training sample. …
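For reference, the basic input–output mutual information bound from that framework, stated for a loss that is $\sigma$-sub-Gaussian under the data distribution, reads

\[
\bigl|\mathbb{E}\bigl[L_\mu(W) - L_S(W)\bigr]\bigr|
  \le \sqrt{\frac{2\sigma^2\, I(W; S)}{n}},
\]

where $W$ is the algorithm's output, $S$ the training sample of size $n$, $L_\mu$ the population risk, and $L_S$ the empirical risk. Conditional-mutual-information bounds replace $I(W;S)$ with a conditional quantity that remains finite even when $I(W;S)$ is unbounded.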

Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates

In this work, we improve upon the stepwise analysis of noisy iterative learning algorithms initiated by Pensia, Jog, and Loh (2018) and recently extended by Bu, Zou, and Veeravalli (2019). Our main contributions are significantly improved mutual …
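For concreteness, the noisy, iterative algorithms studied in this line of work include updates of the form W_{t+1} = W_t − η_t ∇f(W_t, batch) + ξ_t with Gaussian ξ_t, of which SGLD is the canonical example; the objective, step size, and noise level in the sketch below are illustrative assumptions.

import numpy as np

def sgld(grad_loss, data, w0, n_steps, eta=1e-2, sigma=1e-1, batch_size=32, rng=None):
    """Noisy iterative update: w <- w - eta * stochastic gradient + Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    w = np.array(w0, dtype=float)
    for _ in range(n_steps):
        batch = data[rng.choice(len(data), size=batch_size, replace=False)]
        w = w - eta * grad_loss(w, batch) + sigma * rng.standard_normal(w.shape)
    return w

# Illustrative use: least-squares loss on synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((256, 5))
y = X @ np.ones(5) + 0.1 * rng.standard_normal(256)
data = np.hstack([X, y[:, None]])

def grad_loss(w, batch):
    Xb, yb = batch[:, :-1], batch[:, -1]
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)   # gradient of mean squared error

w_hat = sgld(grad_loss, data, w0=np.zeros(5), n_steps=2000)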