This paper concerns the structure of learned representations in text-guided generative models, focusing on score-based models. A key property of such models is that they can compose disparate concepts in a 'disentangled' manner. This suggests these …
We consider prediction with expert advice when data are generated from distributions varying arbitrarily within an unknown constraint set. This semi-adversarial setting includes (at the extremes) the classical i.i.d. setting, when the unknown …
A common tool in the practice of Markov Chain Monte Carlo is to use approximating transition kernels to speed up computation when the desired kernel is slow to evaluate or intractable. A limited set of quantitative tools exists to assess the relative …
We propose to study the generalization error of a learned predictor ĥ in terms of that of a surrogate (potentially randomized) predictor that is coupled to ĥ and designed to trade empirical risk for control of generalization error. In the case …
The information-theoretic framework of Russo and Zou (2016) and Xu and Raginsky (2017) provides bounds on the generalization error of a learning algorithm in terms of the mutual information between the algorithm's output and the training sample. …
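The bound referred to above can be stated concretely; the following is the standard form of the Xu–Raginsky result, under the assumption that the loss is σ-sub-Gaussian (the notation W for the algorithm's output and S for the n-sample training set is standard in this literature, not taken from the excerpt):

```latex
% Let S = (Z_1, \dots, Z_n) be the training sample, W the algorithm's output,
% and suppose \ell(w, Z) is \sigma-sub-Gaussian for each w. Then the expected
% generalization error is controlled by the mutual information I(S; W):
\left| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \right|
  \;\le\; \sqrt{\frac{2\sigma^2}{n} \, I(S; W)},
% where L_\mu(w) = \mathbb{E}_{Z \sim \mu}[\ell(w, Z)] is the population risk
% and L_S(w) = \frac{1}{n}\sum_{i=1}^n \ell(w, Z_i) is the empirical risk.
```

Intuitively, the less the output W reveals about the specific sample S, the smaller the gap between empirical and population risk can be.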
In this work, we improve upon the stepwise analysis of noisy iterative learning algorithms initiated by Pensia, Jog, and Loh (2018) and recently extended by Bu, Zou, and Veeravalli (2019). Our main contributions are significantly improved mutual …