A Note on How to Remove the $\ln\ln T$ Term from the Squint Bound
2026-04-29 • Machine Learning
AI summary
The author builds on his earlier work with Pál (Orabona and Pál, 2016), which introduced shifted KT potentials to remove a small but troublesome logarithmic factor, $\ln\ln T$, from the performance bounds of parameter-free learning algorithms. This note shows that the shifting trick is equivalent to changing the starting assumptions (the prior) in the Krichevsky–Trofimov algorithm, a classical sequential probability estimator. The same idea is then applied to the Squint algorithm for learning with expert advice, removing the $\ln\ln T$ factor from its data-independent bound. In short, the note offers a simpler interpretation of the technique and broadens its use, tightening the guarantees without extra complexity.
shifted KT potentials, Krichevsky–Trofimov algorithm, prior in Bayesian methods, parameter-free learning, learning with expert advice, Squint algorithm, performance bounds, logarithmic factors, data-independent bound, online learning
Authors
Francesco Orabona
Abstract
In Orabona and Pál [2016], we introduced the shifted KT potentials to remove the $\ln \ln T$ factor from the regret bound for parameter-free learning with expert advice. In this short technical note, I show that this construction is equivalent to changing the prior in the Krichevsky–Trofimov algorithm. I then show how to use the same idea to remove the $\ln \ln T$ factor from the data-independent bound for the Squint algorithm.
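For intuition on the "changing the prior" viewpoint, here is a minimal sketch of a standard fact about the Krichevsky–Trofimov estimator (background context, not material from the note itself): KT prediction is Bayesian prediction of a binary sequence under a Beta(1/2, 1/2) (Jeffreys) prior, so changing the prior only shifts the predictor's pseudo-counts. Writing $n_1$ and $n_0$ for the counts of ones and zeros seen so far,

$$
\Pr\left(x_{t+1}=1 \mid n_1, n_0\right)
= \frac{\int_0^1 p^{\,n_1+1}(1-p)^{n_0}\,\mathrm{d}\pi(p)}{\int_0^1 p^{\,n_1}(1-p)^{n_0}\,\mathrm{d}\pi(p)}
= \frac{n_1+\tfrac{1}{2}}{n_1+n_0+1}
\quad\text{for}\quad \pi=\mathrm{Beta}\!\left(\tfrac{1}{2},\tfrac{1}{2}\right),
$$

which is the classical KT predictor. Replacing the Jeffreys prior with a general $\mathrm{Beta}(a,b)$ prior gives

$$
\Pr\left(x_{t+1}=1 \mid n_1, n_0\right) = \frac{n_1+a}{n_1+n_0+a+b},
$$

so a different prior amounts to nothing more than a different pseudo-count shift; the note identifies the shifted KT potentials with such a change of prior.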