Abstract: Deep learning methods achieve state-of-the-art performance in many application scenarios. Yet, these methods require a significant amount of hyperparameter tuning to achieve the best results. In particular, tuning the learning rates in the stochastic optimization process is still one of the main bottlenecks. In this talk, I will propose a new stochastic gradient descent procedure that does not require any learning rate setting. Contrary to previous methods, we neither adapt the learning rates nor make use of the assumed curvature of the objective function. Instead, we reduce the optimization process to a game of betting on a non-stochastic coin and propose an optimal strategy based on a generalization of Kelly betting. Moreover, this reduction can also be used for other machine learning problems. Theoretical convergence is proven for convex and quasi-convex functions, and empirical evidence shows the advantage of our algorithm over popular stochastic gradient algorithms.
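To give a flavor of the reduction described in the abstract, here is a minimal sketch of learning-rate-free optimization via coin betting with the Krichevsky-Trofimov (KT) betting fraction, one member of the Kelly-style family the talk discusses. The iterate is the bettor's signed bet, the "coin" outcome is the (clipped) negative gradient, and all names, the initial endowment `eps`, and the toy quadratic are illustrative assumptions, not the speaker's actual implementation.

```python
import numpy as np

def coin_betting_minimize(grad, w0, steps=5000, eps=1.0):
    """Sketch: minimize a convex function with no learning rate.

    Each round, bet a KT fraction of the current wealth; the coin
    outcome is the negative gradient clipped to [-1, 1].
    """
    w0 = np.asarray(w0, dtype=float)
    wealth = np.full_like(w0, eps)   # initial endowment per coordinate
    coin_sum = np.zeros_like(w0)     # running sum of past coin outcomes
    iterates = []
    for t in range(1, steps + 1):
        beta = coin_sum / t          # KT betting fraction: no tuning needed
        bet = beta * wealth          # signed amount wagered this round
        w = w0 + bet                 # current iterate = initial point + bet
        iterates.append(w.copy())
        # Coin outcome: negative gradient, clipped so |c| <= 1 keeps wealth > 0.
        c = np.clip(-np.asarray(grad(w), dtype=float), -1.0, 1.0)
        wealth += c * bet            # settle the bet
        coin_sum += c
    # For convex objectives, the average iterate is the one with guarantees.
    return np.mean(iterates, axis=0)

# Toy usage: minimize f(w) = (w - 3)^2; the average iterate approaches 3.
w_avg = coin_betting_minimize(lambda w: 2.0 * (w - 3.0), np.zeros(1))
```

Note that no step-size parameter appears anywhere: when the bets point downhill, wealth grows and the effective step sizes grow with it automatically.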
Bio: Francesco Orabona is an Assistant Professor at Stony Brook University. His research interests are in the area of theoretically motivated and efficient machine learning algorithms, with an emphasis on online and stochastic methods. He received his PhD in Electrical Engineering from the University of Genoa in 2007. He is (co)author of more than 60 peer-reviewed papers.