Extended Regularized Dual Averaging Methods for Stochastic Optimization

Authors

  • Jonathan W. Siegel, Department of Mathematics, Pennsylvania State University, University Park, PA 16802, USA
  • Jinchao Xu, Department of Mathematics, Pennsylvania State University, University Park, PA 16802, USA

DOI:

https://doi.org/10.4208/jcm.2210-m2021-0106

Keywords:

Convex Optimization, Subgradient Methods, Structured Optimization, Non-smooth Optimization.

Abstract

We introduce a new algorithm, extended regularized dual averaging (XRDA), for solving regularized stochastic optimization problems, which generalizes the regularized dual averaging (RDA) method. The main novelty of the method is that it allows flexible control of the backward step size: for instance, the backward step size used in RDA grows without bound, while in XRDA it can be kept bounded. We demonstrate experimentally that this additional control over the backward step size can speed up the convergence of the algorithm while preserving desired properties of the iterates, such as sparsity. Theoretically, we show that the XRDA method achieves the same convergence rate as RDA for general convex objectives.
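For readers unfamiliar with the baseline that XRDA generalizes, the following minimal Python sketch implements the classical ℓ1-regularized RDA update of Xiao (2010). Here the quantity s = √t/γ is the backward (proximal) step size applied to the regularizer, and it grows without bound, which is exactly the behavior the abstract says XRDA relaxes. The names grad_oracle, lam, gamma, and s_max are illustrative, and the optional cap s_max is only a schematic stand-in for XRDA's flexible control of the backward step size, not the paper's exact update rule.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal (backward) step for the l1 norm: argmin_x 0.5*||x - v||^2 + tau*||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def l1_rda(grad_oracle, dim, lam=0.1, gamma=1.0, num_steps=1000, s_max=None):
    """Classical l1-RDA (Xiao, 2010), with a hypothetical cap on the backward step size.

    grad_oracle(x) should return a stochastic subgradient of the unregularized loss at x.
    """
    x = np.zeros(dim)
    g_bar = np.zeros(dim)
    for t in range(1, num_steps + 1):
        g = grad_oracle(x)               # stochastic subgradient at the current iterate
        g_bar += (g - g_bar) / t         # running average of all subgradients so far
        s = np.sqrt(t) / gamma           # RDA's backward step size; grows without bound
        if s_max is not None:
            s = min(s, s_max)            # illustrative cap, in the spirit of XRDA's bounded step size
        x = soft_threshold(-s * g_bar, s * lam)  # closed-form RDA step for l1 regularization
    return x
```

Note that whenever a coordinate of the averaged subgradient satisfies |ḡ_t^{(i)}| ≤ λ, the update sets that coordinate exactly to zero; this thresholding is the mechanism behind the sparsity of the iterates mentioned in the abstract.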

Published

2023-04-25

Section

Articles