Convergence of a Generalized Primal-Dual Algorithm with an Improved Condition for Saddle Point Problems

Authors

  • Fan Jiang
  • Yueying Luo
  • Xingju Cai
  • Tanxing Wang

DOI

https://doi.org/10.4208/nmtma.OA-2024-0105

Keywords

Generalized primal-dual algorithm, saddle point problem, convex programming, convergence rate.

Abstract

We consider a general convex-concave saddle point problem that frequently arises in large-scale image processing. First-order primal-dual algorithms have garnered significant attention due to their promising results in solving saddle point problems. Notably, these algorithms exhibit improved performance with larger step sizes. In a recent series of articles, the upper bound on the step sizes has been increased, thereby relaxing the condition that guarantees convergence. This paper analyzes the generalized primal-dual method proposed in [B. He, F. Ma, S. Xu, X. Yuan, SIAM J. Imaging Sci. 15 (2022)] and introduces an improved condition that ensures its convergence. This enhanced condition also encompasses the optimal upper bound on the step sizes in the primal-dual hybrid gradient method. We establish both the global convergence of the iterates and the ergodic $\mathcal{O}(1/N)$ convergence rate for the objective function value in the generalized primal-dual algorithm under the enhanced condition.

Published

2025-05-16

Section

Articles