Convergence Rate of Gradient Descent Method for Multi-Objective Optimization

Authors

  • Liaoyuan Zeng, Institute of Computational Mathematics and Scientific/Engineering Computing, State Key Laboratory of Scientific and Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
  • Yuhong Dai, Institute of Computational Mathematics and Scientific/Engineering Computing, State Key Laboratory of Scientific and Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
  • Yakui Huang, School of Science, Hebei University of Technology, Tianjin, China

DOI:

https://doi.org/10.4208/jcm.1808-m2017-0214

Keywords:

Multi-objective optimization, Gradient descent, Convergence rate.

Abstract

We study the convergence rate of the gradient descent method for unconstrained multi-objective optimization problems (MOP). Under standard assumptions, we prove that the gradient descent method with constant step sizes converges sublinearly when the objective functions are convex, and that the rate can be strengthened to linear when the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOP enjoys the same convergence properties as in scalar optimization.
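To make the setting concrete, here is a minimal sketch of the gradient descent method for a two-objective problem with Armijo line search. The descent direction is the negative of the minimum-norm element of the convex hull of the two gradients, which for two objectives has a closed form in the combination weight `lam`; all names and tolerances below are illustrative choices, not the paper's notation.

```python
import numpy as np

def mo_gradient_descent(f_list, g_list, x0, sigma=1e-4, tol=1e-8, max_iter=500):
    """Gradient descent for a two-objective problem with Armijo line search.

    The direction d = -(lam*g1 + (1-lam)*g2) is the negative minimum-norm
    point of the convex hull of the gradients; it is a common descent
    direction for both objectives, and ||d|| = 0 at Pareto critical points.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g1, g2 = g_list[0](x), g_list[1](x)
        diff = g2 - g1
        denom = diff @ diff
        # Closed-form minimizer of ||lam*g1 + (1-lam)*g2|| over lam in [0, 1]
        lam = 0.5 if denom == 0 else float(np.clip((g2 @ diff) / denom, 0.0, 1.0))
        d = -(lam * g1 + (1.0 - lam) * g2)
        if np.linalg.norm(d) < tol:  # Pareto critical: stop
            break
        # Armijo backtracking: require sufficient decrease in every objective
        t = 1.0
        while any(f(x + t * d) > f(x) + sigma * t * (g(x) @ d)
                  for f, g in zip(f_list, g_list)):
            t *= 0.5
        x = x + t * d
    return x

# Example: two strongly convex quadratics whose Pareto set is the
# segment between their minimizers (0, 0) and (1, 1).
b = np.array([1.0, 1.0])
f_list = [lambda x: float(x @ x), lambda x: float((x - b) @ (x - b))]
g_list = [lambda x: 2.0 * x, lambda x: 2.0 * (x - b)]
x_star = mo_gradient_descent(f_list, g_list, x0=[2.0, -1.0])
print(x_star)  # a Pareto critical point (c, c) with c in [0, 1]
```

For strongly convex objectives such as these quadratics, the iterates converge linearly to a Pareto critical point, matching the rate proved in the paper; with merely convex objectives the guaranteed rate is sublinear.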

Published

2019-04-29

Section

Articles