Convergence Rate of Gradient Descent Method for Multi-Objective Optimization
DOI: https://doi.org/10.4208/jcm.1808-m2017-0214
Keywords: Multi-objective optimization, Gradient descent, Convergence rate.
Abstract
The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOP). Under standard assumptions, we prove that the gradient descent method with constant step sizes converges sublinearly when the objective functions are convex, and that the rate improves to linear when the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOP enjoys the same convergence properties as its counterpart for scalar optimization.
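The paper itself provides no code, but the scheme named in the abstract can be illustrated with a small sketch. The snippet below is a minimal, hypothetical Python implementation for two objectives, assuming the common multi-objective steepest-descent direction (the negated minimum-norm convex combination of the gradients) and an Armijo-type backtracking rule applied to all objectives. The function names, parameter values (`beta`, `c`, `t0`), and the quadratic test problem are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's code): gradient descent for a
# two-objective MOP, using the negated minimum-norm convex combination of
# the gradients as search direction and Armijo backtracking on both
# objectives. All names and parameter values are assumptions for the demo.

import numpy as np

def mo_descent_direction(g1, g2):
    """Minimum-norm point in the convex hull of {g1, g2}, negated.

    For two objectives, min_{lam in [0,1]} ||lam*g1 + (1-lam)*g2||^2
    has a closed-form solution.
    """
    diff = g1 - g2
    denom = diff @ diff
    lam = 0.0 if denom == 0.0 else np.clip(-(diff @ g2) / denom, 0.0, 1.0)
    return -(lam * g1 + (1.0 - lam) * g2)

def armijo_step(fs, grads, x, d, beta=0.5, c=1e-4, t0=1.0):
    """Backtrack until every objective satisfies the Armijo condition."""
    t = t0
    fx = [f(x) for f in fs]
    slopes = [g(x) @ d for g in grads]
    while any(f(x + t * d) > fxi + c * t * s
              for f, fxi, s in zip(fs, fx, slopes)):
        t *= beta
    return t

def mo_gradient_descent(fs, grads, x0, max_iter=200, tol=1e-8):
    """Gradient descent for a two-objective MOP; stops near Pareto criticality."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = mo_descent_direction(grads[0](x), grads[1](x))
        if np.linalg.norm(d) < tol:   # no common descent direction remains
            break
        x = x + armijo_step(fs, grads, x, d) * d
    return x

# Usage: two strongly convex quadratics with different minimizers.
f1 = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
f2 = lambda x: 0.5 * np.sum((x + 1.0) ** 2)
g1 = lambda x: x - 1.0
g2 = lambda x: x + 1.0
x_star = mo_gradient_descent([f1, f2], [g1, g2], x0=np.array([3.0, -2.0]))
print(x_star)   # a Pareto-critical point between the two minimizers
```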
Published: 2019-04-29