
Nonlinear Conjugate Gradient Methods for Unconstrained Test Functions through an Approximate Wolfe Line Search


Ishaq A. Adam
Ayinde M. Abdullahi
Oyedepo Taiye
Ibrahim Salihu

Abstract

The nonlinear conjugate gradient method is a powerful iterative approach for solving unconstrained large-scale optimization problems. A crucial component of any conjugate gradient algorithm is the determination of a suitable step length, a task for which various strategies have been proposed. To assess and compare the performance of the approximate Wolfe line search technique, we conducted numerical tests across nine variants of the nonlinear conjugate gradient method. A notable finding emerged from these experiments: the Dai-Yuan variant converged faster than its counterparts. The approximate Wolfe line search, coupled with the distinctive features of the Dai-Yuan update, contributed to its efficiency in navigating the optimization landscape. This empirical study sheds light on the nuanced dynamics within nonlinear conjugate gradient methods and underscores the significance of the strategy used to approximate the Wolfe line search. The observed faster convergence of the Dai-Yuan method not only validates its efficacy but also suggests its applicability in scenarios where rapid and effective optimization is paramount.
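
For context, the quantities the abstract refers to can be stated concretely. What follows is the standard formulation from the conjugate gradient literature; the notation is assumed here, not taken from the paper. The methods generate iterates

    x_{k+1} = x_k + \alpha_k d_k,    d_{k+1} = -g_{k+1} + \beta_k d_k,    d_0 = -g_0,

where g_k = \nabla f(x_k), \alpha_k is the step length returned by the line search, and the choice of the scalar \beta_k is what distinguishes the variants. The Dai-Yuan variant uses

    \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^{\top}(g_{k+1} - g_k)},

and the approximate Wolfe conditions of Hager and Zhang accept a step length \alpha whenever

    \sigma\, g_k^{\top} d_k \le g(x_k + \alpha d_k)^{\top} d_k \le (2\delta - 1)\, g_k^{\top} d_k,    with 0 < \delta < 1/2 and \delta \le \sigma < 1,

replacing the sufficient-decrease test of the ordinary Wolfe conditions with a gradient-only inequality that is less vulnerable to rounding error when nearby function values are almost equal, as happens close to a minimizer.

As an illustration only, a minimal sketch of a Dai-Yuan iteration driven by a bisection-style approximate Wolfe line search might look as follows. The function names, parameter values (delta = 0.1, sigma = 0.9), and the Rosenbrock test problem are assumptions for demonstration, not the authors' implementation or test set.

import numpy as np

def approx_wolfe_step(grad, x, d, delta=0.1, sigma=0.9, alpha0=1.0, max_iter=50):
    # Accept alpha when sigma*g0d <= g(x + alpha*d)^T d <= (2*delta - 1)*g0d,
    # the approximate Wolfe conditions; both bounds use only gradients.
    g0d = grad(x) @ d                      # initial slope, negative for a descent d
    lo, hi, alpha = 0.0, np.inf, alpha0
    for _ in range(max_iter):
        gd = grad(x + alpha * d) @ d       # slope at the trial step
        if gd < sigma * g0d:               # slope still steeply negative: enlarge
            lo = alpha
            alpha = 2.0 * alpha if hi == np.inf else 0.5 * (lo + hi)
        elif gd > (2.0 * delta - 1.0) * g0d:  # slope too positive: overshoot, shrink
            hi = alpha
            alpha = 0.5 * (lo + hi)
        else:                              # both inequalities hold: accept
            return alpha
    return alpha                           # best effort if the loop runs out

def dai_yuan_cg(grad, x0, tol=1e-6, max_iter=2000):
    # Dai-Yuan nonlinear CG: beta_k = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k)).
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = approx_wolfe_step(grad, x, d)
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (d @ (g_new - g))  # Dai-Yuan parameter
        d = -g_new + beta * d
        g = g_new
    return x

# Usage on the Rosenbrock function, a classic unconstrained test problem.
rosen_grad = lambda x: np.array([
    -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
    200.0 * (x[1] - x[0] ** 2),
])
print(dai_yuan_cg(rosen_grad, [-1.2, 1.0]))  # expected to approach (1, 1)

A design point visible in the sketch: the approximate Wolfe test uses only directional derivatives, so the line search never evaluates the objective itself, which mirrors the original motivation for the approximate Wolfe conditions. A further convenience of the Dai-Yuan choice is that any step satisfying these conditions makes the denominator d_k^T(g_{k+1} - g_k) positive, keeping \beta_k^{DY} well defined.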

