NEW TWO-STEP CONJUGATE GRADIENT
METHOD FOR UNCONSTRAINED OPTIMIZATION

Abstract

Two-step methods are secant-like techniques of the quasi-Newton type that, unlike the classical methods, construct nonlinear alternatives to the quantities used in the standard Secant equation. They incorporate data from the two most recent iterations to build an alternative to the Secant equation, with the aim of producing better Hessian approximations that induce faster convergence to the minimizer of the objective function. Numerical results reported in several papers on the subject show that such methods yield substantial savings in both iteration and function-evaluation counts. Encouraged by this performance, we explore employing these methods in the development of a new Conjugate Gradient (CG) algorithm. CG methods are popular for large-scale problems and in settings where memory resources are scarce. The numerical experiments on the new method are encouraging and open an avenue for further investigation of such techniques and of their merits in a broad range of applications.
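For background, the classical Secant equation asks the updated Hessian approximation B_{k+1} to satisfy B_{k+1} s_k = y_k, where s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k; two-step methods, in the spirit of Ford and Moghrabi, replace this pair with quantities built from the two most recent steps and gradient differences. To make the CG framework itself concrete, the sketch below shows a minimal nonlinear conjugate gradient loop in Python/NumPy. It is illustrative only: the abstract does not state the paper's two-step update, so the sketch uses the classical Polak-Ribiere beta with a nonnegativity safeguard and a simple Armijo backtracking line search; the function names, parameters, and the Rosenbrock test problem are assumptions added for demonstration, not the authors' implementation.

# Minimal nonlinear CG sketch (illustrative; not the paper's two-step method).
import numpy as np

def armijo_backtracking(f, g, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    # Simple Armijo backtracking line search (the CG literature typically
    # uses Wolfe-type conditions; this is only a compact stand-in).
    alpha = alpha0
    fx, slope = f(x), g(x).dot(d)
    while f(x + alpha * d) > fx + c * alpha * slope and alpha > 1e-12:
        alpha *= rho
    return alpha

def nonlinear_cg(f, g, x0, tol=1e-6, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    grad = g(x)
    d = -grad                                  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(grad) < tol:
            break
        alpha = armijo_backtracking(f, g, x, d)
        x_new = x + alpha * d
        grad_new = g(x_new)
        y = grad_new - grad                    # gradient difference y_k
        beta = max(grad_new.dot(y) / grad.dot(grad), 0.0)   # Polak-Ribiere+ beta
        d = -grad_new + beta * d               # new search direction
        if grad_new.dot(d) >= 0.0:             # restart if not a descent direction
            d = -grad_new
        x, grad = x_new, grad_new
    return x

# Demonstration on the two-dimensional Rosenbrock function.
f = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
g = lambda x: np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                        200.0 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, g, [-1.2, 1.0]))        # should approach the minimizer (1, 1)

In the two-step setting considered in the paper, the coefficient used to build the new direction would instead be derived from quantities that blend (s_k, y_k) with (s_{k-1}, y_{k-1}); the precise formula is given in the full text rather than in this abstract.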

Citation details of the article



Journal: International Journal of Applied Mathematics
Journal ISSN (Print): 1311-1728
Journal ISSN (Electronic): 1314-8060
Volume: 33
Issue: 5
Year: 2020

DOI: 10.12732/ijam.v33i5.8
