Conjugate gradient methods

The conjugate gradient (CG) method is often implemented as an iterative algorithm and can be considered as lying between Newton’s method, a second-order method that uses both the Hessian and the gradient, and the method of steepest descent, a first-order method that uses only the gradient. Newton’s method usually reduces the number of iterations needed, but computing the Hessian matrix and its inverse increases the work required per iteration. Steepest descent takes repeated steps in the direction opposite to the gradient of the function at the current point. It often takes steps in the same direction as earlier ones, resulting in slow convergence (Figure 1). The conjugate gradient method was developed to avoid the high computational cost of Newton’s method while accelerating the convergence rate of steepest descent.
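To make the zig-zag behavior in Figure 1 concrete, the following minimal Python sketch applies steepest descent with exact line search to the quadratic $f(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} - \mathbf{b}^T\mathbf{x}$; the matrix, right-hand side, starting point, and iteration count are illustrative choices, not taken from this article:

```python
import numpy as np

# Illustrative ill-conditioned SPD system (not from the article)
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])

x = np.array([-2.0, -2.0])            # starting point
for k in range(20):
    g = b - A @ x                     # negative gradient of f at x
    alpha = (g @ g) / (g @ (A @ g))   # exact line search along g
    x = x + alpha * g                 # steepest-descent step
    print(k, x, np.linalg.norm(g))    # residual norm shrinks slowly
```

With exact line search on a quadratic, each new direction is orthogonal to the previous one, so the directions repeat every two steps and produce the characteristic zig-zag that the conjugate gradient method avoids.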

The idea of the CG method is to pick n mutually orthogonal search directions first and, in each search direction, take exactly one step, with the step size chosen so that the remaining error is orthogonal to that direction. The solution is reached after at most n steps [2]; theoretically, the number of iterations needed by the CG method equals the number of distinct eigenvalues of A, i.e. at most n. This makes it attractive for large and sparse problems. The method can be used to solve least-squares problems and can also be generalized to a minimization method for general smooth functions [2].
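This eigenvalue property can be checked numerically. The sketch below (the matrix construction, size, and tolerance are illustrative choices, not from this article) runs the standard CG iteration, presented later in this article as Alg 3, on a 60×60 symmetric positive definite matrix that has only three distinct eigenvalues; it reaches round-off-level residuals after three iterations:

```python
import numpy as np

# Illustrative SPD matrix with exactly three distinct eigenvalues
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((60, 60)))
A = Q @ np.diag([1.0] * 20 + [5.0] * 20 + [10.0] * 20) @ Q.T
b = rng.standard_normal(60)

x = np.zeros(60)
g = b - A @ x                          # initial residual
d = g.copy()
for k in range(60):
    if np.linalg.norm(g) < 1e-10:
        print("converged after", k, "iterations")  # expected: 3
        break
    alpha = (g @ g) / (d @ (A @ d))
    x = x + alpha * d
    g_new = g - alpha * (A @ d)
    beta = -(g_new @ g_new) / (g @ g)  # sign convention used in this article
    d = g_new - beta * d
    g = g_new
```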

Theory

The definition of A-conjugate direction

To illustrate the idea of conjugacy, the coordinate axes can be specified as search directions (Figure 2). The first step (horizontal) leads to the correct $x$ coordinate of the solution $\mathbf{x}^*$. The second step (vertical) reaches the correct $y$ coordinate of the solution $\mathbf{x}^*$, and the search is then finished. We define the error $\mathbf{e}_i = \mathbf{x}^* - \mathbf{x}_i$, a vector that indicates how far $\mathbf{x}_i$ is from the solution. The residual $\mathbf{g}_i = \mathbf{b} - \mathbf{A}\mathbf{x}_i$ indicates how far $\mathbf{A}\mathbf{x}_i$ is from the correct value of $\mathbf{b}$. Notice that $\mathbf{e}_1$ is orthogonal to $\mathbf{d}_0$. In general, for each step we choose a point

\[ \mathbf{x}_{i+1} = \mathbf{x}_i + \alpha_i \mathbf{d}_i . \]

To find the value of $\alpha_i$, use the fact that $\mathbf{e}_{i+1}$ should be orthogonal to $\mathbf{d}_i$, so that there is no need to step in the direction of $\mathbf{d}_i$ again in the later search. Using this condition, we have

\[ \mathbf{d}_i^T \mathbf{e}_{i+1} = 0 \]
\[ \mathbf{d}_i^T (\mathbf{e}_i - \alpha_i \mathbf{d}_i) = 0 \]
\[ \alpha_i = \frac{\mathbf{d}_i^T \mathbf{e}_i}{\mathbf{d}_i^T \mathbf{d}_i} . \]

However, this expression cannot be evaluated directly, because computing $\mathbf{e}_i$ requires knowing the solution $\mathbf{x}^*$ itself. Replacing orthogonality of the search directions with $\mathbf{A}$-conjugacy removes this difficulty, as motivated next.

The motivation of A-conjugacy [4]

Expand the solution $\mathbf{x}^*$ in terms of $n$ search directions:

\[ \mathbf{x}^* = \sum_{i=0}^{n-1} \alpha_i \mathbf{d}_i \]

Then apply the transformation $\mathbf{A}$ to both sides:

\[ \mathbf{A}\mathbf{x}^* = \sum_{i=0}^{n-1} \alpha_i \mathbf{A}\mathbf{d}_i \]
\[ \mathbf{d}_k^T \mathbf{A}\mathbf{x}^* = \sum_{i=0}^{n-1} \alpha_i \mathbf{d}_k^T \mathbf{A}\mathbf{d}_i \]

Using $\mathbf{A}\mathbf{x}^* = \mathbf{b}$ on the left and the $\mathbf{A}$-conjugacy condition $\mathbf{d}_k^T \mathbf{A}\mathbf{d}_i = 0$ for $i \neq k$ on the right, only one term of the sum survives:

\[ \mathbf{d}_k^T \mathbf{b} = \alpha_k \mathbf{d}_k^T \mathbf{A}\mathbf{d}_k \]

So $\alpha_k$ can be obtained as follows:

\[ \alpha_k = \frac{\mathbf{d}_k^T \mathbf{b}}{\mathbf{d}_k^T \mathbf{A}\mathbf{d}_k} \]
\[ \mathbf{x}^* = \sum_{i=0}^{n-1} \alpha_i \mathbf{d}_i = \sum_{i=0}^{n-1} \frac{\mathbf{d}_i^T \mathbf{b}}{\mathbf{d}_i^T \mathbf{A}\mathbf{d}_i}\, \mathbf{d}_i . \]
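As a quick numerical check of this expansion, the following Python sketch (the matrix and right-hand side are illustrative choices, not from this article) uses the eigenvectors of a small SPD matrix, which are automatically $\mathbf{A}$-conjugate, as the directions $\mathbf{d}_i$ and recovers $\mathbf{x}^*$ from the weighted sum:

```python
import numpy as np

# Illustrative SPD system (not from the article)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# Eigenvectors of a symmetric matrix are orthogonal, and since
# A d_i = lambda_i d_i they are also A-conjugate: d_j^T A d_i = 0 for i != j.
_, V = np.linalg.eigh(A)
dirs = V.T                                   # each row is one direction d_i

x_star = sum((d @ b) / (d @ (A @ d)) * d for d in dirs)
print(x_star)                                # matches the direct solve
print(np.linalg.solve(A, b))
```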

Conjugate Direction Theorem

Theorem: let $\{\mathbf{d}_0, \dots, \mathbf{d}_{n-1}\}$ be a set of nonzero $\mathbf{A}$-conjugate directions. For any starting point $\mathbf{x}_0$, the sequence generated by

\[ \mathbf{x}_{k+1} = \mathbf{x}_k + \alpha_k \mathbf{d}_k \]
\[ \mathbf{g}_k = \mathbf{b} - \mathbf{A}\mathbf{x}_k \]
\[ \alpha_k = \frac{\mathbf{g}_k^T \mathbf{d}_k}{\mathbf{d}_k^T \mathbf{A}\mathbf{d}_k} = \frac{(\mathbf{b} - \mathbf{A}\mathbf{x}_k)^T \mathbf{d}_k}{\mathbf{d}_k^T \mathbf{A}\mathbf{d}_k} \]

converges to the solution $\mathbf{x}^*$ of $\mathbf{A}\mathbf{x} = \mathbf{b}$ after at most $n$ steps.

Proof:
The error $\mathbf{e}_0 = \mathbf{x}^* - \mathbf{x}_0$ from the starting point to the solution can be written as a combination of the conjugate directions:

\[ \mathbf{x}^* - \mathbf{x}_0 = \alpha_0 \mathbf{d}_0 + \alpha_1 \mathbf{d}_1 + \dots + \alpha_{n-1} \mathbf{d}_{n-1} \]

The steps traversed from the starting point to the point $\mathbf{x}_k$ can be expressed as:

\[ \mathbf{x}_k - \mathbf{x}_0 = \alpha_0 \mathbf{d}_0 + \alpha_1 \mathbf{d}_1 + \dots + \alpha_{k-1} \mathbf{d}_{k-1} \]

The residual at the point $\mathbf{x}_k$ can be written in terms of the distance to the solution:

\[ \mathbf{g}_k = \mathbf{b} - \mathbf{A}\mathbf{x}_k = \mathbf{A}\mathbf{x}^* - \mathbf{A}\mathbf{x}_k = \mathbf{A}(\mathbf{x}^* - \mathbf{x}_k) \]

Premultiplying the first expansion by $\mathbf{d}_k^T \mathbf{A}$ and using conjugacy, $\alpha_k$ can be isolated as follows:

\[ \mathbf{d}_k^T \mathbf{A}(\mathbf{x}^* - \mathbf{x}_0) = \mathbf{d}_k^T \mathbf{A}(\alpha_0 \mathbf{d}_0 + \alpha_1 \mathbf{d}_1 + \dots + \alpha_{n-1} \mathbf{d}_{n-1}) = \alpha_k \mathbf{d}_k^T \mathbf{A}\mathbf{d}_k \]
\[ \alpha_k = \frac{\mathbf{d}_k^T \mathbf{A}(\mathbf{x}^* - \mathbf{x}_0)}{\mathbf{d}_k^T \mathbf{A}\mathbf{d}_k} \]

However, the solution $\mathbf{x}^*$ would still have to be known in order to calculate $\alpha_k$. The following manipulations remove $\mathbf{x}^*$ from the expression above:

\[ \mathbf{d}_k^T \mathbf{A}(\mathbf{x}_k - \mathbf{x}_0) = \mathbf{d}_k^T \mathbf{A}(\alpha_0 \mathbf{d}_0 + \alpha_1 \mathbf{d}_1 + \dots + \alpha_{k-1} \mathbf{d}_{k-1}) = 0 \]
\[ \mathbf{d}_k^T \mathbf{A}(\mathbf{x}^* - \mathbf{x}_0) = \mathbf{d}_k^T \mathbf{A}(\mathbf{x}^* - \mathbf{x}_k + \mathbf{x}_k - \mathbf{x}_0) = \mathbf{d}_k^T \mathbf{A}(\mathbf{x}^* - \mathbf{x}_k) \]
\[ \alpha_k = \frac{\mathbf{d}_k^T \mathbf{A}(\mathbf{x}^* - \mathbf{x}_0)}{\mathbf{d}_k^T \mathbf{A}\mathbf{d}_k} = \frac{\mathbf{d}_k^T \mathbf{A}(\mathbf{x}^* - \mathbf{x}_k)}{\mathbf{d}_k^T \mathbf{A}\mathbf{d}_k} = \frac{\mathbf{d}_k^T \mathbf{g}_k}{\mathbf{d}_k^T \mathbf{A}\mathbf{d}_k} \]
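The theorem can also be verified numerically. The following Python sketch (the system, the starting point, and the particular conjugate set are illustrative choices, not from this article) builds an $\mathbf{A}$-conjugate set by Gram-Schmidt conjugation of the standard basis, takes one step per direction with $\alpha_k = \mathbf{d}_k^T \mathbf{g}_k / \mathbf{d}_k^T \mathbf{A}\mathbf{d}_k$, and lands on the exact solution after $n$ steps:

```python
import numpy as np

# Illustrative SPD system and arbitrary starting point (not from the article)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.array([5.0, -3.0, 2.0])               # x_0

# Build an A-conjugate set by Gram-Schmidt conjugation of the standard basis
dirs = []
for v in np.eye(3):
    d = v.copy()
    for dj in dirs:
        d = d - (v @ (A @ dj)) / (dj @ (A @ dj)) * dj
    dirs.append(d)

# One step per direction, with the step size from the theorem
for d in dirs:
    g = b - A @ x                            # residual g_k at the current point
    alpha = (d @ g) / (d @ (A @ d))          # alpha_k = d_k^T g_k / d_k^T A d_k
    x = x + alpha * d
print(x)                                     # equals the exact solution ...
print(np.linalg.solve(A, b))                 # ... up to round-off, after n = 3 steps
```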

The conjugate gradient method

The conjugate gradient method is a conjugate direction method in which the successive direction vectors are obtained by conjugating the successive gradients computed as the method progresses. The conjugate directions are not specified beforehand but are determined sequentially at each step of the iteration [4]. If the conjugate vectors are chosen carefully, not all of them may be needed to obtain a good approximation to the solution; therefore, the conjugate gradient method is regarded as an iterative method. This also allows approximate solutions of systems in which n is so large that a direct method would require too much time [2].

Derivation of Algorithm [2]

Alg 1: Conjugate direction method with a given conjugate set D

Set the maximum iteration n. Given a set $D = \{\mathbf{d}_1, \dots, \mathbf{d}_n\}$ of $\mathbf{A}$-conjugate directions and a starting point $\mathbf{x}_0$, take one step along each direction with the step size $\alpha_k$ from the Conjugate Direction Theorem.

Alg 1 assumes that the set $D$ of $\mathbf{A}$-conjugate directions is given in advance. This requirement can be removed by constructing each new direction from the current gradient, conjugating it against all previous directions:

\[ \mathbf{d}_{k+1} = \mathbf{g}_k - \sum_{i=1}^{k} \frac{\mathbf{g}_k^T \mathbf{A}\mathbf{d}_i}{\mathbf{d}_i^T \mathbf{A}\mathbf{d}_i}\, \mathbf{d}_i \]

Alg 2: Revised algorithm from Alg 1 without particular choice of D

Set the maximum iteration n. Choose $\mathbf{x}_0$, set $\mathbf{g}_0 = \mathbf{b} - \mathbf{A}\mathbf{x}_0$ and $\mathbf{d}_1 = \mathbf{g}_0$. For $i = 1, 2, \dots, n$ (stop early if $\mathbf{g}_{i-1} = \mathbf{0}$):

\[ \alpha_i = \frac{\mathbf{g}_{i-1}^T \mathbf{d}_i}{\mathbf{d}_i^T \mathbf{A}\mathbf{d}_i}, \qquad \mathbf{x}_i = \mathbf{x}_{i-1} + \alpha_i \mathbf{d}_i, \qquad \mathbf{g}_i = \mathbf{b} - \mathbf{A}\mathbf{x}_i, \qquad \mathbf{d}_{i+1} = \mathbf{g}_i - \sum_{j=1}^{i} \frac{\mathbf{g}_i^T \mathbf{A}\mathbf{d}_j}{\mathbf{d}_j^T \mathbf{A}\mathbf{d}_j}\, \mathbf{d}_j \]


The formulas in Alg 2 can be simplified as follows:

\[ \mathbf{x}_i = \mathbf{x}_{i-1} + \alpha_i \mathbf{d}_i \]
\[ \mathbf{b} - \mathbf{A}\mathbf{x}_i = \mathbf{b} - \mathbf{A}\mathbf{x}_{i-1} - \alpha_i \mathbf{A}\mathbf{d}_i \]
\[ \mathbf{g}_i = \mathbf{g}_{i-1} - \alpha_i \mathbf{A}\mathbf{d}_i \]

Then $\alpha_i$ and $\beta_{i+1}$ can be simplified by multiplying the gradient update above by $\mathbf{g}_i^T$ and $\mathbf{g}_{i-1}^T$ and using the orthogonality of successive gradients ($\mathbf{g}_i^T \mathbf{g}_{i-1} = 0$):

\[ \mathbf{g}_i^T \mathbf{g}_i = -\alpha_i \mathbf{g}_i^T \mathbf{A}\mathbf{d}_i \qquad\qquad \mathbf{g}_{i-1}^T \mathbf{g}_{i-1} = \alpha_i \mathbf{g}_{i-1}^T \mathbf{A}\mathbf{d}_i \]
\[ \mathbf{g}_{i-1}^T \mathbf{g}_{i-1} = \alpha_i \mathbf{g}_{i-1}^T \mathbf{A}\mathbf{d}_i = \alpha_i \mathbf{d}_i^T \mathbf{A}\mathbf{d}_i \quad\Longrightarrow\quad \alpha_i = \frac{\mathbf{g}_{i-1}^T \mathbf{g}_{i-1}}{\mathbf{d}_i^T \mathbf{A}\mathbf{d}_i} \]
\[ \beta_{i+1} = -\frac{\mathbf{g}_i^T \mathbf{g}_i}{\mathbf{g}_{i-1}^T \mathbf{g}_{i-1}} \]

This gives the following simplified version of Alg 2:

Alg 3: Simplified version from Alg 2

Set the maximum iteration n. Choose $\mathbf{x}_0$, set $\mathbf{g}_0 = \mathbf{b} - \mathbf{A}\mathbf{x}_0$ and $\mathbf{d}_1 = \mathbf{g}_0$. For $i = 1, 2, \dots, n$ (stop early if $\mathbf{g}_{i-1} = \mathbf{0}$):

\[ \alpha_i = \frac{\mathbf{g}_{i-1}^T \mathbf{g}_{i-1}}{\mathbf{d}_i^T \mathbf{A}\mathbf{d}_i}, \qquad \mathbf{x}_i = \mathbf{x}_{i-1} + \alpha_i \mathbf{d}_i, \qquad \mathbf{g}_i = \mathbf{g}_{i-1} - \alpha_i \mathbf{A}\mathbf{d}_i, \qquad \beta_{i+1} = -\frac{\mathbf{g}_i^T \mathbf{g}_i}{\mathbf{g}_{i-1}^T \mathbf{g}_{i-1}}, \qquad \mathbf{d}_{i+1} = \mathbf{g}_i - \beta_{i+1} \mathbf{d}_i \]

Numerical example

Consider the linear system $\mathbf{A}\mathbf{x} = \mathbf{b}$

\[ \mathbf{A}\mathbf{x} = \begin{bmatrix} 5 & 1 \\ 1 & 8 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \end{bmatrix} = \mathbf{b} . \]

The initial starting point is set to be

\[ \mathbf{x}_0 = \begin{bmatrix} 2 \\ 1 \end{bmatrix} . \]

Implement the conjugate gradient method to approximate the solution to the system.

Solution:
The exact solution is given below for later reference:

\[ \mathbf{x}^* = \begin{bmatrix} 22/39 \\ 7/39 \end{bmatrix} \approx \begin{bmatrix} 0.5641 \\ 0.1794 \end{bmatrix} . \]

Step 1: since this is the first iteration, use the residual vector $\mathbf{g}_0$ as the initial search direction $\mathbf{d}_1$. The method of selecting $\mathbf{d}_k$ will change in further iterations.

\[ \mathbf{g}_0 = \mathbf{b} - \mathbf{A}\mathbf{x}_0 = \begin{bmatrix} 3 \\ 2 \end{bmatrix} - \begin{bmatrix} 5 & 1 \\ 1 & 8 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} -8 \\ -8 \end{bmatrix} = \mathbf{d}_1 \]

Step 2: the scalar $\alpha_1$ can be calculated using the relationship

\[ \alpha_1 = \frac{\mathbf{g}_0^T \mathbf{g}_0}{\mathbf{d}_1^T \mathbf{A}\mathbf{d}_1} = \frac{\begin{bmatrix} -8 & -8 \end{bmatrix}\begin{bmatrix} -8 \\ -8 \end{bmatrix}}{\begin{bmatrix} -8 & -8 \end{bmatrix}\begin{bmatrix} 5 & 1 \\ 1 & 8 \end{bmatrix}\begin{bmatrix} -8 \\ -8 \end{bmatrix}} = \frac{2}{15} \]

Step 3: $\mathbf{x}_1$ can then be reached by the following formula. This step finishes the first iteration, the result being an "improved" approximate solution to the system, $\mathbf{x}_1$:

\[ \mathbf{x}_1 = \mathbf{x}_0 + \alpha_1 \mathbf{d}_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix} + \frac{2}{15}\begin{bmatrix} -8 \\ -8 \end{bmatrix} = \begin{bmatrix} 0.9333 \\ -0.0667 \end{bmatrix} \]

Step 4: now compute the next residual vector $\mathbf{g}_1$ using the formula

\[ \mathbf{g}_1 = \mathbf{g}_0 - \alpha_1 \mathbf{A}\mathbf{d}_1 = \begin{bmatrix} -8 \\ -8 \end{bmatrix} - \frac{2}{15}\begin{bmatrix} 5 & 1 \\ 1 & 8 \end{bmatrix}\begin{bmatrix} -8 \\ -8 \end{bmatrix} = \begin{bmatrix} -1.6 \\ 1.6 \end{bmatrix} \]

Step 5: compute the scalar $\beta_2$ that will be used to determine the next search direction $\mathbf{d}_2$.

\[ \beta_2 = -\frac{\mathbf{g}_1^T \mathbf{g}_1}{\mathbf{g}_0^T \mathbf{g}_0} = -\frac{\begin{bmatrix} -1.6 & 1.6 \end{bmatrix}\begin{bmatrix} -1.6 \\ 1.6 \end{bmatrix}}{\begin{bmatrix} -8 & -8 \end{bmatrix}\begin{bmatrix} -8 \\ -8 \end{bmatrix}} = -0.04 \]

Step 6: use this scalar $\beta_2$ to compute the next search direction $\mathbf{d}_2$:

\[ \mathbf{d}_2 = \mathbf{g}_1 - \beta_2 \mathbf{d}_1 = \begin{bmatrix} -1.6 \\ 1.6 \end{bmatrix} + 0.04\begin{bmatrix} -8 \\ -8 \end{bmatrix} = \begin{bmatrix} -1.92 \\ 1.28 \end{bmatrix} \]

Step 7: now compute the scalar $\alpha_2$ using the newly acquired $\mathbf{d}_2$ by the same method as that used for $\alpha_1$.

\[ \alpha_2 = \frac{\mathbf{g}_1^T \mathbf{g}_1}{\mathbf{d}_2^T \mathbf{A}\mathbf{d}_2} = \frac{\begin{bmatrix} -1.6 & 1.6 \end{bmatrix}\begin{bmatrix} -1.6 \\ 1.6 \end{bmatrix}}{\begin{bmatrix} -1.92 & 1.28 \end{bmatrix}\begin{bmatrix} 5 & 1 \\ 1 & 8 \end{bmatrix}\begin{bmatrix} -1.92 \\ 1.28 \end{bmatrix}} = 0.1923 \]

Step 8: finally, find $\mathbf{x}_2$ using the same method as that used to find $\mathbf{x}_1$.

\[ \mathbf{x}_2 = \mathbf{x}_1 + \alpha_2 \mathbf{d}_2 = \begin{bmatrix} 0.9333 \\ -0.0667 \end{bmatrix} + 0.1923\begin{bmatrix} -1.92 \\ 1.28 \end{bmatrix} = \begin{bmatrix} 0.5641 \\ 0.1794 \end{bmatrix} \]

Therefore, $\mathbf{x}_2$ is the approximate solution of the system, which matches the exact solution given above.
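For reference, the iteration above can be reproduced with a short Python sketch of the simplified algorithm (Alg 3); the code follows the sign convention used in this article, so $\beta_2$ comes out as $-0.04$:

```python
import numpy as np

A = np.array([[5.0, 1.0],
              [1.0, 8.0]])
b = np.array([3.0, 2.0])
x = np.array([2.0, 1.0])               # x_0

g = b - A @ x                          # g_0 = d_1 = [-8, -8]
d = g.copy()
for i in (1, 2):                       # n = 2 iterations for this 2x2 system
    alpha = (g @ g) / (d @ (A @ d))    # alpha_i = g_{i-1}^T g_{i-1} / d_i^T A d_i
    x = x + alpha * d                  # x_i = x_{i-1} + alpha_i d_i
    g_new = g - alpha * (A @ d)        # g_i = g_{i-1} - alpha_i A d_i
    beta = -(g_new @ g_new) / (g @ g)  # beta_{i+1} = -g_i^T g_i / g_{i-1}^T g_{i-1}
    d = g_new - beta * d               # d_{i+1} = g_i - beta_{i+1} d_i
    g = g_new
    print(f"x_{i} =", x)               # matches the iterates computed above

print("exact:", np.linalg.solve(A, b)) # [22/39, 7/39]
```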

Applications

Conjugate gradient methods have often been used to solve a wide variety of numerical problems, including linear and nonlinear algebraic equations, eigenvalue problems, and minimization problems. These applications are similar in that they involve large numbers of variables or dimensions. In these circumstances, any solution method that requires storing a full matrix of this order becomes impractical, and recourse to the conjugate gradient method may be the only alternative [5]. Here we demonstrate three application examples of the CG method.

Iterative image reconstruction

The conjugate gradient method is used to solve for the update in iterative image reconstruction problems. For example, in the magnetic resonance imaging (MRI) contrast known as quantitative susceptibility mapping (QSM), the reconstructed image $\chi$ is iteratively solved for from the magnetic field data $\mathbf{b}$ through the relation [6]

\[ \mathbf{b} = \mathbf{D}\chi \]

where $\mathbf{D}$ is the matrix expressing convolution with the dipole kernel in the Fourier domain. Because the problem is ill-posed, a physical prior is used in the reconstruction, which is framed as a constrained L1-norm minimization

\[ \min_{\chi} \; \left\| f(\chi) \right\|_1 \]
\[ \text{s.t.} \quad \left\| g(\chi) - c \right\|_2^2 \leq \varepsilon \]

A detailed treatment of the functions $f(\chi)$ and $g(\chi)$ can be found in [6]. This problem can be expressed as an unconstrained minimization problem via the Lagrange multiplier method

\[ \min_{\chi, \lambda} \; E(\chi, \lambda) \]
\[ E(\chi, \lambda) = \left\| f(\chi) \right\|_1 + \lambda \left( \left\| g(\chi) - c \right\|_2^2 - \varepsilon \right) \]

The first-order conditions require $\nabla_{\chi} E(\chi, \lambda) = 0$ and $\nabla_{\lambda} E(\chi, \lambda) = 0$. These conditions result in $\nabla_{\chi} f(\chi) + \nabla_{\chi} g(\chi) - \tilde{c} = 0$ and $\left\| g(\chi) - c \right\|_2^2 \approx \varepsilon$, respectively. The update can be solved for from $\nabla_{\chi} f(\chi) + \nabla_{\chi} g(\chi) - \tilde{c} = L(\chi)\chi - \tilde{c} = 0$ via fixed-point iteration [6].

\[ \chi_{n+1} = L^{-1}(\chi_n)\,\tilde{c} \]

This can also be expressed as a quasi-Newton problem, which is more robust to round-off error [6]:

\[ \chi_{n+1} = \chi_n - L^{-1}(\chi_n)\,\nabla E(\chi_n, \lambda) \]

This is solved with the CG method until the relative residual $\left\| \chi_{n+1} - \chi_n \right\|_2 / \left\| \chi_n \right\|_2 \leq \theta$, where $\theta$ is a specified tolerance, such as $10^{-2}$.
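As a schematic sketch only, not the actual implementation of [6]: the operators `L_matvec` and `grad_E`, the zero initial guess for the inner solve, and the tolerances below are hypothetical stand-ins, and the inner CG assumes the linearized operator is symmetric positive definite. An outer loop of this kind might wrap a CG inner solve as follows:

```python
import numpy as np

def cg_solve(matvec, rhs, x0, tol=1e-6, max_iter=200):
    """Solve matvec(x) = rhs with the (linear) conjugate gradient method.
    Assumes matvec represents a symmetric positive definite operator."""
    x = x0.copy()
    g = rhs - matvec(x)                      # residual
    d = g.copy()
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol * np.linalg.norm(rhs):
            break
        Ad = matvec(d)
        alpha = (g @ g) / (d @ Ad)
        x = x + alpha * d
        g_new = g - alpha * Ad
        beta = -(g_new @ g_new) / (g @ g)    # sign convention of this article
        d = g_new - beta * d
        g = g_new
    return x

def quasi_newton_update(chi0, L_matvec, grad_E, theta=1e-2, max_outer=50):
    """Hypothetical outer loop: chi_{n+1} = chi_n - L(chi_n)^{-1} grad_E(chi_n),
    where each application of L^{-1} is computed by an inner CG solve."""
    chi = chi0.copy()
    for _ in range(max_outer):
        # Inner CG solve of L(chi) * step = grad_E(chi)
        step = cg_solve(lambda v: L_matvec(chi, v), grad_E(chi),
                        np.zeros_like(chi))
        chi_new = chi - step
        if np.linalg.norm(chi_new - chi) / (np.linalg.norm(chi) + 1e-30) <= theta:
            return chi_new
        chi = chi_new
    return chi
```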

Additional L1 terms, such as a downsampled term [7], can be added, in which case the cost function is treated with penalty methods and the CG method is paired with Gauss-Newton to handle the nonlinear terms.

Facial recognition

Face recognition can be achieved by implementing conjugate gradient algorithms in combination with other methods. The basic steps are to decompose images into a set of time-frequency coefficients using the discrete wavelet transform (DWT) (Figure 3) [8], and then to use basic back propagation (BP) to train a neural network (NN). To overcome the slow convergence of BP based on steepest gradient descent, conjugate gradient methods are introduced. Generally, there are four types of CG methods for training a feedforward NN, namely Fletcher-Reeves CG, Polak-Ribière CG, Powell-Beale CG, and scaled CG. All of these CG methods include the steps demonstrated in Alg 3 with their respective modifications [8] [10].

Notice that the scalar $\beta$ is computed differently in the different versions of the CG method. For Fletcher-Reeves CG, it is defined as follows:

\[ \beta_{i+1} = -\frac{\mathbf{g}_i^T \mathbf{g}_i}{\mathbf{g}_{i-1}^T \mathbf{g}_{i-1}} \]

where $\mathbf{g}_{i-1}$ and $\mathbf{g}_i$ are the previous and current gradients, respectively. Fletcher and Reeves suggested restarting the CG method every n or n+1 iterations. This suggestion can be very helpful in practice [10].

The second version of CG, namely Polak-Ribière CG, was proposed in 1977 [11]. In this algorithm, the scalar at each iteration is computed as [12]:

\[ \beta_{i+1} = -\frac{\Delta \mathbf{g}_{i-1}^T \mathbf{g}_i}{\mathbf{g}_{i-1}^T \mathbf{g}_{i-1}}, \qquad \Delta \mathbf{g}_{i-1} = \mathbf{g}_i - \mathbf{g}_{i-1} \]

Powell showed that the Polak-Ribière CG method performs better than the Fletcher-Reeves CG method in many numerical experiments [10].
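The Fletcher-Reeves and Polak-Ribière rules differ only in how the scalar is computed from the current and previous gradients. A minimal Python sketch of the two updates, written with the sign convention used in this article (direction update $\mathbf{d}_{i+1} = \mathbf{g}_i - \beta_{i+1}\mathbf{d}_i$), is given below; the function names are illustrative:

```python
import numpy as np

def beta_fletcher_reeves(g_prev, g_curr):
    # beta_{i+1} = - g_i^T g_i / g_{i-1}^T g_{i-1}
    return -(g_curr @ g_curr) / (g_prev @ g_prev)

def beta_polak_ribiere(g_prev, g_curr):
    # beta_{i+1} = - (g_i - g_{i-1})^T g_i / g_{i-1}^T g_{i-1}
    return -((g_curr - g_prev) @ g_curr) / (g_prev @ g_prev)

def next_direction(g_curr, d_prev, beta):
    # d_{i+1} = g_i - beta_{i+1} d_i
    return g_curr - beta * d_prev
```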

The aforementioned CG variants, the Fletcher-Reeves and Polak-Ribière algorithms, periodically restart by using the steepest descent direction every n or n+1 iterations. A disadvantage of periodic restarting is that the immediate reduction in the objective function is usually less than it would be without the restart. Therefore, Powell proposed a new restart method based on an earlier version [10] [12]. In this method, the restart occurs whenever there is little orthogonality left between the current gradient and the previous gradient [10] [12].

Another version of CG, the scaled CG, was proposed by Moller. This algorithm uses a step-size scaling mechanism that avoids a time-consuming line search at each learning iteration. Moller [13] showed that the scaled CG method has superlinear convergence for most problems.

Conjugate gradient method in deep learning

In deep learning, the conjugate gradient method used to optimize network parameters can be regarded as something intermediate between gradient descent and Newton's method: it does not require storing, evaluating, and inverting the Hessian matrix, as Newton's method does. The optimization search in conjugate gradient is performed along conjugate directions, which generally produce faster convergence than gradient descent directions. The training directions are conjugate with respect to the Hessian matrix.

Let $\mathbf{d}$ denote the training direction vector. Starting with an initial parameter vector $\mathbf{w}_0$ and an initial training direction vector $\mathbf{d}_0 = -\mathbf{g}_0$, the conjugate gradient method constructs a sequence of training directions as:

\[ \mathbf{d}_{i+1} = -\mathbf{g}_{i+1} + \gamma_i \mathbf{d}_i \qquad \text{for } i = 0, 1, \dots \]

Here γ is called the conjugate parameter. For all conjugate gradient algorithms, the training direction is periodically reset to the negative of the gradient. The parameters are then improved according to the following expression:

\[ \mathbf{w}_{i+1} = \mathbf{w}_i + \eta_i \mathbf{d}_i \qquad \text{for } i = 0, 1, \dots \]

The training rate η is usually found by line minimization. Figure 4 depicts an activity diagram for the training process with the conjugate gradient. To improve the parameters, first compute the conjugate gradient training direction, and then search for a suitable training rate in that direction. This method has proved to be more effective than gradient descent in training neural networks, and conjugate gradient also performs well with very large neural networks since it does not require the Hessian matrix.
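As an illustration of how these updates might be arranged in code (the backtracking line search, the quadratic stand-in for a training loss, and all names below are assumptions for the sketch, not a specific deep learning library API), a minimal nonlinear conjugate gradient trainer with a Polak-Ribière style conjugate parameter could look like this:

```python
import numpy as np

def train_cg(loss_grad, w0, n_iter=100, restart_every=None):
    """Minimal nonlinear CG trainer: w_{i+1} = w_i + eta_i d_i with
    d_{i+1} = -g_{i+1} + gamma_i d_i (Polak-Ribiere style gamma)."""
    w = w0.copy()
    restart_every = restart_every or len(w0)    # periodic reset interval
    loss, g = loss_grad(w)
    d = -g
    for i in range(n_iter):
        # Backtracking line search for the training rate eta
        eta = 1.0
        loss_new, g_new = loss_grad(w + eta * d)
        while loss_new >= loss and eta > 1e-12:
            eta *= 0.5
            loss_new, g_new = loss_grad(w + eta * d)
        w = w + eta * d
        if np.linalg.norm(g_new) < 1e-12:       # gradient vanished: done
            return w
        # Polak-Ribiere conjugate parameter, clipped at zero
        gamma = max(0.0, (g_new @ (g_new - g)) / (g @ g))
        d = -g_new + gamma * d
        if (i + 1) % restart_every == 0:        # reset to steepest descent
            d = -g_new
        loss, g = loss_new, g_new
    return w

# Illustrative quadratic stand-in for a training loss (not a real network)
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])
loss_grad = lambda w: (0.5 * w @ A @ w - b @ w, A @ w - b)
print(train_cg(loss_grad, np.zeros(2)))         # approaches np.linalg.solve(A, b)
```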

Conclusion

References

  1. Refsnæs, Runar Heggelien, "A Brief Introduction to the Conjugate Gradient Method," 2009.
  2. W. Stuetzle, "The Conjugate Gradient Method," 2001. [Online]. Available: https://sites.stat.washington.edu/wxs/Stat538-w03/conjugate-gradients.pdf
  3. Jonathan Shewchuk, "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain," 1994.
  4. A. Singh and P. Ravikumar, "Conjugate Gradient Descent," 2012. [Online]. Available: http://www.cs.cmu.edu/~pradeepr/convexopt/Lecture_Slides/conjugate_direction_methods.pdf
  5. R. Fletcher, "Conjugate gradient methods for indefinite systems," in Numerical Analysis, Berlin, Heidelberg, 1976, pp. 73–89. doi: 10.1007/BFb0080116.
  6. T. Liu et al., "Morphology enabled dipole inversion for quantitative susceptibility mapping using structural consistency between the magnitude image and the susceptibility map," NeuroImage, vol. 59, no. 3, pp. 2560–2568, Feb. 2012. doi: 10.1016/j.neuroimage.2011.08.082.
  7. A. Roberts, P. Spincemaille, T. Nguyen, and Y. Wang, "MEDI-d: Downsampled Morphological Priors for Shadow Reduction in Quantitative Susceptibility Mapping," International Society for Magnetic Resonance in Medicine, 2021.
  8. H. Azami, M. Malekzadeh, and S. Sanei, "A New Neural Network Approach for Face Recognition Based on Conjugate Gradient Algorithms and Principal Component Analysis," Journal of Mathematics and Computer Science, vol. 6, no. 3, pp. 166–175, 2013. doi: 10.22436/jmcs.06.03.01.
  9. H. Azami, M. R. Mosavi, and S. Sanei, "Classification of GPS satellites using improved back propagation training algorithms," Wireless Personal Communications, Springer, 2012. doi: 10.1007/s11277-012-0844-7.
  10. M. H. Shaheed, "Performance analysis of 4 types of conjugate gradient algorithms in the nonlinear dynamic modelling of a TRMS using feedforward neural networks," IEEE Conference on Systems, Man and Cybernetics, 2004, pp. 5985–5990.
  11. S. Sheel, T. Varshney, and R. Varshney, "Accelerated learning in MLP using adaptive learning rate with momentum coefficient," International Conference on Industrial and Information Systems, 2007, pp. 307–310.
  12. Z. Zakaria, N. A. M. Isa, and S. A. Suandi, "A study on neural network training algorithm for multiface detection in static images," International Conference on Computer, Electrical, Systems Science, and Engineering, 2010, pp. 170–173.
  13. M. F. Moller, "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, vol. 6, 1993, pp. 525–533.
  14. Quesada and Artelnics, "5 Algorithms to Train a Neural Network." https://www.neuraldesigner.com/blog/5_algorithms_to_train_a_neural_network