
Edited by: Abdul Mueed Hafiz, University of Kashmir, India

Reviewed by: Asim Muhammad, Guangdong University of Technology, China; Atilla Göktaş, Muğla University, Turkey

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

This study describes the construction of a new algorithm in which image processing is combined with two-step quasi-Newton methods for biomedical image analysis. Medical informatics is an essential component of modern health care. Image processing and imaging technology are recent advances in medical informatics that include image content representation, image interpretation, and image acquisition, and focus on image information in the medical field. For this purpose, an algorithm was developed, based on an image processing method that uses principal component analysis, to find the image value of a particular test function and then direct the function toward the best method for its evaluation. To validate the proposed algorithm, two functions, namely, the modified trigonometric and Rosenbrock functions, are tested on variable space.

Imaging informatics plays a significant role in the medical and engineering fields. In diagnostic application software, different tools are used during the segmentation procedure to interact with a visualized image, and a graphical user interface (GUI) is used to parameterize the algorithms and to visualize multi-modal images and segmentation results in 2D and 3D. Hence, different toolkits, such as the Medical Imaging Interaction Toolkit (Wolf et al.), have been developed for this purpose.

The two-step quasi-Newton methods are considered to minimize unconstrained optimization problems.

The multi-step quasi-Newton methods were introduced by Ford and Moghrabi. In these methods, the updated Hessian approximation B_{i+1} is required to satisfy the secant equation

B_{i+1} s_{i} = y_{i},

where s_{i} = x_{i+1} - x_{i} is the step in the variable space

and y_{i} = g_{i+1} - g_{i} is the corresponding step in the gradient space.

The quasi-Newton update mimics the relation satisfied, to first order, by the true Hessian G_{i+1} in the Newton equation, which is defined as G_{i+1} s_{i} ≈ y_{i}.

In the case of the two-step quasi-Newton methods, the secant equation (2) is replaced by

B_{i+1} (s_{i} - α_{i} s_{i-1}) = y_{i} - α_{i} y_{i-1},

or, in compact form,

B_{i+1} r_{i} = w_{i},

which is derived by interpolating the iterates x_{i-1}, x_{i}, x_{i+1} with a quadratic curve, using the Lagrange polynomial found suitable in Jaffar and Aamir.

The derivatives of Equations (10) and (11) are defined accordingly; the relations so obtained are substituted into Equation (14), which is a two-step form of Equation (5).

The new secant condition for the two-step quasi-Newton method is obtained in the form of Equations (6)/(7), which should be satisfied by the updated Hessian approximation B_{i+1}. The value of α_{i} in Equation (6) is given by

α_{i} = δ^{2} / (1 + 2δ),

where

δ = (τ_{2} - τ_{1}) / (τ_{1} - τ_{0}).

Hence, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula for the two-step method is defined as

B_{i+1} = B_{i} - (B_{i} r_{i} r_{i}^{T} B_{i}) / (r_{i}^{T} B_{i} r_{i}) + (w_{i} w_{i}^{T}) / (w_{i}^{T} r_{i}).
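The two-step update can be sketched numerically. The snippet below applies a BFGS-form update built on the two-step vectors r_{i} = s_{i} - α_{i} s_{i-1} and w_{i} = y_{i} - α_{i} y_{i-1} and checks that the result satisfies the two-step secant equation B_{i+1} r_{i} = w_{i}; the function name and the fixed α are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def two_step_bfgs_update(B, s_prev, s, y_prev, y, alpha):
    """BFGS-form update built on the two-step vectors
    r = s - alpha * s_prev and w = y - alpha * y_prev.
    The fixed alpha passed in here is an illustrative assumption."""
    r = s - alpha * s_prev
    w = y - alpha * y_prev
    Br = B @ r
    B_new = B - np.outer(Br, Br) / (r @ Br) + np.outer(w, w) / (w @ r)
    return B_new, r, w

# Check: by construction, the updated matrix satisfies B_{i+1} r_i = w_i.
rng = np.random.default_rng(0)
n = 4
B = np.eye(n)
s_prev, s = rng.standard_normal(n), rng.standard_normal(n)
y_prev, y = rng.standard_normal(n), rng.standard_normal(n)
B_new, r, w = two_step_bfgs_update(B, s_prev, s, y_prev, y, alpha=0.3)
assert np.allclose(B_new @ r, w)
```

The secant condition holds identically for any α with nonzero denominators, which is what makes the rank-two BFGS form attractive here.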

The standard Lagrange polynomial is used for this interpolation.

The parametric values τ_{k}, for k = 0, 1, 2, …, m, used in the computation of the vectors r_{i} and w_{i}, are found by a metric of the form

The metric is defined by a matrix M and measures the distance between the iterates x_{i} and x_{i+1} on variable space, for z_{1}, z_{2} ∈ R^{n}.

This metric between different iterates in the current interpolation is measured by fixed-point and accumulative approaches (Ford and Moghrabi).

• Accumulative approach

These methods accumulate the distance between the consecutive iterates in their natural sequence. The latest iterate x_{i+1}, corresponding to the value τ_{m} of τ, is considered as the origin or base point, and the other values of τ are calculated by accumulating the distances between consecutive pairs. Therefore, we have

τ_{m} = 0,  τ_{k-1} = τ_{k} - ||x_{i+k-1} - x_{i+k-2}||_{M}  for k = m, …, 1.

In the two-step method, the accumulative variants are denoted A1, A2, and A3, and the parametric values are found with the help of Equation (21) for k = 0, 1, where the base point will be τ_{2} = 0 for m = 2 from Equation (20).
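Under the accumulative approach, the parametric values for m = 2 can be sketched numerically; the identity-matrix metric and the exact forms of δ and α_{i} used below are assumptions taken in the spirit of the construction described above.

```python
import numpy as np

def accumulative_taus(s_prev, s):
    """Accumulative parametrization for the two-step method (m = 2).
    The base point is tau_2 = 0; earlier tau values accumulate the
    distances between consecutive iterates. The identity-matrix metric
    is an illustrative assumption."""
    tau2 = 0.0
    tau1 = tau2 - np.linalg.norm(s)       # distance from x_{i+1} back to x_i
    tau0 = tau1 - np.linalg.norm(s_prev)  # plus distance from x_i to x_{i-1}
    return tau0, tau1, tau2

def delta_alpha(tau0, tau1, tau2):
    """delta and alpha_i in the assumed two-step form."""
    delta = (tau2 - tau1) / (tau1 - tau0)
    alpha = delta ** 2 / (1.0 + 2.0 * delta)
    return delta, alpha

# Example: steps s_{i-1} = (1, 0) and s_i = (0, 2).
tau0, tau1, tau2 = accumulative_taus(np.array([1.0, 0.0]), np.array([0.0, 2.0]))
delta, alpha = delta_alpha(tau0, tau1, tau2)
# tau = (-3, -2, 0), delta = 2.0, alpha = 0.8
```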

• Fixed-point approach: F1

In this variant, the identity matrix is chosen as the metric matrix, M = I.

• Fixed-point approach: F2

In this algorithm, the metric matrix is chosen as the current Hessian approximation, M = B_{i}.

The above equation involves a matrix-vector product, which is computationally expensive. However, with the help of the search direction, we can easily compute the parameters in the same situation as

since

By substituting Equation (26) in Equation (24), we get

The above expression is easy to calculate, but the remaining expression is very difficult to compute at every iteration. Therefore, to lessen the computational cost, Ford and Moghrabi used the fact that B_{i} is an approximation of the matrix N, and we obtain

Using Equation (28), we have

• Fixed-point approach: F3

In this algorithm, the metric matrix is chosen as B_{i+1}, which is the Hessian approximation at x_{i+1}.

Since τ_{1} and τ_{0} are expensive to compute, Equation (2) is used.

These approaches were further investigated by Aamir and Ford.

Different techniques in two-step methods, such as the one-step skipping technique with no modified search direction and the one-step skipping technique with modified search direction, are implemented on the selected test functions for the purpose of minimization. These functions are examined by function evaluations, the number of iterations, and time in seconds. The notation of the different methods under the different techniques is given in the following table.

Notation of methods with different techniques.

• Accumulative two-step method with one update skipped and no modified search direction

• Accumulative two-step method with one update skipped and modified search direction

Here, x_{i} and x_{i+1} denote consecutive iterates on variable space.

In quasi-Newton methods, updating the inverse Hessian approximation H_{i} to H_{i+1} is a very expensive procedure under certain circumstances (Tamara et al.).

Aamir and Ford proposed skipping techniques to reduce this cost.

• One-step skipping with no modified search direction

The general algorithm of the skipping technique is as follows:

1. Select an initial approximation x_{0} and H_{0} and set i = 1.

2. For j = 1 : m (where m is the number of steps to be skipped):

Calculate a search direction p_{i+j-2} = -H_{i-1} g_{i+j-2}.

Find t by executing a line search along x_{i+j-2} + t p_{i+j-2}.

Calculate the new approximation x_{i+j-1} = x_{i+j-2} + t p_{i+j-2}.

End for

3. By the use of the different methods, update H_{i-1} to give H_{i+m-1}.

4. If ||g_{i}|| ≤ ϵ, then stop; else set i := i + m and go to step 2. End if.
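The skipping loop above can be sketched as follows. This is a minimal sketch, not the paper's implementation: it assumes a plain single-step inverse-BFGS update, a simple Armijo backtracking line search, and m = 2 (one update skipped per cycle).

```python
import numpy as np

def bfgs_skip(f, grad, x0, m=2, eps=1e-6, max_outer=500):
    """Quasi-Newton iteration that skips inverse-Hessian updates:
    the same H is reused for m consecutive steps, then updated once.
    The single-step BFGS update and Armijo line search are assumptions."""
    x = x0.astype(float)
    H = np.eye(len(x))
    for _ in range(max_outer):
        for _ in range(m):                    # reuse H for m steps (skipping)
            g = grad(x)
            if np.linalg.norm(g) <= eps:
                return x
            p = -H @ g                        # search direction with stale H
            t = 1.0
            while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):  # Armijo condition
                t *= 0.5
            s = t * p
            x_new = x + s
            y = grad(x_new) - g
            x = x_new
        if s @ y > 1e-12:                     # standard inverse-BFGS update
            rho = 1.0 / (s @ y)
            V = np.eye(len(x)) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
    return x

# Try it on the 2D Rosenbrock function from the canonical start (-1.2, 1).
rosen = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
rosen_g = lambda x: np.array([
    -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
    200.0 * (x[1] - x[0]**2)])
x_star = bfgs_skip(rosen, rosen_g, np.array([-1.2, 1.0]))
```

Because the update is applied only once per m steps, the curvature information in H lags the iterates, which is exactly the trade-off the skipping techniques are designed to exploit.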

• Updating the matrix after skipping

Now that we are at x_{i+1}, the matrix is updated using s_{i-1}, s_{i}, y_{i-1}, y_{i}, and H_{i-1}, through the following steps:

1. Using the above terms, compute τ_{k} and then find δ.

2. By the use of all the above values, find r_{i} and w_{i}.

3. The Hessian approximation is updated by using all the above values.

Now we compute τ_{k} and/or δ under the different methods.

Here, we explain the derivation of the modified search direction. The following notation is used during the derivation of the modified search direction in the skipping technique: a hatted matrix (e.g., Ĥ_{i}) denotes a matrix that is never explicitly computed.

Now, let us consider that the single-step BFGS update produced the matrix H_{i-1}. The search direction p_{i-1} is defined as

p_{i-1} = -H_{i-1} g_{i-1}.

By using the skipping technique, the next search direction is

p_{i} = -H_{i-1} g_{i}.

With the help of the matrix Ĥ_{i}, we can find the modified search direction

where

Now, the modified search direction is

and by Equation (35), we get

From the above equation, it can be observed that H_{i-1} y_{i-1} and λ_{i-1} are not easily computable, because of the matrix-vector product, compared with the other terms. However, with the help of Equations (34) and (35), the expression H_{i-1} y_{i-1} can be rewritten as

Therefore, using the above equation in Equation (37), the modified search direction can be calculated efficiently without explicitly computing Ĥ_{i}.

• One-step skipping with modified search direction

The general algorithm is given as follows:

1. Select x_{0} and H_{0} as initial approximations and set i = 1.

2. For j = 1 : m, where m is the number of steps:

Calculate the direction p_{i+j-2} = -H_{i-1} g_{i+j-2}.

Calculate the modified search direction.

Do the line search along x_{i+j-2} + t p_{i+j-2}, providing a value of t.

Calculate the new approximation.

3. Update H_{i-1} to produce H_{i+m-1} by using the different methods discussed in the previous sections.

4. Check for convergence; if not converged, then set i = i + 1 and go to step 2.

From the viewpoint of image processing, "an image is an array or matrix of numeric values called pixels (picture elements) arranged in columns and rows." In mathematics, an image is defined as "a graph of a spatial function," or "a two-dimensional function f(x, y), where x and y are the spatial (plane) coordinates, and the amplitude at any pair of coordinates (x, y) is called the intensity of the image at that point." If x, y, and the amplitude values of f are finite and discrete quantities, we call the image a digital image. A digital image is composed of a finite number of pixels, each of which has a particular location and value. Image processing is a process in which different mathematical operations, subject to the application, are performed on an image to improve it or to extract significant information from it for subsequent processing. When this process is applied to digital images, it is called digital image processing.

Digital image processing has a wide scope for researchers working in various areas of science (such as agriculture, biomedicine, and engineering). Previous studies showed that researchers applied and investigated different image processing techniques for analysis and problem solving, such as the detection and measurement of paddy leaf disease symptoms (Narmadha and Arulvadivu).

In the proposed strategy, an algorithm is developed by which the image values of different images I(x, y) of test functions are obtained by a statistical technique, and the desired objective is achieved. In the first step, the images of the different test functions are obtained at a resolution of 600 × 600 pixels. In the second step, a window of a fixed size is generated.

Generated window denoted as matrix S.

In the third step, the covariance matrix of matrix S, denoted C^{(x, y)}, is computed with the help of the following equation:

In the fourth step, eigenvalues of the covariance matrix are calculated. The sum of the eigenvalues is directly proportional to edge strength, which is calculated as follows:

The third and fourth steps are performed twice: first for the horizontal and second for the vertical edge strength calculation. Therefore, Equations (39) and (40) are used for horizontal and vertical edge strength generation as follows:

The sum of horizontal and vertical edge strength gives the value of a pixel of I(x, y). Hence, the value of each pixel of an image is calculated as

The sum of all pixel values gives the value of an image I(x, y), defined as
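The image-value computation described above can be sketched as follows. The 3 × 3 window size, the absence of any normalization, and the use of row/column covariances for the horizontal and vertical passes are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def edge_strength(window):
    """Sum of the eigenvalues of the covariance matrix of a window
    (steps 3-4 above); note that the sum of eigenvalues equals the
    trace of the covariance matrix."""
    cov = np.cov(window)  # covariance across the rows of the window
    return float(np.sum(np.linalg.eigvalsh(cov)))

def image_value(img, w=3):
    """Image value I(x, y): per-pixel horizontal + vertical edge
    strength, summed over the whole image. The window size w is an
    illustrative assumption."""
    h, ww = img.shape
    total = 0.0
    for r in range(h - w + 1):
        for c in range(ww - w + 1):
            win = img[r:r + w, c:c + w].astype(float)
            total += edge_strength(win)    # horizontal pass (rows)
            total += edge_strength(win.T)  # vertical pass (columns)
    return total

flat = np.full((8, 8), 5.0)                          # constant image: no edges
edges = np.tile(np.array([0.0, 10.0] * 4), (8, 1))   # vertical stripes
val_flat = image_value(flat)
val_edges = image_value(edges)
```

A constant image yields a zero image value (zero covariance everywhere), while the striped image yields a positive value, matching the claim that the eigenvalue sum is proportional to edge strength.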

Two test functions were selected from the literature and executed using the different techniques of the two-step quasi-Newton methods on variable space. The execution of the test functions by every technique was computationally expensive. Therefore, an algorithm is required to enable researchers to execute a particular function by the best method only, to reduce computational cost.

Hence, our objective is to develop such an algorithm that can compute the image value of every input image of the test function and forward each function to the method by which it outperformed. The algorithm works in the following steps:

Obtain the images of each test function.

Compute the image value of each image.

Classify test functions by their image values.

Forward the particular function toward the best method.
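The four steps above can be sketched as a simple lookup-based dispatcher. The image values 0.1009 and 0.1004 are those reported later for the two test functions; the tolerance, table names, and method label are illustrative assumptions.

```python
# Known image values of the two test functions (reported in this study).
KNOWN = {
    "extended_rosenbrock": 0.1009,
    "modified_trigonometric": 0.1004,
}

# Both functions were best served by one-step skipping with no modified
# search direction (the study's conclusion); the label is illustrative.
BEST_METHOD = {
    "extended_rosenbrock": "skip_no_modified_direction",
    "modified_trigonometric": "skip_no_modified_direction",
}

def route(image_value, tol=2e-4):
    """Classify a test function by its image value and return the
    method it should be forwarded to (None if unrecognised)."""
    for name, value in KNOWN.items():
        if abs(image_value - value) <= tol:
            return BEST_METHOD[name]
    return None
```

For example, an image value of 0.1009 is routed to the skipping technique with no modified search direction, while an unfamiliar value returns None.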

To check the performance of the different techniques used in the two-step methods, we considered two test functions of different dimensions with four different starting points and epsilon values from the literature (Hillstrom et al.). Four test sets were used:

Soft: (2 ≤

Medium: (20 ≤

Hard: (61 ≤

Combined: (2 ≤

Test problems and dimensions in different test sets (extended Rosenbrock function).

(2) | e = 10^{-7} | (-1.2, 1.0) | (-120, 100) | (20, -20) | (6.39, -0.221)
(20) | e = 10^{-7} | ([-1.2, 1.0]) | (1, 2, …, 20) | ([6.39, -0.221]) | (-1, -1, -1, -1, -1, 1, …, 1)
(26) | e = 10^{-7} | ([-1.2, 1.0]) | ([F]) | ([20]) | ([6.39, -0.221])
(40) | e = 10^{-7} | ([-1.2, 1]) | ([-120, 100]) | ([1, -2, 3, -4, …, -10]) | ([20])
(60) | e = 10^{-7} | ([-1.2, 1]) | ([F]) | ([F]) | ([6.39, -0.221])
(80) | e = 10^{-7} | ([-1.2, -1.0]) | ([F]) | ([F]) | ([F])
(100) | e = 10^{-7} | ([-1.2, -1]) | ([F]) | ([F]) | ([F])
(120) | e = 10^{-7} | ([-1.2, -1]) | ([20]) | ([F]) | ([6.39, -0.221])

Test problems and dimensions in different test sets (modified trigonometric function).

(16) | e = 10^{-7} | ([-2, -1, 1, 2]) | ([-2, 1.5, …, -1.5, 2]) | ([0.1, 1, -0.1, -1]) | ([2.5, 2, 1.5, 1, 0.5], 2.5)
(32) | e = 10^{-7} | ([-2, -1, 1, 2]) | ([-2, 1.5, …, -1.5, 2]) | ([0.1, 1, -0.1, 1]) | ([2.5, 2, …, 0.5], 2.5, 2)
(64) | e = 10^{-6} | ([-2, -1, 1, 2]) | ([-2, 1.5, …, -1.5, 2]) | ([0.1, 1, -0.1, 1]) | ([2.5, 2, …, 0.5], 2.5, …, 1)
(95) | e = 10^{-5} | ([-2, -1, 1, 2], -2, -1, 1) | ([-2, 1.5, …, 2]) | ([0.1, 1.0, -0.1, 1.0]) | ([2.5, 2.0, 1.5, 1.0, 0.5])
(128) | e = 10^{-6} | ([-2, -1, 1, 2], -2, -1, 1) | ([-2, 1.5, …, -1.5, 2]) | ([0.1, 1, -0.1, 1]) | ([2.5, 2, 1.5, 1, 0.5], 2.5, 2, 1.5)
(150) | e = 10^{-5} | ([-2, -1, 1, 2], -2, -1) | ([-2, 1.5, …, 2], -2, …, -0.5, 1) | ([0.1, 1, -0.1, 1], 0.1, 1) | ([2.5, 2.0, 1.5, 1.0, 0.5])

The equations of both test functions, from which the 600 × 600 resolution images (displayed in the figure below) were generated, are given as follows.

Extended Rosenbrock function:

f(x) = Σ_{j=1}^{n/2} [ 100 (x_{2j} - x_{2j-1}^{2})^{2} + (1 - x_{2j-1})^{2} ]

Modified Trigonometric function:

Images of test functions.

Image values of test functions.

I1 | Extended Rosenbrock | 0.1009

I2 | Modified trigonometric | 0.1004

An outline of the self-decisive algorithm follows the four steps listed above: obtain the images of the test functions, compute the image value of each image, classify the functions by their image values, and forward each function to its best method.

Two test functions, namely, the Rosenbrock and modified trigonometric functions, were selected from the literature (Hillstrom et al.).

It is evident from the table above that the image values of the two functions are close to each other.

The results of the Rosenbrock function under both techniques are presented in the following tables.

Results of the Rosenbrock function of all dimension problems in the two-step method of the first technique.


Results of the Rosenbrock function of all dimension problems in the two-step method of the second technique.



The behavior of both techniques was compared, and our analysis concluded that one-step skipping with no modified search direction outperformed in function evaluation and computational time, while the second technique, i.e., one-step skipping with modified search direction, showed a reduction in the number of iterations in all dimension problems.

The results for the modified trigonometric function under both techniques are presented in the following tables.

Results of modified trigonometric function of all dimension problems in the two-step method of the first technique.


Results of modified trigonometric function of all dimension problems in the two-step method of the second technique.



Both techniques were compared and analyzed based on the experimental results. From the analysis, it can be concluded that one-step skipping with no modified search direction outperformed in function evaluations, the number of iterations, and computational time, except in one medium-dimension case, in which the second technique, i.e., one-step skipping with modified search direction, showed a reduction in function evaluations.

An algorithm was developed to compute the image value of a particular test function and direct it to its best method for execution. The two-step quasi-Newton methods with two techniques (one-step skipping with no modified search direction and one-step skipping with modified search direction) were chosen and tested on two test functions, namely, the Rosenbrock and modified trigonometric functions. The best method was determined from the experimental results in terms of function evaluations, the number of iterations, and computational time. This study concluded that the one-step skipping technique without modification of the search direction showed superiority over the one-step skipping technique with modified search direction on both test functions. Hence, the algorithm directed all functions having the same image value as the Rosenbrock and modified trigonometric functions to the one-step skipping technique with no modified search direction.

To further strengthen the algorithm reported in this study, we propose to investigate image recognition in terms of the picture or graph itself instead of the image value, and then direct the reported function (or medical image) to the best available method for obtaining a solution. Based on this research, we plan to collaborate with biomedical labs in the future to validate the practicality of the proposed algorithm.

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

FJ, NA, and SA-m developed the main concept and performed the experimental work. WM and MA made critical revisions, reviewed the manuscript, helped with writing and analysis, and approved the final version. All authors contributed to the article and approved the submitted version.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through Large Groups RGP.2/212/1443. WM is thankful to the Directorate of ORIC, Kohat University of Science and Technology, for awarding the project titled "Advanced Soft Computing Methods for Large-Scale Global Optimization Problems."