Hi folks,
I wanted to get some help regarding the use of the minlmoptimize Levenberg-Marquardt optimization function from ALGLIB.
I am using the FGH mode, i.e. the problem is specified not as a sum of squared residuals but in terms of the cost function value, its gradient and its Hessian.
I've used
minlmoptimize(state, costfn_func, costfn_grad, costfn_hess, costfn_rep, &dataIn);
inside my class, following the "minlm_d_fgh" example that comes with the documentation (the setup is sketched below), and everything works nicely.
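Roughly, the surrounding setup looks like this (costfn_* and dataIn are my own names; the starting point is just a placeholder):

#include "optimization.h"
using namespace alglib;

real_1d_array x = "[0,0]";   // placeholder starting point
minlmstate state;
minlmreport rep;

minlmcreatefgh(x, state);    // FGH mode: value + gradient + Hessian
// ... stopping conditions set via minlmsetcond(...) ...
minlmoptimize(state, costfn_func, costfn_grad, costfn_hess, costfn_rep, &dataIn);
minlmresults(state, x, rep);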
What I really don't understand is why, inside all the callback methods "costfn_func", "costfn_grad" and "costfn_hess", everything has to be redefined and recalculated.
So inside the gradient callback "costfn_grad" the cost function has to be recalculated, and inside the Hessian callback "costfn_hess" both the cost function and the gradient have to be recalculated. This is just like the example at
http://www.alglib.net/translator/man/ma ... inlm_d_fgh : func and grad are recalculated in the Hessian callback, and func is recalculated in the gradient callback. A schematic version is below.
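Schematically the callbacks end up looking like this (a toy cost f(x) = (x0-1)^2 + (x1+2)^2 stands in for my real one; the signatures are the ones used in the FGH example):

#include <math.h>

void costfn_func(const real_1d_array &x, double &func, void *ptr)
{
    // cost function value only
    func = pow(x[0]-1,2) + pow(x[1]+2,2);
}

void costfn_grad(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr)
{
    func = pow(x[0]-1,2) + pow(x[1]+2,2);   // cost recomputed here
    grad[0] = 2*(x[0]-1);
    grad[1] = 2*(x[1]+2);
}

void costfn_hess(const real_1d_array &x, double &func, real_1d_array &grad, real_2d_array &hess, void *ptr)
{
    func = pow(x[0]-1,2) + pow(x[1]+2,2);   // cost recomputed again
    grad[0] = 2*(x[0]-1);                   // gradient recomputed again
    grad[1] = 2*(x[1]+2);
    hess[0][0] = 2;  hess[0][1] = 0;
    hess[1][0] = 0;  hess[1][1] = 2;
}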
Is there any reason for that?
It seems a bit wasteful, doesn't it?
I do know that if I remove the additional recalculations, the optimisation no longer works.
Any help would be appreciated.
Thanks