|L-M vs BFGS for minimizing vector function
|Page 1 of 1|
|Author:||erehm [ Thu Jan 15, 2015 8:12 pm ]|
|Post subject:||L-M vs BFGS for minimizing vector function|
Using the L-M (Levenberg-Marquardt) approach, I can minimize a set of functions by forming the sum of squares of the individual objective functions:
obj_function = sum(f_i^2), where F = [f_i].
If I supply a gradient, it must be the Jacobian *matrix* [J_ij] = [df_i/dx_j].
In fact, the example "minlm_d_vj" shows how this is done.
Is there a way to reformulate the vector function F so that the L-BFGS optimizer can be used? The L-BFGS / CG examples (e.g., mincg_d_1, mincg_d_2) only show the use of scalar functions f (with a concomitant gradient *vector* [df/dx_j]).
In my case, I have a fairly "small scale" problem: F(x) = [f_i(x)], i = 1, ..., 6 and x = [x_j], j = 1, ..., 3.
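For what it's worth, the standard reformulation is to hand L-BFGS the scalar objective obj(x) = sum_i f_i(x)^2 together with its gradient grad_k = 2 * sum_i f_i * df_i/dx_k (i.e., 2 * J^T * F), which only needs the same Jacobian entries already supplied to L-M. Below is a minimal sketch of that reformulation in Python; the six residuals and their Jacobian are made-up placeholders (not from this post) just to match the stated problem size of i = 1..6, j = 1..3:

```python
# Sketch: turning a residual vector F(x) = [f_i(x)] into the scalar
# objective + gradient pair that a scalar optimizer like L-BFGS expects.
# The residuals below are hypothetical placeholders for illustration.

def residuals(x):
    # 6 toy residuals in 3 variables (hypothetical example)
    x1, x2, x3 = x
    return [
        x1 - 1.0,
        x2 - 2.0,
        x3 - 3.0,
        x1 + x2 - 3.0,
        x2 + x3 - 5.0,
        x1 * x3 - 3.0,
    ]

def jacobian(x):
    # J[i][k] = df_i/dx_k for the toy residuals above
    x1, x2, x3 = x
    return [
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
        [1.0, 1.0, 0.0],
        [0.0, 1.0, 1.0],
        [x3,  0.0, x1],
    ]

def scalar_obj_and_grad(x):
    """obj = sum_i f_i(x)^2; grad_k = 2 * sum_i f_i * J[i][k]  (= 2 J^T F)."""
    f = residuals(x)
    J = jacobian(x)
    obj = sum(fi * fi for fi in f)
    grad = [2.0 * sum(f[i] * J[i][k] for i in range(len(f)))
            for k in range(len(x))]
    return obj, grad
```

The pair returned by scalar_obj_and_grad is exactly what a scalar gradient-based routine needs, so no new derivative information beyond the existing Jacobian is required. (Note that L-M exploits the sum-of-squares structure via J^T J, so on small least-squares problems like this it will typically converge in fewer iterations than L-BFGS on the squared objective.)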