# forum.alglib.net

ALGLIB forum
 It is currently Sun Feb 17, 2019 9:53 pm

 All times are UTC

### Forum rules

1. This forum can be used for discussion of both ALGLIB-related and general numerical analysis questions
2. This forum is English-only - postings in other languages will be removed.

 Post subject: L-M vs BFGS for minimizing vector function | Posted: Thu Jan 15, 2015 8:12 pm

Joined: Thu Jan 15, 2015 7:55 pm
Posts: 1
Using the L-M approach, I can find the minimum of a set of functions expressed as a sum of squares of the objective functions:
obj_function = sum(f_i^2), where F = [f_i].
If I supply a gradient, I must supply the Jacobian *matrix* [J_ij] = [df_i/dx_j].
In fact, the example "minlm_d_vj" shows how this is done.

Is there a way to reformulate the vector function F in order to use the L-BFGS optimization? The L-BFGS / CG examples (e.g., mincg_d_1, mincg_d_2) only show the use of scalar functions f (with the concomitant gradient *vector* df/dx_j).

In my case, I have a fairly "small scale" problem: F(x) =[f_i(x)], i= 1, ..., 6 and x =[x_j], j = 1, ..., 3.
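To show what I mean by "reformulate": if I take obj = sum(f_i^2) as the scalar objective, then by the chain rule its gradient should be g_j = 2 * sum_i f_i * J_ij, i.e., g = 2 * J^T * F. Here is a minimal NumPy sketch with *toy* residuals standing in for my actual 6 functions of 3 variables (is this the intended approach?):

```python
import numpy as np

def F(x):
    # Toy residual vector: 6 residuals in 3 unknowns (placeholders
    # for my actual functions f_i).
    return np.array([
        x[0] - 1.0,
        x[1] - 2.0,
        x[2] - 3.0,
        x[0] * x[1] - 2.0,
        x[1] * x[2] - 6.0,
        x[0] + x[1] + x[2] - 6.0,
    ])

def J(x):
    # Jacobian J_ij = df_i/dx_j of the toy residuals above.
    return np.array([
        [1.0,  0.0,  0.0],
        [0.0,  1.0,  0.0],
        [0.0,  0.0,  1.0],
        [x[1], x[0], 0.0],
        [0.0,  x[2], x[1]],
        [1.0,  1.0,  1.0],
    ])

def obj_and_grad(x):
    # Scalar objective sum(f_i^2) and its gradient 2 * J^T * F,
    # which is the (function, gradient) pair L-BFGS / CG expect.
    f = F(x)
    return f @ f, 2.0 * J(x).T @ f
```

At the root x = (1, 2, 3) both the objective and the gradient vanish, as expected.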

/eric
