forum.alglib.net http://forum.alglib.net/

Levenberg–Marquardt algorithm help http://forum.alglib.net/viewtopic.php?f=2&t=834
Page 1 of 2
Author: jcoloma [ Mon May 06, 2013 1:06 pm ]
Post subject: Levenberg–Marquardt algorithm help
Hi! We are evaluating ALGLIB for a commercial application that will use the Levenberg–Marquardt algorithm for nonlinear least-squares problem solving (curve fitting). Our question is: in which part of the code, or in which way, can we change the objective function that calculates the residual sum of squares?

We have been tracing the code, and we have seen that you evaluate the error in this line (in lbl_3):

    state.optstate.fi[i] = vv * (state.f - state.tasky[i])

but we don't understand the syntax, because you are not taking care of the sign of the error. In other places you use:

    state.optstate.f = state.optstate.f + math.sqr(vv * (state.f - state.tasky[i]))

but our example never passes through this line when we run it; the only line that evaluates the error is the first one. We are using the following code:

    Public Sub function_cx_1_func(c As Double(), x As Double(), ByRef func As Double, obj As Object)
        func = c(0) * (Math.Exp(-1 * c(1) * x(0)))
    End Sub

    Public Sub Main()
        Dim x(,) As Double = New Double(,) {{3}, {6}, {9}}   ' x data vector
        Dim y() As Double = New Double() {5, 2, 1}           ' observed values
        Dim c() As Double = New Double() {10, 0.3}           ' initial estimate
        Dim epsf As Double = 0.00000001
        Dim epsx As Double = 0.000000001
        Dim diffstep As Double = 0.00000001
        Dim maxits As Integer = 0
        Dim info As Integer
        Dim state As lsfitstate = New XAlglib.lsfitstate()   ' initializer can be dropped, but compiler will issue warning
        Dim rep As lsfitreport = New XAlglib.lsfitreport()   ' initializer can be dropped, but compiler will issue warning
        Dim w() As Double = New Double() {1, 1, 1}

        XAlglib.lsfitcreatewf(x, y, w, c, diffstep, state)
        XAlglib.lsfitsetcond(state, epsf, epsx, maxits)
        XAlglib.lsfitfit(state, AddressOf function_cx_1_func, Nothing, Nothing)
        XAlglib.lsfitresults(state, info, c, rep)
    End Sub
Author: Sergey.Bochkanov [ Mon May 06, 2013 7:53 pm ]
Post subject: Re: Levenberg–Marquardt algorithm help
Hello!

Several notes regarding your question:

1. lsfit, the unit you used, is a thin convenience wrapper around the underlying minlm optimizer. The several different points where function values are evaluated correspond to different modes of the underlying optimizer. Some of these modes are inactive in the current version of ALGLIB, which is why those lines of code are not executed in your example. We've decided to leave them in place, though, because it would be easier to re-activate them in the future (if we decide to do so).

2. If, for some reason, you want to modify the function being minimized, you should modify the internals of the lsfitfit function (the one where these lines of code reside). BTW, do you want to add some kind of regularization, or just modify the weights assigned to the points? Weights are easy to modify. Regularization is harder to add, because it changes the number of summands in the sum-of-squares function.

3. Levenberg-Marquardt optimizes the sum-of-squares function F(x) = f1(x)^2 + ... + fm(x)^2, where fi is the i-th residual, i.e. the error in the curve fit. Because each summand is squared, the sign does not matter here: you may have f1(x) = f - y or f1(x) = y - f, and both variants result in exactly the same function being optimized.

In case you have additional questions, feel free to ask!
Author: jcoloma [ Fri May 10, 2013 12:37 pm ]
Post subject: Re: Levenberg–Marquardt algorithm help
Hi Sergey,

The point is that we want to use Bayesian optimization. In the Bayesian approach you have to optimize the sum-of-squares function in the following way:

the normal way: F(x) = f1(x)^2 + ... + fm(x)^2

the Bayesian way: F(x) = f1(x)^2 + ep(1) + ... + fm(x)^2 + ep(m)

where ep is a value that is computed for each function value. How can I do that?
Author: Sergey.Bochkanov [ Mon May 13, 2013 7:55 am ]
Post subject: Re: Levenberg–Marquardt algorithm help
Is it true that ep(i) is ALWAYS non-negative? If so, you should use the minlm subpackage directly (as I said, lsfit is a convenience wrapper) and solve a problem with K = 2*M functions gk, where g{2k+0} = fk(x) and g{2k+1} = sqrt(ep(k)).

However, you may have problems with the smoothness of the objective function when ep(i) approaches zero; I don't know how that will influence convergence speed.

If some of the ep() may be negative, you cannot use Levenberg-Marquardt, because this algorithm is aimed at optimization problems that can be reduced to a sum of squares of smooth functions.
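To see why this reproduces your objective, note that each pair of functions contributes

g{2k+0}(x)^2 + g{2k+1}(x)^2 = fk(x)^2 + sqrt(ep(k))^2 = fk(x)^2 + ep(k),

so summing over all k gives exactly F(x) = f1(x)^2 + ep(1) + ... + fm(x)^2 + ep(m).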
Author: jcoloma [ Mon May 13, 2013 5:10 pm ]
Post subject: Re: Levenberg–Marquardt algorithm help
ep(i) is always non-negative.

In which line of code in minlm do you compute the error (the calculated value of the function being optimized vs. the observed value)?
Author: Sergey.Bochkanov [ Mon May 13, 2013 5:13 pm ]
Post subject: Re: Levenberg–Marquardt algorithm help
You don't have to modify the LM source code in order to solve your problem. This optimizer takes a vector of functions (M functions, each with N arguments) and optimizes the sum of squares of these functions. Just declare a problem with M = 2*number_of_points and: a) let the odd-numbered functions be the errors in the fit, b) let the even-numbered functions be the corresponding sqrt(ep(..)) terms.
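Concretely, for M data points the function vector is ordered like this (0-based indices, consistent with the g{2k} notation in my previous post):

g0 = f1(x), g1 = sqrt(ep(1)), g2 = f2(x), g3 = sqrt(ep(2)), ..., g{2M-2} = fM(x), g{2M-1} = sqrt(ep(M))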
Author: jcoloma [ Mon May 13, 2013 5:18 pm ]
Post subject: Re: Levenberg–Marquardt algorithm help
Can you give me an example of this with some basic function?
Author: Sergey.Bochkanov [ Tue May 14, 2013 7:04 am ]
Post subject: Re: Levenberg–Marquardt algorithm help
Suppose that you have:
* two points (x0,y0), (x1,y1)
* a function being fitted f(x|c0,c1,c2), which accepts one argument x and three tunable parameters c0, c1, c2
* two Bayesian terms e0(x,c0,c1,c2), e1(x,c0,c1,c2)

In this case Levenberg-Marquardt should be used to solve the following optimization problem:

min F(c0,c1,c2) = g0(c0,c1,c2)^2 + g1(c0,c1,c2)^2 + g2(c0,c1,c2)^2 + g3(c0,c1,c2)^2

Here:
* g0(c0,c1,c2) = f(x0,c0,c1,c2) - y0
* g1(c0,c1,c2) = sqrt(e0(x0,c0,c1,c2))
* g2(c0,c1,c2) = f(x1,c0,c1,c2) - y1
* g3(c0,c1,c2) = sqrt(e1(x1,c0,c1,c2))
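If it helps, a minimal VB.NET sketch of the corresponding callback could look like the following. The data values, the model f and the terms e0, e1 are placeholders (substitute your own), and the callback signature is assumed to follow the minlm "V"-mode examples in the Reference Manual:

    ' Computes the function vector (g0,g1,g2,g3) for the problem above.
    ' c() holds the parameters (c0,c1,c2); fi() receives the four values.
    Public Sub bayes_fvec(c As Double(), fi As Double(), obj As Object)
        ' data points (x0,y0), (x1,y1) -- placeholder values
        Dim x0 As Double = 1.0, y0 As Double = 2.0
        Dim x1 As Double = 2.0, y1 As Double = 1.5
        ' placeholder model f(x|c0,c1,c2) and non-negative Bayesian terms e0, e1
        Dim f0 As Double = c(0) * Math.Exp(-c(1) * x0) + c(2)
        Dim f1 As Double = c(0) * Math.Exp(-c(1) * x1) + c(2)
        Dim e0 As Double = 0.1 * c(2) * c(2)   ' placeholder, must stay >= 0
        Dim e1 As Double = 0.1 * c(2) * c(2)   ' placeholder, must stay >= 0
        fi(0) = f0 - y0         ' g0: fit error at x0
        fi(1) = Math.Sqrt(e0)   ' g1: sqrt of Bayesian term e0
        fi(2) = f1 - y1         ' g2: fit error at x1
        fi(3) = Math.Sqrt(e1)   ' g3: sqrt of Bayesian term e1
    End Sub

The optimizer squares and sums fi(0)..fi(3) itself, which gives exactly the F(c0,c1,c2) above.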
Author: jcoloma [ Tue May 14, 2013 5:53 pm ]
Post subject: Re: Levenberg–Marquardt algorithm help
We need to fit, using both ordinary non-linear least squares and Bayesian non-linear least squares, n points (x0,y0), (x1,y1), ..., (xn,yn) with the following two-parameter (c0,c1) function:

Function being fitted: f(x,c) = c0 * exp(-c1 * x)

The desired objective function (error function) in each case is the following:

Non-linear least squares:
F(c0,c1) = sum_{i=1..n} Wi * (f(xi,c) - yi)^2
where Wi is the weight assigned to observation yi.

Bayesian non-linear least squares:
F(c0,c1) = sum_{i=1..n} Wi * (f(xi,c) - yi)^2 + sum_{j=0,1} Wcr_j * (cr_j - c_j)^2
where cr = (cr0, cr1) are the reference values of the parameters (c0,c1) and Wcr is the vector of weights of those reference values.

Questions:
In which subroutine, and at which program line, must the error function be modified so that it is replaced by the Bayesian error function?
In which subroutine, and at which program line, must the reference values of the parameters (cr0, cr1) and the weight vector Wcr of the reference parameters be introduced as input?
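To make the question more concrete: based on the replies above, here is a rough sketch of how we understand the minlm formulation of the Bayesian objective. The reference values cr, the weights Wcr, the diffstep value and names such as bayes_fvec are placeholders of ours, and the exact xalglib/minlm signatures are assumed from the Reference Manual examples, so they may need adjusting for the installed ALGLIB version:

    Imports System

    Module BayesFitSketch
        ' data from our earlier example; wd() holds the data weights Wi
        Dim xd() As Double = New Double() {3, 6, 9}
        Dim yd() As Double = New Double() {5, 2, 1}
        Dim wd() As Double = New Double() {1, 1, 1}
        ' placeholder reference parameters cr and their weights Wcr
        Dim cr() As Double = New Double() {8, 0.25}
        Dim wcr() As Double = New Double() {1, 1}

        ' Residual vector for minlm: first the n weighted fit errors sqrt(Wi)*(f(xi,c)-yi),
        ' then one extra term sqrt(Wcr_j)*(c_j - cr_j) per regularized parameter.
        Public Sub bayes_fvec(c As Double(), fi As Double(), obj As Object)
            For i As Integer = 0 To xd.Length - 1
                fi(i) = Math.Sqrt(wd(i)) * (c(0) * Math.Exp(-c(1) * xd(i)) - yd(i))
            Next
            For j As Integer = 0 To c.Length - 1
                fi(xd.Length + j) = Math.Sqrt(wcr(j)) * (c(j) - cr(j))
            Next
        End Sub

        Public Sub Main()
            Dim c() As Double = New Double() {10, 0.3}   ' initial estimate of (c0,c1)
            Dim diffstep As Double = 0.0001              ' numerical differentiation step
            Dim state As minlmstate = New XAlglib.minlmstate()
            Dim rep As minlmreport = New XAlglib.minlmreport()

            ' number of functions = number of data points + number of regularized parameters
            XAlglib.minlmcreatev(xd.Length + c.Length, c, diffstep, state)
            ' stopping criteria are left at the defaults here; see minlmsetcond in the manual
            XAlglib.minlmoptimize(state, AddressOf bayes_fvec, Nothing, Nothing)
            XAlglib.minlmresults(state, c, rep)
        End Sub
    End Module

Is a formulation like this what you had in mind, or do we still need to modify the internals of lsfitfit?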
Author: jcoloma [ Wed May 15, 2013 10:45 am ]
Post subject: Re: Levenberg–Marquardt algorithm help
Hi Sergey, can you help us?