Hi,
I have 2D empirical data which looks like an error function, which I shall call F(x_i) where i = {1, 2, 3, ..., m}. I'm trying to fit a function to this data of the form:
f(c_n, x_i) = c0 + sum_{j=1..3} ( a_j / (exp((x_i - b_j)/c_j) + 1) )
note: c_n = {c0, a1, b1, c1, a2, b2, c2, a3, b3, c3}; the sum index j runs over the three sigmoid terms, while i indexes the data points.
so I have 10 coefficients to be determined. After successfully completing the task with lsfit, I decided that it takes about 20x longer than I can afford. I decided to use LM instead, to minimize the sum of squared errors, i.e.
sum_{i=1..m} ( f(c_n, x_i) - F(x_i) )^2
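For reference, this is roughly how I evaluate the model in C# (a minimal sketch; Model is just a helper name I use here, and the coefficient layout follows c_n above):

static double Model(double[] c, double x)
{
    // c = {c0, a1, b1, c1, a2, b2, c2, a3, b3, c3}
    double f = c[0];
    for (int j = 0; j < 3; j++)
    {
        double a = c[1 + 3 * j];
        double b = c[2 + 3 * j];
        double w = c[3 + 3 * j]; // the width c_j; called w to avoid clashing with the array c
        f += a / (Math.Exp((x - b) / w) + 1.0);
    }
    return f;
}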
I initialize the 10 coefficients with values based on the empirical data, then call:
alglib.minlmcreatevj(m, c, out state);
alglib.minlmsetcond(state, epsx, maxits);
alglib.minlmsetscale(state, s);
alglib.minlmoptimize(state, Function1_fvec, Function1_jac, null, null);
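(For completeness, the declarations around those calls and how I read the result back look roughly like this; the epsx and maxits values here are placeholders, not my actual settings:)

alglib.minlmstate state;
alglib.minlmreport rep;
double epsx = 0.000001; // placeholder stopping tolerance
int maxits = 0;         // 0 = no iteration limit
// ... the four calls above ...
alglib.minlmresults(state, out c, out rep);
// rep.terminationtype says why the solver stopped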
where m = 300, c is the coefficient vector, and s is a 10-element vector of ones. I defined fvec and jac as per the examples in the user guide (though I can't seem to step into fvec while debugging, only into jac). Since I have 300 equations, whose unknowns are the c_n rather than the x_i, I had to use for loops to define them all. After all this, I get a resulting coefficient vector c where each value is only slightly different from its initial value, e.g. c[3] was initialized to 0.15 and came back as 0.149999...
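In case it helps, this is roughly what my two callbacks look like (a sketch, assuming the data sits in arrays xData and FData of length m, and using the Model helper and coefficient layout from above; the partial derivatives are the analytic ones for that model):

public static void Function1_fvec(double[] c, double[] fi, object obj)
{
    // residuals: fi[i] = f(c_n, x_i) - F(x_i)
    for (int i = 0; i < xData.Length; i++)
        fi[i] = Model(c, xData[i]) - FData[i];
}

public static void Function1_jac(double[] c, double[] fi, double[,] jac, object obj)
{
    for (int i = 0; i < xData.Length; i++)
    {
        double x = xData[i];
        fi[i] = Model(c, x) - FData[i];
        jac[i, 0] = 1.0; // df/dc0
        for (int j = 0; j < 3; j++)
        {
            double a = c[1 + 3 * j];
            double b = c[2 + 3 * j];
            double w = c[3 + 3 * j];
            double e = Math.Exp((x - b) / w);
            double d = e + 1.0;
            jac[i, 1 + 3 * j] = 1.0 / d;                           // df/da_j
            jac[i, 2 + 3 * j] = a * e / (w * d * d);               // df/db_j
            jac[i, 3 + 3 * j] = a * e * (x - b) / (w * w * d * d); // df/dc_j
        }
    }
}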
Any thoughts? Help?
thx Reuven