Hi,
I've been using ALGLIB's Levenberg-Marquardt implementation (VJ mode, no acceleration, reusing the state object between solves; see the setup sketch at the end of this post) for bounded non-linear least-squares optimisation, but its performance seems to scale badly with problem size. Example timings:
problem size: 360 unknowns, 427 squared sums
  objective function + Jacobian total time (all evaluations): 0.485 secs
  minlmoptimize total time: 8.836 secs

problem size: 144 unknowns, 169 squared sums
  objective function + Jacobian total time (all evaluations): 0.1 secs
  minlmoptimize total time: 0.745 secs

problem size: 40 unknowns, 55 squared sums
  objective function + Jacobian total time (all evaluations): 0.048 secs
  minlmoptimize total time: 0.1 secs
So in the smallest case, minlmoptimize total time is about 2x the total objective function time; in the middle case it's about 7x; and in my slowest (largest) case it's 18x. I've also tried the same number of squared sums with fewer unknowns, and performance is significantly better.
I'm using all stopping tolerances equal to zero, since parts of my calculations are done in single precision and I was trying to get the most accurate results possible. Is there any quick hint for increasing performance? Is there any chance I can get much better performance with large problems? And if relaxing the tolerances is my only option, which values would you suggest for problems this large?
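For reference, this is roughly how I set up the solver. It's a simplified sketch: the residual and Jacobian bodies are placeholders for my real model, the bounds are dummy values, and the minlmsetcond signature may differ slightly between ALGLIB versions (I'm using the variant with epsg/epsf/epsx/maxits).

#include "optimization.h"
using namespace alglib;

// Residual vector fi[0..m-1] (placeholder for my actual model)
void fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    for (ae_int_t i = 0; i < fi.length(); i++)
        fi[i] = 0; // real residuals go here
}

// Residuals plus analytic Jacobian, jac[i][j] = d fi / d x[j] (placeholder)
void jac(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr)
{
    for (ae_int_t i = 0; i < fi.length(); i++)
    {
        fi[i] = 0;
        for (ae_int_t j = 0; j < x.length(); j++)
            jac[i][j] = 0; // real Jacobian entries go here
    }
}

void solve()
{
    ae_int_t n = 360, m = 427;   // unknowns / squared sums (my largest case)
    real_1d_array x, bndl, bndu;
    x.setlength(n); bndl.setlength(n); bndu.setlength(n);
    for (ae_int_t i = 0; i < n; i++)
    {
        x[i] = 0;                // dummy initial point and box bounds
        bndl[i] = -1;
        bndu[i] = +1;
    }

    minlmstate state;
    minlmreport rep;

    minlmcreatevj(n, m, x, state);   // VJ mode: residual vector + Jacobian
    minlmsetbc(state, bndl, bndu);   // box constraints
    minlmsetacctype(state, 0);       // no acceleration
    minlmsetcond(state, 0, 0, 0, 0); // epsg = epsf = epsx = 0, no MaxIts cap
    minlmoptimize(state, fvec, jac);
    minlmresults(state, x, rep);

    // subsequent solves reuse the same state object:
    // minlmrestartfrom(state, x);
}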
Thanks, Babis