Hello!
Yes, some of the old functionality was reimplemented differently in the new ALGLIB. Here is a description of the changes and the rationale behind them:

0. The old approach was sometimes criticized for having functions with too many parameters. The new "trainer object" interface separates dataset specification, algorithm tuning, and the training itself. It is also flexible enough that we can add new functionality (say, new stopping criteria) without breaking backward compatibility.

1. Critical errors (such as a negative number of classes, or an incorrect class number in the dataset) are now signaled by exceptions. The reason is that these errors should never occur during a normal workflow. On the other hand, the only "normal" completion code the training function ever returned was "2 = problem solved", so the completion codes turned out to be completely uninformative in practice.

2. ALGLIB uses its own exception class (ap_error in C++, alglibexception in C#), which has a single string field containing a textual description of the error. You can use this field if you need a human-readable description of the error.

3. Unfortunately, the new versions of the ALGLIB NN interface no longer support Levenberg-Marquardt training. This algorithm turned out to be impractical for everything except small-scale problems.
BTW, what programming language do you use, C++ or C#? If you use C#, then the upcoming ALGLIB 3.8.1 may bring you an even larger speedup: in this version you will be able to call the optimized C computational core from C#. The C version of the neural networks is several times faster than the C# one, thanks to its use of SSE intrinsics.