Do you have a binary classification problem (two classes) or a multi-class one?
In the first case you just have to project the data onto the line determined by the coefficients you've found. Class #1 will be on one side, class #2 on the other. You can use the undocumented function DSOptimalSplit2() from the
bdss unit, which can make the split for you. Here is a description of its parameters:
Code:
Optimal binary classification
Algorithm finds optimal (=with minimal cross-entropy) binary partition.
Internal subroutine.
INPUT PARAMETERS:
A - array[0..N-1], variable
C - array[0..N-1], class numbers (0 or 1).
N - array size
OUTPUT PARAMETERS:
Info - completion code:
* -3, all values of A[] are same (partition is impossible)
* -2, one of the C[] values is incorrect (<0 or >1)
* -1, incorrect parameters were passed (N<=0).
* 1, OK
Threshold- partition boundary. Left part contains values which are
strictly less than Threshold. Right part contains values
which are greater than or equal to Threshold.
PAL, PBL- probabilities P(0|v<Threshold) and P(1|v<Threshold)
PAR, PBR- probabilities P(0|v>=Threshold) and P(1|v>=Threshold)
CVE - cross-validation estimate of cross-entropy
-- ALGLIB --
Copyright 22.05.2008 by Bochkanov Sergey
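To see what this routine does conceptually, here is a minimal pure-Python sketch of the same idea: project the data onto one dimension, then try every candidate threshold and keep the one with minimal cross-entropy. This is only an illustration written from the description above, not ALGLIB's actual implementation; the function name `optimal_split` and its return convention are my own, though the completion codes mirror the Info values documented above.

```python
import math

def optimal_split(a, c):
    """Hypothetical stand-in for DSOptimalSplit2: given 1-D values `a`
    (e.g. LDA projections) and class labels `c` in {0, 1}, return
    (info, threshold), where the left part holds values strictly less
    than the threshold and cross-entropy of the partition is minimal."""
    n = len(a)
    if n <= 0:
        return -1, None            # incorrect parameters
    if any(ci not in (0, 1) for ci in c):
        return -2, None            # bad class number
    if min(a) == max(a):
        return -3, None            # all values equal: no split possible
    pairs = sorted(zip(a, c))
    best = None
    # candidate thresholds: midpoints between distinct consecutive values
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue
        t = 0.5 * (pairs[i - 1][0] + pairs[i][0])
        left = [ci for ai, ci in pairs if ai < t]
        right = [ci for ai, ci in pairs if ai >= t]
        # empirical cross-entropy of the two-sided partition
        ce = 0.0
        for part in (left, right):
            p1 = sum(part) / len(part)
            for p in (p1, 1.0 - p1):
                if p > 0.0:
                    ce -= len(part) * p * math.log(p)
        if best is None or ce < best[0]:
            best = (ce, t)
    return 1, best[1]
```

For example, `optimal_split([0.0, 1.0, 2.0, 3.0], [0, 0, 1, 1])` returns `(1, 1.5)`: the classes separate perfectly at 1.5. The real routine additionally reports the left/right class probabilities (PAL, PBL, PAR, PBR) and a cross-validation estimate of the entropy, which this sketch omits.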
If you have a multi-class classification problem, you have to project your data onto the top
NClasses-1 eigenvectors obtained by LDA, then use the projected values as inputs for some classification algorithm. There is no easy way to interpret such data when NClasses>2, so LDA is mostly used as a preprocessing tool.
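In code, the multi-class pipeline is just "project, then classify". A minimal sketch, assuming the NClasses-1 eigenvectors have already been obtained from LDA (the projection and the nearest-centroid classifier below are illustrative choices of mine, not part of ALGLIB):

```python
def project(x, w):
    """Project sample x (length-d list) onto the NClasses-1 LDA
    eigenvectors w (one vector per row); the result is the reduced
    feature vector to hand to a downstream classifier."""
    return [sum(xi * wij for xi, wij in zip(x, wj)) for wj in w]

def nearest_centroid(z, centroids):
    """Classify the projected sample z by the closest class centroid
    (any classifier could be used here; this is just the simplest)."""
    dists = [sum((zi - ci) ** 2 for zi, ci in zip(z, c)) for c in centroids]
    return dists.index(min(dists))
```

Usage: project each training sample, compute per-class centroids in the reduced space, then classify new samples by `nearest_centroid(project(x, w), centroids)`.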