The relative squared error measures the error of a model relative to what it would have been if a simple predictor had been used. More specifically, this simple predictor just outputs the average of the actual values. Thus, the relative squared error takes the total squared error of the model and normalizes it by dividing by the total squared error of the simple predictor.
Mathematically, the relative squared error E_i of an individual program i is evaluated by the equation:

E_i = Σ_{j=1}^{n} (P_{(ij)} − T_j)² / Σ_{j=1}^{n} (T_j − T̄)²

where P_{(ij)} is the value predicted by the individual program i for sample case j (out of n sample cases); T_j is the target value for sample case j; and T̄ is given by the formula:

T̄ = (1/n) Σ_{j=1}^{n} T_j
For a perfect fit, the numerator is equal to 0 and E_{i}
= 0. So, the E_{i} index ranges from 0 to infinity, with 0
corresponding to the ideal.
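As a minimal sketch of the formula above (the function and variable names are illustrative, not part of any particular library), the RSE can be computed like this:

```python
def relative_squared_error(predicted, target):
    """RSE: total squared error of the predictions, normalized by the
    total squared error of the simple predictor that always outputs
    the mean of the target values (T-bar)."""
    n = len(target)
    mean_t = sum(target) / n  # T-bar: average of the actual values
    # Numerator: total squared error of the model's predictions
    sse_model = sum((p - t) ** 2 for p, t in zip(predicted, target))
    # Denominator: total squared error of the mean predictor
    sse_baseline = sum((t - mean_t) ** 2 for t in target)
    return sse_model / sse_baseline

# A perfect fit gives RSE = 0; a model no better than predicting the
# mean everywhere gives RSE = 1; worse models give values above 1.
```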
To evaluate the RSE of your model on both the training and testing data, go to the Results Panel after a run; although it is not shown there, the RSE is also evaluated and kept for your future reference in the Report Panel.
