
Cross validation

What is this good for?

Which impurity measurement function is better for my data set? Will it be Gini index or information gain? And what about accuracy? It changes on every run, because it depends on which samples end up in the training data set and which in the validation data set.

Cross-validation is generally used for tuning hyper-parameters.

We can get more reliable answers to the questions above if we use proper cross-validation.

In our case we will use ten-fold cross-validation to assess whether the information gain impurity measure leads to more accurate trees than information gain ratio.
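
As a reminder of what k-fold splitting does, here is a minimal conceptual sketch in plain TypeScript with a hypothetical getKFolds helper (not the tree-garden API): the samples are divided into k folds, and each fold serves once as the validation set while the remaining folds form the training set.

// Conceptual sketch of k-fold splitting; hypothetical helper, not part of tree-garden.
// Every sample ends up in exactly one validation set (fold sizes are only
// approximately equal when the sample count is not divisible by k).
const getKFolds = <T>(samples: T[], k: number): { training: T[], validation: T[] }[] => {
  const foldSize = Math.ceil(samples.length / k);
  return Array.from({ length: k }, (_, foldIndex) => ({
    validation: samples.slice(foldIndex * foldSize, (foldIndex + 1) * foldSize),
    training: [
      ...samples.slice(0, foldIndex * foldSize),
      ...samples.slice((foldIndex + 1) * foldSize)
    ]
  }));
};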

There is no high-level implementation of cross-validation in tree-garden, but it is easy to craft one.

Let's check the code:

import {
  growTree,
  buildAlgorithmConfiguration,
  sampleDataSets,
  impurity,
  prune,
  getTreeAccuracy,
  statistics,
  dataSet
} from 'tree-garden';

// let's try it on the Titanic data set, bundled with tree-garden
const ourDataSet = sampleDataSets.titanicSet;

// first, let's build two configurations with different impurity scoring functions
const informationGainConfig = buildAlgorithmConfiguration(ourDataSet, {
  getScoreForSplit: impurity.getInformationGainForSplit
});

const informationGainRatioConfig = buildAlgorithmConfiguration(ourDataSet, {
  getScoreForSplit: impurity.getInformationGainRatioForSplit
});


// create training/validation data set pairs for cross-validation
const crossValidationDataSets = dataSet.getKFoldCrossValidationDataSets(ourDataSet, 10);

// calculate the average accuracy for each configuration
const [informationGainAccuracy, informationGainRatioAccuracy] = [informationGainConfig, informationGainRatioConfig]
  .map((configuration) => {
    const accuracies = crossValidationDataSets.map(({ validation, training }) => {
      const tree = growTree(configuration, training);
      const prunedTree = prune.getPrunedTreeByReducedErrorPruning(tree, validation, configuration);
      return getTreeAccuracy(prunedTree, validation, configuration);
    });
    // log the accuracies of all ten folds
    console.log(accuracies);
    return statistics.getArithmeticAverage(accuracies);
  });


console.log('Information gain average accuracy:\t', informationGainAccuracy);
console.log('Information gain ratio average accuracy:\t', informationGainRatioAccuracy);
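
Whether the difference between the two averages matters also depends on how much the per-fold accuracies vary. To get a rough feel for that spread, you could apply a plain TypeScript helper like the following (an illustration only, not a tree-garden API) to each accuracies array:

// Sample standard deviation of per-fold accuracies; a plain helper, not part of tree-garden
const getStandardDeviation = (values: number[]): number => {
  const mean = values.reduce((sum, value) => sum + value, 0) / values.length;
  const variance = values.reduce((sum, value) => sum + (value - mean) ** 2, 0) / (values.length - 1);
  return Math.sqrt(variance);
};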

Summary

If you run this example, you will see that information gain is not better than information gain ratio. Equipped with this evidence, you can switch to information gain ratio for the Titanic data set and train your final tree on all the data you have...
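
Training that final tree on the full data set could look like this (a minimal sketch reusing the variables from the example above; whether you also prune the final tree is up to you):

// grow the final tree from the whole Titanic data set with the winning configuration
const finalTree = growTree(informationGainRatioConfig, ourDataSet);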