DOI: 10.1109/34.709601
OpenAccess: Green
This work has “Green” OA status. This means it may cost money to access on the publisher landing page, but there is a free copy in an OA repository.

The random subspace method for constructing decision forests

Tin Kam Ho

Overfitting
Random subspace method
Decision tree
1998
Much of the previous attention on decision trees focuses on splitting criteria and the optimization of tree sizes. The dilemma between overfitting and achieving maximum accuracy is seldom resolved. A method to construct a decision-tree-based classifier is proposed that maintains the highest accuracy on training data and improves generalization accuracy as it grows in complexity. The classifier consists of multiple trees constructed systematically by pseudorandomly selecting subsets of the components of the feature vector; that is, the trees are constructed in randomly chosen subspaces. The subspace method is compared with single-tree classifiers and other forest-construction methods in experiments on publicly available datasets, where the method's superiority is demonstrated. We also discuss the independence between the trees in a forest and relate it to the combined classification accuracy.
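The abstract's core idea can be illustrated with a minimal sketch, not Ho's original implementation: each tree is grown on all training samples but sees only a pseudorandomly chosen subset of feature dimensions, and the forest combines the trees' class-probability estimates. The dataset, number of trees, and subspace size below are illustrative assumptions.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_trees = 50                       # assumed forest size
n_features = X.shape[1]
subspace_dim = n_features // 2     # assumed: half the feature vector per tree

forest = []
for _ in range(n_trees):
    # Pseudorandomly select a subspace (subset of feature components).
    dims = rng.choice(n_features, size=subspace_dim, replace=False)
    # Grow the tree fully on all training samples, restricted to that subspace,
    # so it retains maximal accuracy on the training data.
    tree = DecisionTreeClassifier().fit(X_tr[:, dims], y_tr)
    forest.append((dims, tree))

# Combine the trees: average their class-probability estimates, then take argmax.
probs = np.mean([t.predict_proba(X_te[:, d]) for d, t in forest], axis=0)
accuracy = np.mean(probs.argmax(axis=1) == y_te)
print(f"subspace-forest test accuracy: {accuracy:.3f}")

Because every tree is trained on the full sample set in a different subspace, the individual trees stay accurate on the training data while their errors on unseen data are only weakly correlated, which is what the combination exploits.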
“The random subspace method for constructing decision forests” is a paper by Tin Kam Ho published in 1998. It has an Open Access status of “green”, so a free full-text PDF is available from an OA repository.