On the induction of decision trees for multiple concept learning
General Material Designation
[Thesis]
First Statement of Responsibility
U. M. Fayyad
Subsequent Statement of Responsibility
K. B. Irani
PUBLICATION, DISTRIBUTION, ETC.
Name of Publisher, Distributor, etc.
University of Michigan
Date of Publication, Distribution, etc.
1991
PHYSICAL DESCRIPTION
Specific Material Designation and Extent of Item
263 p.
DISSERTATION (THESIS) NOTE
Dissertation or thesis details and type of degree
Ph.D.
Body granting the degree
University of Michigan
Text preceding or following the note
1991
SUMMARY OR ABSTRACT
Text of Note
We focus on developing improvements to algorithms that generate decision trees from training data. This dissertation makes four contributions to the theory and practice of the top-down, non-backtracking induction of decision trees for multiple concept learning. First, we provide formal results for determining when one generated tree is better than another. We consider several performance measures on decision trees and show that the most important measure to minimize is the number of leaves. Notably, we derive a probabilistic relation between the number of leaves of a decision tree and its expected error rate. The second contribution deals with improving tree generation by avoiding problems inherent in currently popular approaches to tree induction. We formulate the algorithms GID3 and GID3*, which are capable of grouping irrelevant attribute values into subsets rather than branching on them individually, and we empirically demonstrate that better trees are obtained. Third, we present results applicable to the binary discretization of continuous-valued attributes using the information entropy minimization heuristic. These results give a better understanding of the entropy measure, point out desirable properties that justify its use in a formal sense, and improve the efficiency of evaluating continuous-valued attributes for cut point selection. We then extend the binary discretization algorithm to derive multiple-interval quantizations, justifying the criterion for deciding the intervals on decision-theoretic grounds. Empirical results demonstrate improved efficiency and show that the multiple-interval discretization algorithm allows GID3* to find better trees. Finally, we analyze the merits and limitations of using the entropy measure (and others from the family of impurity measures) for attribute selection. We argue that the currently used family of measures is not particularly well suited to attribute selection, and we motivate and formulate a new family of measures: C-SEP. A new algorithm, O-BTREE, which uses a selection measure from this family, is empirically demonstrated to produce better trees. Ample experimental results demonstrate the utility of the above contributions on synthetic and real-world problems. Some applications come from our involvement in the automation of semiconductor manufacturing techniques.
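For context, the entropy-minimization heuristic mentioned above chooses, for a continuous-valued attribute A, the cut point T that minimizes the class information entropy of the induced two-way partition, E(A, T; S) = |S1|/|S| * Ent(S1) + |S2|/|S| * Ent(S2). The following is a minimal Python sketch of that selection step; the helper names (entropy, best_binary_cut) are illustrative assumptions rather than identifiers from the dissertation, and the sketch omits the efficiency refinements and the multiple-interval stopping criterion summarized in the abstract.

    import math
    from collections import Counter

    def entropy(labels):
        # Shannon class entropy Ent(S) of a list of class labels.
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def best_binary_cut(values, labels):
        # Entropy-minimization heuristic: return the cut point T that
        # minimizes the weighted partition entropy
        #   E(A, T; S) = |S1|/|S| * Ent(S1) + |S2|/|S| * Ent(S2),
        # together with that entropy. Returns (None, Ent(S)) when no
        # candidate cut improves on the undivided set.
        pairs = sorted(zip(values, labels))
        n = len(pairs)
        best_cut, best_e = None, entropy(labels)
        for i in range(1, n):
            if pairs[i - 1][0] == pairs[i][0]:
                continue  # identical values admit no cut between them
            cut = (pairs[i - 1][0] + pairs[i][0]) / 2.0
            left = [lab for _, lab in pairs[:i]]
            right = [lab for _, lab in pairs[i:]]
            e = (i * entropy(left) + (n - i) * entropy(right)) / n
            if e < best_e:
                best_cut, best_e = cut, e
        return best_cut, best_e

For example, best_binary_cut([1, 2, 3, 10, 11, 12], ['a', 'a', 'a', 'b', 'b', 'b']) returns (6.5, 0.0), a perfect split. A known efficiency result in this line of work, that entropy-minimizing cuts occur only at boundary points between examples of different classes, would let the loop skip most candidate positions.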