
J48

Contact: Stefan Kramer

Categories: Prediction

Exposed methods:

predict
Input: Instances, feature vectors, class values
Output: Classification model (decision tree)
Input format: Weka's ARFF format
Output format: Plain text, model binary
User-specified parameters: The user can choose whether to use binary splits on nominal attributes when building the trees, the minimum number of instances per leaf, whether counts at leaves are smoothed based on Laplace, whether pruning is performed, whether to consider the subtree-raising operation when pruning, the confidence factor used for pruning (smaller values incur more pruning), and whether reduced-error pruning is used instead of C4.5 pruning; for reduced-error pruning, the number of folds of data (one fold is used for pruning, the rest for growing the tree) and the seed used for randomizing the data can also be set. A configuration sketch is given after this list.
Reporting information: Performance measures (Confusion matrix, precision, recall, AUC, F-measure, true (false) positive rate, prediction accuracy)
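
As an illustration of the parameters and performance measures listed above, the following is a minimal sketch using WEKA's Java API. The file name iris.arff and the chosen option values are placeholders for illustration, not part of this component's specification.

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class J48ParameterDemo {
        public static void main(String[] args) throws Exception {
            // Load training data in WEKA's ARFF format; the class is the last attribute.
            Instances data = DataSource.read("iris.arff");   // placeholder file name
            data.setClassIndex(data.numAttributes() - 1);

            // Configure the user-specified parameters described above (example values).
            J48 tree = new J48();
            tree.setBinarySplits(false);        // binary splits on nominal attributes
            tree.setMinNumObj(2);               // minimum number of instances per leaf
            tree.setUseLaplace(false);          // Laplace smoothing of counts at leaves
            tree.setUnpruned(false);            // perform pruning
            tree.setSubtreeRaising(true);       // consider subtree raising when pruning
            tree.setConfidenceFactor(0.25f);    // smaller values incur more pruning
            tree.setReducedErrorPruning(false); // reduced-error pruning instead of C4.5 pruning
            // tree.setNumFolds(3);             // folds for reduced-error pruning
            // tree.setSeed(1);                 // seed for randomizing the data

            // Report the performance measures listed above via 10-fold cross-validation.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());      // prediction accuracy etc.
            System.out.println(eval.toMatrixString());       // confusion matrix
            System.out.println("Precision: " + eval.precision(0));
            System.out.println("Recall:    " + eval.recall(0));
            System.out.println("F-measure: " + eval.fMeasure(0));
            System.out.println("AUC:       " + eval.weightedAreaUnderROC());
        }
    }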

Description:

J48 [QUI93] implements Quinlan's C4.5 algorithm [QUI92] for generating a pruned or unpruned C4.5 decision tree. C4.5 is an extension of Quinlan's earlier ID3 algorithm. The decision trees generated by J48 can be used for classification. J48 builds decision trees from a set of labeled training data using the concept of information entropy. It exploits the fact that each attribute of the data can be used to make a decision by splitting the data into smaller subsets. J48 examines the normalized information gain (difference in entropy) that results from choosing an attribute for splitting the data; the attribute with the highest normalized information gain is used to make the decision. The algorithm then recurses on the smaller subsets. The splitting procedure stops when all instances in a subset belong to the same class; in that case a leaf node predicting that class is added to the decision tree. If none of the features yields any information gain, J48 instead creates a decision node higher up in the tree using the expected value of the class.
J48 can handle both continuous and discrete attributes, training data with missing attribute values, and attributes with differing costs. Furthermore, it provides an option for pruning trees after creation.
For further information, we refer to the original publication [QUI93].
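
To make the splitting criterion concrete, the following self-contained sketch computes the normalized information gain (gain ratio) of a single nominal attribute on a small made-up dataset. The data and attribute are purely illustrative and are not part of J48 itself.

    import java.util.*;

    public class GainRatioSketch {
        // Entropy (in bits) of a class distribution given as counts.
        static double entropy(Collection<Integer> counts) {
            double total = counts.stream().mapToInt(Integer::intValue).sum();
            double h = 0.0;
            for (int c : counts) {
                if (c == 0) continue;
                double p = c / total;
                h -= p * (Math.log(p) / Math.log(2));
            }
            return h;
        }

        public static void main(String[] args) {
            // Toy dataset: {attribute value, class label}, one pair per instance.
            String[][] instances = {
                {"sunny", "no"}, {"sunny", "no"}, {"overcast", "yes"}, {"rainy", "yes"},
                {"rainy", "yes"}, {"rainy", "no"}, {"overcast", "yes"}, {"sunny", "yes"}
            };

            // Class counts overall and per attribute value.
            Map<String, Integer> classCounts = new HashMap<>();
            Map<String, Map<String, Integer>> perValue = new HashMap<>();
            for (String[] inst : instances) {
                classCounts.merge(inst[1], 1, Integer::sum);
                perValue.computeIfAbsent(inst[0], k -> new HashMap<>())
                        .merge(inst[1], 1, Integer::sum);
            }

            double n = instances.length;
            double baseEntropy = entropy(classCounts.values());

            // Expected entropy after the split, and the split's own entropy (split info).
            double remainder = 0.0, splitInfo = 0.0;
            for (Map<String, Integer> subset : perValue.values()) {
                double size = subset.values().stream().mapToInt(Integer::intValue).sum();
                remainder += (size / n) * entropy(subset.values());
                splitInfo -= (size / n) * (Math.log(size / n) / Math.log(2));
            }

            double infoGain = baseEntropy - remainder;   // difference in entropy
            double gainRatio = infoGain / splitInfo;     // normalized information gain
            System.out.printf("Information gain: %.3f, gain ratio: %.3f%n", infoGain, gainRatio);
        }
    }

J48 evaluates this quantity for every candidate attribute and splits on the one with the highest value.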

Background (publication date, popularity/level of familiarity, rationale of approach, further comments)
Published in 1993. Implementation of the well-known C4.5 decision tree learner.
Extends C4.5 by offering reduced-error pruning in addition to C4.5 pruning.

Bias (instance-selection bias, feature-selection bias, combined instance-selection/feature-selection bias, independence assumptions?, ...)
Feature-selection bias

Lazy learning/eager learning
Eager learning

Interpretability of models (black box model?, ...)
Good (the output is a decision tree)

Type of Descriptor:

Interfaces:

Priority: High

Development status:

Homepage:

Dependencies:
External components: WEKA


Technical details

Data: No

Software: Yes

Programming language(s): Java

Operating system(s): Linux, Win, Mac OS

Input format: Weka's ARFF format

Output format: Plain text, model binary
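
As a minimal sketch of these formats using WEKA's Java API: the fitted tree can be printed as plain text and written as a binary (serialized) model file. The file names below are placeholders.

    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.SerializationHelper;
    import weka.core.converters.ConverterUtils.DataSource;

    public class J48IODemo {
        public static void main(String[] args) throws Exception {
            // Input: a dataset in WEKA's ARFF format (placeholder file name).
            Instances data = DataSource.read("training.arff");
            data.setClassIndex(data.numAttributes() - 1);

            J48 tree = new J48();
            tree.buildClassifier(data);

            // Output 1: the decision tree as plain text.
            System.out.println(tree.toString());

            // Output 2: the model as a binary file (Java serialization).
            SerializationHelper.write("j48.model", tree);

            // The binary model can later be reloaded for prediction.
            J48 restored = (J48) SerializationHelper.read("j48.model");
            System.out.println(restored);
        }
    }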

License: GPL


References

References:
[QUI92] Ross J. Quinlan: Learning with Continuous Classes. In: Proceedings of the 5th Australian Joint Conference on Artificial Intelligence, Singapore, 343-348, 1992.
[QUI93] Ross Quinlan: C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, CA, 1993.
