Decision trees are a family of inductive learning algorithms that offer an efficient and practical method for generalizing classification rules from concrete examples. In decision tree induction, the entire training set forms the root node of the tree. The root node is then split into several sub-nodes according to some splitting criterion, and this splitting process is applied recursively to each sub-node until all instances in a sub-node belong to the same class, at which point the sub-node becomes a leaf. Different variants of decision trees can be generated depending on two main parameters: the splitting criterion used and the pruning method involved. The heuristic function used as a splitting criterion can be the Gini index, entropy, information gain, gain ratio, or a large-margin heuristic.

ID3 [7], proposed by Quinlan in 1986, is one of the benchmark decision tree algorithms; much research has been built on it, and improvements have been suggested both for the general case and for specific subsets of the applicable data. One of ID3's shortcomings is its inclination to choose splitting attributes with many values (outcomes). This shortcoming decreases both performance and generalization. In the health care domain, even a small gain in the performance and generalization of the algorithm makes analysis of the results considerably easier.

In C4.5 [6], the heuristic function used to split the data is the gain ratio. It computes the weight of a feature without examining the other available features. Because features are weighted in isolation, interdependencies among them are ignored, and the gain ratio may select an improper feature for splitting the node. Features that are not very relevant to the classification class receive a small weight, but a large number of these irrelevant features may still overrule more important features. These two problems have a negative influence on classification accuracy.
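The bias described above can be illustrated with a small sketch. The code below (an illustrative example, not taken from the cited papers; the toy dataset and the names `outlook` and `id_attr` are invented for the demonstration) computes entropy, information gain (ID3's criterion), and gain ratio (C4.5's criterion). A many-valued, irrelevant identifier attribute receives the highest information gain, while the gain ratio's normalization by split information lets a genuinely predictive attribute win instead:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a collection of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    """Reduction in class entropy after splitting on an attribute (ID3)."""
    n = len(labels)
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(v, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def gain_ratio(values, labels):
    """Information gain normalized by split information (C4.5)."""
    split_info = entropy(values)  # entropy of the attribute's own value distribution
    if split_info == 0:
        return 0.0
    return information_gain(values, labels) / split_info

# Toy data: 'id_attr' has a unique value per instance (irrelevant but many-valued),
# 'outlook' is a genuinely predictive, imperfect feature.
labels  = ['yes', 'yes', 'yes', 'yes', 'yes', 'no', 'no', 'no']
id_attr = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
outlook = ['sun', 'sun', 'sun', 'sun', 'rain', 'rain', 'rain', 'rain']

# Information gain is maximal for the unique identifier, so ID3 would split on it.
print('IG(id)      =', information_gain(id_attr, labels))
print('IG(outlook) =', information_gain(outlook, labels))
# Gain ratio penalizes the many-valued split and prefers the real feature.
print('GR(id)      =', gain_ratio(id_attr, labels))
print('GR(outlook) =', gain_ratio(outlook, labels))
```

Splitting on the identifier yields pure singleton sub-nodes, so its information gain equals the full class entropy; dividing by its large split information (log2 of the number of values) is what reverses the ranking under the gain ratio.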