Yasutaka Kamei et al. [1] have empirically validated their prediction model on a number of large software projects, spanning both open-source and commercial categories. They report that, on average, 64% of defect-inducing changes are detected (recall), with an accuracy of 68% and a precision of 34% across both categories. Here recall is defined as “the ratio of correctly predicted defect-inducing changes to the total number of defect inducing changes”; accuracy is defined as “the ratio of correctly classified changes (both defect-inducing and nondefect-inducing) with respect to the total number of changes”; and precision is defined as “the ratio of correctly predicted defect-inducing changes to all changes predicted as defect-inducing.” The advantages of just-in-time quality assurance predictions are threefold. First, they are fine-grained: code-level changes are identified rather than file-level or package-level changes, and a single file or package may contain many such code segments. Second, they identify the specific developer to whom the code review should be assigned. Third, the predictions are made just in time, while the changes are still fresh in the developers' minds.
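For clarity, the quoted definitions can be restated as the standard confusion-matrix formulas, where a change predicted to be defect-inducing is counted as a positive; the symbols TP, FP, TN, and FN (true positives, false positives, true negatives, and false negatives) are introduced here only for this restatement and are not used by Kamei et al.:

\[
\text{recall} = \frac{TP}{TP + FN}, \qquad
\text{precision} = \frac{TP}{TP + FP}, \qquad
\text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}.
\]

The reported figures (recall 64%, precision 34%) thus mean that roughly two thirds of truly defect-inducing changes are flagged, while about one in three flagged changes is actually defect-inducing.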