It is well known that maintenance and evolution efforts account for as much as 90% of software development costs. According to statistics reported in the literature, the code base at a typical Fortune 100 company currently stands at 35 million lines of code, and this size is expected to double every seven years. Software quality assurance (SQA) forms a large part of the maintenance effort, so practical approaches are needed to prioritize SQA efforts. Software analytics, which deals with mining software repositories, is an active area of research. The majority of ongoing research on prioritizing SQA efforts focuses on "code-level quality" (predicting code locations that are error prone) and "change-level quality" (predicting software changes that are error prone). However, these predictions have seen little adoption in practice, largely because they are made at a coarse (file-level) granularity. The research by Emad Shihab [10] aims to resolve this situation by proposing four principles for such predictions, viz., (i) predictions at a focused, finer level of granularity; (ii) timely feedback, while changes are fresh in the minds of developers; (iii) an estimate of the SQA effort required; and (iv) evaluation of the general applicability of the predictions across different projects and domains. Research has already progressed well for code-level quality, and similar progress is expected for change-level quality in the near future.
The heuristics proposed for code-level quality, each of which identifies specific functions or methods on which unit tests should be focused, are: (i) "most frequently modified;" (ii) "most recently modified;" (iii) "most frequently fixed;" (iv) "most recently fixed;" (v) "largest modified;" (vi) "largest fixed;" (vii) "size risk" (functions ranked by risk defined as the ratio of the number of error fixes to the size of the function in lines of code); (viii) "change risk" (functions ranked by risk defined as the ratio of the number of error-fixing changes to the total number of changes); and (ix) random selection of functions (a baseline). The size of a function, in lines of code, is suggested as an estimate of the effort needed to unit-test it. A metric named usefulness is defined to indicate whether the predictions were actually effective in finding new errors in the predicted functions.
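As an illustration, these heuristics can be viewed as different ranking keys applied to per-function change histories mined from a repository. The sketch below is only a plausible rendering of that idea; the `FunctionRecord` fields and the sample data are assumptions for demonstration, not artifacts from the cited work.

```python
import random
from dataclasses import dataclass

@dataclass
class FunctionRecord:
    # Hypothetical per-function history mined from a version-control repository.
    name: str
    loc: int              # current size in lines of code
    modified_at: list     # timestamps of all changes touching the function
    fixed_at: list        # timestamps of the subset of changes that fixed errors

def rank(history, heuristic):
    """Return function names ordered by the chosen heuristic, riskiest first."""
    keys = {
        "most_frequently_modified": lambda f: len(f.modified_at),
        "most_recently_modified":   lambda f: max(f.modified_at, default=0),
        "most_frequently_fixed":    lambda f: len(f.fixed_at),
        "most_recently_fixed":      lambda f: max(f.fixed_at, default=0),
        "largest_modified":         lambda f: f.loc if f.modified_at else 0,
        "largest_fixed":            lambda f: f.loc if f.fixed_at else 0,
        # size risk: error fixes per line of code
        "size_risk":                lambda f: len(f.fixed_at) / f.loc,
        # change risk: share of all changes that were error fixes
        "change_risk":              lambda f: len(f.fixed_at) / max(len(f.modified_at), 1),
        # baseline: random selection of functions
        "random":                   lambda f: random.random(),
    }
    return [f.name for f in sorted(history, key=keys[heuristic], reverse=True)]

history = [
    FunctionRecord("parse",  loc=200, modified_at=[1, 2, 3, 9], fixed_at=[2, 9]),
    FunctionRecord("render", loc=50,  modified_at=[5],          fixed_at=[5]),
    FunctionRecord("init",   loc=400, modified_at=[1],          fixed_at=[]),
]

print(rank(history, "most_frequently_modified"))  # parse leads: 4 changes
print(rank(history, "size_risk"))                 # render leads: 1 fix / 50 LOC
```

The `loc` field doubles as the effort estimate mentioned above, and the usefulness metric would then be computed after the fact by checking whether new errors were indeed found in the top-ranked functions.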