I. INTRODUCTION
The practical component of many Computer Science and Software Engineering courses requires students to write computer programs and submit them for assessment. Submitted programs can themselves be analysed by tools that evaluate the construction and behaviour of the code. The output from such analysis tools can be used to provide formative feedback to students on ways to improve their work. Analysis tools can also be used by instructors to automatically assess many quality attributes of submitted student assignments. This paper investigates the use of professional software engineering tools to provide formative feedback and assessment of software quality for student programming assignments.
Software engineering (SE) educators aim for their students to learn how to deliver high-quality, tested programs, to write readable, well-documented programs, and to use a programming language such as Java effectively. The process that ensures programs meet these standards is called software quality assurance (SQA). Although SQA is stressed in lectures and in programming assignment specifications, our experience is that conformance to these standards is often not formally or thoroughly assessed. That is, courses have no explicit model for SQA, nor explicit metrics for assessing the quality of student assignments. As a result, many students' programs fall well short of the intended quality standards. We believe, however, that given feedback about the quality limitations of their programs, students could significantly improve the overall quality of their work. Furthermore, feedback tools enable automatic assessment of student programs.
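To make the kind of automated formative feedback described above concrete, the sketch below checks a Java source string against a few illustrative quality rules. The rules (maximum line length, leftover TODO comments, a missing class comment) are hypothetical placeholders chosen for this example, not the metrics or tools used in this paper; a professional style checker applies far richer rule sets.

```python
def review_java_source(source: str, max_line_len: int = 100) -> list[str]:
    """Return simple formative-feedback messages for a Java source string.

    The checks are illustrative placeholders for the kind of rules a
    professional analysis tool would apply.
    """
    feedback = []
    lines = source.splitlines()
    for num, line in enumerate(lines, start=1):
        if len(line) > max_line_len:
            feedback.append(f"line {num}: exceeds {max_line_len} characters")
        if "TODO" in line:
            feedback.append(f"line {num}: unresolved TODO left in submission")
    # Flag a public class declaration not immediately preceded by the
    # closing of a Javadoc comment ("*/").
    for num, line in enumerate(lines, start=1):
        if line.lstrip().startswith("public class"):
            prior = lines[num - 2].rstrip() if num >= 2 else ""
            if not prior.endswith("*/"):
                feedback.append(f"line {num}: public class lacks a Javadoc comment")
    return feedback


student_code = """public class Scratch {
    public static void main(String[] args) {
        // TODO tidy this up
        System.out.println("hello");
    }
}"""
for msg in review_java_source(student_code):
    print(msg)
```

Each message names the offending line, which is what makes the output usable as formative feedback rather than a bare pass/fail grade.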