Many managers, especially those outside software quality, have a simplistic view of test
automation.
Test automation is more than running a set of tests to produce results.
It includes designing testware, implementing automated test cases, and monitoring and interpreting
a broad range of results.
Simply running test cases without human interaction does not, by itself, produce a useful test exercise.
We must know how the SUT reacts before the exercise can become a useful test.
In fact, automated test runs generate data much more quickly than manual testing, so there is more data to sift through before we know how the SUT responded. This sifting demands more of testers' time and can make the tests less effective.
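One way to avoid leaving raw run data for testers to sift through is to build the expected result into each test case, so the automation itself decides pass or fail. The sketch below illustrates this; the `normalize_name` function is a hypothetical stand-in for a real SUT, not an example from this text.

```python
# Minimal sketch: each case pairs an input with an expected result,
# so the comparison acts as the test oracle and the automation
# reports pass/fail directly instead of dumping raw output.

def normalize_name(raw: str) -> str:
    """Stand-in SUT: trims whitespace and title-cases a name."""
    return " ".join(raw.split()).title()

# Test cases with expected results baked in.
CASES = [
    ("  ada   lovelace ", "Ada Lovelace"),
    ("GRACE HOPPER", "Grace Hopper"),
    ("", ""),
]

def run_cases():
    """Run every case and collect any mismatches."""
    failures = []
    for raw, expected in CASES:
        actual = normalize_name(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

if __name__ == "__main__":
    failures = run_cases()
    print(f"{len(CASES) - len(failures)}/{len(CASES)} passed")
```

The design point is that interpretation effort is paid once, when the expected results are specified, rather than after every automated run.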
Elements besides the SUT output, such as files, databases, or other system state, can also be directly affected by the SUT, and these need to be evaluated in automated tests as well.
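As a sketch of checking such side effects, the hypothetical SUT below (`process_order`, an invented example) both returns a value and writes an audit record to disk; the automated test verifies the side effect as well as the direct output.

```python
# Sketch, under the assumption that the SUT writes an audit file:
# the test checks the return value AND the file the SUT produced.
import json
import tempfile
from pathlib import Path

def process_order(order_id: int, audit_dir: Path) -> str:
    """Stand-in SUT: returns a status and writes an audit file."""
    (audit_dir / f"{order_id}.json").write_text(
        json.dumps({"order": order_id, "status": "ok"})
    )
    return "ok"

def test_process_order():
    with tempfile.TemporaryDirectory() as tmp:
        audit_dir = Path(tmp)
        # Check the direct output of the SUT...
        assert process_order(42, audit_dir) == "ok"
        # ...and the element it affected besides its output.
        record = json.loads((audit_dir / "42.json").read_text())
        assert record == {"order": 42, "status": "ok"}

if __name__ == "__main__":
    test_process_order()
    print("side-effect check passed")
```

A test that looked only at the return value would pass even if the audit record were missing or wrong, which is exactly the gap this kind of check closes.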