C. Automated Performance Test Scripting Approaches
Typically, performance testing is accomplished using test
scripts, which are programs that test engineers write to
automate testing. These test scripts perform actions (i.e.,
invoking methods of exposed interfaces or mimicking user
actions on GUI objects of the AUT) to feed input data into the
AUT and trigger computation. Test engineers write code
in test scripts that guides the selection of test inputs; typically,
inputs are chosen as randomly selected values or by
using combinatorial interaction design algorithms [16].
It is practically impossible to performance test applications without
test scripts, since it is not feasible to engage hundreds of
thousands of human testers who manually simulate multiple users calling
multiple methods with high frequency [17], [18],
[19], [1].
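As an illustration, the following Java sketch shows the typical structure of such a test script: a loop that randomly selects input values and feeds them into an exposed interface of the AUT. Here processOrder is a hypothetical stand-in for a real AUT method, not a method of any particular system.

    import java.util.Random;

    // Minimal sketch of a performance test script that feeds randomly
    // selected inputs into an exposed interface of the AUT.
    public class RandomInputTestScript {
        static void processOrder(int quantity, String region) {
            // Placeholder for a call into the AUT that triggers computation.
        }

        public static void main(String[] args) {
            Random random = new Random(42);              // fixed seed so runs are repeatable
            for (int i = 0; i < 10_000; i++) {
                int quantity = 1 + random.nextInt(100);  // randomly selected input value
                String region = random.nextBoolean() ? "EU" : "US";
                processOrder(quantity, region);          // feed inputs and trigger work
            }
        }
    }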
Test scripts are written with one of two kinds of frameworks:
a GUI testing framework (e.g., QuickTestPro from
HP) or a backend server-directed performance tool
such as JMeter, an open-source tool that is widely used
to load test functional behavior and measure the performance of
applications. These frameworks are the basis on which most performance
testing is done in industry. Performance test
scripts imitate large numbers of users to create a significant
load on the AUT. JMeter provides programming constructs
that enable testers to automatically generate a large number
of virtual users who send HTTP requests directly to the web servers of AUTs, thereby creating significant workloads.
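To make the workload pattern concrete, the Java sketch below hand-codes what JMeter automates: a pool of virtual users, each repeatedly sending HTTP requests to the web server of the AUT. The URL, user count, and request count are illustrative assumptions, not values taken from any particular AUT or test plan.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Sketch of a workload generator: many concurrent virtual users,
    // each issuing a fixed number of HTTP requests against the AUT.
    public class VirtualUserLoad {
        public static void main(String[] args) throws InterruptedException {
            int virtualUsers = 200;                 // concurrent simulated users
            int requestsPerUser = 50;               // requests issued by each user
            HttpClient client = HttpClient.newHttpClient();
            ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);

            for (int u = 0; u < virtualUsers; u++) {
                pool.submit(() -> {
                    for (int r = 0; r < requestsPerUser; r++) {
                        try {
                            HttpRequest request = HttpRequest.newBuilder(
                                    URI.create("http://aut.example.com/search?q=item"))
                                    .GET().build();
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                        } catch (Exception e) {
                            // A failed request would be recorded as an error in a real run.
                        }
                    }
                });
            }
            pool.shutdown();                        // wait for all virtual users to finish
            pool.awaitTermination(10, TimeUnit.MINUTES);
        }
    }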
Natural measures of performance include throughput, that
is, the number of executed requests per second, and the
average response time, i.e., the average time it takes to execute a request. A goal
of performance testing is to determine which combinations of
requests lead to higher response times and lower throughput;
such combinations help reveal performance bugs in AUTs.
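As a concrete illustration, the snippet below shows how these two measures can be computed from the response times recorded during a run; the timings and run duration are made-up sample values, not measurements of any real AUT.

    import java.util.List;

    // Sketch of deriving throughput and average response time
    // from the per-request timings recorded during a test run.
    public class PerformanceMetrics {
        public static void main(String[] args) {
            // Response times (in milliseconds) of completed requests.
            List<Long> responseTimesMs = List.of(120L, 95L, 310L, 150L, 87L);
            long runDurationMs = 1_000;             // wall-clock length of the run

            double avgResponseMs = responseTimesMs.stream()
                    .mapToLong(Long::longValue).average().orElse(0);
            double throughput = responseTimesMs.size() / (runDurationMs / 1000.0);

            System.out.printf("average response time: %.1f ms%n", avgResponseMs);
            System.out.printf("throughput: %.1f requests/s%n", throughput);
        }
    }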