5. EXPERIMENTS AND EVALUATION
In this section, we present the experiments and evaluation that we undertook to quantify the efficiency of CloudSim in modeling and simulating Cloud computing environments.
5.1. CloudSim: scalability and overhead evaluation
The first tests that we present here are aimed at analyzing the overhead, the scalability of memory usage, and the overall efficiency of CloudSim. The tests were conducted on a machine with two Intel Xeon Quad-core 2.27 GHz processors and 16 GB of RAM. All of these hardware resources were made available to a VM running Ubuntu 8.04, which was used for running the tests.
The simulation environment set up for measuring the overhead and memory usage of CloudSim included a DatacenterBroker entity and Datacenter entities hosting a number of machines. In the first test, all the machines were hosted within a single data center; in the next test, the machines were symmetrically distributed across two data centers. The number of hosts in both experiments varied from 1000 to 1 000 000, and each experiment was repeated 30 times.
For the memory test, the total physical memory usage required for fully instantiating and loading
the CloudSim environment was profiled. For the overhead test, the total delay in instantiating the
simulation environment was computed as the time difference between the following events: (i) the
time at which the run-time environment (Java VM) is instructed to load the CloudSim framework;
and (ii) the instant at which CloudSim’s entities are fully initialized and are ready to process
events.
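To make this measurement procedure concrete, the sketch below shows how such a test can be assembled with the CloudSim 2.x API. It is only an illustration of the setup being timed: the helper createDatacenter, the host parameters, and the use of System.nanoTime and Runtime for timing and heap sampling are our own choices rather than the exact harness used to produce Figure 10, and constructor signatures may differ slightly across CloudSim releases.

```java
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerSpaceShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class InstantiationOverheadTest {

    public static void main(String[] args) throws Exception {
        int numHosts = 100000; // swept from 1000 to 1 000 000 in the reported experiments

        long t0 = System.nanoTime(); // event (i): JVM instructed to load CloudSim

        CloudSim.init(1, Calendar.getInstance(), false);      // one broker (user)
        Datacenter dc = createDatacenter("Datacenter_0", numHosts);
        DatacenterBroker broker = new DatacenterBroker("Broker_0");

        long t1 = System.nanoTime(); // event (ii): entities ready to process events
        System.out.printf("%s and %s instantiated in %.2f s%n",
                dc.getName(), broker.getName(), (t1 - t0) / 1e9);

        // Rough heap-usage sample; a memory profiler gives a more precise figure.
        Runtime rt = Runtime.getRuntime();
        System.out.printf("Heap in use: %d MB%n",
                (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
    }

    // Illustrative helper: a data center with numHosts identical single-core hosts.
    private static Datacenter createDatacenter(String name, int numHosts) throws Exception {
        List<Host> hostList = new ArrayList<Host>(numHosts);
        for (int i = 0; i < numHosts; i++) {
            List<Pe> peList = new ArrayList<Pe>();
            peList.add(new Pe(0, new PeProvisionerSimple(1200)));   // one 1200 MIPS core
            hostList.add(new Host(i,
                    new RamProvisionerSimple(4096),                 // 4 GB of RAM
                    new BwProvisionerSimple(10000),                 // arbitrary bandwidth
                    2000000,                                        // 2 TB of storage (MB)
                    peList,
                    new VmSchedulerSpaceShared(peList)));
        }
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
        return new Datacenter(name, characteristics,
                new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);
    }
}
```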
Figure 10(a) presents the average amount of time that was required for setting up the simulation as a function of the number of hosts considered in the experiment. Figure 10(b) plots the amount of memory
that was required for successfully conducting the tests. The results showed that the overhead
does not grow linearly with the system size. Instead, we observed that it grows in steps as the number of hosts in the experiment crosses specific values. The obtained results showed that the time to
instantiate an experiment setup with 1 million hosts is around 12 s. These observations proved that
CloudSim is capable of supporting a large-scale simulation environment with little or no overhead
as regards initialization time and memory consumption. Hence, CloudSim offers significant benefits
as a performance testing platform when compared with real-world Cloud offerings: it is almost impossible to even compute the time and economic overhead that would be incurred in setting up such a large-scale test environment on real Cloud platforms (Amazon EC2, Azure). The results showed almost the same behavior regardless of whether the simulated Cloud infrastructure was deployed across one or two data centers, although the two-data-center case yielded averages slightly smaller than the single-data-center case. This difference was statistically significant (according to unpaired t-tests run on the samples for one and two data centers at each number of hosts), and it can be explained by the Java VM's efficient use of the multicore machine.
As regards memory overhead, we observed a linear growth with the increase in the number of hosts, and the total memory usage never grew beyond 320 MB even for the larger system sizes. This
result indicated an improvement in the performance of the recent version of CloudSim (2.0) as
compared with the version that was built on the SimJava simulation core [20]. The earlier version
incurred an exponential growth in memory utilization for experiments with similar configurations.
The next test was aimed at validating the correctness of functionalities offered by CloudSim.
The simulation environment consisted of a data center with 10 000 hosts where each host was
modeled to have a single CPU core (1200 MIPS), 4 GB of RAM, and 2 TB of storage. The provisioning policy for VMs was space-shared, which allowed only one VM to be active on a host at a given instant of time. We configured the end-user (through the DatacenterBroker) to request the creation and instantiation of 50 VMs with the following constraints: 1024 MB of physical memory, 1 CPU core, and 1 GB of storage. The application granularity was modeled to be composed of 300
task units, with each task unit requiring 1 440 000 million instructions (20 min in the simulated
hosts) to be executed on a host. Since networking was not the focus of this study, only a minimal data transfer overhead (300 kB to and from the data center) was considered for the task units.
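The listing below is a minimal sketch of how this workload can be expressed through the CloudSim 2.x broker API, assuming a broker and data center have already been created along the lines of the previous listing. The VM MIPS rating and bandwidth values are not specified above and are placeholders, and the gradual, group-wise submission of task units described in the next paragraph is omitted for brevity; switching between CloudletSchedulerSpaceShared and CloudletSchedulerTimeShared selects the task-provisioning policy used within each VM.

```java
import java.util.ArrayList;
import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerSpaceShared;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;

public class ValidationWorkload {

    /** Submits the 50 VMs and 300 task units described above to an existing broker. */
    public static void submitWorkload(DatacenterBroker broker) {
        int brokerId = broker.getId();

        // 50 VMs: 1 CPU core, 1024 MB of RAM, 1 GB of storage each.
        // The VM MIPS rating (1200) and bandwidth (1000) are placeholders.
        List<Vm> vmList = new ArrayList<Vm>();
        for (int i = 0; i < 50; i++) {
            vmList.add(new Vm(i, brokerId, 1200, 1, 1024, 1000, 1024, "Xen",
                    new CloudletSchedulerSpaceShared()));  // or new CloudletSchedulerTimeShared()
        }
        broker.submitVmList(vmList);

        // 300 task units of 1 440 000 MI each (20 min on a 1200 MIPS core),
        // with a 300 kB transfer to and from the data center per task unit.
        UtilizationModel full = new UtilizationModelFull();
        List<Cloudlet> cloudletList = new ArrayList<Cloudlet>();
        for (int i = 0; i < 300; i++) {
            Cloudlet task = new Cloudlet(i, 1440000, 1, 300, 300, full, full, full);
            task.setUserId(brokerId);
            cloudletList.add(task);
        }
        broker.submitCloudletList(cloudletList);
    }
}
```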
After the creation of VMs, task units were submitted in small groups of 50 (one for each VM)
at an inter-arrival delay of 10 min. The VMs were configured to apply both space-shared and time-shared policies for provisioning task units to the processing cores. Figures 11(a) and (b) present
task units’ progress status with the increase in simulation steps (time) for multiple provisioning
policies (space-shared and time-shared). As expected, in the space-shared mode, every task took
20 min to complete, as each had dedicated access to the processing core. In space-shared mode, the arrival of a new task did not have any effect on the tasks under execution; every new task was simply queued for future consideration. However, in the time-shared mode, the execution time
of each task varied with the increase in the number of submitted task units. The time-shared policy for allocating task units to VMs had a significant effect on execution times, as the processing core was heavily context-switched among the active tasks. The first group of 50 tasks had a slightly better
response time as compared with the later groups, primarily because the task units in the later groups had to deal with a comparatively overloaded system (VMs). However, towards the end of the simulation, as the system became less loaded, the response times improved (see Figure 11). These are the expected behaviors for both policies considering the experiment input.
Hence, the results showed that the policies and components of CloudSim are correctly implemented.
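As a back-of-the-envelope check (ours, not part of the reported experiments), the 20 min completion time in the space-shared case follows directly from the task length and the core capacity, and the slowdown in the time-shared case can be approximated by assuming that the 1200 MIPS core is shared evenly among the $k$ task units active at a given instant:

```latex
t_{\mathrm{space}} = \frac{1\,440\,000\ \mathrm{MI}}{1200\ \mathrm{MIPS}} = 1200\ \mathrm{s} = 20\ \mathrm{min},
\qquad
t_{\mathrm{time}} \approx \frac{1\,440\,000\ \mathrm{MI}}{(1200/k)\ \mathrm{MIPS}} = 20k\ \mathrm{min}.
```

This is consistent with the observed behavior in which task units slow down as new groups arrive and speed up again once earlier groups complete.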