We can see that the medium instance achieves better transcoding performance than the small instance, owing to its larger computation capacity.
The latency with the micro instance is unexpectedly
large when the burst interval is longer than 60 seconds,
so we do not collect latencies for even longer burst intervals.
With smaller burst intervals, however, the micro instance performs
even better than the small instance because of its higher CPU power
(Amazon claims the micro instance provides “up to 2 ECUs”).
The large latencies with burst intervals longer than 60 seconds
are caused by memory thrashing on the micro instance: it has less
memory than the other instances, and memory becomes the
bottleneck. In the case of the medium
instance, we also find that the startup latency with 100-second
burst intervals is smaller than that with 90-second bursts;
we believe this is caused by the overhead of load balancing between
its two cores.
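The memory-thrashing hypothesis above could be checked by sampling the kernel's swap counters during a burst. The following sketch is our illustration, not part of the original measurement setup; the counter names follow Linux's /proc/vmstat (pswpin/pswpout), and the 1000 pages/s threshold is an arbitrary assumption for the example.

```python
# Sketch: flag memory thrashing by comparing two samples of the
# swap-in/swap-out counters taken some seconds apart.

def read_swap_counters(vmstat_text):
    """Parse pswpin/pswpout (pages swapped in/out) from /proc/vmstat text."""
    counters = {}
    for line in vmstat_text.splitlines():
        key, _, value = line.partition(" ")
        if key in ("pswpin", "pswpout"):
            counters[key] = int(value)
    return counters

def swap_rate(before, after, interval_s):
    """Total pages swapped per second between two counter samples."""
    delta = (after["pswpin"] - before["pswpin"]) \
          + (after["pswpout"] - before["pswpout"])
    return delta / interval_s

def is_thrashing(rate, threshold=1000.0):
    """Illustrative cutoff: sustained heavy swapping suggests thrashing."""
    return rate > threshold

# Example with synthetic samples taken 10 seconds apart:
before = {"pswpin": 100, "pswpout": 200}
after = {"pswpin": 9100, "pswpout": 7200}
rate = swap_rate(before, after, 10.0)  # (9000 + 7000) / 10 = 1600 pages/s
print(is_thrashing(rate))  # True
```

On a real instance the two samples would come from reading /proc/vmstat before and after a burst; a persistently high rate during transcoding would support the thrashing explanation.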
This suggests that the performance can be further
improved by a more efficient transcoding algorithm targeting
multi-core platforms, which will be part of our future work.