Consider the operation of adding a new element to a vector implemented as a flexible array. The best case occurs when the size of the vector is less than its capacity, because adding a new element amounts to putting it in the first available cell; the cost of adding a new element is thus O(1). The worst case occurs when size equals capacity, so that there is no room for the new element. In this case, new space must be allocated, the existing elements are copied to the new space, and only then can the new element be added to the vector; the cost of adding a new element is then O(size(vector)). Clearly, the latter situation is less frequent than the former, but how much less frequent depends on another parameter, the capacity increment, which determines by how much the vector grows when an overflow occurs.

In the extreme case, the capacity can be incremented by just one cell, so that in a sequence of m consecutive insertions every insertion causes an overflow and requires O(size(vector)) time to finish. Clearly, this situation should be delayed. One solution is to allocate, say, 1 million cells for the vector up front, which in most cases prevents overflow, but the amount of space is excessively large, and only a small percentage of the space allocated for the vector may be expected to be in actual use. Another solution is to double the space allocated for the vector whenever an overflow occurs. In this case, the pessimistic O(size(vector)) cost of an insertion may be expected to occur only infrequently. Using this estimate, it may be claimed that, in the best case, the cost of inserting m items is O(m), but it would be misleading to claim that, in the worst case, it is O(m · size(vector)), because with doubling only a few of the m insertions actually trigger an overflow. Therefore, to see more precisely what impact this behavior has on a sequence of operations, amortized analysis should be used.
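
To make the doubling strategy concrete, the following sketch shows a vector of integers backed by a flexible array that doubles its capacity when an insertion overflows it. The class and member names (IntVector, pushBack, and so on) are chosen only for illustration and are not taken from any particular library; the code is a minimal demonstration of the technique, not a complete implementation (copy control and error handling are omitted).

    // A minimal sketch of a vector of ints backed by a flexible array
    // that doubles its capacity on overflow.
    #include <cstddef>
    #include <iostream>

    class IntVector {
    public:
        IntVector() : data(new int[1]), size(0), capacity(1) {}
        ~IntVector() { delete[] data; }

        // Adding an element costs O(1) when size < capacity; when size equals
        // capacity, a block twice as large is allocated and all existing
        // elements are copied, which costs O(size).
        void pushBack(int value) {
            if (size == capacity) {
                capacity *= 2;                    // double the allocated space
                int* larger = new int[capacity];
                for (std::size_t i = 0; i < size; ++i)
                    larger[i] = data[i];          // copy existing elements: O(size)
                delete[] data;
                data = larger;
            }
            data[size++] = value;                 // put the element in the first free cell
        }

        std::size_t getSize() const { return size; }
        std::size_t getCapacity() const { return capacity; }

    private:
        int* data;
        std::size_t size;      // number of elements actually stored
        std::size_t capacity;  // number of cells currently allocated
    };

    int main() {
        IntVector v;
        for (int i = 0; i < 10; ++i) {
            v.pushBack(i);
            std::cout << "size = " << v.getSize()
                      << ", capacity = " << v.getCapacity() << '\n';
        }
        return 0;
    }

Running this sketch shows that the capacity takes the values 1, 2, 4, 8, 16, ..., so among the first 10 insertions only 4 cause a copy of the existing elements; the remaining insertions cost O(1), which is the situation that amortized analysis makes precise.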