with each other to assign storage keys, each node
maintains a locally unique counter to which it appends its
local site id to generate a globally unique storage key.
Keys in the WS will be consistent with RS storage keys
because we set the initial value of this counter to be one
larger than the largest key in RS.
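This key-assignment scheme can be sketched as follows. The class and method names here are ours, not C-Store's; the sketch simply shows a per-node counter, seeded one past the largest RS key, paired with the node's site id so that no two nodes can ever produce the same key without coordinating.

```python
class StorageKeyGenerator:
    """Hypothetical sketch of per-node storage-key assignment (our
    naming, not C-Store's). Each node pairs a locally unique counter
    with its site id to form a globally unique storage key."""

    def __init__(self, site_id, largest_rs_key):
        self.site_id = site_id
        # Start one larger than the largest key in RS, so WS keys
        # stay consistent with RS storage keys.
        self.counter = largest_rs_key + 1

    def next_key(self):
        # (counter, site id) is unique across all nodes: counters may
        # collide between sites, but the appended site id never does.
        key = (self.counter, self.site_id)
        self.counter += 1
        return key
```

Because uniqueness comes from the site id, nodes never need to exchange messages when assigning keys.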
We are building WS on top of BerkeleyDB [SLEE04];
we use the B-tree structures in that package to support our
data structures. Hence, every record insert results
in a collection of physical inserts on different disk pages,
one per column per projection. To avoid poor
performance, we plan to utilize a very large main memory
buffer pool, made affordable by the plummeting cost per
byte of primary storage. As such, we expect “hot” WS
data structures to be largely main memory resident.
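The fan-out of a single logical insert can be illustrated with a small sketch. This is not C-Store's code: plain dictionaries stand in for the BerkeleyDB B-trees, each keyed by storage key, and the names are assumptions of ours.

```python
def insert_record(projections, storage_key, record):
    """Hypothetical sketch (our naming): one logical record insert
    becomes one physical insert per column per projection.

    `projections` maps a projection name to a dict of per-column
    stores; each per-column dict stands in for a B-tree keyed by
    storage key."""
    physical_inserts = 0
    for column_stores in projections.values():
        for column, btree in column_stores.items():
            if column in record:  # this projection stores the column
                btree[storage_key] = record[column]
                physical_inserts += 1
    return physical_inserts


# Two projections over columns (a, b, c): p1 stores (a, b),
# p2 stores (b, c). One logical insert touches four stores.
projections = {
    "p1": {"a": {}, "b": {}},
    "p2": {"b": {}, "c": {}},
}
n = insert_record(projections, (1001, 7), {"a": 1, "b": 2, "c": 3})
```

With a large buffer pool, these per-column stores would be the "hot" structures one expects to stay memory resident.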
C-Store’s processing of deletes is influenced by our
locking strategy. Specifically, C-Store expects large
numbers of ad-hoc queries with large read sets
interspersed with a smaller number of OLTP transactions
covering few records. If C-Store used conventional
locking, then substantial lock contention would likely be
observed, leading to very poor performance.