XYZ Company currently operates two datacenters. The first is a fully redundant datacenter with protected primary and backup power, along with sufficient network capacity to handle daily replication of important data to an offsite location. It contains all of the standard network infrastructure (1 Gb/s to hosts, 10 Gb/s backbone) along with a large Internet connection (5 Gb/s). The second “datacenter” was built in the back of a flagship XYZ Company store (Store #003) in the 1990s. It sits on a raised floor with proper cooling. While it does not have redundant power, it does have sufficient bandwidth to handle data replication, and it contains network infrastructure appropriate to its workloads. This site is 100 miles from the primary datacenter.
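As a rough illustration of what “sufficient bandwidth to handle data replication” implies, the sketch below estimates how much data a link can move in a 24-hour window at a few line rates. The link speeds and the 70% effective-throughput factor are assumptions chosen for illustration; the case gives only the 5 Gb/s Internet connection at the primary site.

```python
# Back-of-envelope replication sizing. All link speeds and the
# efficiency factor below are assumptions, not case-study figures.

GBIT = 1e9               # bits per gigabit
TB = 1e12                # bytes per terabyte
SECONDS_PER_DAY = 24 * 3600

def tb_per_day(link_gbps: float, efficiency: float = 0.7) -> float:
    """TB transferable in 24 hours at a given line rate.

    efficiency approximates protocol overhead and WAN latency losses.
    """
    bytes_per_day = link_gbps * GBIT / 8 * efficiency * SECONDS_PER_DAY
    return bytes_per_day / TB

for gbps in (1, 5, 10):  # hypothetical replication link speeds
    print(f"{gbps} Gb/s link: ~{tb_per_day(gbps):.1f} TB/day")
```

On these assumptions, even a 1 Gb/s link moves roughly 7 to 8 TB per day, which is consistent with replicating a daily change set of important data rather than the full dataset.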
Because XYZ Company competes on price and relies heavily on analytics, it maintains a significant amount of data for analyzing pricing, advertising, Web, and historical sales information, currently more than 500 TB. ERP and order data account for more than 100 TB, and roughly another 400 TB of unstructured data resides on SAN-based Windows file shares. This storage sprawl consumes a growing share of the budget year over year and must be brought under control, while I/O (retrieval) response times for the most critical ERP and database applications must improve. XYZ Company does not want to lose customers because of slow application response times.
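Summing the stated figures puts the total footprint at roughly 1 PB. The minimal sketch below totals the capacities from the case and compounds a hypothetical 20% annual growth rate (the case does not state a growth rate) to show why unchecked sprawl pressures the budget year over year.

```python
# Total current footprint from the figures stated in the case.
analytics_tb = 500     # price/advertising/Web/historical sales data
erp_order_tb = 100     # ERP and order data
unstructured_tb = 400  # Windows file shares (SAN-based)

total_tb = analytics_tb + erp_order_tb + unstructured_tb
print(f"Current footprint: {total_tb} TB (~{total_tb / 1000:.1f} PB)")

# Hypothetical year-over-year growth; 20% is an assumed rate,
# not a case-study figure, used only to illustrate compounding.
growth = 0.20
for year in range(1, 4):
    total_tb *= 1 + growth
    print(f"Year {year}: ~{total_tb:,.0f} TB")
```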
Today, only the main database infrastructure for order management (Web and store) is protected by high availability native to the operating system. This was built and implemented by a vendor and has not been changed or tested in years. Only active order data is replicated to the secondary site, and all recovery is performed manually, which must change to meet the Board of Directors' requirements. This is a major gap in the XYZ Company environment.