SANs benefit from the ability to move file traffic off the main network. The
network can be tuned for the file service’s particular needs: low latency and
high speed. The SAN is isolated from other networks, which gives it a security
advantage.
Sites were building their own versions of SANs long before anyone knew
to call them that, using multiple fiber-optic interfaces on key fileservers and
routing all traffic via the high-speed interfaces dedicated to storage. Christine
and Strata were coworkers at a site that was an early adopter of this concept.
The server configurations had to be done by hand, with a bit of magic in the
automount maps and in the local host and DNS entries, but the performance
was worth it.
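A setup like the one described above might use an automount map whose entries point at hostnames bound to the fileserver's dedicated storage interface. As a sketch (the map path and hostnames here are hypothetical, not from the original site):

```
# /etc/auto.home — indirect automount map for home directories.
# "fs1-san" resolves (via local host table or DNS) to the address of
# fs1's dedicated high-speed storage interface; "fs1" resolves to its
# general-purpose interface. Clients mounting via fs1-san send all
# NFS traffic over the storage network.
alice    -rw,hard,intr    fs1-san:/export/home/alice
bob      -rw,hard,intr    fs1-san:/export/home/bob
```

The "magic" is simply that the host and DNS entries steer the mounts onto the fast interfaces without any change to the NFS configuration itself.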
SANs have been so useful that people have started to consider other
ways in which storage devices might be networked. One way is to treat other
networks as if they were direct cabling. Each SCSI command is encapsulated
in a packet and sent over a network. Fibre channel (FC) does this using copper
or fiber-optic networks. The fibre channel becomes an extended SCSI bus,
and devices on it must follow normal SCSI protocol rules. The success of
fibre channel and the availability of cheap, fast TCP/IP network equipment
have led to the creation of iSCSI, which sends essentially the same packets
over an IP network. This allows SCSI devices, such as tape libraries, to be part of a SAN
directly. ATA over Ethernet (AoE) does something similar for ATA-based
disks.
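The core idea behind all of these protocols is the same: take an unmodified SCSI command descriptor block (CDB), prepend a transport header, and ship it over the network. The sketch below illustrates this with a toy two-byte length-prefix header; it is not the real iSCSI PDU format (iSCSI uses a 48-byte Basic Header Segment), just a minimal illustration of encapsulation and decapsulation.

```python
import struct

def encapsulate(cdb: bytes) -> bytes:
    """Wrap a SCSI CDB in a toy transport packet: a 2-byte big-endian
    length prefix followed by the CDB itself. Real iSCSI headers carry
    much more (task tags, data segment lengths, etc.)."""
    return struct.pack("!H", len(cdb)) + cdb

def decapsulate(packet: bytes) -> bytes:
    """Recover the original CDB from a toy transport packet."""
    (length,) = struct.unpack("!H", packet[:2])
    return packet[2:2 + length]

# A SCSI READ(10) CDB: opcode 0x28, flags, 4-byte LBA (0),
# group number, 2-byte transfer length (8 blocks), control byte.
read10 = struct.pack("!BBIBHB", 0x28, 0, 0, 0, 8, 0)

packet = encapsulate(read10)
assert decapsulate(packet) == read10  # the CDB survives the round trip
```

Because the CDB crosses the network untouched, the device on the far end behaves exactly as if it were on a local SCSI bus, which is why tape libraries and disks can join a SAN without modification.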
With advances in high-speed networking and the affordability of the
equipment, protocol encapsulations requiring a responsive network are now
feasible in many cases. We expect to see the use of layered network storage
protocols, along with many other types of protocols, increase in the future.
Since a SAN is essentially a network with storage, SANs are not limited
to one facility or data center. Using high-speed networking technologies,
such as ATM or SONET, a SAN can be “local” to multiple data centers at
different sites.