As hard drives get denser, the cost of raw storage is getting ridiculously cheap – well under a dollar per gigabyte as I write. The cost of managed storage, however, is an entirely different story.

Managed storage is the kind required for “enterprise applications”, i.e. when money is involved. It builds on raw storage by adding redundancy, the ability to hot-swap drives, and the ability to add capacity without disruption. At the higher end of the market, additional manageability features include fault tolerance, the ability to take “snapshots” of data for backup purposes, and remote data mirroring for disaster recovery.

Traditionally, managed storage has been more expensive than raw disk by a factor of at least two, sometimes an order of magnitude or more. When I started my company in 2000, for instance, we paid $300,000, almost half of our initial capital investment, for a pair of clustered Network Appliance F760 filers with a total disk capacity of 600GB or so ($500/GB, at a time when raw disk drives cost $10/GB). The investment was well worth it: these machines have proven remarkably reliable, and the Netapps’ snapshot capability is vital for us. It allows us to take instantaneous snapshots of our Oracle databases, which we can then back up during a leisurely backup window, without having to keep Oracle in its performance-sapping backup mode the whole time.
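
For the curious, here is a rough Python sketch of what that snapshot-then-backup dance looks like. The tablespace name, filer hostname, volume name and paths are all made up for illustration, and run() is just a thin wrapper around subprocess; the hot-backup SQL and the filer’s snap create command are the standard tools for the job.

    import subprocess

    def run(cmd):
        # Hypothetical helper: run a shell command, raise if it fails.
        subprocess.check_call(cmd, shell=True)

    # Put the tablespace into hot-backup mode (name is illustrative).
    run("echo 'ALTER TABLESPACE users BEGIN BACKUP;' | sqlplus -s '/ as sysdba'")

    # Take a near-instantaneous snapshot on the filer; 'filer1' and
    # 'oradata' are assumptions.
    run("rsh filer1 snap create oradata nightly")

    # Take the database out of backup mode; the disruptive part is over.
    run("echo 'ALTER TABLESPACE users END BACKUP;' | sqlplus -s '/ as sysdba'")

    # The slow copy to backup media can now run at leisure against the
    # frozen .snapshot directory instead of the live database files.
    run("cp -r /oradata/.snapshot/nightly /backup/nightly")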

Web serving workloads and the like can easily be distributed across farms of inexpensive rackmount x86 servers, an architecture pioneered by ISPs. Midrange servers (up to 4 processors), pretty much commodities nowadays, are adequate for all but the very highest transaction volume databases. Storage and databases are the backbone of any information system, however, and a CIO cannot afford to take any risks with them. That is why storage represents such a high proportion of hardware costs for most IT departments, and why specialists like EMC have the highest profit margins in the industry.

Most managed storage is networked, i.e. it does not consist of hard drives directly attached to a server, but of disks attached to a specialized storage appliance connected to the server with a fast interconnect. There are two schools:

  • Network-Attached Storage (NAS), like our Netapps, that basically act as network file servers using common protocols like NFS (for UNIX) and SMB (for Windows). These are more often used for midrange applications and unstructured data, and connect over inexpensive Ethernet networks (Gigabit Ethernet, in our case) that every network administrator is familiar with. NAS devices are available for home or small office use at prices of $500 and up.
  • Storage Area Networks (SAN) offer a block-level interface: they behave like virtual hard drives that serve fixed-size blocks of data, without any understanding of what is in them (the sketch after this list illustrates the difference). They currently use Fibre Channel, a fast, low-latency interconnect that is unfortunately also terribly expensive (FC switches are over ten times more expensive than equivalent Gigabit Ethernet gear). The cost of setting up a SAN usually limits them to high-end, mainframe-class data centers. Exotic cluster filesystems or databases like Oracle RAC are needed if multiple servers are going to access the same data.
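
The distinction between the two schools is easy to see in code. Here is a minimal Python sketch of the two access models; the device path and mount point are assumptions (on Linux, a SAN LUN typically shows up as something like /dev/sdb, while a NAS share is just a mounted filesystem).

    import os

    BLOCK_SIZE = 512  # SAN devices deal in fixed-size blocks

    def read_block(device_path, lba):
        # Block-level (SAN-style) access: fetch one raw block by its
        # logical block address; making sense of the bytes is entirely
        # the host's problem.
        fd = os.open(device_path, os.O_RDONLY)
        try:
            os.lseek(fd, lba * BLOCK_SIZE, os.SEEK_SET)
            return os.read(fd, BLOCK_SIZE)
        finally:
            os.close(fd)

    # File-level (NAS-style) access, for contrast: the file server
    # understands files and directories, and the client just asks for
    # them by name.
    with open("/mnt/nas/report.txt") as f:
        data = f.read()

    block0 = read_block("/dev/sdb", 0)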

One logical way to lower the cost of SANs is to use inexpensive Ethernet connectivity. This was recently standardized as iSCSI, which is essentially SCSI running on top of TCP/IP. I recently became aware of Ximeta, a company that makes external drives that apparently implement iSCSI, at a price very close to that of raw disks (since iSCSI does not have to manage state for clients the way a more full-featured NAS does, Ximeta can forgo expensive CPUs and RAM and use a dedicated ASIC instead).
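
To make the idea concrete, here is a toy Python sketch of block reads over a plain TCP socket. This is not real iSCSI framing (iSCSI wraps actual SCSI commands in its own PDU format); it only illustrates why serving blocks over commodity TCP/IP needs so little intelligence on the server side.

    import socket
    import struct

    BLOCK_SIZE = 512

    def read_remote_block(host, port, lba):
        # Toy request format (not iSCSI): opcode 1 = READ, followed by
        # a 64-bit logical block address, in network byte order.
        s = socket.create_connection((host, port))
        try:
            s.sendall(struct.pack("!BQ", 1, lba))
            buf = b""
            while len(buf) < BLOCK_SIZE:
                chunk = s.recv(BLOCK_SIZE - len(buf))
                if not chunk:
                    raise IOError("server closed connection")
                buf += chunk
            return buf
        finally:
            s.close()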

The Ximeta hardware is not a complete solution on its own: the driver software manages the metadata for the cluster of networked drives, such as the information that allows multiple drives to be concatenated to add capacity while keeping up the illusion of a single virtual disk. The driver is also responsible for RAID, although Windows, Mac OS X and Linux all have volume managers capable of handling that. There are apparently some Windows-only provisions to allow multiple computers to share a drive, but I doubt they constitute a full-blown clustered filesystem. There are very few real-world cases in the target market where anything more than a cold standby is required, and it makes a lot more sense to designate one machine to share a drive for the others on the network.
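
To see what that metadata amounts to, here is a minimal Python sketch of the address mapping a concatenation driver has to maintain. This is a guess at the general technique, not Ximeta’s actual format:

    class ConcatenatedDisk:
        # Present a chain of member drives as one virtual block device.

        def __init__(self, drive_sizes):
            # drive_sizes: capacity of each member drive, in blocks.
            self.drive_sizes = drive_sizes

        def locate(self, virtual_lba):
            # Map a virtual block address to (drive_index, local_lba).
            for i, size in enumerate(self.drive_sizes):
                if virtual_lba < size:
                    return i, virtual_lba
                virtual_lba -= size
            raise ValueError("address beyond end of virtual disk")

    # Drives of 100, 200 and 50 blocks look like one 350-block device;
    # virtual block 250 lives at offset 150 on the second drive.
    disk = ConcatenatedDisk([100, 200, 50])
    assert disk.locate(250) == (1, 150)

Adding a drive to the stack is then little more than appending an entry to that table, which is exactly the kind of disruption-free capacity growth managed storage is supposed to provide.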

I think this technology is very interesting and has the potential to finally make SANs affordable for small businesses, as well as for individuals (imagine extending the capacity of a TiVo by simply adding networked drives in a stack). Disk-to-disk backups are replacing sluggish and relatively low-capacity tape drives, and these devices are interesting for that purpose as well.