In just a few short years storage virtualisation, also known as block virtualisation, has proven its worth in the large enterprise, having travelled that well-worn path from pricey boutique solution to affordable commodity. As a standard feature in all but the most modest mid-tier storage arrays, storage virtualisation soothes a wide range of storage management woes for small and mid-size organisations. At the same time, dedicated solutions from top-tier vendors deliver the greatest ROI to large shops managing large SANs with intense data availability requirements.
Storage virtualisation creates an abstraction layer between host and physical storage that masks the idiosyncrasies of individual storage devices. When implemented in a SAN, it provides a single management point for all block-level storage. To put it simply, storage virtualisation pools physical storage from multiple, heterogeneous network storage devices and presents a set of virtual storage volumes for hosts to use.
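The pooling and mapping described above can be sketched in a few lines of code. The model below is purely illustrative: a virtual volume is an ordered list of extents, each pointing at a region on some physical array, and the virtualisation layer translates virtual block addresses into physical ones. The array names, extent size and class are hypothetical, not any vendor's API.

```python
# Minimal sketch of block virtualisation: a virtual volume is a list of
# extents, each mapped onto a (physical array, starting block) pair.
# Array names and the extent size are made up for illustration.

EXTENT_BLOCKS = 1024  # blocks per extent in this toy model

class VirtualVolume:
    def __init__(self, name):
        self.name = name
        self.extents = []  # ordered list of (array_id, phys_start)

    def add_extent(self, array_id, phys_start):
        self.extents.append((array_id, phys_start))

    def resolve(self, virt_block):
        """Translate a virtual block address to (array, physical block)."""
        idx, offset = divmod(virt_block, EXTENT_BLOCKS)
        array_id, phys_start = self.extents[idx]
        return array_id, phys_start + offset

# Pool extents from two heterogeneous arrays into one virtual volume.
vol = VirtualVolume("vol0")
vol.add_extent("array_a", 0)
vol.add_extent("array_b", 4096)

print(vol.resolve(100))    # -> ('array_a', 100): lands on the first array
print(vol.resolve(1500))   # -> ('array_b', 4572): spills into the second
```

The host sees only `vol0`; which array actually services a given block is the virtualisation layer's business, which is what makes the migration and pooling features discussed below possible.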
In addition to creating storage pools composed of physical disks from different arrays, storage virtualisation provides a wide range of services, delivered in a consistent way. These range from basic volume management, including LUN (logical unit number) masking, concatenation, and volume grouping and striping; through thin provisioning, automatic volume expansion and automated data migration; to data protection and disaster recovery functionality, including snapshots and mirroring. In short, virtualisation solutions can serve as a central control point for enforcing storage management policies and achieving higher SLAs.
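To make one of these services concrete, thin provisioning amounts to allocate-on-first-write: the host is shown a large virtual volume, but physical capacity is consumed only when a virtual block is actually written. The toy model below illustrates the idea; it is not any product's implementation.

```python
# Toy thin-provisioning model: physical blocks are allocated lazily,
# on the first write to a virtual block. Purely illustrative.

class ThinVolume:
    def __init__(self, virtual_size):
        self.virtual_size = virtual_size   # blocks advertised to the host
        self.mapping = {}                  # virt_block -> phys_block
        self.next_phys = 0                 # next free physical block

    def write(self, virt_block, data):
        if virt_block not in self.mapping:     # allocate on first write
            self.mapping[virt_block] = self.next_phys
            self.next_phys += 1
        # ... data would be written at self.mapping[virt_block] ...

    def allocated(self):
        return len(self.mapping)

vol = ThinVolume(virtual_size=1_000_000)  # host sees a million blocks
for blk in (0, 1, 500_000):
    vol.write(blk, b"x")
print(vol.allocated())  # -> 3: only three physical blocks consumed
```

The gap between `virtual_size` and `allocated()` is exactly the over-provisioning headroom that makes thin provisioning attractive, and the reason real implementations must monitor pool consumption carefully.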
Perhaps the most important service enabled by block-level virtualisation is non-disruptive data migration. For large organisations, moving data is a near-constant fact of life. As old equipment comes off lease and new gear is brought online, storage virtualisation enables the migration of block-level data from one device to another without an outage. Storage administrators are free to perform routine maintenance or replace aging arrays without interfering with applications and users. Production systems keep chugging along.
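A common way to achieve that non-disruptive migration is a background copy with write mirroring: blocks are copied from the old device to the new one while host I/O continues, and any write that lands on an already-copied block is mirrored to the new device so the copy stays coherent at cutover. The sketch below is a simplified model of that technique, with made-up data, not a description of any specific product.

```python
# Toy model of non-disruptive migration: copy blocks old -> new in the
# background while host writes keep arriving; writes to already-copied
# blocks are mirrored to the new device so the copy stays coherent.

def migrate(old, new, writes_during_copy):
    copied = set()
    for blk in sorted(old):                       # background copy pass
        new[blk] = old[blk]
        copied.add(blk)
        # host writes that arrive right after this block was copied
        for wblk, data in writes_during_copy.pop(blk, []):
            old[wblk] = data                      # host I/O continues on old
            if wblk in copied:
                new[wblk] = data                  # mirror to keep new coherent
            # blocks not yet copied pick up the write in a later pass
    return new                                    # cutover: remap volume to new

old = {0: "a", 1: "b", 2: "c"}
# after block 0 is copied, the host overwrites block 0 and block 2
writes = {0: [(0, "A"), (2, "C")]}
print(migrate(old, {}, writes))  # -> {0: 'A', 1: 'b', 2: 'C'}
```

Block 0's new value is mirrored because it was already copied; block 2's new value is simply picked up when the copy loop reaches it. Either way, the new device ends up consistent without the host ever pausing.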
Four architectural approaches
In a virtualised SAN fabric, there are four ways to deliver storage virtualisation services: in-band appliances, out-of-band appliances, a hybrid approach called split path virtualisation architecture and controller-based virtualisation. Regardless of architecture, all storage virtualisation solutions must do three essential things: maintain a map of virtual disks and physical storage, as well as other configuration metadata; execute commands for configuration changes and storage management tasks; and of course transmit data between hosts and storage. The four architectures differ in the way they handle these three separate paths or streams – the metadata, control and data paths – in the I/O fabric. The differences hold implications for performance and scalability.
An in-band appliance processes the metadata, control, and data path information all in a single device. In other words, the metadata management and control functions share the data path. This represents a potential bottleneck in a busy SAN, because all host requests must flow through a single control point. In-band appliance vendors have addressed this potential scalability issue by adding advanced clustering and caching capabilities to their products. Many of these vendors can point to large enterprise SAN deployments that showcase their solution's scalability and performance. Examples of the in-band approach include DataCore SANsymphony, FalconStor IPStor, and IBM SAN Volume Controller.
An out-of-band appliance pulls the metadata management and control operations out of the data path, offloading these to a separate compute engine. The hitch is that software agents must be installed on each host. The job of the agent is to pluck the metadata and control requests from the data stream and forward them to the out-of-band appliance for processing, freeing the host to focus exclusively on transferring data to and from storage. The sole provider of an out-of-band appliance is LSI Logic, whose StoreAge product can be deployed in either out-of-band or split path mode.
A split path system leverages the port-level processing capabilities of an intelligent switch to offload the metadata and control information from the data path. Unlike an out-of-band appliance, in which the paths are split at the host, split path systems split the data and the control paths in the network at the intelligent device. Split path systems forward the metadata and control information to an out-of-band compute engine for processing and pass the data path information on to the storage device. Thus, split path systems eliminate the need for host-level agents.
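The split-path idea can be sketched as a classifier sitting at the switch port: read and write traffic stays on the fast in-band path to storage, while everything else is forwarded to the out-of-band control engine. The operation names and classes below are simplified stand-ins for real SCSI commands, invented for illustration.

```python
# Sketch of split-path dispatch at an intelligent switch port.
# Data operations pass straight through to storage; control and
# metadata operations go to the out-of-band compute engine.

DATA_OPS = {"READ", "WRITE"}

def dispatch(command, control_engine, storage):
    if command["op"] in DATA_OPS:
        return storage.handle(command)        # fast path, stays in band
    return control_engine.handle(command)     # slow path, out of band

class Recorder:
    """Stand-in for a device that just records what it was asked to do."""
    def __init__(self, name):
        self.name, self.seen = name, []
    def handle(self, cmd):
        self.seen.append(cmd["op"])
        return self.name

engine, array = Recorder("engine"), Recorder("array")
for op in ("READ", "SNAPSHOT", "WRITE", "EXPAND_VOLUME"):
    dispatch({"op": op}, engine, array)

print(array.seen)   # -> ['READ', 'WRITE']
print(engine.seen)  # -> ['SNAPSHOT', 'EXPAND_VOLUME']
```

Because the split happens in the network rather than on the host, no agent is needed on the servers, which is the architecture's main selling point.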
Typically, split path virtualisation software will run in an intelligent switch or a purpose-built appliance. Providers of split path virtualisation solutions include EMC (Invista), Incipient and LSI Logic (StoreAge SVM).
Array controllers have long been the most common layer at which virtualisation services are deployed. However, controllers typically have virtualised only the physical disks internal to the storage system. This is changing. A twist on the old approach is to deploy the virtualisation intelligence on a controller that can virtualise both internal and external storage. Like the in-band appliance approach, the controller processes all three paths: data, control and metadata. The primary example of this new style of controller-based virtualisation is the Hitachi Universal Storage Platform.
Just as block virtualisation simplifies SAN management, file virtualisation eliminates much of the complexity and limitations associated with enterprise NAS systems. We all recognise that the volume of unstructured data is exploding and that IT has little visibility into or control over that data. File virtualisation offers an answer.
File virtualisation abstracts the underlying specifics of the physical file servers and NAS devices and creates a uniform namespace across those physical devices. A namespace is simply a fancy term referring to the hierarchy of directories and files and their corresponding metadata. Typically with a standard file system such as NTFS, a namespace is associated with a single machine or file system. By bringing multiple file systems and devices under a single namespace, file virtualisation provides a single view of directories and files and gives administrators a single control point for managing that data.
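A uniform namespace can be pictured as a single lookup table mapping logical paths to (physical server, local path) pairs, resolved by longest-prefix match. The server names and paths below are invented for illustration; real network file managers implement this at the protocol level, not as a Python dictionary.

```python
# Sketch of a virtualised file namespace: one logical directory tree
# whose entries resolve to (physical server, local path) pairs.
# Server and path names are hypothetical.

namespace = {
    "/corp/finance": ("nas-01", "/exports/finance"),
    "/corp/eng":     ("nas-02", "/vol/engineering"),
    "/corp/eng/sim": ("nas-03", "/fast/simulations"),  # hot data, faster tier
}

def resolve(logical_path):
    """Longest-prefix match, then rewrite to the physical location."""
    best = max((p for p in namespace if logical_path.startswith(p)), key=len)
    server, phys = namespace[best]
    return server, phys + logical_path[len(best):]

print(resolve("/corp/eng/sim/run42.dat"))  # -> ('nas-03', '/fast/simulations/run42.dat')
print(resolve("/corp/finance/q3.xls"))     # -> ('nas-01', '/exports/finance/q3.xls')
```

The payoff is that moving `/corp/eng` to a new filer means updating one table entry; clients keep using the same logical paths, which is exactly the non-disruptive migration benefit described below.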
Many of the benefits will sound familiar. Like storage virtualisation, file virtualisation can enable the non-disruptive movement and migration of file data from one device to another. Storage administrators can perform routine maintenance of NAS devices and retire old equipment without interrupting users and applications.
File virtualisation, when married with clustering technologies, also can dramatically boost scalability and performance. A NAS cluster can deliver throughput (MBps) and IOPS several orders of magnitude higher than a single NAS device. High performance computing applications, such as seismic processing, video rendering, and scientific research simulations, rely heavily on file virtualisation technologies to deliver scalable data access.
Three architectural approaches
File virtualisation is still in its infancy. As always, different vendors' approaches are optimally suited for different usage models, and no one size fits all. Broadly speaking, you'll find three different approaches to file virtualisation in the market today: platform-integrated namespaces, clustered-storage derived namespaces and network-resident virtualised namespaces.
Platform-integrated namespaces are extensions of the host file system. They provide a platform-specific means of abstracting file relationships across machines on a specific server platform. These types of namespaces are well suited for multisite collaboration, but they tend to lack rich file controls and of course they are bound to a single file system or OS. Examples include Brocade StorageX, NFS v4, and Microsoft Distributed File System (DFS).
Clustered storage systems combine clustering and advanced file system technology to create a modularly expandable system that can serve ever-increasing volumes of NFS and CIFS requests. A natural outgrowth of these clustered systems is a unified, shared namespace across all elements of the cluster. Clustered storage systems are ideally suited for high performance applications and to consolidate multiple file servers into a single, high-availability system. Vendors here include Exanet, Isilon, Network Appliance (Data ONTAP GX) and HP (PolyServe).
Network-resident virtualised namespaces are created by network-mounted devices (commonly referred to as network file managers) that reside between the clients and NAS devices. Essentially serving as routers or switches for file-level protocols, these devices present a virtualised namespace across the file servers on the back end and route all NFS and CIFS traffic between clients and storage. NFM devices can be deployed in band (F5 Networks) or out of band (EMC Rainfinity). Network-resident virtualised namespaces are well suited for tiered storage deployments and other scenarios requiring non-disruptive data migration.
File and block storage virtualisation may be IT's best chance of alleviating the pain associated with the ongoing data tsunami. By virtualising block and file storage environments, IT can gain greater economies of management and implement centralised policies and controls over heterogeneous storage systems. The road to adoption of these solutions has been long and difficult, but these technologies are finally catching up to our needs. You will find the current crop of file and block virtualisation solutions to be well worth the wait.