Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Storage Abstraction, and What it Means to You

by Damian 22. November 2011 04:46

I felt compelled to write a little bit about this subject after recently reading about some new updates to software SANs. The glamour of virtual platform layers and the Cloud has somewhat overshadowed all of the virtualization already occurring within storage, and the extra levels that get added “below decks”. It’s a topic meriting some scrutiny from any storage administrator committed to high performance.

Outside of the physical data store itself, every element of the I/O path above it is virtual. It should also be noted that at essentially each step along this I/O path, infrastructure customization and proprietary technologies can (and often do) vary or add new virtual layers to the process. All of these logical abstractions have evolved from various sources in the storage ecosystem in order to drive scalability and agile responses to disaster or growth.

Let’s consider a common hypothetical path that an I/O request takes from a Windows client VM to the physical data store in a modern infrastructure. In this example, the storage for the client VM is a virtual RAID 5 built from LUNs on a SAN. An I/O request originating at the top OS level, Windows in this case, will pass through these underlying levels before reaching the actual physical storage device: Windows > volume manager > virtual RAID > SAN LUN > physical store (with the possibility of additional abstraction levels depending on storage customization).
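To make the layering concrete, here is a minimal sketch, purely illustrative and not tied to any vendor’s API, of a read request passing down that stack; the layer names and request fields are assumptions for the example.

```python
# A minimal sketch (not any vendor's API) modeling the chain of virtual
# layers a single read request passes through on its way to physical media.
class Layer:
    def __init__(self, name, below=None):
        self.name = name
        self.below = below           # the next abstraction layer down

    def handle(self, request, path=None):
        path = (path or []) + [self.name]
        if self.below is None:       # physical store: the request terminates here
            return path
        return self.below.handle(request, path)

# Build the hypothetical stack from the example, bottom up.
physical = Layer("physical store")
san_lun  = Layer("SAN LUN", physical)
raid     = Layer("virtual RAID 5", san_lun)
vol_mgr  = Layer("volume manager", raid)
windows  = Layer("Windows (NTFS)", vol_mgr)

print(" > ".join(windows.handle({"op": "read", "lba": 4096, "len": 64 * 1024})))
# Windows (NTFS) > volume manager > virtual RAID 5 > SAN LUN > physical store
```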

Based upon how the storage infrastructure has been established in this scenario, there is a virtual RAID 5 implemented above the SAN LUN layer. That being the case, the volume manager directs the request to the virtual RAID beneath it. Because of striping, I/O at this stage can end up fractured (intentionally) by the RAID, depending on how the array has been provisioned. The I/O path has now become distributed, and may be even further replicated on its way to physical storage.
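The sketch below shows how that fracturing happens in principle: a single logical request is carved into per-disk chunks along stripe-unit boundaries. The stripe size and disk count are assumptions, and parity rotation is ignored for simplicity.

```python
# Illustrative only: how one logical request is fractured into per-disk
# chunks by striping. Stripe size and disk count are assumed values.
STRIPE_SIZE = 64 * 1024          # 64 KB stripe unit (assumed)
DATA_DISKS  = 4                  # RAID 5 data disks, parity rotation ignored

def stripe(offset, length):
    """Yield (disk, disk_offset, chunk_length) tuples for one logical I/O."""
    end = offset + length
    while offset < end:
        stripe_index = offset // STRIPE_SIZE
        within       = offset % STRIPE_SIZE
        chunk        = min(STRIPE_SIZE - within, end - offset)
        disk         = stripe_index % DATA_DISKS
        yield disk, (stripe_index // DATA_DISKS) * STRIPE_SIZE + within, chunk
        offset += chunk

# A single 256 KB read at offset 96 KB lands on multiple member disks.
for disk, disk_off, chunk in stripe(96 * 1024, 256 * 1024):
    print(f"disk {disk}: offset {disk_off}, {chunk} bytes")
```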

The RAID sends its request to the SAN LUN below, another abstraction above the physical storage itself that slices the store into basic logical units. The SAN LUN layer completes the request directly against the physical storage, and the data is then returned along the same route to the requester.

Now, numerous solutions exist for managing communication and throughput within this data pipeline. Administrators can tailor their RAID presentation, ensure partition alignment, upgrade the underlying hardware, or even add new software abstraction layers intended to organize data better at lower levels. However, an interesting point emerges on review.

None of these solutions addresses the most basic, and one of the most critical, vulnerabilities in the ecosystem’s performance: ensuring that the file request is as sequential and rapid as possible at the point of origin. Whether the OS is virtualized, as is so common today, or installed over direct-attached storage, Windows read and write performance is degraded by file and free space fragmentation at this top level, because fragmentation causes more I/O requests to be issued. Every request that travels through all of the abstraction layers meets its first bottleneck at the outset, in how contiguous the file arrangement is within Windows. Optimizing reads and writes at this upper level helps ensure, in most cases, the fastest I/O path no matter how much or how little storage abstraction has been structured beneath.
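A toy example of that multiplier effect: the extent lists below are made up, but they show how the same megabyte of data costs one request when contiguous and five when fragmented.

```python
# A toy illustration of why fragmentation multiplies I/O: a file stored in
# one extent needs one request, while the same file split into several
# extents needs one request per extent (extent lists below are made up).
def read_requests(extents):
    """Each (start_lba, length_in_sectors) extent requires its own I/O request."""
    return [("READ", start, length) for start, length in extents]

contiguous = [(10_000, 2048)]                               # one 1 MB extent (512-byte sectors)
fragmented = [(10_000, 256), (52_300, 256), (81_750, 512),  # same 1 MB in five pieces
              (90_020, 512), (120_400, 512)]

print(len(read_requests(contiguous)), "request for the contiguous file")
print(len(read_requests(fragmented)), "requests for the fragmented file")
```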

 

Fragmentation on a SAN
 

In a recent white paper, Diskeeper Corporation tested a variety of I/O metrics over SAN storage with and without file fragmentation being intelligently prevented and handled. In one such test, Iometer (an open-source I/O measurement tool) showed an improvement of over 200% in IOPS (I/Os per second) after Diskeeper 2011 had handled Windows volume fragmentation. Testing was performed on a SAN connected to a Windows Server 2008 R2 virtual host:

SAN Fragmentation Test Results
 

You can read the entire white paper here: http://downloads.diskeeper.com/pdf/improve-san-performance.pdf 


Space Reclamation, Above and Below

by Damian 7. November 2011 09:29

Thin provisioning is a fairly hot topic in the storage arena, and with good reason. Many areas of the business and enterprise space see massive benefit from the scalability of thin provisioning, and it can be a cost saver besides. However, thin provisioning suffers some unique maladies at both the client and storage levels.

Some storage arrays include a feature permitting thin provisioning for their LUNs. This storage-layer thin provisioning occurs below the virtual platform storage stack, and essentially means scalable datastores. Horizontal scaling of datastores adds a new tier of agility to the storage ecosystem that some businesses absolutely require.

LUN thin provisioning shouldn’t be confused with virtual disk TP, which works at the file level rather than the array level. Thin-provisioned VMs can expand based on pre-determined use cases, adding an extra degree of flexibility to storage density. Intelligently combining TP at multiple tiers yields some pretty neat capacity results.

Datastore thin provisioning has been the source of some concern for storage administrators with regard to recovering from over-provisioning. When virtual disks are deleted or copied away from a datastore, the array itself is never informed that those storage blocks are now free. You can see how this can lead to needless storage consumption.

vSphere 5 from VMware introduced a solution for this issue. The new vSphere Storage APIs for Array Integration (VAAI) primitive for TP uses the SCSI UNMAP command to tell the storage array that space previously occupied by a VM can be reclaimed. This addresses one aspect of the issue with thin VM growth.
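The model below is a conceptual sketch of that behavior, not the SCSI protocol itself: a thin LUN keeps blocks allocated after a VM is deleted until it receives an explicit UNMAP-style hint. The class and block numbers are invented for illustration.

```python
# A conceptual model (not the SCSI protocol itself) of why deletions alone
# don't shrink a thin LUN: the array keeps blocks allocated until it is
# explicitly told, via something like UNMAP, that they are reclaimable.
class ThinLun:
    def __init__(self):
        self.allocated = set()       # blocks the array has backed with real capacity

    def write(self, blocks):
        self.allocated |= set(blocks)

    def unmap(self, blocks):
        # Only an explicit UNMAP-style hint lets the array return capacity to the pool.
        self.allocated -= set(blocks)

lun = ThinLun()
lun.write(range(0, 1000))            # a VM's virtual disk lands on the LUN
print("allocated after write:", len(lun.allocated))

# Deleting the VM at the datastore level, by itself, changes nothing on the array...
print("allocated after delete (no UNMAP):", len(lun.allocated))

# ...until the platform issues UNMAP for the freed blocks (what VAAI for TP enables).
lun.unmap(range(0, 1000))
print("allocated after UNMAP:", len(lun.allocated))
```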

Files are not simply written to a virtual disk; they are also deleted with regularity. Unfortunately, there is no associated feature within virtual platforms or Windows to inform the storage array that blocks can be recovered from a thin disk that should have shrunk after deletions. Similar to the issue above, this leads to unnecessary storage waste.

With the release of V-locity 3 in 2011, we introduced a new Automatic Space Reclamation engine. This engine automatically zeroes out “dead” free space within thin virtual disks, without requiring that they be taken offline and with no impact on resource usage. So what does this mean? Thin VMs can be compacted, actually reclaiming the deleted space to the storage array for dynamic use elsewhere. The thin virtual disks themselves are kept slimmed down within datastores, giving more control back to the storage admins governing provisioning.
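Here is a simplified sketch of the zero-fill idea in the abstract, assuming a thin disk whose stale blocks are overwritten with zeros so a zero-detecting lower layer can reclaim them; it models the concept only and is not V-locity’s actual engine.

```python
# A simplified sketch of the idea behind zero-filling "dead" free space:
# blocks that once held deleted data are overwritten with zeros so a
# zero-detecting thin array (or a compaction pass) can reclaim them.
# This models the concept only; V-locity's actual engine is not shown.
disk = {                             # block -> contents inside a thin virtual disk
    0: b"boot", 1: b"data", 2: b"old-deleted-data", 3: b"old-deleted-data",
}
filesystem_free = {2, 3}             # the guest file system knows these blocks are free

def zero_dead_space(disk, free_blocks):
    for block in free_blocks:
        disk[block] = b"\x00" * len(disk[block])   # write zeros over stale data

zero_dead_space(disk, filesystem_free)

# The lower layers can now treat all-zero blocks as reclaimable.
reclaimable = [b for b, data in disk.items() if set(data) == {0}]
print("blocks the array can reclaim:", reclaimable)
```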

Space Reclamation with V-locity

You can read more about VAAI for TP in vSphere 5 on the VMware blog here.

Tags: virtualization | VMware | Windows 7
