Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Evaluating IntelliWrite In Your Environment

by Damian 1. March 2012 10:18

IntelliWrite technology has been around for about two years now and has optimized millions of systems worldwide. It integrates seamlessly with Windows, delivering optimized writes at the moment of initial I/O, with no need for additional, after-the-fact file movement. What does that translate to? Actual fragmentation prevention.

Interestingly, we do occasionally get asked how it holds up against modern storage technologies:

“Don’t the latest SANs optimize themselves?”

“Do I really need this on my VMs? They aren’t physical hard drives, you realize…”

Or even…

“I don’t need to defragment my SAN-hosted VMs.”

Now, there are some factors which must be considered when you’re looking at optimizing I/O in your infrastructure:

  • I/O from Windows consists of reads and writes abstracted from a higher layer, even when Windows runs directly on a bare-metal disk.
  • Due to the way current Windows file systems are structured, I/O can be greatly constrained by file fragmentation—no matter what storage lies underneath it.
  • Fragmentation in Windows means more I/O requests from Windows. Even if files are stored perfectly contiguously at the SAN level, Windows must still issue a separate request for each fragment it sees at its own level.
  • File fragmentation is not the same as block-level (read: SAN-level) fragmentation. Many SAN utilities resolve issues of block-level fragmentation admirably; these do not address file fragmentation.
  • Finally, and as noted above, IntelliWrite prevents fragmentation in real time by improving on Windows’ “Best Fit” file write logic. This means solving file fragmentation with no additional writes that could interfere with SAN deduplication or copy-on-write data redundancy measures.
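To make the split-I/O point above concrete, here is a minimal sketch. It is purely illustrative (the function, the 64 KB maximum transfer size, and the fragment counts are our assumptions, not Condusiv product behavior): a file broken into fragments at the Windows level needs at least one request per fragment, no matter how contiguously the SAN stores the blocks underneath.

```python
# Illustrative sketch only: how Windows-level fragmentation multiplies
# the I/O requests sent down the storage stack. All numbers hypothetical.

def requests_needed(file_size_kb, fragments, max_io_kb=64):
    """Each fragment needs its own chain of requests, so a fragmented
    file always requires at least one request per fragment."""
    per_fragment_kb = file_size_kb / fragments
    # Ceiling division: requests needed to cover a single fragment.
    per_fragment_requests = -(-per_fragment_kb // max_io_kb)
    return int(per_fragment_requests * fragments)

contiguous = requests_needed(1024, fragments=1)   # one extent
fragmented = requests_needed(1024, fragments=64)  # badly fragmented

print(contiguous, fragmented)  # 16 vs. 64 requests for the same 1 MB file
```

Same file, same SAN: the fragmented layout forces four times the requests through every abstraction layer below Windows, which is exactly the overhead that preventing fragmentation at write time avoids.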

We recently performed testing with a customer to validate the benefits of IntelliWrite over cutting-edge storage. This customer’s SAN array is less than a year old, and while we won’t name it to avoid appearing partial, it comes from one of today’s leading SAN vendors.

Testing involved an apples-to-apples comparison on a production VM hosted on the SAN. A non-random workload was run three times, recording Windows-level file fragmentation, several PerfMon metrics, and the time to complete the workload. The test was then repeated three times with IntelliWrite enabled on the same VM’s test volume.

Here were the results:

[Charts: fragmentation, Split IO/sec, Avg. Disk Queue Length, and workload completion time, before and after IntelliWrite]

The breakdown:

Fragmentation reduction with IntelliWrite: 89%

Split IO/sec reduction with IntelliWrite: 81%

Avg. Disk Queue Length reduction with IntelliWrite: 71%

…and with the improvement in these disk performance metrics, the overall time to complete the same file operations was reduced by: 48%

The conclusion? If you were asking the same sorts of questions posed earlier, evaluate IntelliWrite for yourself. Remember, the results above were measured on contemporary storage hardware; the older your storage equipment, the greater the improvement in application performance you can expect from investing in optimization. Can you afford not to get maximum performance out of your infrastructure and application investments?

The evaluation is quick and fully transparent. Call today to speak with a representative about evaluating Diskeeper or V-locity in your environment.

Tags: Diskeeper | IntelliWrite | SAN | V-Locity

Big News! Diskeeper Corporation and SanDisk Enter Into Strategic Partnership

by Damian 21. February 2012 07:56

Diskeeper Corporation is pleased to announce that we have recently entered into a worldwide, exclusive agreement with SanDisk. SanDisk will license Diskeeper's industry-leading caching software solutions for solid-state drives (SSDs). SanDisk will provide these solutions both as standalone software products and bundled with SanDisk's SSD products for client computing applications.

Here's what Diskeeper's CEO had to say: "We see our alliance with SanDisk as a critical driver to accelerate adoption of SSD computing applications. The exceptional performance and endurance of SanDisk's SSDs paired with Diskeeper's ExpressCache and NowOn products offer consumer OEM customers industry-leading performance optimization for Ultrabooks and other computer platforms."

Check out the full article from the NY Times here.


Webinar: Physical vs. Virtual Bottlenecks: What You Really Need To Know

by Damian 20. February 2012 07:05

Diskeeper Corporation recently delivered a live webinar hosted by Ziff Davis Enterprise. The principal topics covered were:

  • Measuring performance loss in Windows over SAN
  • Identifying client-side performance bottlenecks in private clouds
  • Expanding performance awareness to the client level
  • The greatest and often-overlooked performance issue in a virtual ecosystem

The webinar was co-hosted by:

  • Stephen Deming, Microsoft Partner Solution Advisor
  • Damian Giannunzio, Diskeeper Corporation Field Sales & Application Engineer

Don't miss out on this critical data! If you missed the webinar, you can view the recorded version online here.

Here are some additional, relevant resources:

White Paper: Diskeeper 2011: Improving the Performance of SAN Storage

White Paper: Increasing Efficiency in the IT Environment

White Paper: Inside Diskeeper 2011 with IntelliWrite

White Paper: Running Diskeeper and V-locity on SAN Devices 

Storage Abstraction, and What it Means to You

by Damian 22. November 2011 04:46

I felt compelled to write a bit about this subject after recently reading about some new updates to software SANs. The glamour of virtual platform layers and the Cloud has somewhat overshadowed the virtualization already occurring within storage, and the extra levels added “below decks”. It’s a topic meriting scrutiny from any storage administrator committed to high performance.

Outside of the physical data store itself, every element of the I/O path above it is virtual. It should also be noted that at essentially each step along this I/O path, infrastructure customization and proprietary technologies can (and often do) vary or add new virtual layers to the process. All of these logical abstractions have evolved from various sources in the storage ecosystem in order to drive scalability and agile responses to disaster or growth.

Let’s consider a common hypothetical path that an I/O request takes from a Windows client VM to the physical data store in a modern infrastructure. In this example, the storage for the client VM is a virtual RAID 5 configured from LUNs on a SAN. An I/O request originating at the top OS level, Windows in this case, will pass through these underlying levels before reaching the actual physical storage device: Windows > volume manager > virtual RAID > SAN LUN > physical store (with the possibility of additional abstraction levels depending on storage customization).

Based on how the storage infrastructure has been set up in this scenario, a virtual RAID 5 sits above the SAN LUN layer. That being the case, the volume manager directs the request to the virtual RAID beneath it. Due to striping, I/O at this stage can be fractured (intentionally) by the RAID, depending on how the array has been provisioned. The I/O path has now become distributed, and may be replicated further on its way to physical storage.

The RAID sends its requests to the SAN LUN below, another abstraction that slices the physical store into basic logical units. The SAN LUN layer completes the request directly against physical storage, and the data is then returned along the same route to the requester.
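The layered path just described can be sketched in a simplified model. Everything here is an illustrative assumption on our part (the layer functions, the 64 KB stripe unit, the offsets); no real volume manager or array works this simply, but the fan-out behavior is the point:

```python
# Hypothetical sketch of the layered I/O path:
# Windows > volume manager > virtual RAID 5 > SAN LUN > physical store.

STRIPE_KB = 64  # assumed RAID stripe unit (illustrative)

def volume_manager(offset_kb, length_kb):
    # In this simplified model, the volume manager passes the logical
    # request down to the RAID layer unchanged.
    return raid5(offset_kb, length_kb)

def raid5(offset_kb, length_kb):
    # Striping intentionally fractures one request into per-stripe pieces.
    pieces = []
    while length_kb > 0:
        remaining_in_stripe = STRIPE_KB - (offset_kb % STRIPE_KB)
        chunk = min(length_kb, remaining_in_stripe)
        pieces.append(san_lun(offset_kb, chunk))
        offset_kb += chunk
        length_kb -= chunk
    return pieces

def san_lun(offset_kb, length_kb):
    # The LUN maps a slice of the physical store; here it simply records
    # the request that finally reaches physical storage.
    return ("physical", offset_kb, length_kb)

# One 200 KB read starting 16 KB into a stripe becomes four physical I/Os:
print(volume_manager(16, 200))
```

Notice that a single logical request already becomes several physical ones through striping alone; every additional request Windows issues (due to fragmentation, for example) multiplies through this same fan-out.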

Now, numerous solutions exist for managing communication and throughput within this data pipeline. Administrators can tailor their RAID presentation, ensure partition alignment, upgrade the underlying hardware, even add new software abstraction layers intended to organize data better at lower levels. However, an interesting concept emerges after review.

None of these solutions addresses the most basic, and one of the most critical, vulnerabilities in the ecosystem’s performance: ensuring that the file request is as sequential and rapid as possible at its point of origin. Whether virtualized, as is so common today, or installed over direct-attached storage, Windows read and write performance is degraded by file and free space fragmentation at this top level, because fragmentation causes more I/O requests to occur. Every request passing through all of the abstraction layers meets its first bottleneck at the outset: how contiguous the file arrangement is within Windows. Optimizing reads and writes at this upper level helps ensure, in most cases, the fastest I/O path regardless of how much or how little storage abstraction is layered beneath.


Fragmentation on a SAN

In a recent white paper, Diskeeper Corporation tested a variety of I/O metrics over SAN storage, with and without file fragmentation being intelligently prevented and handled. In one such test, Iometer (an open-source I/O measurement tool) showed an improvement of over 200% in IOPS (I/Os per second) after Diskeeper 2011 had handled Windows volume fragmentation. Testing was performed on a SAN connected to a Windows Server 2008 R2 virtual host:

SAN Fragmentation Test Results

You can read the entire white paper here.


Space Reclamation, Above and Below

by Damian 7. November 2011 09:29

Thin provisioning is a hot topic in the storage arena, and with good reason. Many areas of business and enterprise computing see massive benefit from the scalability of thin provisioning, and it can be a cost saver besides. However, thin provisioning suffers some unique maladies at both the client and storage levels.

Some storage arrays include a feature permitting thin provisioning for their LUNs. This storage layer thin provisioning occurs below the virtual platform storage stack, and essentially means scalable datastores. Horizontal scaling of data stores adds a new tier of agility to the storage ecosystem that some businesses absolutely require.

LUN thin provisioning shouldn’t be confused with Virtual Disk TP, which works at a file level (not array). Thin provisioned VMs can expand based on pre-determined use cases, adding an extra degree of flexibility to storage density. Intelligently combining TP at multiple tiers yields some pretty neat capacity results.

Datastore thin provisioning has been a source of concern for storage administrators with regard to recovering from over-provisioning. When virtual disks are deleted or copied away from a datastore, the array itself is never informed that those storage blocks are now free. You can see how this leads to needless storage consumption.

vSphere 5 from VMware introduced a solution for this issue. The new vSphere Storage APIs for Array Integration (VAAI) for TP uses the SCSI UNMAP command to tell the storage array that space previously occupied by a VM can be reclaimed. This addresses one aspect of the issue with thin VM growth.

Files are not only written to a virtual disk; they are also deleted with regularity. Unfortunately, no feature within virtual platforms or Windows informs the storage array that blocks can be recovered from a thin disk that should have shrunk after deletions. As with the issue above, this leads to unnecessary storage waste.

With the release of V-locity 3 in 2011, we introduced a new Automatic Space Reclamation engine. This engine automatically zeroes out “dead” free space within thin virtual disks, without requiring that they be taken offline and with no impact on resource usage. So what does this mean? Thin VMs can be compacted, actually reclaiming the deleted space to the storage array for dynamic use elsewhere. The thin virtual disks themselves are kept slimmed down within datastores, giving more control back to the storage admins governing provisioning.
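The zero-fill idea behind this kind of reclamation can be sketched in miniature. This is a toy model of the general technique only, not V-locity's actual engine; the block size, the dictionary-as-disk model, and both function names are our illustrative assumptions:

```python
# Toy model of zero-fill space reclamation in a thin virtual disk.
# Hypothetical sketch; not any vendor's implementation.

BLOCK = 4  # tiny block size, for illustration only

# A thin virtual disk modeled as {block_number: bytes}. Absent keys are
# unallocated; all-zero blocks are candidates for reclamation below.
disk = {0: b"head", 1: b"old!", 2: b"data"}
deleted_blocks = {1, 2}  # blocks freed by file deletions in the guest

def zero_free_space(disk, freed):
    # Overwrite "dead" free space with zeros so the layer below can see it.
    for block in freed:
        disk[block] = b"\x00" * BLOCK

def reclaim(disk):
    # The thin-provisioned storage layer drops all-zero blocks,
    # compacting the thin disk and returning capacity to the array.
    zeroed = [b for b, data in disk.items() if data == b"\x00" * BLOCK]
    for block in zeroed:
        del disk[block]

zero_free_space(disk, deleted_blocks)
reclaim(disk)
print(sorted(disk))  # only the live block remains: [0]
```

The two steps mirror the division of labor in the post: the guest-side engine zeroes the dead space, and the storage layer beneath does the actual compaction.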

Space Reclamation with V-locity

You can read more about VAAI for TP in vSphere 5 on the VMware blog here.

Tags: virtualization | VMware | Windows 7

