Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

What is unnecessary I/O? Why does it exist?

by Brian Morin 5. November 2013 07:09

Modern IT infrastructures deal with enough I/O traffic as it is. The last thing they need is unnecessary I/O.

It's no surprise that IT struggles with performance problems caused by the tidal wave of data that travels back and forth across the infrastructure in the form of read and write I/O. Organizations that have virtualized find themselves shifting more and more cost to the storage backend just to keep up with I/O demand. The negative impact virtualization has had on the storage layer is certainly felt, but it isn't well understood.

With multiple VMs accessing the same bytes of data, and with the “I/O blender effect” further randomizing the I/O streams from multiple VMs before they funnel down to storage, a large share of I/O cycles is completely unnecessary. In a world where organizations are already crushed under the weight of I/O demand, the last thing they need from their IT infrastructure is cycles lost to processing unnecessary I/O.

Even though this random I/O chaos can be easily prevented in the virtual machine layer before it ever leaves the gates, organizations continue to invest in more hardware to battle an increasingly complex problem.

Check out this new paper from IDG, Eliminate the Unnecessary: Unnecessary I/O and its Impact on Performance. You'll understand unnecessary I/O, why it matters, and how getting rid of it will solve performance problems—overnight—without more hardware.


Big Data | Cloud | IntelliMemory | IntelliWrite | SAN | virtualization | V-Locity | VMware

A Blog About Bloggers Who Blog About Us

by Jerry Baldwin 7. February 2013 09:49

On contemplating the impact of his calculating engine, a forerunner of the modern computer, Charles Babbage wrote, “In turning from the smaller instruments in frequent use to the larger and more important machines, the economy arising from the increase of velocity becomes more striking.” He said that in 1832.

I mention this because the idea holds true today—the bigness of everything, the immediacy of everything, the pace of everything—the greater the increase from one state to another, the more striking the difference. And that’s exactly why—when we put V-locity 4 trialware into the hands of virtualization wizards to test in their lairs—we want them to really, really put it through the wringer. The heavier the workload, the greater the application demand, the more striking the results.

Recently two virtualization pros got their hands on the V-locity 4 30-day trial, set up rigorous testing, and blogged the entire experience:

VMware technical architect amazed by V-locity 4 results

Another virtualization blogger amazed by V-locity 4

 


Big Data | Hyper-V | IntelliMemory | virtualization | V-Locity | VMware

NEW V-locity 4 VM Accelerator Improves VM Performance by up to 50%

by Jeff Medina 10. December 2012 10:00

Today we are very excited to announce the release of V-locity 4 VM Accelerator. With this latest release, V-locity increases VM and application performance by up to 50% and does so without any additional storage hardware.

Let’s face it - in today’s world of virtual environments, we generate a tremendous amount of data and it’s only the beginning. In fact, findings included in a recent study by IDC titled “Extracting Value from Chaos” predict that in the next ten years we will create 50 times more information and 75 times more files.

The impact of this data explosion on server virtualization can often lead to I/O bottlenecks. This is because a physical server running multiple virtual machines (VMs) must often carry out far more I/O operations than one server running a single workload, and typical virtualization environments emulate I/O devices that run less efficiently than native I/O devices.

In essence, virtualization acts like a funnel, combining and mixing many disparate I/O streams and sending to disk what becomes a very random I/O pattern. To make matters worse, the more VMs that are added, the more the issue compounds as more I/O is "randomized." All of this has a very negative effect on storage performance and renders time-honored techniques such as read-ahead buffers and caching algorithms far less effective than in conventional physical environments.
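To make the blender effect concrete, here is a minimal Python sketch (an illustration only, not Condusiv code): three VMs each issue perfectly sequential reads against their own region of a datastore, but the arrival order at the shared storage layer ends up effectively random.

import random

def sequential_stream(vm_id, start_lba, length_blocks, io_size=8):
    """One VM's perfectly sequential request stream: (vm, LBA) per I/O."""
    return [(vm_id, lba) for lba in range(start_lba, start_lba + length_blocks, io_size)]

# Three VMs, each sequential within its own region of the shared datastore.
streams = [sequential_stream(vm, vm * 1_000_000, 400) for vm in range(3)]

# The hypervisor services whichever VM is ready next, so the storage array
# sees the requests interleaved -- an effectively random arrival pattern.
pending = [list(s) for s in streams]
blended = []
while any(pending):
    stream = random.choice([s for s in pending if s])
    blended.append(stream.pop(0))

print("First requests seen by storage:", blended[:10])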

Storage I/O is the most critical issue in a virtualized environment, and it can drive organizations to spend a great deal on storage, purchasing more and more disk spindles yet using only a fraction of their capacity because of performance issues. The outcome is that some applications are deemed unable to be virtualized because of performance bottlenecks in the storage infrastructure, even though a properly tuned storage environment might have accommodated them. So what's the alternative? The answer is V-locity 4 VM Accelerator.

V-locity 4 VM Accelerator provides:

  • Increased application performance up to 50%
  • Up to 50% faster access to frequently accessed files
  • Faster I/O performance without the cost of additional storage hardware
  • Increased VM density per physical server up to 50%
  • Extended hardware lifespan by eliminating unnecessary I/Os
  • Automatic and real-time operation for true “Set It and Forget It®” management 

What makes V-locity 4 so effective is its powerful toolkit of proactive technologies, including IntelliWrite®, V-Aware®, CogniSAN®, InvisiTasking® and the new IntelliMemory® RAM caching technology.

New! IntelliMemory® Caching Technology
IntelliMemory intelligent caching technology caches active data in RAM, improving I/O response time by up to 50% or more while also keeping unnecessary I/O operations from ever reaching the network or storage.
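As a rough illustration of the general idea behind a server-side RAM read cache (a minimal sketch, not IntelliMemory's actual algorithm): hot blocks are served from memory, so repeat reads never reach the network or the storage array.

from collections import OrderedDict

class RamReadCache:
    """Toy LRU read cache: hits are served from RAM, only misses hit storage."""
    def __init__(self, capacity_blocks, backend_read):
        self.capacity = capacity_blocks
        self.backend_read = backend_read        # function: lba -> data
        self.cache = OrderedDict()              # lba -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)         # mark as most recently used
            self.hits += 1
            return self.cache[lba]
        self.misses += 1
        data = self.backend_read(lba)           # only misses generate real I/O
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recently used
        return data

# Usage: wrap whatever actually reads from the virtual disk.
cache = RamReadCache(capacity_blocks=4096, backend_read=lambda lba: b"\0" * 4096)
cache.read(10); cache.read(10)
print(f"hits={cache.hits} misses={cache.misses}")   # hits=1 misses=1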

Improved! IntelliWrite® Technology
IntelliWrite automatically prevents the operating system from breaking files into pieces and writing those pieces in a performance-penalizing manner. This proactive approach improves performance by up to 50% or more while avoiding any negative impact on snapshots, replication, data deduplication or thin provisioning growth. Because this prevention happens at the server level, the network and shared storage simply have fewer I/O operations to transfer and process.
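One general way to keep a file from being broken into pieces is to tell the file system its final size up front and then fill it, rather than growing it through many small appends. The Python sketch below illustrates only that general principle; it is not how IntelliWrite is implemented, and how much it helps depends on the file system's allocator.

import os

def write_preallocated(path, total_size, chunk=1024 * 1024):
    """Reserve the file's final size first, then fill it sequentially."""
    with open(path, "wb") as f:
        f.truncate(total_size)        # declare the final size up front
        f.seek(0)
        written = 0
        while written < total_size:
            n = min(chunk, total_size - written)
            f.write(b"\0" * n)        # real application data would go here
            written += n

write_preallocated("sample.bin", 64 * 1024 * 1024)   # 64MB written in one contiguous pass
print(os.path.getsize("sample.bin"))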

New! Performance Benefit Analyzer
The Performance Benefit Analyzer helps document the performance benefits of V-locity. It measures your current system performance, then compares those results against performance with V-locity running and produces a detailed report showing the specific improvements and benefits to your system.

V-Aware® Technology
V-Aware detects external resource usage from other virtual machines on the virtual platform and eliminates resource contention that might slow performance.

CogniSAN® Technology
CogniSAN detects external resource usage within a shared storage system, such as a SAN, and allows for transparent optimization by not competing for resources utilized by other VMs over the same storage infrastructure. And it does this without intruding in any way into SAN-layer operations.

InvisiTasking® Technology
InvisiTasking allows all of V-locity 4's "background" operations within the VM to run with zero resource impact on current production workloads.

Set It and Forget It®
Automatic and real-time operation.

For more details and a FREE trial, visit www.condusiv.com/products/v-locity or call a sales representative at 1-800-829-6468.

Space Reclamation, Above and Below

by Damian 7. November 2011 09:29

Thin provisioning is a fairly hot topic in the storage arena, and with good reason. Many corners of the business and enterprise space see massive benefit from the scalability of thin provisioning, and it can be a cost saver besides. However, the principle of thin provisioning suffers some unique maladies at both the client and storage levels.

Some storage arrays include a feature permitting thin provisioning for their LUNs. This storage-layer thin provisioning occurs below the virtual platform storage stack and essentially means scalable datastores. Horizontal scaling of datastores adds a new tier of agility to the storage ecosystem that some businesses absolutely require.

LUN thin provisioning shouldn’t be confused with Virtual Disk TP, which works at a file level (not array). Thin provisioned VMs can expand based on pre-determined use cases, adding an extra degree of flexibility to storage density. Intelligently combining TP at multiple tiers yields some pretty neat capacity results.

Datastore thin provisioning has been a source of concern for storage administrators with regard to recovering from over-provisioning. When virtual disks are deleted or copied away from a datastore, the array itself is not made aware that those storage blocks are now free. You can see how this can lead to needless storage consumption.

vSphere 5 from VMware introduced a solution for this issue. The new vSphere Storage APIs for Array Integration (VAAI) for TP uses the SCSI UNMAP command to tell the storage array that space previously occupied by a VM can be reclaimed. This addresses one aspect of the issue with thin VM growth.

Files are not only written to a virtual disk; they are also deleted with regularity. Unfortunately, there is no corresponding feature within virtual platforms or Windows to inform the storage array that the blocks freed by those deletions can be recovered, so a thin disk that should have contracted never does. As with the issue above, this leads to unnecessary storage waste.

With the release of V-locity 3 in 2011, we introduced a new Automatic Space Reclamation engine. This engine automatically zeroes out “dead” free space within thin virtual disks, without requiring that they be taken offline and with no impact on resource usage. So what does this mean? Thin VMs can be compacted, actually reclaiming the deleted space to the storage array for dynamic use elsewhere. The thin virtual disks themselves are kept slimmed down within datastores, giving more control back to the storage admins governing provisioning.
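For illustration, the brute-force version of the same idea looks like the sketch below: fill the guest volume's free space with zeros, then delete the fill file so the zeroed blocks can be recognized as reclaimable. This is only a conceptual stand-in; V-locity's engine does this automatically and without the resource impact of a pass like this one.

import os

def zero_free_space(directory, chunk=64 * 1024 * 1024):
    """Fill the volume's free space with zeros, then delete the fill file."""
    path = os.path.join(directory, "zerofill.tmp")
    try:
        with open(path, "wb") as f:
            while True:
                try:
                    f.write(b"\0" * chunk)   # keep writing zeros...
                    f.flush()
                except OSError:              # ...until the volume is full
                    break
    finally:
        if os.path.exists(path):
            os.remove(path)                  # hand the (now zeroed) space back

# zero_free_space("E:\\")   # run inside the guest against the thin-provisioned volume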

Space Reclamation with V-locity

You can read more about VAAI for TP in vSphere 5 on the VMware blog here.


virtualization | VMware | Windows 7

Optimizing Virtual Platform Disk Performance (ESX)

by Michael 28. June 2011 07:38

Overview 

The intensified demand for IT network efficiency and lower operating costs has been driving the phenomenal growth of virtualization in the past decade, with no signs of slowing. At present, many corporations run more virtualized servers than physical servers.

 

While virtualization provides opportunity for consolidation and better hardware utilization, it’s critically important to recognize and never exceed hardware capacities.  

The importance of ensuring sufficient CPU and memory is well understood, and many processes and management tools are available to help plan and properly provision VMs for these critical resources. I/O traffic, both network and disk, is more complicated to account for in virtual environments, as it tends to be more unpredictable.

In order to better accommodate disk I/O, most virtualization platforms are deployed with a Storage Area Network (SAN), which can offer greater data throughput and a dynamic environment to address fluctuations in I/O demand.

While a storage infrastructure can be built out to meet expected demands, there are uncontrollable behaviors that will still impede performance. 

File Fragmentation

As files are written to a general-purpose local disk file system, such as Windows NTFS, a natural byproduct is file fragmentation. File fragmentation is a state in which the data stream of a file is stored in non-contiguous clusters in the file system. Fragmentation occurs at the logical volume level and is translated by device drivers to logical blocks, and eventually to physical sectors residing on a storage device. It manifests as pieces of a file located in a non-contiguous manner. The effect of this file fragmentation is increased I/O overhead, leading to slower system performance for the operating system.
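A minimal sketch of why this matters: reading a file takes at least one request per extent, so a file stored in many pieces needs many more I/Os than the same data stored contiguously. The extent lists below are illustrative, not taken from a real volume.

def reads_required(extents):
    """extents: list of (start_cluster, length_in_clusters); one I/O per contiguous run."""
    return len(extents)

contiguous_file = [(1000, 8192)]                              # 32MB in one run (4KB clusters)
fragmented_file = [(1000 + i * 50, 16) for i in range(512)]   # the same 32MB in 512 pieces

print("contiguous:", reads_required(contiguous_file), "read")     # 1
print("fragmented:", reads_required(fragmented_file), "reads")    # 512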

In the case of virtual platforms, a guest operating system is stored as a file (or set of files) on the virtual platform's file system as a "virtual disk." A virtual disk is essentially a container file, housing all the files that constitute the OS and user data of a VM. Virtual disk files can fragment just as any other file can, resulting in what amounts to a "logically" fragmented virtual hard disk that still has typical file fragmentation contained within it. In the accompanying illustration, this would appear as "VirtualServer1.vmdk, 30GB in size, in 4 pieces."

 

This situation equates to hierarchical fragmentation, or more simply, fragmentation-within-fragmentation. Given the relatively static nature and large size of virtual disks, and the large allocation unit size of VMFS (typically 1MB), fragmentation of these files is unlikely to cause performance issues in most cases. The focus of any fragmentation solution should therefore be directed at the guest operating system.

Fragmentation within a Windows VM will cause Windows to generate additional, unnecessary I/O. This added I/O traffic can be observed with Windows Performance Monitor, where fragmentation is one of the principal causes of split I/Os.
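For example, a script can sample that counter with the built-in Windows typeperf utility; the counter path below, \LogicalDisk(_Total)\Split IO/Sec, is the standard Performance Monitor name, but adjust the instance for the volumes you care about.

import subprocess

def sample_split_io(samples=5, interval_sec=1):
    """Print a few samples of the Split IO/Sec counter using typeperf (Windows)."""
    cmd = [
        "typeperf", r"\LogicalDisk(_Total)\Split IO/Sec",
        "-sc", str(samples),       # number of samples
        "-si", str(interval_sec),  # seconds between samples
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    print(out)   # CSV output: timestamp, counter value

sample_split_io()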

 

Fragmentation prevention and defragmentation technologies exist to eliminate unnecessary I/O overhead and improve system performance. Fragmentation prevention solves fragmentation at the source, actively causing files to be written contiguously via advanced file system drivers. Defragmentation re-aligns file fragments within the file system into a single extent, so that only the minimum number of disk I/Os is required to access the file, thereby increasing access speed.

Partition Alignment 

Depending on your storage protocol and virtual disk type, misaligned partitions can cause additional unnecessary I/O[1]. In the example below in which the ESX and SAN volumes are not properly aligned, a Word file spanning four NTFS clusters causes additional unnecessary I/O in both VMFS and the SAN LUN.  
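The arithmetic behind alignment is simple enough to sketch: if the partition's starting offset is not a multiple of the underlying stripe (or block) size, a single guest I/O can straddle two stripes and become two I/Os at the layer below. The 64KB stripe used here mirrors the test configuration described later; your values will differ.

STRIPE = 64 * 1024   # bytes

def is_aligned(partition_offset_bytes, stripe=STRIPE):
    return partition_offset_bytes % stripe == 0

def stripes_touched(io_offset, io_size, partition_offset, stripe=STRIPE):
    """How many array stripes one guest I/O spans once the partition offset is added."""
    start = partition_offset + io_offset
    end = start + io_size - 1
    return end // stripe - start // stripe + 1

# A 4KB I/O near a 64KB boundary inside the guest file system:
print(is_aligned(65536), stripes_touched(32768, 4096, 65536))   # True  1  (aligned partition)
print(is_aligned(32256), stripes_touched(32768, 4096, 32256))   # False 2  (legacy 63-sector offset)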

 

Similarities between Partition Alignment and Fragmentation 

Much as misaligned partitions can cause additional I/O at multiple layers, so can fragmentation. While partitions can be properly aligned once and never require further corrective action, fragmentation will continue to occur and needs to be regularly addressed.

In the example below, which assumes proper partition alignment, a file in eight fragments in the guest OS causes additional I/Os to be generated at the virtualization platform layer[2] and at the LUN.

 

Defragmenting this file in the guest operating system eliminates the excess I/O when accessing it, as Windows then generates only one I/O. This reduction in I/O traffic carries through to the host file system and SAN LUN, ensuring efficiency at each layer.

 

Best Practices 

Defragmentation of Windows file systems is a VMware-recommended performance solution. VMware Knowledge Base article 1004004[3] states: "Defragmenting a disk is required to address problems encountered with an operating system as a result of file system fragmentation. Fragmentation problems result in slow operating system performance." To validate the VMware statement, the following tests were performed.

 

Test Environment

  
Configuration

Host OS: ESX Server 4.1 with VMFS (1MB blocks)

Guest OS: Windows Server 2008r2 x64 (3GB RAM, 1 vCPU)

Benchmarking Software: Iometer (http://www.iometer.org/)

Fragmentation Program: FragmentFile.exe (used to fragment a specified file)

Defragmentation Software: V-locity® 3.0 (http://www.diskeeper.com/business/v-locity/)

 

Storage: 10GB test volume in a 40GB virtual disk. VMFS datastore of 410GB. HP Smart Array P400 controller. RAID 5 (4x 136GB SCSI at 10K RPM). Stripe size of 64KB with a 64KB offset (properly aligned).

Load Generation 

The industry standard benchmarking tool Iometer was used to generate I/O load for these experiments.  

Iometer configuration options used as variables in these experiments:

• Transfer request sizes: 1KB, 4KB, 8KB, 16KB, 32KB, 64KB, 72KB, and 128KB

• Percent random or sequential distribution: for each transfer request size, 0 percent and 100 percent random accesses were selected

• Percent read or write distribution: for each transfer request size, 0 percent and 100 percent read accesses were selected 

Iometer parameters that were held constant for all tests:

• Size of volume: 10GB

• Size of Iometer test file (iobw.tst): 8,131,204 KB (~7.75GB)

• Number of outstanding I/O operations: 16

• Runtime: 4 minutes

• Ramp-up time: 60 seconds

• Number of workers to spawn automatically: 1 
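For readers who want to approximate this kind of load without Iometer, the sketch below is a minimal synthetic workload generator covering the same variables used above (transfer size, random vs. sequential access, read vs. write) against a pre-created test file; the file name and runtime are placeholders, and this is an illustration rather than a benchmarking tool.

import os, random, time

def run_workload(path, xfer_size, random_access, do_read, duration_sec=10):
    """Issue reads or writes of xfer_size bytes, sequentially or at random offsets."""
    file_size = os.path.getsize(path)
    max_offset = file_size - xfer_size
    buf = b"\xAA" * xfer_size
    ops, offset = 0, 0
    deadline = time.time() + duration_sec
    with open(path, "rb+", buffering=0) as f:
        while time.time() < deadline:
            if random_access:
                offset = random.randrange(0, max_offset, xfer_size)
            f.seek(offset)
            f.read(xfer_size) if do_read else f.write(buf)
            if not random_access:
                offset += xfer_size
                if offset > max_offset:
                    offset = 0
            ops += 1
    print(f"{'random' if random_access else 'sequential'} "
          f"{'read' if do_read else 'write'} {xfer_size}B: {ops / duration_sec:.0f} IOPS")

# Example: 8KB random reads against an existing test file for 10 seconds.
# run_workload("iobw.tst", 8 * 1024, random_access=True, do_read=True)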

The following is excerpted from a VMware white paper[4], and helps to explain why the Iometer parameters were used. 

Servers typically run a mix of workloads consisting of different access patterns and I/O data sizes.

Within a workload there may be several data transfer sizes and more than one access pattern. There are a few applications in which access is either purely sequential or purely random. For example, database logs are written sequentially. Reading this data back during database recovery is done by means of a sequential read operation. Typically, online transaction processing (OLTP) database access is predominantly random in nature.

The size of the data transfer depends on the application and is often a range rather than a single value. For Microsoft Exchange, the I/O size is generally small (from 4KB to 16KB), Microsoft SQL Server database random read and write accesses are 8KB, Oracle accesses are typically 8KB, and Lotus Domino uses 4KB. On the Windows platform, the I/O transfer size of an application can be determined using Perfmon.

In summary, I/O characteristics of a workload are defined in terms of the ratio of read operations to write operations, the ratio of sequential accesses to random accesses, and the data transfer size. Often, a range of data transfer sizes may be specified instead of a single value.  

Create Fragmentation 

The FragmentFile.exe tool was used to fragment the Iometer test file (iobw.tst) into 568,572 fragments, a mid-range amount of fragmentation for a production server. The fragmentation statistics shown below were collected by analyzing the volume with V-locity.
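FragmentFile.exe is a Condusiv utility; as a rough stand-in, one common way to deliberately fragment a test file is to grow it in small appends while a second "spoiler" file grows in between, so the file system interleaves their extents. The sketch below only illustrates that approach; the fragmentation actually achieved depends on the file system's allocator.

import os

def create_fragmented(target, total_size, piece=64 * 1024):
    """Grow the target and a spoiler file in alternating appends to interleave extents."""
    spoiler = target + ".spoiler"
    with open(target, "wb") as t, open(spoiler, "wb") as s:
        written = 0
        while written < total_size:
            t.write(b"\x01" * piece)
            t.flush(); os.fsync(t.fileno())   # force this extent to be allocated now
            s.write(b"\x02" * piece)          # claim the clusters next to it
            s.flush(); os.fsync(s.fileno())
            written += piece
    os.remove(spoiler)   # leaves gaps where the spoiler's extents were

# create_fragmented("iobw.tst", 1024 * 1024 * 1024)   # ~1GB deliberately fragmented file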

Test Procedure 

The primary objective was to characterize the performance of fragmented versus defragmented virtual machines for a range of data sizes across a variety of access patterns. The data sizes selected were 1KB, 4KB, 8KB, 16KB, 32KB, 64KB, 72KB, and 128KB. The access patterns were restricted to a combination of 100 percent read or write and 100 percent random or sequential. Each of these four workloads was tested for eight data sizes, for a total of 32 data points per workload.

In order to isolate the impact of fragmentation, only the test VM was powered on and active for the duration of the tests.

For the initial run, Iometer created a non-fragmented file, and performance data was collected. Then the FragmentFile.exe tool was used to fragment the Iometer test file, the VM was rebooted, and the test procedure was re-run. This produced data sets for both the non-fragmented and fragmented scenarios. The results are graphed below.

Performance Results  

As the graphs show, every workload shows an increase in throughput when the file is defragmented (i.e., not fragmented). It also becomes clear that as the I/O transfer size increases, the fragmentation-induced I/O latency increases dramatically. The greatest improvements from a contiguous file are found with file reads, both random and sequential.

 

Performance graphs (Random Reads, Random Writes, Sequential Reads, Sequential Writes) are included in the PDF white paper linked below.

Conclusion

 

Fragmentation demonstrably impedes the performance of Windows guest operating systems. While the tests depicted were executed on a single VM, the issue becomes exponentially worse in a multi-VM environment in which each VM suffers from file fragmentation. Because virtualized servers share a common platform, disk I/O generated in one virtual machine affects I/O requests from the other virtual systems; latency in one VM will therefore artificially inflate latency in co-located VMs.

Fragmentation artificially inflates the number of disk I/O requests, which, on a virtual machine platform, compounds the disk bottleneck even more than on conventional systems.

Eliminating fragmentation in VMs, and the corresponding unnecessary disk I/O traffic, is vital to platform-wide performance and enhances the ability to host more VMs on a shared infrastructure.

You can download the PDF white paper here: Optimizing Virtual Platform Disk Performance.pdf (1.04 mb)

[1] VMware guide to proper partition alignment: http://www.vmware.com/pdf/esx3_partition_align.pdf
[2] It should be noted that VMFS, in the example above need only read the actual amount of data requested in multiples of 512 byte sectors, and does not need to read an entire 1MB block.  
              


SAN | Defrag | V-Locity | VMware | white paper
