Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

V-locity I/O Reduction Software Put to the Test on 3500 VMs

by Brian Morin 17. March 2016 04:18

We commonly say that the expected performance gain from V-locity® I/O reduction software is 50-300% faster application performance, but that 50-300% represents quite a range. Where a system lands depends on how badly it is taxed by I/O inefficiencies in the virtual environment before V-locity streamlines them: some workloads experience 300% throughput gains, while other workloads in the same environment see 50% gains.

There is already plenty of V-locity performance validation in 15 published case studies, all of which reveal a doubling in VM performance, but we wanted to know what V-locity delivers on average at large scale. So we took off our “rose-colored” glasses about what we think our software does, handed the last 3,450 VMs that tested V-locity over to ESG Lab, and let them examine the raw data from over 100 sites and publish the findings in this report.

Here are the key findings:

- Reduced read I/O to storage. ESG Lab calculated that 55% of systems saw a 50% reduction in the number of read I/Os serviced by the underlying storage.

- Reduced write I/O to storage. As a result of I/O density increases, ESG Lab witnessed a 33% reduction in write I/Os across 27% of the systems. In addition, 14% of systems experienced a 50% or greater reduction in write I/O from VM (virtual machine) to storage.

- Increased throughput. ESG Lab witnessed throughput improvements of 50% or more for 43% of systems, a 100% increase in throughput for 29% of systems, and throughput gains of as much as 300% for 8% of systems.

- Decreased I/O response time. ESG Lab calculated that systems with 3GB of available DRAM achieved a 40% reduction in response time across all I/O operations.

- Increased IOPS. ESG Lab found that 25% of systems saw IOPS increase by 50% or more.

 

The key takeaway from this analysis is the sizeable performance loss virtualized organizations suffer from I/O inefficiencies, and how easily V-locity solves it by streamlining I/O at the guest level on Windows VMs. Most organizations respond to I/O performance issues with the brute-force approach of throwing more expensive hardware at the problem; V-locity shows the gains organizations can achieve at a fraction of the cost of new hardware by solving the root-cause problem first.

SAN | virtualization | V-Locity

Largest-Ever I/O Performance Study

by Brian Morin 28. January 2016 09:10

Over the last year, 2,654 IT professionals took our industry-first I/O Performance Survey, making it the largest survey of its kind. The key findings reveal an I/O performance struggle for virtualized organizations: 77% of all respondents indicated I/O performance issues after virtualizing. The full 17-page report is available for download at http://learn.condusiv.com/2015survey.html.

Key findings in the survey include:

- More than 1/3rd of respondents (36%) are currently experiencing staff or customer complaints regarding sluggish applications running on MS SQL or Oracle

- More than a quarter of respondents (28%) are so limited by I/O bottlenecks that they have reached an "I/O ceiling" and are unable to scale their virtualized infrastructure

- To improve I/O performance since virtualizing, 51% purchased a new SAN, 8% purchased PCIe flash cards, 17% purchased server-side SSDs, 27% purchased storage-side SSDs, 16% purchased more SAS spindles, and 6% purchased a hyper-converged appliance

- In the coming year, to remediate I/O bottlenecks, 25% plan to purchase a new SAN, 8% plan to purchase a hyper-converged appliance, 10% will purchase SAS spindles, 16% will purchase server-side SSDs, 8% will purchase PCIe flash cards, 27% will purchase storage-side SSDs, and 35% plan to purchase nothing

- Over 1,000 applications were named when respondents were asked to identify the top two most challenging applications to support from a systems performance standpoint. Everything in the top 10 was an application running on top of a database

- 71% agree that improving the performance of one or two applications via inexpensive I/O reduction software to avoid a forklift upgrade is either important or urgent for their environment

As much as virtualization has provided cost savings and improved efficiency at the server level, those savings are typically traded away for backend storage infrastructure upgrades to handle the new IOPS requirements of virtualized workloads. The reason is that I/O characteristics are much smaller, more fractured, and more random than they need to be. Virtualization adds complexity to the data path via the “I/O blender” effect, which randomizes I/O from disparate VMs, while Windows write inefficiencies at the logical disk layer erode the relationship between I/O and data, generating a flood of small, fractured I/O. Together, the I/O blender and Windows write inefficiencies create “death by a thousand cuts” for system performance: the perfect trifecta of small, fractured, random I/O.
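To see why the “I/O blender” matters, here is a minimal, purely illustrative simulation (a hypothetical sketch, not anything from our product code): each VM writes its own file sequentially, but once the streams are interleaved on the shared data path, the storage layer sees an almost completely random pattern.

    from itertools import zip_longest

    def vm_stream(start_lba, length):
        """One VM writing a file sequentially: consecutive logical block addresses."""
        return [start_lba + i for i in range(length)]

    def blend(streams):
        """Round-robin interleave of per-VM streams, roughly what shared storage receives."""
        blended = []
        for group in zip_longest(*streams):
            blended.extend(lba for lba in group if lba is not None)
        return blended

    def sequential_fraction(lbas):
        """Fraction of requests that land on the block immediately after the previous one."""
        hits = sum(1 for a, b in zip(lbas, lbas[1:]) if b == a + 1)
        return hits / max(len(lbas) - 1, 1)

    # Four VMs, each writing 1,000 blocks sequentially in its own region of the LUN.
    streams = [vm_stream(start, 1000) for start in (0, 100_000, 200_000, 300_000)]
    print(f"per-VM streams: {sequential_fraction(streams[0]):.0%} sequential")
    print(f"blended stream: {sequential_fraction(blend(streams)):.0%} sequential")

Each VM on its own is perfectly well behaved; it is the blending that turns orderly I/O into the random pattern the storage array has to absorb.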

Since native virtualization does nothing out of the box to solve this problem, organizations are left with little choice but to accept the throughput lost to these inefficiencies and to overbuy and overprovision for IOPS, since they end up twice as IOPS-dependent as they actually need to be…except for Condusiv customers, who use V-locity® I/O reduction software to get 50-300% faster application performance on the hardware they already have by solving the root-cause problem at the VM OS layer.

Note - Respondents from companies with fewer than 100 employees were excluded, so the results would not be skewed by the low end of the SMB market.

SAN Fragmentation Controversy Incites Attack from NetApp

by Brian Morin 22. April 2015 08:38

I can’t blame it all on NetApp.

It all started with this tweet.

I’ll admit NetApp was misled by an inadvertent title from a well-intentioned editor at Searchstorage.com, but the CTO Office at NetApp obviously didn’t read beyond the headline.

Dave Raffo, the Senior News Director for Search Storage, wrote a feature article on the FUD surrounding SAN and fragmentation and how the latest release of Diskeeper® 15 Server eliminates performance-robbing fragmentation without “defragging.” In fact, here is one of Dave’s direct quotes from the article:

“It’s not defragging disks in SAN arrays, but preventing files from being broken into pieces before being written to hard disk drives or solid-state drives non-sequentially. That way, it prevents fragmentation before it becomes an issue.”

Unfortunately, the word “defrag” inadvertently made it into the headline and opening sentence of the article, which triggered a knee-jerk reaction from the CTO Office at NetApp, who tweeted, “NetApp does not recommend using defragmentation software on our kit, period.”

Attention all SAN vendors: Diskeeper 15 Server is not a “defrag” utility! 

Diskeeper 15 proactively prevents fragmentation from occurring in the first place at the Windows file system level. As a result, Diskeeper is not competing with RAID controllers for physical block management or triggering copy-on-write activity by moving blocks like a traditional “defrag.” Instead, Diskeeper 15 Server complements the SAN by making sure it no longer receives small, fractured random I/O from Windows. This patented approach reduces the IOPS requirement for any given workload and improves throughput on existing systems from physical server to storage, so administrators can get more from the systems they have by moving more data in less time. 

We find it takes someone a few minutes to stop thinking in terms of “defragging” after the fact and start thinking in terms of how much a SAN can benefit from fragmentation prevention. Here are some recent media coverage snippets that explain:

“As Condusiv demonstrates, the level of fragmentation of the logical disk inflates the IOPS requirement for any given workload with a surplus of small, fractured I/O. While part of the performance problem can be hidden behind high performance flash, a highly fragmented environment wastes much of the investment in flash.” – Storage Switzerland, full article ->

“Businesses that are switching over to flash arrays should see a benefit as well. Since fragmentation at the logical layer is inherent in the fabric of Windows, the flash technology will still have a higher I/O overhead. Diskeeper will help organizations switching to flash get the most for their investments.” – Storage Review, full article ->

“Diskeeper 15 Server is the first fragmentation protection for SAN storage connected to physical servers. It prevents fragmentation in real time at the logical disk layer, increasing IO density so more data can be processed.” – Channel Buzz, full article ->

“Condusiv's Diskeeper 15 extends the benefits of defragmentation out to the SAN with the novel technique of reducing fragmentation before the data leaves the server.” – Tom’s IT Pro, full article ->

“Condusiv has added a new twist to its Diskeeper line…by tackling for the first time the question of how to defragment SAN storage.” – CRN, full article ->

Keep in mind, those who want to make sure their virtualized workloads connected to SAN storage are optimized as well can use V-locity® I/O reduction software for virtual servers. With V-locity, users get the added benefit of server-side DRAM caching to further reduce I/O to storage and serve data even faster.
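For readers who want a mental model of what server-side DRAM caching buys you, here is a minimal LRU read-cache sketch (a hypothetical illustration only, not V-locity's actual caching engine): every cache hit is one read I/O that never has to travel to the SAN.

    from collections import OrderedDict

    class DramReadCache:
        """Tiny LRU read cache: hot blocks are served from memory instead of storage."""

        def __init__(self, fetch_from_storage, capacity_blocks=1024):
            self.fetch = fetch_from_storage        # callable(block_id) -> bytes
            self.capacity = capacity_blocks
            self.cache = OrderedDict()
            self.hits = self.misses = 0

        def read(self, block_id):
            if block_id in self.cache:
                self.cache.move_to_end(block_id)   # mark as most recently used
                self.hits += 1
                return self.cache[block_id]
            self.misses += 1
            data = self.fetch(block_id)            # this is the I/O that reaches the SAN
            self.cache[block_id] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)     # evict the least recently used block
            return data

    # Usage: repeat reads of hot blocks never touch storage again.
    cache = DramReadCache(lambda block: b"\0" * 4096, capacity_blocks=8)
    for block in [1, 2, 3, 1, 1, 2, 4, 1]:
        cache.read(block)
    print(f"hits={cache.hits} misses={cache.misses}")   # hits=4 misses=4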

Diskeeper

Is Fragmentation Robbing SAN Performance?

by Brian Morin 16. March 2015 09:39

This month Condusiv® announced the most significant development in the Diskeeper® product line to date – expanding our patented fragmentation prevention capabilities beyond server local storage or direct-attached storage (DAS) to now include Storage Area Networks, making it the industry's first real-time fragmentation solution for SAN storage.

Typically, as soon as we mention "fragmentation" and "SAN" in the same sentence, an 800-pound gorilla walks into the room and we’re met with some resistance, as there is an assumption that RAID controllers and technologies within the SAN mitigate the problem of fragmentation at the physical layer.

As much as SAN technologies do a good job of managing blocks at the physical layer, the real reason SAN performance degrades over time has nothing to do with the physical disk layer; it is fragmentation inherent to the Windows file system at the logical disk software layer.

In a SAN environment, the physical layer is abstracted from the Windows OS, so Windows doesn't even see the physical layer at all – that’s the SAN's job. Windows references the logical disk layer at the file system level.

Fragmentation is inherent to the fabric of Windows. When Windows writes a file, it is not aware of the file's size or extension ahead of time, so it breaks the file apart into multiple pieces, with each piece allocated to its own address at the logical disk layer. Therefore, the logical disk becomes fragmented BEFORE the SAN even receives the data.

How does a fragmented logical disk create performance problems? Unnecessary IOPS (input/output operations per sec). If Windows sees a file existing as 20 separate pieces at the logical disk level, it will execute 20 separate I/O commands to process the whole file. That’s a lot of unnecessary I/O overhead to the server and, particularly, a lot of unnecessary IOPS to the underlying SAN for every write and subsequent read.
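You can check that piece count yourself. Below is a minimal sketch that assumes a Windows host where the built-in fsutil file queryextents command is available (it generally requires administrator rights); it counts how many extents a file occupies at the logical disk layer, which is the minimum number of separate I/O requests needed to read the whole file.

    import subprocess
    import sys

    def count_extents(path):
        """Count a file's extents via 'fsutil file queryextents' (Windows, admin rights)."""
        result = subprocess.run(
            ["fsutil", "file", "queryextents", path],
            capture_output=True, text=True, check=True,
        )
        # fsutil reports one extent per line, e.g. "VCN: 0x0  Clusters: 0x40  LCN: 0x1a2b3c"
        return sum(1 for line in result.stdout.splitlines() if "VCN" in line)

    if __name__ == "__main__":
        path = sys.argv[1]
        extents = count_extents(path)
        print(f"{path} occupies {extents} extent(s); a full read needs "
              f"at least {extents} separate I/O request(s) at the logical disk layer.")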

Diskeeper 15 Server prevents fragmentation from occurring in the first place at the file system layer. That means Windows writes files in a more contiguous, sequential fashion to the logical disk. Instead of breaking a file into 20 pieces that need 20 separate I/O operations for every write and subsequent read, it writes the file in a more contiguous fashion so only minimal I/O is required.

Perhaps the best way to illustrate this is with a traffic analogy. Bottlenecks occur where freeways intersect. You could say the problem is not enough lanes (throughput) or that the cars are too slow (IOPS), but we’re saying the easiest problem to solve is that there is only one person per car!

By eliminating the Windows I/O "tax" at the source, organizations achieve greater I/O density, improved throughput, and less I/O required for any given workload – by simply filling the “car” with more people. Fragmentation prevention at the top of the technology stack ultimately means systems can process more data in less time.

When openBench Labs tested Diskeeper Server, they found throughput increased 1.3X, from 75.1 MB/sec to 100 MB/sec. A manufacturing company saw their I/O density increase from 24KB to 45KB, which eliminated 400,000 I/Os per server per day; their IT Director said it "eliminated any lag during peak operation."

Many administrators are led to believe they need to buy more IOPS to improve storage performance when in fact, the Windows I/O tax has made them more IOP dependent than they need to be because much of their workload is fractured I/O. By writing files in a more sequential fashion, the number of I/Os required to process a GB of data drops significantly so more data can be processed in less time.
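The arithmetic behind that is simple. Using the average I/O sizes from the manufacturing example above (24KB before, 45KB after) and a hypothetical 20 GB of daily I/O per server for scale, a quick calculation lands right around the 400,000 I/Os per day that were eliminated:

    def ios_per_gb(avg_io_kb):
        """I/O requests needed to move 1 GB of data at a given average I/O size."""
        return (1024 * 1024) / avg_io_kb          # KB in a GB divided by KB per I/O

    before = ios_per_gb(24)     # ~43,700 I/Os per GB at a 24KB average transfer
    after = ios_per_gb(45)      # ~23,300 I/Os per GB at a 45KB average transfer

    print(f"I/Os per GB before: {before:,.0f}")
    print(f"I/Os per GB after:  {after:,.0f}")
    print(f"I/Os saved per GB:  {before - after:,.0f}")

    # A hypothetical ~20 GB of daily I/O per server saves roughly
    # 20 x ~20,400 = ~408,000 I/Os per day, in line with the figure above.
    print(f"I/Os saved on 20 GB/day: {20 * (before - after):,.0f}")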

Keep in mind, this is not just true for SANs with HDDs but SSDs as well. In a SAN environment, the Windows OS isn’t aware of the physical layer or storage media being used. The I/O overhead from splitting files apart at the logical disk means just as many unnecessary IOPS to SSD as HDD. SSD is only processing that inefficient I/O more quickly than a hard disk drive.

Diskeeper 15 Server is not a "defrag" utility. It doesn’t compete with the SAN for management of the physical layer by instructing the RAID controllers on how to manage the data. Diskeeper’s patented proactive approach is the perfect complement to a SAN, ensuring only productive I/O is processed from server to storage to keep physical servers and SAN storage running like new.

With organizations spending tens of thousands of dollars on server and storage hardware and even hundreds of thousands of dollars on large SSD deployments, why give 25% or more performance over to fragmentation when it can be prevented altogether for a mere $400 per physical server at our lowest volume tier?

Try Diskeeper 15 Server for 30 Days ->

Evaluating IntelliWrite In Your Environment

by Damian 1. March 2012 10:18

IntelliWrite technology has been around for about two years now, optimizing literally millions of systems worldwide. It seamlessly integrates with Windows, delivering optimized writes upon initial I/O (no need for additional, after-the-fact file movement). What does that translate to? Actual fragmentation prevention.

Interestingly, we do occasionally get asked how it holds up against modern storage technologies:

“Don’t the latest SANs optimize themselves?”

“Do I really need this on my VMs? They aren’t physical hard drives, you realize…”

Or even…

“I don’t need to defragment my SAN-hosted VMs.”

Now, there are some factors which must be considered when you’re looking at optimizing I/O in your infrastructure:

  • I/O from Windows is just abstracted reads and writes handed down from a higher layer, even when running directly over a bare-metal disk.
  • Due to the way current Windows file systems are structured, I/O can be greatly constrained by file fragmentation—no matter what storage lies underneath it.
  • Fragmentation in Windows means more I/O requests from Windows—even if files are stored perfectly contiguously at the SAN level, Windows still has to send that many requests because of the fragmentation it sees at its own level.
  • File fragmentation is not the same as block-level (read: SAN-level) fragmentation. Many SAN utilities resolve issues of block-level fragmentation admirably; these do not address file fragmentation.
  • Finally, and as noted above, IntelliWrite prevents fragmentation in real time by improving Windows “Best Fit” file write logic. This means solving file fragmentation issues with no additional writes that could create issues with SAN de-dup or various copy-on-write data redundancy measures.

We performed testing with a customer recently in order to validate the benefits of IntelliWrite over cutting-edge storage. This customer’s SAN array is less than a year old, and while we don’t want to go into specifics in order to avoid seeming partial, it’s from one of today’s leading SAN vendors.

Testing involved an apples-to-apples comparison on a production VM hosted on the SAN. A non-random workload was generated three times while recording Windows-level file fragmentation, several PerfMon metrics, and the time to complete the workload. The test was then repeated three times with IntelliWrite enabled on the same VM’s test volume.
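For anyone who wants to run a similar comparison, here is a minimal sketch of collecting the same class of disk metrics. The counter paths are standard Windows PerfMon counters and typeperf ships with Windows, but the volume letter is a placeholder and our actual test harness was more involved.

    import subprocess

    # Standard PerfMon counters comparable to the metrics reported in the breakdown below.
    COUNTERS = [
        r"\LogicalDisk(E:)\Split IO/Sec",            # E: is a placeholder for the test volume
        r"\LogicalDisk(E:)\Avg. Disk Queue Length",
    ]

    def sample_counters(samples=60, interval_sec=1):
        """Sample the counters with typeperf and return its CSV output."""
        cmd = ["typeperf"] + COUNTERS + ["-si", str(interval_sec), "-sc", str(samples)]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout                          # CSV: timestamp plus one column per counter

    if __name__ == "__main__":
        csv_data = sample_counters(samples=30)
        print(csv_data)   # average the columns for the baseline run and the IntelliWrite run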

Here were the results:

[Charts: before/after comparison of Windows file fragmentation and PerfMon disk metrics]

The breakdown:

Fragmentation reduction with IntelliWrite: 89%

Split IO/sec reduction with IntelliWrite: 81%

Avg. Disk Queue Length reduction with IntelliWrite: 71%

…and with the improvement to these disk performance metrics, the overall time to complete the same actual file operations was reduced by: 48%

The conclusion? If you were asking the same sorts of questions posed earlier, evaluate IntelliWrite for yourself. Remember, the results above were measured on contemporary storage hardware; the older your storage equipment, the greater the improvement in application performance you can expect from investing in optimization. Can you afford not to be getting maximum performance out of your infrastructure and application investments?

The evaluation is quick and fully transparent. Call today to speak with a representative about evaluating Diskeeper or V-locity in your environment.

Diskeeper | IntelliWrite | SAN | V-Locity
