After surveying thousands of IT professionals, we’ve found that the vast majority agree that Windows performance degrades over time – they just don’t agree on how much. What most don’t know is what the problem actually is: I/O degradation, as writes and reads become far smaller than they should be. This inefficiency is akin to moving a gallon of water across a room with dixie cups instead of a single gallon jug. Even if you have all-flash storage and can move those dixie cups quickly, you are still not processing data nearly as fast as you could.
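To make the analogy concrete, here is a minimal Python sketch (the payload size, chunk sizes, and file names are all hypothetical) that writes the same amount of data first in 4 KB pieces and then in 1 MB pieces. On most systems the many-small-writes pass takes noticeably longer, even on fast storage – the dixie-cup effect in miniature.

```python
import os
import tempfile
import time

DATA_SIZE = 64 * 1024 * 1024   # 64 MB total payload (illustrative size)
SMALL_CHUNK = 4 * 1024         # 4 KB writes ("dixie cups")
LARGE_CHUNK = 1024 * 1024      # 1 MB writes ("gallon jug")

def timed_write(path, chunk_size):
    """Write DATA_SIZE bytes in chunk_size pieces and return elapsed seconds."""
    payload = b"x" * chunk_size
    start = time.perf_counter()
    # Open unbuffered so each write() reaches the OS as its own request.
    with open(path, "wb", buffering=0) as f:
        for _ in range(DATA_SIZE // chunk_size):
            f.write(payload)
        os.fsync(f.fileno())
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as tmp:
    small = timed_write(os.path.join(tmp, "small.bin"), SMALL_CHUNK)
    large = timed_write(os.path.join(tmp, "large.bin"), LARGE_CHUNK)
    print(f"4 KB writes: {small:.2f}s ({DATA_SIZE / small / 1e6:.0f} MB/s)")
    print(f"1 MB writes: {large:.2f}s ({DATA_SIZE / large / 1e6:.0f} MB/s)")
```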

In the same surveys, we’ve also found that the vast majority of IT professionals are aware of the performance penalty of the “I/O blender” effect in a virtual environment, which is the mixing and randomizing of I/O streams from the disparate virtual machines on the same host. What they don’t agree on is how severe it is. And they are not aware of how the issue is compounded by Windows write inefficiencies.
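As a rough illustration (the VM names and block offsets are purely hypothetical), the Python sketch below interleaves three perfectly sequential per-VM streams the way a shared host queue would. The merged stream that the storage actually sees is anything but sequential.

```python
# Three hypothetical VMs, each issuing a perfectly sequential stream of
# block offsets against its own virtual disk.
vm_streams = {
    "vm1": [0, 1, 2, 3],
    "vm2": [100, 101, 102, 103],
    "vm3": [200, 201, 202, 203],
}

# The hypervisor multiplexes all three onto one queue; a round-robin
# interleave is a crude stand-in for that mixing.
blended = [offset for group in zip(*vm_streams.values()) for offset in group]

print("Per-VM streams (sequential):", list(vm_streams.values()))
print("What shared storage sees:   ", blended)
# -> [0, 100, 200, 1, 101, 201, ...] : every request jumps to a distant
#    offset, so the workload looks effectively random to the array.
```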

The Size of the Problem

Now that the latest Condusiv in-product dashboard has been deployed across thousands of customer systems that have upgraded to the latest version of Condusiv I/O reduction software, customers are getting their first-ever granular view into what I/O reduction software is doing for their systems: the exact percentage and number of read and write I/O operations eliminated from storage, and how much I/O time that saves any given system or group of systems. Ultimately, it’s a picture of the size of the problem – all the I/O traffic that is mere noise, all the unnecessary I/O that dampens system performance.

In our surveys, we found IT professionals all over the map on the size of the performance penalty from these inefficiencies. Some are quite positive the penalty is no more than 10%. More put it at 20%. Most put it at 30%. Then the curve dips back down, with fewer believing in a 40% penalty and the fewest throwing the dart at 50%.

As it turns out, our latest version has been able to drop a pin on that.

There are variables that affect the extent of the penalty on any given workload, such as system configuration and workload behavior. Some systems might be memory constrained, some workloads might be too light to matter, and so on.

After Thousands of Installs…

However, after thousands of installs over the last several months, we see a very consistent range on the vast majority of systems: 30-40% of all I/O traffic is being offloaded from the underlying storage by our software. Not only does that represent an immediate performance boost for users, it also means 30-40% of I/O headroom is handed back to the storage subsystem, which can now use those IOPS for other things.
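For a back-of-the-envelope sense of what that headroom means, here is a tiny Python calculation. The 30-40% range comes from the figures above; the 20,000 IOPS baseline is a hypothetical host used purely for illustration.

```python
# Hypothetical baseline: a host pushing 20,000 IOPS at its storage array.
baseline_iops = 20_000

# Apply the 30-40% offload range reported above to that baseline.
for offload in (0.30, 0.40):
    freed = baseline_iops * offload       # IOPS no longer hitting storage
    remaining = baseline_iops - freed     # IOPS the array still services
    print(f"{offload:.0%} offloaded -> {freed:,.0f} IOPS of headroom returned, "
          f"{remaining:,.0f} IOPS still hitting the array")
```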

The biggest factor to consider is that the 30-40% improvement figure represents systems where memory has not been increased beyond the typical configuration most administrators use. Customers who offload 50% or more of I/O traffic from storage are the ones with read-heavy workloads who beef up memory server-side to get more from the software. For every additional 1-2GB of memory added, another 10-25% of read traffic is offloaded. Some customers are more aggressive and leverage as much memory as possible server-side to offload 90% or more of the I/O traffic on read-heavy applications.

Ensuring Optimal Performance

As expensive as new all-flash systems are, how much sense does it make to pay for all those IOPS only to allow 30-40% of them to be chewed up by unnecessary, noisy I/O? By addressing the two biggest penalties that dampen performance (Windows write inefficiencies compounded by the “I/O blender” effect), Condusiv I/O reduction software ensures optimal performance and protects the CapEx investments made in servers and storage by extending their useful life.