Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Top 5 Questions from V-locity and Diskeeper Customers

by Brian Morin 20. April 2016 05:00

After chatting with 50+ customers over the last three months, I’ve heard the same five questions enough times to turn them into a blog entry, and a lot of them have to do with flash:

 

1. Do Condusiv products still “defrag” like in the old days of Diskeeper?

No. Although users can still run a manual defrag with Diskeeper if they so choose, the core engines in Diskeeper and V-locity have nothing to do with defragmentation or physical disk management. The patented IntelliWrite® engine inside Diskeeper and V-locity adds a layer of intelligence to the Windows operating system, enabling it to improve the sequential nature of I/O traffic with large contiguous writes and subsequent reads, which benefits both SSDs and HDDs. Since I/O is streamlined at the point of origin, fragmentation is proactively prevented from ever becoming an issue in the first place. Although SSDs should never be “defragged,” fragmentation prevention has enormous benefits: it means processing a single I/O to read or write a 64KB file instead of needing several I/Os. This alleviates the IOPS inflation of workloads sent to SSDs and cuts down on the number of erase cycles required to write any given file, improving write performance and extending flash reliability.
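To make the “single I/O versus several I/Os” point concrete, here is a minimal sketch (illustrative only, not Condusiv code) that counts the I/O operations for a 64KB file stored as one contiguous extent versus the same file split into 8KB fragments, assuming each fragment costs its own I/O:

# Illustrative sketch only (not Condusiv code): count the I/O operations needed
# for a 64KB file, assuming each logical-disk fragment (extent) costs one I/O.

def io_count(extent_sizes_kb):
    # One I/O per extent to read or write the whole file.
    return len(extent_sizes_kb)

contiguous = [64]        # one 64KB extent
fragmented = [8] * 8     # the same 64KB split into eight 8KB extents

print(io_count(contiguous))   # 1 I/O
print(io_count(fragmented))   # 8 I/Os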

 

2. Why is it more important to solve Windows write inefficiencies in virtual environments regardless of flash or spindles on the backend? 

Windows write inefficiencies are a problem in physical environments, but they are an even bigger problem in virtual environments because multiple instances of the OS sit on the same host, creating a bottleneck or choke point that all I/O must funnel through. It’s bad enough when one virtual server taxed by Windows write inefficiencies sends down twice as many I/O requests as it should to process any given workload. Now amplify that same problem across all the VMs on the same host, and you end up with a tsunami of unnecessary I/O overwhelming the host and the underlying storage subsystem. The performance penalty of all this unnecessary I/O is further exacerbated by the “I/O blender,” which mixes and randomizes the I/O streams from all the VMs at the hypervisor before sending a highly random pattern out to storage, exactly the type of pattern that chokes flash performance the most: random writes. V-locity’s IntelliWrite® engine writes files in a contiguous manner, which significantly reduces the number of I/Os required to write or read any given file. In addition, IntelliMemory® caches reads from available DRAM. With both engines reducing I/O to storage, the usual 80K I/Os that storage must process for 1GB of data drops to 60K I/Os at a minimum, and often down to 50K or even 40K I/Os. This is why the typical V-locity customer sees anywhere from 50-100% more throughput regardless of flash or spindles on the backend, because all the optimization occurs where the I/O originates.
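As a rough back-of-the-envelope check on those figures (not a measurement, and assuming the storage path is IOPS-limited so throughput scales with the payload carried per I/O), the drop from 80K to 50K I/Os per GB works out as follows:

# Rough back-of-the-envelope using the figures above (80K vs 50K I/Os per GB).
# Assumption: the storage path is IOPS-limited, so throughput scales with the
# average payload carried per I/O.

GB_IN_KB = 1024 * 1024

def avg_io_size_kb(ios_per_gb):
    return GB_IN_KB / ios_per_gb

before = avg_io_size_kb(80_000)   # ~13.1 KB per I/O
after = avg_io_size_kb(50_000)    # ~21.0 KB per I/O

print(f"{before:.1f} KB -> {after:.1f} KB per I/O")
print(f"~{(after / before - 1) * 100:.0f}% more data per I/O at the same IOPS")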

VMware’s own “vSphere Monitoring and Performance Guide” calls for “defragmentation of the file system on all guests” as its top performance best practice tip behind adding more memory. When it comes to V-locity, nothing ever has to be “defragged” since fragmentation is proactively eliminated from ever becoming a problem in the first place.

 

3. How does V-locity help with flash storage?

One of the most common misconceptions is that V-locity is the perfect complement to spindles but not to flash. That couldn’t be further from the truth. The fact is, most V-locity customers run V-locity on top of a hybrid (flash and spindles) array or an all-flash array, because without V-locity the underlying storage subsystem has to process at least 35% more I/O than necessary for any given workload.

As much as virtualization has been great for server efficiency, the one downside is the complexity it introduces to the data path, resulting in I/O characteristics that are much smaller, more fractured, and more random than they need to be. This means flash storage systems process workloads 30-50% slower than they should, because performance suffers death-by-a-thousand-cuts from all this small, fractured, random I/O that inflates IOPS and chews up throughput. V-locity streamlines I/O to be much more efficient, so twice as much data can be carried with each I/O operation. This significantly improves flash write performance and extends flash reliability by reducing erase cycles. In addition, V-locity establishes a tier-0 caching strategy that uses idle, available DRAM to cache reads. As little as 3GB of available memory drives an average 40% reduction in response time (see source). By optimizing both writes and reads, V-locity drives down the amount of I/O required to process any given workload. Instead of needing 80K I/Os to process a GB of data, users typically need only 50K I/Os, or sometimes even less.
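For readers who want a mental model of the tier-0 read cache idea, here is a conceptual sketch; it is not IntelliMemory’s implementation, just a generic LRU read cache that serves repeat block reads from memory so they never reach storage:

# Conceptual sketch only, not IntelliMemory's implementation: a tiny LRU read
# cache that serves repeat block reads from memory so they never reach storage.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()              # block_id -> cached data

    def read(self, block_id, fetch_from_storage):
        if block_id in self.blocks:              # cache hit: no storage I/O
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = fetch_from_storage(block_id)      # cache miss: one storage I/O
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)      # evict least recently used
        return data

# Repeat reads of a hot block are served from memory after the first miss.
cache = ReadCache(capacity_blocks=4)
fetch = lambda block_id: f"data-{block_id}"      # stand-in for a real storage read
cache.read(7, fetch)                             # miss: goes to "storage"
cache.read(7, fetch)                             # hit: served from memory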

For more on how V-locity complements hybrid storage or all-flash storage, listen to the following OnDemand Webinar I did with a flash storage vendor (Nimble) and a mutual customer who uses hybrid storage + V-locity for a best-of-breed approach for I/O performance.

 

4. Is V-locity’s DRAM caching engine starving my applications of precious memory by caching? 

No. V-locity dynamically uses only what Windows sees as available memory and throttles back if an application requires more, ensuring there is never an issue of resource contention or memory starvation. V-locity even keeps a buffer so there is never a latency issue in serving memory back. ESG Labs examined the last 3,500 VMs that tested V-locity and noted a 40% average reduction in response time (see source). This technology has been battle-tested over 5 years across millions of licenses with some of the largest OEMs in the industry.

 

5. What is the difference between V-locity and Diskeeper? 

Diskeeper is for physical servers, while V-locity is for virtual servers. Diskeeper is priced per OS instance, while V-locity is now priced per host, meaning V-locity can be installed on any number of virtual servers on that host. Diskeeper Professional is for physical clients. The main feature difference is that whereas Diskeeper keeps physical servers or clients running like new, V-locity accelerates applications by 50-300%. While both Diskeeper and V-locity solve Windows write inefficiencies at the point of origin where I/O is created, V-locity goes a step further by caching reads in idle, available DRAM for 50-300% faster application performance. Diskeeper customers who have virtualized can opt to convert their Diskeeper licenses to V-locity licenses to drive value to their virtualized infrastructure.

 

Stay tuned for the next major release of Diskeeper, coming soon, which may inherit similar functionality from V-locity.

V-locity I/O Reduction Software Put to the Test on 3500 VMs

by Brian Morin 17. March 2016 04:18

As much as we commonly cite the expected performance gain from V-locity® I/O reduction software as 50-300% faster application performance, that 50-300% represents quite a range, and where a system lands correlates with how badly it is taxed by I/O inefficiencies in the virtual environment before V-locity streamlines them. While some workloads experience 300% throughput gains, other workloads in the same environment see 50% gains.

While there is already plenty of V-locity performance validation in 15 published case studies that all reveal a doubling in VM performance, we wanted to get an idea of what V-locity delivers on average across a large scale. So we decided to take off the “rose-colored” glasses of what we think our software does and handed the last 3,450 VMs that tested V-locity over to ESG Labs, who examined the raw data from over 100 sites and published the findings in this report.

Here are the key findings:

- Reduced read I/O to storage. ESG Lab calculated that 55% of systems saw a 50% reduction in the number of read I/Os serviced by the underlying storage.

- Reduced write I/O to storage. As a result of I/O density increases, ESG Lab witnessed a 33% reduction in write I/Os across 27% of the systems. In addition, 14% of systems experienced a 50% or greater reduction in write I/O from VM (virtual machine) to storage.

- Increased throughput. ESG Lab witnessed throughput improvements of 50% or more for 43% of systems, a 100% increase in throughput for 29% of systems, and as much as a 300% increase for 8% of systems.

- Decreased I/O response time. ESG Lab calculated that systems with 3GB of available DRAM achieved a 40% reduction in response time across all I/O operations.

- Increased IOPS. ESG Lab found that 25% of systems saw IOPS increase by 50% or more.

 

The key takeaway from this analysis is the sizeable performance loss virtualized organizations suffer from I/O inefficiencies, inefficiencies that can be easily solved by V-locity streamlining I/O at the guest level on Windows VMs. Whereas most organizations typically respond to I/O performance issues with the brute-force approach of throwing more expensive hardware at the problem, V-locity demonstrates the efficiencies organizations can achieve at a fraction of the cost of new hardware by simply solving the root-cause problem first.


Largest-Ever I/O Performance Study

by Brian Morin 28. January 2016 09:10

Over the last year, 2,654 IT professionals took our industry-first I/O Performance Survey, making it the largest I/O performance survey of its kind. The key findings reveal an I/O performance struggle for virtualized organizations, as 77% of all respondents indicated I/O performance issues after virtualizing. The full 17-page report is available for download at http://learn.condusiv.com/2015survey.html.

Key findings in the survey include:

- More than 1/3rd of respondents (36%) are currently experiencing staff or customer complaints regarding sluggish applications running on MS SQL or Oracle

- Nearly 1/3rd of respondents (28%) are so limited by I/O bottlenecks that they have reached an "I/O ceiling" and are unable to scale their virtualized infrastructure

- To improve I/O performance since virtualizing, 51% purchased a new SAN, 8% purchased PCIe flash cards, 17% purchased server-side SSDs, 27% purchased storage-side SSDs, 16% purchased more SAS spindles, and 6% purchased a hyper-converged appliance

- In the coming year, to remediate I/O bottlenecks, 25% plan to purchase a new SAN, 8% plan to purchase a hyper-converged appliance, 10% will purchase SAS spindles, 16% will purchase server-side SSDs, 8% will purchase PCIe flash cards, 27% will purchase storage-side SSDs, and 35% will purchase nothing in the coming year

- Over 1,000 applications were named when respondents were asked to identify the top two most challenging applications to support from a systems performance standpoint. Everything in the top 10 was an application running on top of a database

- 71% agree that improving the performance of one or two applications via inexpensive I/O reduction software to avoid a forklift upgrade is either important or urgent for their environment

As much as virtualization has provided cost savings and improved efficiency at the server level, those cost savings are typically traded off for backend storage infrastructure upgrades to handle the new IOPS requirements of virtualized workloads. This is due to I/O characteristics that are much smaller, more fractured, and more random than they need to be. The added complexity that virtualization introduces to the data path via the “I/O blender” effect, which randomizes I/O from disparate VMs, combined with the amplification of Windows write inefficiencies at the logical disk layer, erodes the relationship between I/O and data and generates a flood of small, fractured I/O. This compounding effect between the I/O blender and Windows write inefficiencies inflicts “death by a thousand cuts” on system performance, creating the perfect trifecta for poor performance: small, fractured, random I/O.

Since native virtualization does nothing out of the box to solve this problem, organizations are left with little choice but to accept the loss of throughput from these inefficiencies and to overbuy and overprovision for performance from an IOPS standpoint, since they are twice as IOPS dependent as they actually need to be…except for Condusiv customers, who use V-locity® I/O reduction software to see 50-300% faster application performance on the hardware they already have by solving this root-cause problem at the VM OS layer.

Note: Respondents from companies with fewer than 100 employees were excluded, so results would not be skewed by the low end of the SMB market.

Is Fragmentation Robbing SAN Performance?

by Brian Morin 16. March 2015 09:39

This month Condusiv® announced the most significant development in the Diskeeper® product line to date – expanding our patented fragmentation prevention capabilities beyond server local storage or direct-attached storage (DAS) to now include Storage Area Networks, making it the industry's first real-time fragmentation solution for SAN storage.

Typically, as soon as we mention "fragmentation" and "SAN" in the same sentence, an 800-pound gorilla walks into the room and we’re met with some resistance, because there is an assumption that RAID controllers and technologies within the SAN mitigate the problem of fragmentation at the physical layer.

As much as SAN technologies do a good job of managing blocks at the physical layer, the real reason SAN performance degrades over time has nothing to do with the physical disk layer but rather with fragmentation that is inherent to the Windows file system at the logical disk software layer.

In a SAN environment, the physical layer is abstracted from the Windows OS, so Windows doesn't even see the physical layer at all – that’s the SAN's job. Windows references the logical disk layer at the file system level.

Fragmentation is inherent to the fabric of Windows. When Windows writes a file, it is not aware of the size of the file or file extension, so it will break that file apart into multiple pieces with each piece allocated to its own address at the logical disk layer. Therefore, the logical disk becomes fragmented BEFORE the SAN even receives the data.

How does a fragmented logical disk create performance problems? Unnecessary IOPS (input/output operations per sec). If Windows sees a file existing as 20 separate pieces at the logical disk level, it will execute 20 separate I/O commands to process the whole file. That’s a lot of unnecessary I/O overhead to the server and, particularly, a lot of unnecessary IOPS to the underlying SAN for every write and subsequent read.
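To put that in IOPS terms, here is a small illustrative calculation; the workload numbers are hypothetical, assumed only to show the multiplication effect:

# Illustrative arithmetic only; the workload numbers are hypothetical. Shows how
# logical-disk fragmentation multiplies the IOPS a server pushes to the SAN.

pieces_per_file = 20        # each file exists as 20 pieces at the logical disk
files_read_per_sec = 500    # hypothetical read workload

iops_fragmented = pieces_per_file * files_read_per_sec   # 10,000 IOPS to the SAN
iops_contiguous = 1 * files_read_per_sec                 # 500 IOPS, assuming a
                                                         # single I/O covers each file

print(iops_fragmented, iops_contiguous)   # 10000 500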

Diskeeper 15 Server prevents fragmentation from occurring in the first place at the file system layer. That means Windows will write files in a more contiguous or sequential fashion to the logical disk. Instead of breaking a file into 20 pieces that need 20 separate I/O operations for every write and subsequent read, it will write that file in a more contiguous fashion so only minimal I/O is required.

Perhaps the best way to illustrate this is with a traffic analogy. Bottlenecks occur where freeways intersect. You could say the problem is not enough lanes (throughput) or the cars are too slow (IOPS), but we’re saying the easiest problem to solve is the fact of only one person per car!

By eliminating the Windows I/O "tax" at the source, organizations achieve greater I/O density, improved throughput, and less I/O required for any given workload – by simply filling the “car” with more people. Fragmentation prevention at the top of the technology stack ultimately means systems can process more data in less time.

When openBench Labs tested Diskeeper Server, they found throughput increased 1.3X. That is, from 75.1 MB/sec to 100 MB/sec. A manufacturing company saw their I/O density increase from 24KB to 45KB. This eliminated 400,000 I/Os per server per day, and the IT Director said it "eliminated any lag during peak operation."

Many administrators are led to believe they need to buy more IOPS to improve storage performance when, in fact, the Windows I/O tax has made them more IOPS dependent than they need to be because much of their workload is fractured I/O. By writing files in a more sequential fashion, the number of I/Os required to process a GB of data drops significantly, so more data can be processed in less time.
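Using the I/O density figures from the manufacturing example above (24KB to 45KB average I/O size), a quick, illustrative calculation shows why the I/O count per GB drops so sharply:

# Quick arithmetic using the I/O density figures quoted above (24KB -> 45KB
# average I/O size); illustrative only.

GB_IN_KB = 1024 * 1024

ios_at_24kb = GB_IN_KB / 24   # ~43,700 I/Os to move 1GB
ios_at_45kb = GB_IN_KB / 45   # ~23,300 I/Os for the same 1GB

print(round(ios_at_24kb), round(ios_at_45kb))
print(f"~{(1 - ios_at_45kb / ios_at_24kb) * 100:.0f}% fewer I/Os for the same data")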

Keep in mind, this is not just true for SANs with HDDs but for SANs with SSDs as well. In a SAN environment, the Windows OS isn’t aware of the physical layer or the storage media being used. The I/O overhead from splitting files apart at the logical disk means just as many unnecessary IOPS to SSD as to HDD; the SSD simply processes that inefficient I/O more quickly than a hard disk drive would.

Diskeeper 15 Server is not a "defrag" utility. It doesn’t compete with the SAN for management of the physical layer by instructing the RAID controllers on how to manage the data. Diskeeper’s patented proactive approach is the perfect complement to a SAN, ensuring only productive I/O is processed from server to storage to keep physical servers and SAN storage running like new.

With organizations spending tens of thousands of dollars on server and storage hardware and even hundreds of thousands of dollars on large SSD deployments, why give 25% or more performance over to fragmentation when it can be prevented altogether for a mere $400 per physical server at our lowest volume tier?

Try Diskeeper 15 Server for 30 Days ->

The Real Meaning of Disruption

by Jerry Baldwin 25. March 2014 07:26

Disruption is a popular word these days. Is it the replacement for innovation, which we’ve overused into pointlessness over the past ten years? Maybe, but disruption means more: it carries the weight of shaking up the status quo—of not only creating something new—but creating something that shifts models and opens our eyes to possibility.

We talk about disruption like it’s a new thing, cooked-up by a team of marketers in a conference room somewhere. But its roots lie in a theory introduced by an Austrian economist in 1942. Joseph Schumpeter captured the world’s attention in only six pages, where he clarified the free market’s muddled way of delivering progress. He called this theory Creative Destruction.

I don’t have a PhD in economics so I’ll dodge the minutiae, but the overall meaning of the theory is what resonates—as it should to anyone engaged in business today. “Creative destruction is the same process of industrial mutation that revolutionizes the economic structure from within, incessantly destroying the old one and incessantly creating a new one.”

Simply put, to continually move forward we must be willing to embrace the destruction of what came before.

Economists often use the Montgomery Ward example. MW started as a mail-order company in the nineteenth century. With no cars or trucks in those days, and most Americans living in small rural pockets, it was too costly to ship products to widely-dispersed local stores. That model would result in a high cost to the consumer, making goods too expensive for the average buyer. So instead MW turned its enormous Chicago warehouse (complete with a railroad running through it) into its hub, and used the already well-established US mail to deliver products directly to customers at a low cost.

And a successful model it was. With a high volume of sales, MW could charge lower prices. By the 1890s Montgomery Ward was the world’s largest retailer. But all that came to an end. Why? Because over time Americans were moving to urban centers and could afford a higher standard of living.

Robert Wood was an MW exec, and may well be the first adopter of Big Data on record. He analyzed data, studied the census, and saw the shift in American lifestyle. He presented his findings to leadership, suggesting that selling goods through a chain of urban department stores would replace the mail-order model as the most profitable path for retail.

So MW did what any unmovable enterprise would do. They fired him.

Meanwhile James Cash Penney recognized the same trends as Robert Wood, and it wasn’t long before J.C. Penney stores put a serious dent in MW’s profits. The mail-order giant was late to the party and couldn’t change direction fast enough. And Robert Wood? He went to work for Sears, who also took a big bite out of MW’s market share.

Remind you of anything? Netflix and Blockbuster. Blockbuster was the established enterprise, staring streaming revenue in the face, yet unable to let go of profits from the rental market. Netflix is the newcomer—the creative destructor—free from the ball and chain of a dying business model, free to focus 100% on new streaming revenue. And we all know the end of that story.

We also know that business is anything but stagnant, there are waves and cycles, and the same is true of companies. It’s very difficult (impossible?) for an established enterprise to turn around, to be truly disruptive, and to compete with newcomers. But if you’re an organization that can stomach the destruction of what came before to create new growth and opportunity, you might stand a chance.

Condusiv is unique. We’re a software company with a 30-year history as an established player in file system management and caching technologies. But in the spirit of disruption—of creative destruction—we’ve shifted all our focus and resources to V-locity, our flagship product that in itself signifies disruption in how organizations run their data centers. A 100% software approach to solving today’s application performance issues at the root on the VM or OS layer, without requiring any additional hardware.

When you embrace creative destruction you must ceaselessly devalue existing wealth in order to clear the ground for the creation of new wealth.

It’s true, our discussions may be very different from those had in other conference rooms. But that’s what disruption should be about, and that’s how Condusiv can deliver performance in a software solution at 1/10th the cost of the hardware alternative. While others go for margin erosion, trying to win business by saving customers 20 cents on every dollar spent, we help customers save 90 cents on every dollar.

As a channel partner, we allow you to fulfill your mission to your customers with added value, while also protecting your bottom line with a generous software margin that grows your profit over razor-thin commodity hardware margins. You win. Your customer wins. A new market is emerging where hardware becomes the second option for improved application performance.

Welcome to our conference room.
