Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

V-locity 6.0 Solves Death by a Thousand Cuts in Virtual Environments

by Brian Morin 12. August 2015 08:04

If you haven’t already heard the pre-announcement buzz on V-locity® 6.0 I/O reduction software that made a splash in the press, it’s being released in a couple of weeks. To understand why it’s significant, and why it’s an unprecedented 3X FASTER than its predecessor, is to understand the factor that dampens application performance most in virtual environments: the problem of increasingly smaller, fractured, and random I/O. That kind of I/O profile is akin to pouring molasses on compute and storage systems, forcing them to work much harder than necessary to process any given workload. Virtualized organizations stymied by sluggish performance in their most I/O-intensive applications suffer in large part from a problem we call "death by a thousand cuts": I/O that is smaller, more fractured, and more random than it needs to be.

Organizations tend to overlook the root cause and instead reactively mask the symptoms with more spindles, more flash, or a forklift storage upgrade. Unfortunately, this approach wastes much of any new investment in flash, since optimal performance is still being robbed by I/O inefficiencies at the Windows OS layer and at the hypervisor layer.

V-locity® version 6 has been built from the ground-up to help organizations solve their toughest application performance challenges without new hardware. This is accomplished by optimizing the I/O profile for greater throughput while also targeting the smallest, random I/O that is cached from available DRAM to reduce latency and rid the infrastructure of the kind of I/O that penalizes performance the most.

Although much is made of V-locity’s patented IntelliWrite® engine, which increases I/O density and sequentializes writes, special attention was paid to V-locity’s DRAM read caching engine (IntelliMemory®), which is now 3X more efficient in version 6 thanks to changes in the behavioral analytics engine that focus on "caching effectiveness" instead of "cache hits."

Leveraging available server-side DRAM for caching is very different from leveraging a dedicated flash resource for cache, whether PCI-e or SSD. Although DRAM isn’t capacity-intensive, it is exponentially faster than any PCI-e or SSD cache sitting below it, which makes it ideal as the first caching tier in the infrastructure. The trick is in knowing how to best use a capacity-limited but blazing fast storage medium.
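To see why, a quick back-of-the-envelope model helps. The sketch below is ours, not Condusiv’s, and its latency constants are rough industry assumptions (DRAM in the nanoseconds, PCI-e flash on the order of a hundred microseconds), but it shows how strongly a DRAM tier pulls down average read latency as the hit ratio climbs:

```python
# Rough model of effective read latency with a DRAM tier in front of flash.
# Both latency constants are illustrative assumptions, not measured values.
DRAM_NS = 100        # ~100 ns per DRAM cache hit (assumed)
FLASH_NS = 100_000   # ~100 us per PCI-e flash read (assumed)

def effective_latency_ns(hit_ratio: float) -> float:
    """Average read latency when hit_ratio of reads are served from DRAM
    and the remainder fall through to the flash tier below it."""
    return hit_ratio * DRAM_NS + (1.0 - hit_ratio) * FLASH_NS

for h in (0.0, 0.5, 0.9, 0.99):
    print(f"DRAM hit ratio {h:4.0%}: {effective_latency_ns(h) / 1000:7.2f} us average")
```

Because the miss path dominates the average, a small DRAM cache only pays off if it hits on exactly the right blocks, which is what the analytics described next are for.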

Commodity algorithms that simply look at characteristics like access frequency might work for capacity-intensive caches, but they don’t work for DRAM. V-locity 6.0 determines the best use of DRAM for caching purposes by collecting a wide range of data points per storage block: access frequency, I/O priority, process priority, type of I/O, nature of I/O (sequential or random), and time between I/Os. It then leverages its analytics engine to identify which storage blocks will benefit the most from caching, which also reduces "cache churn," the repeated recycling of cache blocks. By prioritizing the smallest, random I/O to be served from DRAM, V-locity eliminates the most performance-robbing I/O from traversing the infrastructure. Administrators don’t need to be concerned about carving out precious DRAM for caching purposes, as V-locity dynamically leverages only available DRAM. With a mere 4GB of RAM per VM, we’ve seen gains from 50% to well over 600%, depending on the I/O profile.
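Condusiv hasn’t published the engine’s internals, so the following is only a hypothetical sketch of the general idea: score each block on telemetry like the data points listed above, and admit only the highest scorers into the capacity-limited DRAM cache. Every name, field, and weight here is an illustrative assumption, not V-locity’s actual logic.

```python
import time
from dataclasses import dataclass

@dataclass
class BlockStats:
    """Illustrative per-block telemetry, loosely mirroring the data points
    named above (access frequency, I/O size, randomness, recency)."""
    accesses: int = 0
    last_access: float = 0.0       # monotonic timestamp of the last access
    avg_io_bytes: float = 4096.0   # average I/O size touching this block
    random_fraction: float = 0.0   # 1.0 = purely random access pattern

def caching_benefit(s: BlockStats, now: float) -> float:
    """Hypothetical score favoring small, random, frequently and recently
    accessed blocks: the I/O that costs the most when served from storage.
    Blending recency with frequency (rather than counting raw cache hits)
    is one way to avoid churn from admitting blocks on a one-off burst."""
    if s.accesses == 0:
        return 0.0
    smallness = 4096.0 / max(s.avg_io_bytes, 4096.0)   # 4K I/O scores highest
    recency = 1.0 / (1.0 + (now - s.last_access))
    return s.accesses * recency * (1.0 + s.random_fraction) * smallness

def pick_blocks(stats: dict[int, BlockStats], slots: int) -> list[int]:
    """Admit only the top-scoring blocks into the limited cache."""
    now = time.monotonic()
    ranked = sorted(stats, key=lambda b: caching_benefit(stats[b], now),
                    reverse=True)
    return ranked[:slots]
```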

With V-locity 5, we examined data from 2,576 systems that tested V-locity and shared their before/after data with Condusiv servers. From that raw data, we verified that 43% of all systems experienced a greater-than-50% reduction in read latency due to IntelliMemory. That’s a significant number in its own right from simply using available DRAM, and we can’t wait to see how it jumps for our customers with V-locity 6.

Internal Iometer tests reveal that the latest version of IntelliMemory in V-locity 6.0 is 3.6X faster when processing 4K blocks and 2.0X faster when processing 64K blocks.

Jim Miller, Senior Analyst, Enterprise Management Associates had this to say, "V-locity version 6.0 makes a very compelling argument for server-side DRAM caching by targeting small, random I/O - the culprit that dampens performance the most. This approach helps organizations improve business productivity by better utilizing the available DRAM they already have. However, considering the price evolution of DRAM, its speed, and proximity to the processor, some organizations may want to add additional memory for caching if they have data sets hungry for otherworldly performance gains."

Finally, one of our customers, Rich Reitenauer, Manager of Infrastructure Management and Support, Alvernia University, had this to say, "Typical IT administrators respond to application performance issues by reactively throwing more expensive server and storage hardware at them, without understanding what the real problem is. Higher education budgets can't afford that kind of brute-force approach. By trying V-locity I/O reduction software first, we were able to double the performance of our LMS app sitting on SQL, stop all complaints about performance, stop the application from timing out on students, and avoid an expensive forklift hardware upgrade."

For more on the I/O Inefficiencies that V-locity solves, read Storage Switzerland’s Briefing on V-locity 6.0 ->

Is Fragmentation Robbing SAN Performance?

by Brian Morin 16. March 2015 09:39

This month Condusiv® announced the most significant development in the Diskeeper® product line to date: expanding our patented fragmentation prevention capabilities beyond server local storage and direct-attached storage (DAS) to include Storage Area Networks, making it the industry's first real-time fragmentation prevention solution for SAN storage.

Typically, as soon as we mention "fragmentation" and "SAN" in the same sentence, an 800-pound gorilla walks into the room, and we’re met with some resistance: there is an assumption that RAID controllers and technologies within the SAN mitigate the problem of fragmentation at the physical layer.

As much as SAN technologies do a good job of managing blocks at the physical layer, the real reason SAN performance degrades over time has nothing to do with the physical disk layer; rather, it is fragmentation inherent to the Windows file system at the logical disk software layer.

In a SAN environment, the physical layer is abstracted from the Windows OS, so Windows doesn't even see the physical layer at all – that’s the SAN's job. Windows references the logical disk layer at the file system level.

Fragmentation is inherent to the fabric of Windows. When Windows writes a file, it is not aware of the file's size or extension, so it will break that file apart into multiple pieces, with each piece allocated to its own address at the logical disk layer. Therefore, the logical disk becomes fragmented BEFORE the SAN even receives the data.

How does a fragmented logical disk create performance problems? Unnecessary IOPS (input/output operations per second). If Windows sees a file existing as 20 separate pieces at the logical disk level, it will execute 20 separate I/O commands to process the whole file. That’s a lot of unnecessary I/O overhead on the server and, particularly, a lot of unnecessary IOPS against the underlying SAN for every write and subsequent read.
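As a minimal illustration (our arithmetic, with assumed numbers, not figures from the product), consider the read commands needed for a 1MB file when each command can cover one contiguous run but cannot span the gap between fragments:

```python
import math

def read_commands(file_bytes: int, fragments: int,
                  max_io_bytes: int = 1 << 20) -> int:
    """Minimum read commands for a file stored as `fragments` contiguous
    runs, assuming one command covers at most max_io_bytes and cannot
    span the gap between two runs (both assumptions are illustrative)."""
    run_bytes = math.ceil(file_bytes / fragments)
    return fragments * math.ceil(run_bytes / max_io_bytes)

ONE_MB = 1 << 20
print(read_commands(ONE_MB, fragments=20))  # 20 commands when fragmented
print(read_commands(ONE_MB, fragments=1))   # 1 command when contiguous
```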

Diskeeper 15 Server prevents fragmentation from occurring in the first place at the file system layer. That means Windows will write files in a more contiguous or sequential fashion to the logical disk. Instead of breaking a file into 20 pieces that need 20 separate I/O operations for every write and subsequent read, it will write that file in a more contiguous fashion so only minimal I/O is required.

Perhaps the best way to illustrate this is with a traffic analogy. Bottlenecks occur where freeways intersect. You could say the problem is not enough lanes (throughput) or that the cars are too slow (IOPS), but we’re saying the easiest problem to solve is that there is only one person per car!

By eliminating the Windows I/O "tax" at the source, organizations achieve greater I/O density, improved throughput, and less I/O required for any given workload – by simply filling the “car” with more people. Fragmentation prevention at the top of the technology stack ultimately means systems can process more data in less time.

When openBench Labs tested Diskeeper Server, they found throughput increased 1.3X, from 75.1 MB/sec to 100 MB/sec. A manufacturing company saw their I/O density (average I/O size) increase from 24KB to 45KB. This eliminated 400,000 I/Os per server per day, and the IT Director said it "eliminated any lag during peak operation."

Many administrators are led to believe they need to buy more IOPS to improve storage performance when, in fact, the Windows I/O tax has made them more IOPS-dependent than they need to be, because much of their workload is fractured I/O. By writing files in a more sequential fashion, the number of I/Os required to process a GB of data drops significantly, so more data can be processed in less time.
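The arithmetic is straightforward. Plugging in the I/O density figures from the manufacturing example above (average I/O size growing from 24KB to 45KB), the I/O count needed to move a fixed amount of data nearly halves:

```python
GB = 1024 ** 3  # bytes in one gigabyte

# I/Os required to move 1GB of data at each average I/O size
# (the 24KB and 45KB figures come from the example above).
for avg_io_kb in (24, 45):
    ios_per_gb = GB / (avg_io_kb * 1024)
    print(f"{avg_io_kb}KB average I/O -> {ios_per_gb:,.0f} I/Os per GB")
# 24KB average I/O -> 43,691 I/Os per GB
# 45KB average I/O -> 23,302 I/Os per GB
```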

Keep in mind, this is not just true for SANs with HDDs but SSDs as well. In a SAN environment, the Windows OS isn’t aware of the physical layer or storage media being used. The I/O overhead from splitting files apart at the logical disk means just as many unnecessary IOPS to SSD as HDD. SSD is only processing that inefficient I/O more quickly than a hard disk drive.

Diskeeper 15 Server is not a "defrag" utility. It doesn’t compete with the SAN for management of the physical layer by instructing the RAID controllers on how to manage the data. Diskeeper’s patented proactive approach is the perfect complement to a SAN, ensuring only productive I/O is processed from server to storage to keep physical servers and SAN storage running like new.

With organizations spending tens of thousands of dollars on server and storage hardware and even hundreds of thousands of dollars on large SSD deployments, why give 25% or more performance over to fragmentation when it can be prevented altogether for a mere $400 per physical server at our lowest volume tier?

Try Diskeeper 15 Server for 30 Days ->

The Gartner Cool Vendor Report in Storage Technologies: Vanity or Value?

by Robert Woolery 22. April 2014 08:58

We all like lists that rank who is cool, best in class, or top score in a buyer’s guide. Every year, Gartner releases its prized "Cool Vendor" selection. But is it just vanity for the vendor selected, or is there actual, tangible value to the prospective customer that makes you care?

We believe one significant difference between the Cool Vendor Report and other reports is that Gartner does a deep-dive examination of compelling vendors across the technology landscape and then, upon selecting its "cool vendors" for the year, reveals its analysis: why the vendor is cool, the challenges the vendor faces, and who should care.

Of all the technology companies on the landscape, Gartner chose to highlight four this year in the area of storage technologies, providing research into their innovative products and/or services.

When we were brainstorming our flagship product V-locity, we spoke to hundreds of customers and heard a common theme: performance problems in virtual environments, with users buying lots of hardware to solve an inherent software problem, the "I/O blender" effect.

As we dug in, a clearer picture emerged. We've become conditioned to medicating performance problems with hardware. And why not? In the past, performance gains grew by 4X to 8X every ten years. Hardware was cheap. Price/performance continued to improve every two years. And inertia made business as usual low risk: buy more hardware, because we’ve always done it that way and the financial folks understand the approach.

When we evangelized the problem of I/O growing faster than hardware could cost-effectively keep up with, and the need for a software-only approach to easily cure it, we found the problem and solution resonated with many customers: webinar attendance ranged from 400 to 2,000 attendees. And while we are fast approaching 2,000 corporate installations, there are still customers wondering why they have not heard of the I/O problem we solve and our innovative way of solving it. They want some proof.

This is where the Gartner Cool Vendor report is helpful to IT users and their organizations. The reports reduce the learning curve on relevant problems in IT, focus attention on the innovative companies that warrant further investigation, and highlight interesting new products and services that address issues in emerging trends.

The Cool Vendor Report can be read in the time it takes to have a cup of coffee. Not surprisingly, the Cool Vendor Reports are among the top two reports Gartner clients download.

Now for our vanity plug: Condusiv is listed in the Cool Vendor Report titled "Cool Vendors in Storage Technologies, 2014." This is usually available only to Gartner clients, but we paid for distribution rights so you can read it for free. Download Gartner's Cool Vendors in Storage Technologies Report

When Dave Cappuccio of Gartner Speaks, People Listen

by Robert Woolery 10. April 2014 01:32

As others entered the stage, there was perfunctory applause. However, when Dave entered, a roar of applause erupted and the event ground to a halt for several minutes.

Sitting in the audience at Gartner Symposium 2013, I saw this reaction several times over the next several days. When Dave Cappuccio, Vice President and Chief of Research at Gartner, speaks, people listen.

He made the case for the “Infinite Data Center” concept – increasing data center performance and productivity without growing beyond the current facility.  He guided the audience through a strategy that allows IT organizations to grow revenue without increasing the IT budget. 

The strategies and concepts were not only thought-provoking but also inspired and challenged the audience to think about their roles differently: how IT could provide leadership to organizations dealing with the Nexus of Forces – the convergence of social, mobility, cloud, and information patterns that drive new business scenarios – forces that are innovative and disruptive on their own but together are revolutionizing business and society, disrupting old business models, and creating new leaders.

It was clear he had insights and answers to many of the issues and he could help organizations develop strategies that would address questions like “What are the IT inhibitors to transforming the business into a growth engine?”  “What strategies can a CIO employ to overcome the challenge of transforming the business without increasing the IT budget?” and “How can IT leaders explain this value to the CEO or CFO in business terms?”

From this inspiration, this Gartner webcast was born. As our featured guest, Dave tackles these issues and provides a construct that can be used to answer these questions for each customer’s unique circumstances. Moreover, Dave guides IT leaders in how to manage reactions to thought leadership that pushes organizations out of their traditional comfort zones so they can transform their business into a growth engine.

Take a look and let us know your thoughts.
http://event.on24.com/r.htm?e=767143&s=1&k=E7EA80EB703744884F03E72B1E91D455

The Real Meaning of Disruption

by Jerry Baldwin 25. March 2014 07:26

Disruption is a popular word these days. Is it the replacement for innovation, which we’ve overused into pointlessness over the past ten years? Maybe, but disruption means more: it carries the weight of shaking up the status quo—of not only creating something new—but creating something that shifts models and opens our eyes to possibility.

We talk about disruption like it’s a new thing, cooked up by a team of marketers in a conference room somewhere. But its roots lie in a theory introduced by an Austrian economist in 1942. Joseph Schumpeter captured the world’s attention in only six pages, in which he clarified the free market’s muddled way of delivering progress. He called this theory Creative Destruction.

I don’t have a PhD in economics so I’ll dodge the minutiae, but the overall meaning of the theory is what resonates—as it should to anyone engaged in business today. “Creative destruction is the same process of industrial mutation that revolutionizes the economic structure from within, incessantly destroying the old one and incessantly creating a new one.”

Simply put, to continually move forward we must be willing to embrace the destruction of what came before.

Economists often use the Montgomery Ward example. MW started as a mail-order company in the nineteenth century. With no cars or trucks in those days, and most Americans living in small rural pockets, it was too costly to ship products to widely dispersed local stores. That model would result in a high cost to the consumer, making goods too expensive for the average buyer. So instead MW turned its enormous Chicago warehouse (complete with a railroad running through it) into its hub, and used the already well-established US mail to deliver products directly to customers at a low cost.

And a successful model it was. With a high volume of sales, MW could charge lower prices. By the 1890s Montgomery Ward was the world’s largest retailer. But all that came to an end. Why? Because over time Americans were moving to urban centers and could afford a higher standard of living.

Robert Wood was an MW exec, and may well be the first adopter of Big Data on record. He analyzed data, studied the census, and saw the shift in American lifestyle. He presented his findings to leadership, suggesting that selling goods through a chain of urban department stores would replace the mail-order model as the most profitable path for retail.

So MW did what any unmovable enterprise would do. They fired him.

Meanwhile James Cash Penney recognized the same trends as Robert Wood, and it wasn’t long before J.C. Penney stores put a serious dent in MW’s profits. The mail-order giant was late to the party and couldn’t change direction fast enough. And Robert Wood? He went to work for Sears, who also took a big bite out of MW’s market share.

Remind you of anything? Netflix and Blockbuster. Blockbuster was the established enterprise, staring streaming revenue in the face, yet unable to let go of profits from the rental market. Netflix is the newcomer—the creative destructor—free from the ball and chain of a dying business model, free to focus 100% on new streaming revenue. And we all know the end of that story.

We also know that business is anything but stagnant, there are waves and cycles, and the same is true of companies. It’s very difficult (impossible?) for an established enterprise to turn around, to be truly disruptive, and to compete with newcomers. But if you’re an organization that can stomach the destruction of what came before to create new growth and opportunity, you might stand a chance.

Condusiv is unique. We’re a software company with a 30-year history as an established player in file system management and caching technologies. But in the spirit of disruption, of creative destruction, we’ve shifted all our focus and resources to V-locity, our flagship product that in itself signifies disruption in how organizations run their data centers: a 100% software approach to solving today’s application performance issues at the root, at the VM or OS layer, without requiring any additional hardware.

When you embrace creative destruction you must ceaselessly devalue existing wealth in order to clear the ground for the creation of new wealth.

It’s true, our discussions may be very different from those had in other conference rooms. But that’s what disruption should be about: that’s how Condusiv can deliver performance in a software solution at 1/10th the cost of the hardware alternative. While others go for margin erosion, trying to win business by saving customers 20 cents on every dollar spent, we help customers save 90 cents on every dollar.

As a channel partner, we allow you to fulfill your mission to your customers with added value, while also protecting your bottom line with a generous software margin that grows your profit over razor-thin commodity hardware margins. You win. Your customer wins. A new market is emerging in which hardware becomes the second option for improved application performance.

Welcome to our conference room.
