Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Is Fragmentation Robbing SAN Performance?

by Brian Morin 16. March 2015 09:39

This month Condusiv® announced the most significant development in the Diskeeper® product line to date – expanding our patented fragmentation prevention capabilities beyond server local storage or direct-attached storage (DAS) to now include Storage Area Networks, making it the industry's first real-time fragmentation solution for SAN storage.

Typically, as soon as we mention "fragmentation" and "SAN" in the same sentence, an 800-pound gorilla walks into the room and we meet some resistance, because there is an assumption that RAID controllers and other technologies within the SAN mitigate the problem of fragmentation at the physical layer.

SAN technologies do a good job of managing blocks at the physical layer, but the real reason SAN performance degrades over time has nothing to do with the physical disk layer. It is fragmentation inherent to the Windows file system at the logical disk layer.

In a SAN environment, the physical layer is abstracted from the Windows OS, so Windows doesn't even see the physical layer at all – that’s the SAN's job. Windows references the logical disk layer at the file system level.

Fragmentation is inherent to the fabric of Windows. When Windows writes a file, it does not know the file's final size, so it breaks the file into multiple pieces, each allocated to its own address at the logical disk layer. The logical disk therefore becomes fragmented BEFORE the SAN even receives the data.

How does a fragmented logical disk create performance problems? Unnecessary IOPS (input/output operations per second). If Windows sees a file existing as 20 separate pieces at the logical disk level, it will execute 20 separate I/O commands to process the whole file. That is a lot of unnecessary I/O overhead on the server and, in particular, a lot of unnecessary IOPS against the underlying SAN for every write and every subsequent read.
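To make that overhead concrete, here is a toy model of the relationship described above. The functions and numbers are our illustration only, not anything from Diskeeper itself: one I/O command per logically contiguous fragment, so fragment count drives I/O count and shrinks the payload each I/O carries.

```python
# Toy model of fragmentation overhead (illustrative only; the
# functions and figures are ours, not Condusiv's).

def io_ops_required(fragment_count: int) -> int:
    """Windows issues one I/O command per logically contiguous piece."""
    return fragment_count

def avg_io_size_kb(file_size_kb: float, fragment_count: int) -> float:
    """The payload carried per I/O shrinks as fragmentation grows."""
    return file_size_kb / fragment_count

# A 1 MB (1024 KB) file stored as 20 fragments vs. stored contiguously:
print(io_ops_required(20))       # 20 I/Os for every full read of the file
print(io_ops_required(1))        # 1 I/O if the file is contiguous
print(avg_io_size_kb(1024, 20))  # 51.2 KB moved per I/O when fragmented
```

The same 1 MB of data reaches the SAN either way; fragmentation just multiplies the number of commands needed to move it.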

Diskeeper 15 Server prevents fragmentation from occurring in the first place at the file system layer, which means Windows writes files to the logical disk in a more contiguous, sequential fashion. Instead of breaking a file into 20 pieces that require 20 separate I/O operations for every write and subsequent read, it writes that file contiguously so only minimal I/O is required.

Perhaps the best way to illustrate this is with a traffic analogy. Bottlenecks occur where freeways intersect. You could say the problem is not enough lanes (throughput) or that the cars are too slow (IOPS), but the easiest problem to solve is that there is only one person per car.

By eliminating the Windows I/O "tax" at the source, organizations achieve greater I/O density, improved throughput, and less I/O required for any given workload – by simply filling the “car” with more people. Fragmentation prevention at the top of the technology stack ultimately means systems can process more data in less time.

When openBench Labs tested Diskeeper Server, they found throughput increased 1.3X, from 75.1 MB/sec to 100 MB/sec. A manufacturing company saw its average I/O size (I/O density) increase from 24KB to 45KB, which eliminated 400,000 I/Os per server per day; its IT Director said it "eliminated any lag during peak operation."
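A quick sanity check of the quoted throughput figures (the numbers come from the text above; the script is just arithmetic):

```python
# Check the openBench Labs throughput figures quoted above.
before_mb_s = 75.1
after_mb_s = 100.0
speedup = after_mb_s / before_mb_s
print(round(speedup, 2))  # 1.33, consistent with the "1.3X" claim
```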

Many administrators are led to believe they need to buy more IOPS to improve storage performance when, in fact, the Windows I/O tax has made them more IOPS-dependent than they need to be, because much of their workload is fractured I/O. By writing files in a more sequential fashion, the number of I/Os required to process a GB of data drops significantly, so more data can be processed in less time.

Keep in mind, this is not just true for SANs backed by HDDs but for SSDs as well. In a SAN environment, the Windows OS is not aware of the physical layer or the storage media being used. The I/O overhead from splitting files apart at the logical disk layer means just as many unnecessary IOPS to SSD as to HDD; the SSD simply processes that inefficient I/O more quickly than a hard disk drive.

Diskeeper 15 Server is not a "defrag" utility. It does not compete with the SAN for management of the physical layer by instructing the RAID controllers on how to manage data. Diskeeper's patented proactive approach is the perfect complement to a SAN, ensuring only productive I/O is processed from server to storage to keep physical servers and SAN storage running like new.

With organizations spending tens of thousands of dollars on server and storage hardware, and even hundreds of thousands of dollars on large SSD deployments, why give up 25% or more of that performance to fragmentation when it can be prevented altogether for a mere $400 per physical server at our lowest volume tier?

Try Diskeeper 15 Server for 30 Days ->

The Biggest Missed Culprit in SQL Performance Troubleshooting

by Brian Morin 18. February 2015 09:53

"We didn't know how much of our SQL performance was being dampened by the nasty 'I/O blender' effect…"

As it turned out, it was HALF. 

That's right. Their systems were processing HALF as many MB/sec as they should due to the noise of all their VM workloads meeting and mixing at the hypervisor. The first thing the "I/O blender" effect does is tax throughput, so application performance becomes far more dependent on storage IOPS than it needs to be.

Read the full story of how I.B.I.S., Inc. doubled performance of their CRM and ERP by eliminating the I/O blender effect ->

So what is the "I/O blender" effect and how is it taxing application performance? 

The "I/O blender" effect is a phenomenon specific to virtual server environments: the I/O streams from disparate VMs are funneled together at the hypervisor, which then sends a highly random I/O stream out to storage, penalizing overall application performance.

Every organization that has virtualized has experienced this pain. They virtualized their applications only to discover mounting I/O pressure on the backend storage infrastructure. This was the unintended consequence of virtualization: organizations save costs at the compute layer, only to trade those savings away at backend storage, where a forklift upgrade becomes necessary to handle the new random I/O demand.

In the case of I.B.I.S., Inc., their IT Director wanted to look into this problem a little further to see what could be done before reactively buying more storage hardware for improved performance.

"We wanted to try V-locity® I/O reduction software first to see if it could tackle the root cause problem as advertised at the VM level where I/O originates," said Kevin Schmidt, IT Director.

While IT departments typically lack monitoring tools that show exactly how much performance is dampened by the "I/O blender" effect, V-locity comes with an embedded benchmark that gives a before/after picture of I/O reduction and demonstrates how much performance improves by combatting this problem at the Windows operating system layer.

As it turned out, I.B.I.S., Inc.'s heaviest SQL workloads saw a 120% improvement in data throughput. Before V-locity, it took 82,000 I/Os to process 1GB of data. After V-locity, that number was cut to 29,000 I/Os per GB. Due to the increase in I/O density, instead of taking 0.78 minutes to process 1GB, it now takes only 0.36 minutes.
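Those figures hang together arithmetically. A short script (every input below is taken from the case study; nothing is measured by us) shows the average I/O size and the speedup they imply:

```python
# Back-of-the-envelope check of the I.B.I.S., Inc. figures above.
GB_IN_KB = 1024 * 1024  # 1 GB expressed in KB

ios_per_gb_before = 82_000
ios_per_gb_after = 29_000

# Average I/O size implied by the I/O-density figures:
print(round(GB_IN_KB / ios_per_gb_before, 1))  # ~12.8 KB per I/O before
print(round(GB_IN_KB / ios_per_gb_after, 1))   # ~36.2 KB per I/O after

# Speedup implied by the per-GB processing times:
minutes_before = 0.78
minutes_after = 0.36
gain = minutes_before / minutes_after - 1
print(round(gain * 100))  # ~117%, in line with the reported 120% improvement
```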

"Since we're no longer dealing with so many small split I/Os and random I/O streams, V-locity has enabled our CRM and ERP systems to process twice the amount of data in the same amount of time. The best part is that we didn't have to spend a single dime on expensive new hardware to get that performance," said Schmidt.

Read the full case study ->


SQL Batch Job Hell

by Brian Morin 1. October 2014 04:16

ASL was in SQL batch job hell.

A regular import of 150 million records into their SQL database would take 27 hours to complete.

ASL’s account team and clients needed access to the most current data immediately, but the 27-hour batch job meant that access would slip a full day of production, or even two. That wasn’t acceptable, as some clients would hold back business while waiting for new data to come online.

“Typically, IT professionals respond to application performance issues by reactively buying more hardware. Without the luxury of a padded budget, we needed to find a way to improve performance on the hardware infrastructure we already have,” said Ralph Ortiz, IT Manager, ASL Marketing.

ASL upgraded their network to 10GbE and was looking at either a heavy investment in SSD or a full rip-and-replace of the SAN architecture before the end of its lifecycle. Since that kind of hardware investment wasn’t in the budget, they decided to take a look at V-locity® I/O reduction software.

“I was very doubtful that V-locity could improve my I/O performance through a software-only solution. But with nothing to lose, we evaluated V-locity on our SQL servers and were amazed to see that, literally overnight, we doubled throughput from server to storage and cut our SQL batch job times in half,” said Ortiz.

After deploying V-locity, SQL batch jobs that used to take 27 hours now complete in 12–14 hours. The weekly college database import that used to take 17 hours is now down to 7 hours.

Read the full case study – ASL Doubles Throughput with V-locity I/O Reduction Software

$2 Million Cancelled

by Brian Morin 22. July 2014 08:52

CHRISTUS Health cancelled a $2 Million order.

Just before they pulled the trigger on a $2 Million storage purchase to improve the performance of their electronic health records application (MEDITECH®), they evaluated V-locity® I/O reduction software.

We actually heard the story firsthand from the NetApp® reseller in the deal at a UBM Xchange conference. He thought he had closed the $2 Million deal, only to find out that CHRISTUS was doing some testing with V-locity. After getting the news that the storage order would not be placed, he met us at Xchange to find out more, since "this V-locity stuff is for real."

After an initial conversation with anyone about V-locity, the first response is generally the same: skepticism. Can software alone really accelerate the applications in my virtual environment? Because we are conditioned to think only new hardware upgrades can solve performance bottlenecks, organizations end up with spiraling data center costs and no option except to throw more hardware at the problem.

CHRISTUS Health, like many others, approached us with the same skepticism. But after virtualizing 70+ servers for their EHR application, they noticed a severe performance hit from the “I/O blender” effect. They needed a solution to solve the problem, not just more hardware to medicate the problem on the backend.

Since V-locity comes with an embedded performance benchmark that provides the I/O profile of any VM workload, it makes it easy to see a before/after comparison in real-world environments.

After evaluation, not only did CHRISTUS find they were able to double their medical records performance, but after trying V-locity on their batch billing job, they dropped a painful 20-hour job down to 12 hours.

In addition to performance gains, V-locity also provides a special benefit to MEDITECH users by eliminating excessive file fragmentation that can cause the File Attribute List (FAL) to reach its size limit and degrade performance further or even threaten availability.

Tom Swearingen, the manager of Infrastructure Services at CHRISTUS Health said it best. "We are constantly scrutinizing our budget, so anything that helps us avoid buying more storage hardware for performance or host-related infrastructure is a huge benefit."

Read the full case study – CHRISTUS Health Doubles Electronic Health Record Performance with V-locity I/O Reduction Software

The Gartner Cool Vendor Report in Storage Technologies: Vanity or Value

by Robert Woolery 22. April 2014 08:58

We all like lists that rank who is cool, which product is best in class, or which scores highest in a buyer’s guide. Every year, Gartner releases its prized "Cool Vendor" selection. But is it just vanity for the vendor selected, or is there actual, tangible value to the prospective customer that makes you care?

We believe one significant difference between the Cool Vendor Report and other reports is that Gartner does a deep-dive examination of compelling vendors across the technology landscape; then, upon selecting its "cool vendors" for the year, it reveals its analysis: why each vendor is cool, the challenges the vendor faces, and who should care.

Of all the technology companies on the landscape, Gartner chose to highlight four this year in the area of storage technologies, providing research into their innovative products and/or services.

When we were brainstorming our flagship product V-locity, we spoke to hundreds of customers and heard a common theme: performance problems in virtual environments, where users were buying lots of hardware to solve an inherent software problem, the "I/O blender" effect.

As we dug in, a clearer picture emerged. We have become conditioned to medicating performance problems with hardware. And why not? In the past, performance gains grew by 4X to 8X every ten years. Hardware was cheap, price-performance improved every two years, and inertia made business as usual low risk: buy more hardware, because we’ve always done it that way and the financial folks understand the approach.

When we evangelized the problem of I/O growing faster than hardware can cost-effectively keep up with, and the need for a software-only approach to cure it, we found that the problem and solution resonated with many customers: webinar attendance ranged from 400 to 2,000 attendees. And while we are fast approaching 2,000 corporate installations, some customers still wonder why they have not heard of the I/O problem we solve and our innovative way of solving it. They want some proof.

This is where the Gartner Cool Vendor report is helpful to IT users and their organizations. The reports help focus and reduce the learning curve on relevant problems in IT, identify innovative companies that warrant further investigation, and highlight interesting new products and services that address emerging trends.

The Cool Vendor Report can be read in the time it takes to have a cup of coffee. Not surprisingly, the Cool Vendor Reports are among the top two reports Gartner clients download.

Now for our vanity plug, Condusiv is listed in the Cool Vendor Report titled "Cool Vendors in Storage Technologies, 2014." This is usually only available to Gartner clients, but we paid for distribution rights so you could read it for free. Download Gartner's Cool Vendors in Storage Technologies Report
