Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

SAN Fragmentation Controversy Incites Attack from NetApp

by Brian Morin 22. April 2015 08:38

I can’t blame it all on NetApp.

It all started with THIS TWEET.

I’ll admit NetApp was misled by an unfortunate headline from a well-intentioned editor at SearchStorage.com, but the CTO Office at NetApp obviously didn’t read the article beyond the headline.

Dave Raffo, Senior News Director at SearchStorage.com, wrote a feature article on the FUD surrounding SAN and fragmentation and how the latest release of Diskeeper® 15 Server eliminates performance-robbing fragmentation without “defragging.” In fact, here is one of Dave’s direct quotes from the article:

“It’s not defragging disks in SAN arrays, but preventing files from being broken into pieces before being written to hard disk drives or solid-state drives non-sequentially. That way, it prevents fragmentation before it becomes an issue.”

Unfortunately, the word “defrag” was inadvertently put into the headline and opening sentence of the article, which triggered a knee-jerk reaction from the CTO Office at NetApp, which tweeted, “NetApp does not recommend using defragmentation software on our kit, period.”

Attention all SAN vendors: Diskeeper 15 Server is not a “defrag” utility! 

Diskeeper 15 proactively prevents fragmentation from occurring in the first place at the Windows file system level. As a result, Diskeeper is not competing with RAID controllers for physical block management or triggering copy-on-write activity by moving blocks like a traditional “defrag.” Instead, Diskeeper 15 Server complements the SAN by making sure it no longer receives small, fractured random I/O from Windows. This patented approach reduces the IOPS requirement for any given workload and improves throughput on existing systems from physical server to storage, so administrators can get more from the systems they have by moving more data in less time. 

We find it takes someone only a few minutes to stop thinking in terms of “defragging” after the fact and start thinking in terms of how much a SAN can benefit from fragmentation prevention. Here are some recent media coverage snippets that explain:

“As Condusiv demonstrates, the level of fragmentation of the logical disk inflates the IOPS requirement for any given workload with a surplus of small, fractured I/O. While part of the performance problem can be hidden behind high performance flash, a highly fragmented environment wastes much of the investment in flash.” – Storage Switzerland, full article ->

“Businesses that are switching over to flash arrays should see a benefit as well. Since fragmentation at the logical layer is inherent in the fabric of Windows, the flash technology will still have a higher I/O overhead. Diskeeper will help organizations switching to flash get the most for their investments.” – Storage Review, full article ->

“Diskeeper 15 Server is the first fragmentation protection for SAN storage connected to physical servers. It prevents fragmentation in real time at the logical disk layer, increasing IO density so more data can be processed.” – Channel Buzz, full article ->

“Condusiv's Diskeeper 15 extends the benefits of defragmentation out to the SAN with the novel technique of reducing fragmentation before the data leaves the server.” – Tom’s IT Pro, full article ->

“Condusiv has added a new twist to its Diskeeper line…by tackling for the first time the question of how to defragment SAN storage.” – CRN, full article ->

Keep in mind, those who want to make sure their virtualized workloads connected to SAN are optimized as well can use V-locity® I/O reduction software for virtual servers. With V-locity, users get the added benefit of server-side DRAM caching to further reduce I/O to storage and serve data even faster.

Diskeeper

Is Fragmentation Robbing SAN Performance?

by Brian Morin 16. March 2015 09:39

This month Condusiv® announced the most significant development in the Diskeeper® product line to date – expanding our patented fragmentation prevention capabilities beyond server local storage or direct-attached storage (DAS) to now include Storage Area Networks, making it the industry's first real-time fragmentation solution for SAN storage.

Typically, as soon as we mention "fragmentation" and "SAN" in the same sentence, an 800-pound gorilla walks into the room and we’re met with some resistance, as there is an assumption that RAID controllers and technologies within the SAN mitigate the problem of fragmentation at the physical layer.

As much as SAN technologies do a good job of managing blocks at the physical layer, the real reason SAN performance degrades over time has nothing to do with the physical disk layer. The culprit is fragmentation inherent to the Windows file system at the logical disk software layer.

In a SAN environment, the physical layer is abstracted from the Windows OS, so Windows doesn't even see the physical layer at all – that’s the SAN's job. Windows references the logical disk layer at the file system level.

Fragmentation is inherent to the fabric of Windows. When Windows writes a file, it is not aware of the file’s final size or how the file will be extended later, so it will often break that file apart into multiple pieces, with each piece allocated to its own address at the logical disk layer. The logical disk therefore becomes fragmented BEFORE the SAN even receives the data.

How does a fragmented logical disk create performance problems? Unnecessary IOPS (input/output operations per second). If Windows sees a file existing as 20 separate pieces at the logical disk level, it must execute 20 separate I/O commands to process the whole file. That’s a lot of unnecessary I/O overhead to the server and, in particular, a lot of unnecessary IOPS to the underlying SAN for every write and subsequent read.
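To put rough numbers on that, here is a toy back-of-envelope model in Python. The file sizes and fragment counts are invented for illustration (this is not Condusiv’s code, just the arithmetic behind the “20 pieces, 20 I/O commands” point):

```python
# Toy model: a file split into N pieces at the logical disk layer
# needs at least N separate I/O requests for every full read or write.
# File sizes and fragment counts below are invented for illustration.

files_kb = [640, 256, 1024]       # hypothetical file sizes in KB
fragments = [20, 8, 32]           # pieces per file on a fragmented volume

total_kb = sum(files_kb)
fragmented_ios = sum(fragments)   # one I/O per piece
contiguous_ios = len(files_kb)    # roughly one I/O per whole file

print(f"Data moved:           {total_kb} KB")
print(f"I/Os when fragmented: {fragmented_ios}  "
      f"(avg {total_kb // fragmented_ios} KB per I/O)")
print(f"I/Os when contiguous: {contiguous_ios}  "
      f"(avg {total_kb // contiguous_ios} KB per I/O)")
```

Same data, twenty times the I/O traffic: that is the unnecessary overhead described above.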

Diskeeper 15 Server prevents fragmentation from occurring in the first place at the file system layer. That means Windows will write files in a more contiguous or sequential fashion to the logical disk. Instead of breaking a file into 20 pieces that need 20 separate I/O operations for every write and subsequent read, it will write that file in a more contiguous fashion so only minimal I/O is required.
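An application-level analogy makes “writing contiguously” concrete: if a program declares a file’s final size before writing, the file system can reserve one contiguous run of free space instead of growing the file piece by piece. Below is a minimal Python sketch of that preallocation pattern. It is an analogy only; Diskeeper performs this kind of work transparently at the Windows file system level, and its actual mechanism is not shown here:

```python
import os

def write_preallocated(path: str, payload: bytes, chunk: int = 64 * 1024):
    """Declare the file's final size up front, then fill it in.

    Extending the file before writing gives the file system the
    opportunity to reserve one contiguous run of free space rather
    than allocating a new piece on every write.
    """
    with open(path, "wb") as f:
        f.truncate(len(payload))          # set the final size first
        f.seek(0)
        for off in range(0, len(payload), chunk):
            f.write(payload[off:off + chunk])

write_preallocated("example.dat", os.urandom(4 * 1024 * 1024))
```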

Perhaps the best way to illustrate this is with a traffic analogy. Bottlenecks occur where freeways intersect. You could say the problem is not enough lanes (throughput) or that the cars are too slow (IOPS), but we’re saying the easiest problem to solve is that there’s only one person per car!

By eliminating the Windows I/O "tax" at the source, organizations achieve greater I/O density, improved throughput, and less I/O required for any given workload – by simply filling the “car” with more people. Fragmentation prevention at the top of the technology stack ultimately means systems can process more data in less time.

When openBench Labs tested Diskeeper Server, they found throughput increased 1.3X, from 75.1 MB/sec to 100 MB/sec. A manufacturing company saw their I/O density (average I/O size) increase from 24KB to 45KB. This eliminated 400,000 I/Os per server per day, and the IT Director said it "eliminated any lag during peak operation."
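Those I/O density figures translate directly into I/O counts. Here is a quick back-of-envelope check in Python (our arithmetic, not part of the openBench report):

```python
KB_PER_GB = 1024 ** 2   # KB in one GB

for label, avg_io_kb in [("before (24KB avg I/O)", 24),
                         ("after  (45KB avg I/O)", 45)]:
    print(f"{label}: {KB_PER_GB / avg_io_kb:,.0f} I/Os to move 1 GB")
# before (24KB avg I/O): 43,691 I/Os to move 1 GB
# after  (45KB avg I/O): 23,302 I/Os to move 1 GB
```

At roughly 20,000 I/Os saved per GB, eliminating 400,000 I/Os per server per day corresponds to on the order of 20 GB of daily I/O traffic per server.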

Many administrators are led to believe they need to buy more IOPS to improve storage performance when in fact, the Windows I/O tax has made them more IOP dependent than they need to be because much of their workload is fractured I/O. By writing files in a more sequential fashion, the number of I/Os required to process a GB of data drops significantly so more data can be processed in less time.

Keep in mind, this is not just true for SANs with HDDs but for those with SSDs as well. In a SAN environment, the Windows OS isn’t aware of the physical layer or the storage media being used. The I/O overhead from splitting files apart at the logical disk means just as many unnecessary IOPS to SSD as to HDD; the SSD simply processes that inefficient I/O more quickly than a hard disk drive would.

Diskeeper 15 Server is not a "defrag" utility. It doesn’t compete with the SAN for management of the physical layer by instructing the RAID controllers on how to manage the data. Diskeeper’s patented proactive approach is the perfect complement to a SAN, ensuring only productive I/O is processed from server to storage to keep physical servers and SAN storage running like new.

With organizations spending tens of thousands of dollars on server and storage hardware and even hundreds of thousands of dollars on large SSD deployments, why give 25% or more performance over to fragmentation when it can be prevented altogether for a mere $400 per physical server at our lowest volume tier?

Try Diskeeper 15 Server for 30 Days ->

ER Can’t Afford Slow Patient Records

by Brian Morin 1. March 2015 05:36

Slow medical record load times were hurting ER and overall patient care hospital-wide.

Ryan Barker was responsible for overseeing the MEDITECH EHR systems at Hancock Regional. As he put it, “You can imagine how dire the situation can be, particularly in the ER. My users can’t spare precious seconds waiting for data to load and records to save.”

Ryan was considering doing what most admins do when facing a performance issue: throw more hardware at the problem. While weighing a forklift upgrade of the SAN that wasn’t in the budget, Ryan heard about what V-locity® I/O reduction software had done to accelerate EHR performance at other MEDITECH hospitals.

Since V-locity is free to evaluate and tailored specifically for MEDITECH, he figured he had nothing to lose.

Here’s a direct quote from the full case study:

Over the first two weeks of running V-locity on all the MEDITECH VMs, Ryan requested several tests from the Core Team, all reporting impressive results. “The feedback I’m getting from the Team is very positive; they are seeing major improvement,” says Ryan. “Before V-locity, the EDM tracker took seven seconds to load two patients. With V-locity it’s now taking only four seconds to load six patients.” Ryan continues, “Compiling a list of 16 patients took 13 seconds. With V-locity it now takes only five seconds to load 19. That’s a major improvement when you’re talking about a busy day in the ER.”

Hancock Regional was able to put an end to its performance issues, defer a SAN upgrade, and is now looking to deploy V-locity on other I/O-intensive applications. In addition, since V-locity comes with the MediWrite™ engine for MEDITECH, the FAL growth issue from severe fragmentation is no longer a downtime risk. MEDITECH requires all 5x and 6x users to have a FAL remediation plan and recommends V-locity for its real-time, automatic FAL remediation capabilities.

Read the full case study ->

MEDITECH | V-Locity

The Biggest Missed Culprit in SQL Performance Troubleshooting

by Brian Morin 18. February 2015 09:53

"We didn't know how much of our SQL performance was being dampened by the nasty 'I/O blender' effect….."

As it turned out, it was HALF. 

That's right. Their systems were processing HALF as many MB/sec as they should have been due to the noise of all their VM workloads meeting and mixing at the point of the hypervisor. The first thing the "I/O blender" effect does is tax throughput, so your application performance becomes far more dependent on storage IOPS than it needs to be.

Read the full story of how I.B.I.S., Inc. doubled the performance of their CRM and ERP by eliminating the I/O blender effect ->

So what is the "I/O blender" effect and how is it taxing application performance? 

The "I/O blender" effect is a phenomenon specific to virtual server environments: the I/O streams from disparate VMs are "funneled" together at the point of the hypervisor, which then sends out to storage a very random I/O stream that penalizes overall application performance.

Every organization that has virtualized has experienced this pain. They virtualized their applications only to discover mounting I/O pressure on the backend storage infrastructure. This was the unintended consequence of virtualization. Organizations save costs on the compute layer via virtualization only to trade those savings to backend storage where a forklift upgrade is necessary to handle the new random I/O demand.
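The effect is easy to reproduce in a toy simulation: give several "VMs" perfectly sequential streams of block addresses, interleave them the way a hypervisor funnels I/O, and measure how sequential the merged stream still looks to storage. The round-robin interleaving below is a simplification of real hypervisor scheduling:

```python
# Four "VMs", each issuing a perfectly sequential run of block addresses.
vms = [list(range(base, base + 8)) for base in (0, 1000, 2000, 3000)]

# The hypervisor funnels the streams together; round-robin interleaving
# stands in for real scheduling.
blended = [lba for group in zip(*vms) for lba in group]

def sequential_fraction(stream):
    """Share of requests that land immediately after the previous one."""
    hits = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
    return hits / (len(stream) - 1)

print(f"single VM stream: {sequential_fraction(vms[0]):.0%} sequential")
print(f"blended stream:   {sequential_fraction(blended):.0%} sequential")
# single VM stream: 100% sequential
# blended stream:   0% sequential
```

Each VM's workload was perfectly sequential on its own; what the storage target actually receives is effectively random.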

In the case of I.B.I.S., Inc., their IT Director wanted to look into this problem a little further to see what could be done before reactively buying more storage hardware for improved performance.

"We wanted to try V-locity® I/O reduction software first to see if it could tackle the root cause problem as advertised at the VM level where I/O originates," said Kevin Schmidt, IT Director.

Most IT departments lack monitoring tools that show exactly how much performance is dampened by the "I/O blender" effect, so V-locity comes with an embedded benchmark that gives a before/after picture of I/O reduction and demonstrates how much performance is improved by combatting this problem at the Windows operating system layer.

As it turned out, I.B.I.S., Inc.'s heaviest SQL workloads saw a 120% improvement in data throughput. Before V-locity, it took 82,000 I/Os to process 1GB of data. After V-locity, that number was cut to 29,000 I/Os per GB. Due to the increase in I/O density, instead of taking 0.78 minutes to process 1GB, it now only takes 0.36 minutes.
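Those figures hang together arithmetically. A quick check (our math, derived from the numbers above):

```python
KB_PER_GB = 1024 ** 2

before_ios, after_ios = 82_000, 29_000   # I/Os per GB
before_min, after_min = 0.78, 0.36       # minutes per GB

print(f"avg I/O size before: {KB_PER_GB / before_ios:.1f} KB")   # ~12.8 KB
print(f"avg I/O size after:  {KB_PER_GB / after_ios:.1f} KB")    # ~36.2 KB
print(f"throughput gain:     {before_min / after_min - 1:.0%}")  # ~117%
```

A 0.78-to-0.36-minute drop is roughly a 2.2x speedup, in line with the reported ~120% throughput improvement.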

"Since we're no longer dealing with so many small split I/Os and random I/O streams, V-locity has enabled our CRM and ERP systems to process twice the amount of data in the same amount of time. The best part is that we didn't have to spend a single dime on expensive new hardware to get that performance," said Schmidt.

Read the full case study ->

Disruption, Application Performance, IOPS | virtualization | V-Locity

SQL Batch Job Hell

by Brian Morin 1. October 2014 04:16

ASL was in SQL batch job hell.

A regular import of 150 million records into their SQL database would take 27 hours to complete.

ASL’s account team and clients needed access to the most current data immediately, but the 27-hour batch job meant that access would slip by a full day of production, or even two. That wasn’t acceptable, as some clients would hold back business while waiting for new data to come online.

“Typically, IT professionals respond to application performance issues by reactively buying more hardware. Without the luxury of a padded budget, we needed to find a way to improve performance on the hardware infrastructure we already have,” said Ralph Ortiz, IT Manager, ASL Marketing.

ASL had upgraded their network to 10GbE and was looking at either a heavy investment in SSD or a full rip-and-replace of the SAN architecture before the end of its lifecycle. Since that kind of hardware investment wasn’t in the budget, they decided to take a look at V-locity® I/O reduction software.

“I was very doubtful that V-locity could improve my I/O performance through a software-only solution. But with nothing to lose, we evaluated V-locity on our SQL servers and were amazed to see that, literally overnight, we doubled throughput from server to storage and cut our SQL batch job times in half,” said Ortiz.

After deploying V-locity, SQL batch jobs that used to take 27 hours now complete in 12–14 hours. The weekly college database import that used to take 17 hours is now down to 7 hours.
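In throughput terms, that looks like this (our arithmetic, using the midpoint of the 12–14 hour range):

```python
records = 150_000_000   # records per import

for label, hours in [("before (27 h)", 27), ("after (~13 h)", 13)]:
    print(f"{label}: {records / (hours * 3600):,.0f} records/sec")
# before (27 h):  1,543 records/sec
# after (~13 h):  3,205 records/sec
```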

Read the full case study – ASL Doubles Throughput with V-locity I/O Reduction Software
