Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

V-locity 6.0 Solves Death by a Thousand Cuts in Virtual Environments

by Brian Morin 12. August 2015 08:04

If you haven’t already heard the pre-announcement buzz on V-locity® 6.0 I/O reduction software that made a splash in the press, it’s being released in a couple of weeks. To understand why it’s significant, and why it’s an unprecedented 3X FASTER than its predecessor, is to understand the factor that dampens application performance the most in virtual environments: increasingly small, fractured, and random I/O. That kind of I/O profile is akin to pouring molasses on compute and storage systems; processing I/O with those characteristics makes systems work much harder than necessary for any given workload. Virtualized organizations stymied by the sluggish performance of their most I/O-intensive applications suffer in large part from a problem we call “death by a thousand cuts” – I/O that is smaller, more fractured, and more random than it needs to be.

Organizations tend to overlook the root problem and instead reactively mask it with more spindles, more flash, or a forklift storage upgrade. Unfortunately, this approach wastes much of any new investment in flash, since optimal performance is still being robbed by I/O inefficiencies at the Windows OS layer and at the hypervisor layer.

V-locity® version 6 has been built from the ground up to help organizations solve their toughest application performance challenges without new hardware. It accomplishes this by optimizing the I/O profile for greater throughput while also caching the smallest, most random I/O in available DRAM to reduce latency and rid the infrastructure of the kind of I/O that penalizes performance the most.

Although much is made of V-locity’s patented IntelliWrite® engine that increases I/O density and sequentializes writes, special attention was given to V-locity’s DRAM read caching engine (IntelliMemory®), which is now 3X more efficient in version 6 thanks to changes in the behavioral analytics engine that focus on “caching effectiveness” instead of “cache hits.”

Leveraging available server-side DRAM for caching is very different from leveraging a dedicated flash resource for cache, whether that be PCI-e or SSD. Although DRAM is limited in capacity, it is orders of magnitude faster than a PCI-e or SSD cache sitting below it, which makes it ideal as the first caching tier in the infrastructure. The trick is in knowing how to best use a capacity-limited but blazing fast storage medium.

Commodity algorithms that simply look at characteristics like access frequency might work for capacity-intensive caches, but they don’t work for DRAM. V-locity 6.0 determines the best use of DRAM for caching purposes by collecting a wide range of data points (storage access, frequency, I/O priority, process priority, types of I/O, nature of I/O (sequential or random), time between I/Os), then leverages its analytics engine to identify which storage blocks will benefit most from caching, which also reduces “cache churn” – the repeated recycling of cache blocks. By prioritizing the smallest, random I/O to be served from DRAM, V-locity keeps the most performance-robbing I/O from traversing the infrastructure. Administrators don’t need to carve out precious DRAM for caching purposes, as V-locity dynamically leverages whatever DRAM is available. With a mere 4GB of RAM per VM, we’ve seen gains from 50% to well over 600%, depending on the I/O profile.
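To make that concrete, here is a minimal sketch in Python of how a cache admission policy might score blocks on several characteristics at once rather than on raw hit counts. The actual IntelliMemory analytics engine is proprietary; the names, weights, and data points below are purely illustrative assumptions:

```python
# Hypothetical sketch of access-pattern-based cache admission scoring.
# Not Condusiv code: it only illustrates weighing several I/O
# characteristics together instead of counting raw cache hits.
from dataclasses import dataclass

@dataclass
class BlockStats:
    accesses: int        # how often the block is read
    avg_io_bytes: int    # average I/O size touching the block
    random_ratio: float  # 0.0 = fully sequential, 1.0 = fully random
    avg_gap_ms: float    # average time between I/Os to the block

def cache_score(b: BlockStats) -> float:
    """Higher score = better candidate for a small, fast DRAM tier."""
    size_penalty = b.avg_io_bytes / 4096         # small I/O benefits most
    recency = 1.0 / (1.0 + b.avg_gap_ms / 100)   # hot blocks score higher
    return b.accesses * b.random_ratio * recency / size_penalty

# Two blocks with identical access counts: the small, random one wins
# the limited DRAM, which also cuts down on cache churn.
blocks = {
    "small_random": BlockStats(900, 4096, 0.9, 20),
    "large_sequential": BlockStats(900, 65536, 0.2, 20),
}
print(max(blocks, key=lambda k: cache_score(blocks[k])))  # small_random
```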

With V-locity 5, we examined data from 2,576 systems that tested V-locity and shared their before/after data with Condusiv servers. From that raw data, we verified that 43% of all systems experienced a greater than 50% reduction in read latency due to IntelliMemory. That is a significant number in its own right from simply using available DRAM, and we can’t wait to see how it jumps for our customers with V-locity 6.

Internal Iometer tests reveal that the latest version of IntelliMemory in V-locity 6.0 is 3.6X faster when processing 4K blocks and 2.0X faster when processing 64K blocks.

Jim Miller, Senior Analyst, Enterprise Management Associates had this to say, "V-locity version 6.0 makes a very compelling argument for server-side DRAM caching by targeting small, random I/O - the culprit that dampens performance the most. This approach helps organizations improve business productivity by better utilizing the available DRAM they already have. However, considering the price evolution of DRAM, its speed, and proximity to the processor, some organizations may want to add additional memory for caching if they have data sets hungry for otherworldly performance gains."

Finally, one of our customers, Rich Reitenauer, Manager of Infrastructure Management and Support, Alvernia University, had this to say, "Typical IT administrators respond to application performance issues by reactively throwing more expensive server and storage hardware at them, without understanding what the real problem is. Higher education budgets can't afford that kind of brute-force approach. By trying V-locity I/O reduction software first, we were able to double the performance of our LMS app sitting on SQL, stop all complaints about performance, stop the application from timing out on students, and avoid an expensive forklift hardware upgrade."

For more on the I/O inefficiencies that V-locity solves, read Storage Switzerland’s Briefing on V-locity 6.0 ->

3 Min Video on SAN Misconceptions Regarding Fragmentation

by Brian Morin 23. June 2015 08:56

In just 3 minutes, George Crump, Sr. Analyst at Storage Switzerland, explains the real problem around fragmentation and SAN storage, debunks misconceptions, and describes what organizations are doing about it. It should be noted that even though he is speaking about the Windows OS on physical servers, the problem is the same for virtual servers connected to SAN storage. Watch ->

In conversations we have with SAN storage administrators and even storage vendors, it usually takes some time for someone to realize that performance-robbing Windows fragmentation does occur, but the problem is not what you think. It has nothing to do with the physical layer under SAN management or latency from physical disk head movement. 

When people think of fragmentation, they typically think in the context of physical blocks on a mechanical disk. However, in a SAN environment, the Windows OS is abstracted from the physical layer. The Windows OS manages the logical disk software layer and the SAN manages how the data is physically written to disk or solid-state.

What this means is that the SAN device has no control or influence over how data is written to the logical disk. In the video, George Crump describes how fragmentation is inherent to the fabric of Windows and what actually happens when a file is written to the logical disk in a fragmented manner – I/Os become fractured, and it takes more I/O than necessary to process any given file. As a result, SAN systems are overloaded with small, fractured, random I/O, which dampens overall performance. The I/O overhead from a fragmented logical disk impacts SAN storage populated with flash just as much as a system populated with disk.

The video doesn’t have time to go into why this actually happens, so here is a brief explanation: 

Since the Windows OS takes a one-size-fits-all approach to all environments, it is not aware of file sizes. That means the OS does not look for an allocation of the proper size within the logical disk when writing or extending a file. It simply takes the next available allocation. If that allocation is not large enough, the OS fills it, splits the file, and looks for the next available allocation, filling and splitting again until the whole file is written. The resulting problem in a SAN environment, with flash or disk, is that a dedicated I/O operation is required to process every piece of the file. In George’s example, it could take 25 I/O operations to process a file that could otherwise have been processed with a single I/O. We see customer examples of severe fragmentation where a single file has been fractured into thousands of pieces at the logical layer. It’s akin to pouring molasses on a SAN system.
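A toy simulation makes the arithmetic plain. This is illustrative Python, not Condusiv code; it simply shows how a next-available allocation strategy fractures a file, and why each fragment then costs its own I/O operation:

```python
# Illustrative only: greedy "next available allocation" file writes.
def write_file(free_extents, file_clusters):
    """Fill free extents in order; return the fragments written."""
    fragments = []
    remaining = file_clusters
    for extent in free_extents:
        if remaining == 0:
            break
        used = min(extent, remaining)  # fill, then split if needed
        fragments.append(used)
        remaining -= used
    return fragments

# A 100-cluster file written into small, scattered free extents...
print(len(write_file([4] * 25, 100)), "I/Os")  # 25 I/Os, one per fragment

# ...versus one contiguous extent large enough for the whole file.
print(len(write_file([100], 100)), "I/Os")     # 1 I/O
```

Twenty-five fragments means twenty-five I/O operations for a file that a single contiguous allocation could have handled with one – exactly the inflation George describes.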

Since a defragmentation process only deals with the problem after the fact and is not an option on a modern, production SAN without taking it offline, Condusiv developed its patented IntelliWrite® technology within both Diskeeper® and V-locity® to prevent I/Os from fracturing in the first place. IntelliWrite provides intelligence to the Windows OS, helping it find an allocation of the proper size within the logical disk instead of merely the next available one. This enables files to be written (and read) in a more contiguous and sequential manner, so only the minimum I/O is required of any workload from server to storage. This increases throughput on existing systems so organizations can get peak performance from the SSDs or mechanical disks they already have, and avoid overspending on expensive hardware to combat performance problems that can be so easily solved.

SAN Fragmentation Controversy Incites Attack from NetApp

by Brian Morin 22. April 2015 08:38

I can’t blame it all on NetApp.

It all started with THIS TWEET.

I’ll admit NetApp was misled by an inadvertent title from a well-intentioned editor at Searchstorage.com, but the CTO Office at NetApp obviously didn’t read the whole article beyond the headline.

Dave Raffo, the Senior News Director for Search Storage, wrote a feature article on the FUD surrounding SAN and fragmentation and how the latest release of Diskeeper® 15 Server eliminates performance-robbing fragmentation without “defragging.” In fact, here is one of Dave’s direct quotes from the article:

“It’s not defragging disks in SAN arrays, but preventing files from being broken into pieces before being written to hard disk drives or solid-state drives non-sequentially. That way, it prevents fragmentation before it becomes an issue.”

Unfortunately, the word “defrag” was inadvertently put into the headline and opening sentence of the article, which triggered a knee-jerk reaction from the CTO Office at NetApp, who tweeted, “NetApp does not recommend using defragmentation software on our kit, period.”

Attention all SAN vendors: Diskeeper 15 Server is not a “defrag” utility! 

Diskeeper 15 proactively prevents fragmentation from occurring in the first place at the Windows file system level. As a result, Diskeeper is not competing with RAID controllers for physical block management or triggering copy-on-write activity by moving blocks like a traditional “defrag.” Instead, Diskeeper 15 Server complements the SAN by making sure it no longer receives small, fractured random I/O from Windows. This patented approach reduces the IOPS requirement for any given workload and improves throughput on existing systems from physical server to storage, so administrators can get more from the systems they have by moving more data in less time. 

We find it takes someone a few minutes to stop thinking in terms of “defragging” after the fact and start thinking in terms of how much a SAN can benefit from fragmentation prevention. Here are some recent media coverage snippets that explain:

“As Condusiv demonstrates, the level of fragmentation of the logical disk inflates the IOPS requirement for any given workload with a surplus of small, fractured I/O. While part of the performance problem can be hidden behind high performance flash, a highly fragmented environment wastes much of the investment in flash.” – Storage Switzerland, full article ->

“Businesses that are switching over to flash arrays should see a benefit as well. Since fragmentation at the logical layer is inherent in the fabric of Windows, the flash technology will still have a higher I/O overhead. Diskeeper will help organizations switching to flash get the most for their investments.” – Storage Review, full article ->

“Diskeeper 15 Server is the first fragmentation protection for SAN storage connected to physical servers. It prevents fragmentation in real time at the logical disk layer, increasing IO density so more data can be processed.” – Channel Buzz, full article ->

“Condusiv's Diskeeper 15 extends the benefits of defragmentation out to the SAN with the novel technique of reducing fragmentation before the data leaves the server.” – Tom’s IT Pro, full article ->

“Condusiv has added a new twist to its Diskeeper line…by tackling for the first time the question of how to defragment SAN storage.” – CRN, full article ->

Keep in mind, those who want to make sure their virtualized workloads connected to SAN are optimized as well can use V-locity® I/O reduction software for virtual servers. With V-locity, users get the added benefit of server-side DRAM caching to further reduce I/O to storage and serve data even faster.

Is Fragmentation Robbing SAN Performance?

by Brian Morin 16. March 2015 09:39

This month Condusiv® announced the most significant development in the Diskeeper® product line to date – expanding our patented fragmentation prevention capabilities beyond local server storage and direct-attached storage (DAS) to now include Storage Area Networks, making it the industry's first real-time fragmentation prevention solution for SAN storage.

Typically, as soon as we mention "fragmentation" and "SAN" in the same sentence, an 800-pound gorilla walks into the room and we’re met with some resistance, as there is an assumption that RAID controllers and technologies within the SAN mitigate the problem of fragmentation at the physical layer.

As good a job as SAN technologies do of managing blocks at the physical layer, the real reason SAN performance degrades over time has nothing to do with the physical disk layer; rather, it is fragmentation inherent to the Windows file system at the logical disk software layer.

In a SAN environment, the physical layer is abstracted from the Windows OS, so Windows doesn't even see the physical layer at all – that’s the SAN's job. Windows references the logical disk layer at the file system level.

Fragmentation is inherent to the fabric of Windows. When Windows writes or extends a file, it is not aware of the file’s size, so it will break that file apart into multiple pieces, with each piece allocated to its own address at the logical disk layer. Therefore, the logical disk becomes fragmented BEFORE the SAN even receives the data.

How does a fragmented logical disk create performance problems? Unnecessary IOPS (input/output operations per second). If Windows sees a file existing as 20 separate pieces at the logical disk level, it will execute 20 separate I/O commands to process the whole file. That’s a lot of unnecessary I/O overhead on the server and, particularly, a lot of unnecessary IOPS to the underlying SAN for every write and every subsequent read.
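To put rough numbers on the multiplier (hypothetical workload figures, purely to illustrate):

```python
# Illustrative back-of-the-envelope math, not a benchmark: the same
# workload's daily I/O demand with and without logical-disk fragmentation.
files_per_day = 50_000   # hypothetical number of file writes/reads
pieces_fragmented = 20   # fragments per file, as in the example above
pieces_contiguous = 1    # one contiguous allocation per file

print(files_per_day * pieces_fragmented)  # 1,000,000 I/Os per day
print(files_per_day * pieces_contiguous)  # 50,000 I/Os per day
```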

Diskeeper 15 Server prevents fragmentation from occurring in the first place at the file system layer. That means Windows will write files in a more contiguous or sequential fashion to the logical disk. Instead of breaking a file into 20 pieces that need 20 separate I/O operations for every write and subsequent read, it will write that file in a more contiguous fashion so only minimal I/O is required.

Perhaps the best way to illustrate this is with a traffic analogy. Bottlenecks occur where freeways intersect. You could say the problem is not enough lanes (throughput) or that the cars are too slow (IOPS), but we’re saying the easiest problem to solve is that there is only one person per car!

By eliminating the Windows I/O "tax" at the source, organizations achieve greater I/O density, improved throughput, and less I/O required for any given workload – by simply filling the “car” with more people. Fragmentation prevention at the top of the technology stack ultimately means systems can process more data in less time.

When openBench Labs tested Diskeeper Server, they found throughput increased 1.3X, from 75.1 MB/sec to 100 MB/sec. A manufacturing company saw its I/O density increase from 24KB to 45KB, eliminating 400,000 I/Os per server per day; its IT Director said it "eliminated any lag during peak operation."

Many administrators are led to believe they need to buy more IOPS to improve storage performance when, in fact, the Windows I/O tax has made them more IOPS-dependent than they need to be, because much of their workload is fractured I/O. By writing files in a more sequential fashion, the number of I/Os required to process a GB of data drops significantly, so more data can be processed in less time.
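Using the manufacturing company’s I/O density figures quoted above (24KB average I/O before, 45KB after), the effect on I/Os per GB is easy to work out:

```python
# I/Os required to move 1 GB of data at a given average I/O size.
# Density figures taken from the manufacturing-company example above.
GB = 1024 ** 3

def ios_per_gb(avg_io_kb: int) -> int:
    return GB // (avg_io_kb * 1024)

print(ios_per_gb(24))  # ~43,690 I/Os per GB at 24KB density
print(ios_per_gb(45))  # ~23,301 I/Os per GB at 45KB density, nearly half
```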

Keep in mind, this is not just true for SANs with HDDs but for SSDs as well. In a SAN environment, the Windows OS isn’t aware of the physical layer or the storage media being used. The I/O overhead from splitting files apart at the logical disk means just as many unnecessary IOPS to SSD as to HDD; the SSD simply processes that inefficient I/O more quickly than a hard disk drive would.

Diskeeper 15 Server is not a "defrag" utility. It doesn’t compete with the SAN for management of the physical layer by instructing the RAID controllers on how to manage the data. Diskeeper’s patented proactive approach is the perfect complement to a SAN, ensuring only productive I/O is processed from server to storage to keep physical servers and SAN storage running like new.

With organizations spending tens of thousands of dollars on server and storage hardware and even hundreds of thousands of dollars on large SSD deployments, why give 25% or more performance over to fragmentation when it can be prevented altogether for a mere $400 per physical server at our lowest volume tier?

Try Diskeeper 15 Server for 30 Days ->

ER Can’t Afford Slow Patient Records

by Brian Morin 1. March 2015 05:36

Slow medical record load times were hurting ER and overall patient care hospital-wide.

Ryan Barker was responsible for overseeing the MEDITECH EHR systems at Hancock Regional. As he put it, “You can imagine how dire the situation can be, particularly in the ER. My users can’t spare precious seconds waiting for data to load and records to save.”

Ryan was considering doing what most admins do when facing a performance issue – throw more hardware at the problem. While weighing a forklift upgrade of the SAN that wasn’t in the budget, Ryan heard about what V-locity® I/O reduction software had done to accelerate EHR performance at other MEDITECH hospitals.

Since V-locity is free to evaluate and tailored specifically for MEDITECH, he figured he had nothing to lose.

Here’s a direct quote from the full case study:

Over the first two weeks of running V-locity on all the MEDITECH VMs, Ryan requested several tests from the Core Team, all reporting impressive results. “The feedback I’m getting from the Team is very positive; they are seeing major improvement,” says Ryan. “Before V-locity, the EDM tracker took seven seconds to load two patients. With V-locity it’s now taking only four seconds to load six patients.” Ryan continues, “Compiling a list of 16 patients took 13 seconds. With V-locity it now takes only five seconds to load 19. That’s a major improvement when you’re talking about a busy day in the ER.”

Hancock Regional was able to put an end to its performance issues, defer upgrading its SAN, and is now looking to deploy V-locity on other I/O-intensive applications. In addition, since V-locity comes with the MediWrite™ engine for MEDITECH, the FAL growth issue caused by severe fragmentation is no longer a downtime risk. MEDITECH requires all 5x and 6x users to have a FAL remediation plan and recommends V-locity for its real-time, automatic FAL remediation capabilities.

Read the full case study ->
