Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

V-locity 6.0 Solves Death by a Thousand Cuts in Virtual Environments

by Brian Morin 12. August 2015 08:04

If you haven’t already heard the pre-announcement buzz on V-locity® 6.0 I/O reduction software that made a splash in the press, it’s being released in a couple of weeks. To understand why it’s significant, and why it’s an unprecedented 3X FASTER than its predecessor, is to understand the single biggest factor that dampens application performance in virtual environments – increasingly small, fractured, and random I/O. That kind of I/O profile is akin to pouring molasses on compute and storage systems; processing I/O with those characteristics makes systems work much harder than necessary for any given workload. Virtualized organizations stymied by sluggish performance in their most I/O-intensive applications suffer in large part from a problem we call “death by a thousand cuts” – I/O that is smaller, more fractured, and more random than it needs to be.

Organizations tend to overlook solving the problem and instead reactively attempt to mask it with more spindles, more flash, or a forklift storage upgrade. Unfortunately, this approach wastes much of any new investment in flash, since optimal performance is still being robbed by I/O inefficiencies at the Windows OS layer and at the hypervisor layer.

V-locity® version 6 has been built from the ground up to help organizations solve their toughest application performance challenges without new hardware. This is accomplished by optimizing the I/O profile for greater throughput while also serving the smallest, random I/O from available DRAM to reduce latency and rid the infrastructure of the kind of I/O that penalizes performance the most.

Although much is made of V-locity’s patented IntelliWrite® engine, which increases I/O density and sequentializes writes, special attention was paid to V-locity’s DRAM read caching engine (IntelliMemory®), which is now 3X more efficient in version 6 due to changes in the behavioral analytics engine that focus on “caching effectiveness” instead of “cache hits.”
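To see the distinction in a simple model, the sketch below contrasts a raw hit count with an effectiveness-style measure that weighs each hit by the back-end latency it avoided. This is a purely illustrative toy, not Condusiv’s algorithm; the latency figures and data shapes are assumptions.

```python
# Illustrative toy only: contrasts raw "cache hits" with "caching effectiveness".
# Latency figures and the scoring idea are assumptions, not Condusiv's implementation.

from dataclasses import dataclass

@dataclass
class BlockStats:
    hits: int                   # times this block was served from the DRAM cache
    backend_latency_ms: float   # what each read would have cost from back-end storage

def raw_hit_count(blocks):
    """Classic metric: total hits, regardless of what each hit was worth."""
    return sum(b.hits for b in blocks)

def latency_saved_ms(blocks):
    """Effectiveness-style metric: weighs each hit by the latency it avoided."""
    return sum(b.hits * b.backend_latency_ms for b in blocks)

# Many hits on cheap sequential reads vs. fewer hits on small, random reads.
sequential_64k = [BlockStats(hits=10_000, backend_latency_ms=0.2)]
random_4k      = [BlockStats(hits=2_000,  backend_latency_ms=8.0)]

print(raw_hit_count(sequential_64k), raw_hit_count(random_4k))        # 10000 vs 2000
print(latency_saved_ms(sequential_64k), latency_saved_ms(random_4k))  # 2000.0 vs 16000.0
```

Counting hits alone favors the sequential blocks, while weighting by avoided latency favors the small random reads – the I/O that actually hurts.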

Leveraging available server-side DRAM for caching is very different from leveraging a dedicated flash resource for cache, whether PCI-e or SSD. Although DRAM isn’t capacity-rich, it is exponentially faster than a PCI-e or SSD cache sitting below it, which makes it ideal as the first caching tier in the infrastructure. The trick is in knowing how to best use a capacity-limited but blazing-fast storage medium.

Commodity algorithms that simply look at characteristics like access frequency might work for capacity-intensive caches, but they don’t work for DRAM. V-locity 6.0 determines the best use of DRAM for caching by collecting a wide range of data points (storage access, frequency, I/O priority, process priority, type of I/O, nature of I/O (sequential or random), and time between I/Os), then leverages its analytics engine to identify which storage blocks will benefit most from caching, which also reduces “cache churn” – the repeated recycling of cache blocks. By prioritizing the smallest, random I/O to be served from DRAM, V-locity keeps the most performance-robbing I/O from traversing the infrastructure. Administrators don’t need to be concerned about carving out precious DRAM for caching, as V-locity dynamically leverages whatever DRAM is available. With a mere 4GB of RAM per VM, we’ve seen gains from 50% to well over 600%, depending on the I/O profile.
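As a rough illustration of that kind of telemetry-driven scoring (a hypothetical heuristic, not the actual analytics engine; the field names and weights below are invented for the example), a cache-admission score might combine several of those signals and favor small, random, frequently re-read blocks:

```python
# Hypothetical cache-admission heuristic; field names and weights are illustrative only.

def admission_score(io):
    """Higher score = better candidate for the limited DRAM cache.

    io is a dict with keys:
      size_bytes, is_random (bool), reads_per_min, ms_since_last_read, io_priority (1-5)
    """
    # Small I/O benefits most: serving a 4K random read from DRAM removes the
    # costliest kind of request from the storage path.
    size_factor      = 1.0 / max(io["size_bytes"] / 4096, 1.0)
    random_factor    = 2.0 if io["is_random"] else 0.5
    recency_factor   = 1.0 / (1.0 + io["ms_since_last_read"] / 1000.0)
    frequency_factor = io["reads_per_min"]
    priority_factor  = io["io_priority"]
    return size_factor * random_factor * recency_factor * frequency_factor * priority_factor

candidates = [
    {"size_bytes": 4096,  "is_random": True,  "reads_per_min": 120, "ms_since_last_read": 50, "io_priority": 3},
    {"size_bytes": 65536, "is_random": False, "reads_per_min": 300, "ms_since_last_read": 10, "io_priority": 3},
]
# Admit the highest-scoring blocks first; evict the lowest when DRAM must be released.
print(sorted(candidates, key=admission_score, reverse=True)[0]["size_bytes"])  # 4096
```

Even though the 64K sequential block is read more often, the 4K random block scores higher because caching it removes more pain per byte of DRAM consumed.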

With V-locity 5, we examined data from 2,576 systems that tested V-locity and shared their before/after data with Condusiv servers. From that raw data, we verified that 43% of all systems experienced a greater than 50% reduction in read latency due to IntelliMemory. That is a significant number in its own right from simply using available DRAM, and we can’t wait to see how it jumps for our customers with V-locity 6.

Internal Iometer tests reveal that the latest version of IntelliMemory in V-locity 6.0 is 3.6X faster when processing 4K blocks and 2.0X faster when processing 64K blocks.

Jim Miller, Senior Analyst, Enterprise Management Associates had this to say, "V-locity version 6.0 makes a very compelling argument for server-side DRAM caching by targeting small, random I/O - the culprit that dampens performance the most. This approach helps organizations improve business productivity by better utilizing the available DRAM they already have. However, considering the price evolution of DRAM, its speed, and proximity to the processor, some organizations may want to add additional memory for caching if they have data sets hungry for otherworldly performance gains."

Finally, one of our customers, Rich Reitenauer, Manager of Infrastructure Management and Support, Alvernia University, had this to say, "Typical IT administrators respond to application performance issues by reactively throwing more expensive server and storage hardware at them, without understanding what the real problem is. Higher education budgets can't afford that kind of brute-force approach. By trying V-locity I/O reduction software first, we were able to double the performance of our LMS app sitting on SQL, stop all complaints about performance, stop the application from timing out on students, and avoid an expensive forklift hardware upgrade."

For more on the I/O inefficiencies that V-locity solves, read Storage Switzerland’s Briefing on V-locity 6.0.

Do you need to defragment your SAN?

by Michael 11. January 2011 13:04

I recently came across an older article about defragmenting SANs (read it here). It includes interviews with analysts, SAN vendors (some pro-defrag, some against), and an employee from Diskeeper Corporation.

I was particularly impressed with the EMC representative’s response:

"The SAN can't do anything about the fact that Windows sees the file in 30 bits," said Wambach. "That's really something that is happening outside of the storage realm."

He highlights the abstraction perfectly. SAN vendors claim that a defragmenter cannot correct fragmentation at the physical level because the file system is abstracted from the physical blocks. We absolutely agree with this statement. And for that same reason, a SAN cannot fix fragmentation in the NTFS file system, which causes excess and unnecessary overhead on the OS.
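A tiny model makes the point. However the SAN maps its back-end blocks, a file that NTFS records in many small extents still costs the OS one read request per extent, and those requests cannot be coalesced below the abstraction. The sketch below is conceptual only; it is not a real NTFS or SAN API, and the sizes are made up.

```python
# Conceptual model only: fragmentation at the file-system layer drives the number of
# read requests the OS must issue, no matter how the SAN lays out blocks underneath.

def reads_required(extents, max_io_bytes=1_048_576):
    """One request per logically discontiguous extent; large extents split into
    transfer-sized chunks (max_io_bytes is an assumed 1 MB transfer limit)."""
    return sum(-(-length // max_io_bytes) for length in extents)  # ceiling division

file_size = 8 * 1_048_576              # an 8 MB file

contiguous = [file_size]               # one extent in the NTFS file record
fragmented = [131_072] * 64            # the same 8 MB scattered as 64 x 128 KB extents

print(reads_required(contiguous))      # 8 large reads
print(reads_required(fragmented))      # 64 small reads the SAN cannot merge for us
```

The SAN sees whichever request stream the file system produces; only fixing (or preventing) the fragmentation at the NTFS layer reduces that overhead.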

 

Defragmenting IT Healthcare

by Michael 20. December 2010 05:18

Joe Marion is founder and Principal of Healthcare Integration Strategies, specializing in the integration of imaging technologies with the overall healthcare IT landscape. His blog (at Healthcare Informatics) covers challenges and opportunities specifically relevant to optimizing Healthcare IT initiatives.

Medical images account for a significant percentage of the world's storage requirements, and have been predicted to encompass an even greater percentage of future storage demand. In Joe's recent blog post he posed the question, "Is Defragmentation a Boon to Healthcare IT Performance?"

In his post he includes personal observations and insight into the performance impact fragmentation can have on IT as healthcare departments consolidate and standardize application use:

"With departmental solutions, there very likely was less emphasis on system tools such as defragmentation applications.  Now that PACS technology is becoming more intertwined with the rest of IT, there should be greater emphasis on inclusion of these tools.  In addition, server virtualization can mean that previously independent applications are now part of a virtual server farm."

He also makes the astute observation that centralizing computing and storage magnifies bottlenecks, making a solution such as defragmentation increasingly more vital:

"The addition of disk-intensive applications such as speech recognition and imaging could potentially impact the overall performance of these applications.  As data storage requirements within healthcare grow, the problem will potentially get worse.  Think of the consequence of managing multiple 3000-slice CT studies and performing multiple 3D analyses.  As more advanced visualization applications go the client-server route, the performance of a central server doing the 3D processing could be significantly impacted."

You can read Joe's blog here.

  


Defrag | Diskeeper | IntelliWrite | V-Locity

New White Paper Urges Defrag for Virtual Environments

by Colleen Toumayan 27. September 2010 09:27

A new white paper, The Importance of Defragmentation in Virtual Environments, co-authored by Osterman Research and Diskeeper Corporation, demonstrates that virtual environments require defragmentation even more than physical environments do, because virtual environments support multiple operating systems and generate more intense disk activity.

“The need for defragmentation is even more acute in virtual environments,” the white paper states. “This is because physical hardware in a virtualized storage environment must support more operating systems and so can undergo even more disk access and more stress than in a non-virtualized environment. Further, disk I/O in one virtual machine has a cascading effect on disk I/O in other virtual machines, and so the problem of excessive disk I/O in virtual machines is, in fact, even worse than what would be experienced in a physical disk environment.” 

The white paper indicates that fragmentation, which reduces system performance in a physical storage infrastructure, can create an even greater performance loss in a virtual storage infrastructure. Virtual disks become fragmented over time just like the physical disk or disks on which they reside. The result is a fragmented virtual disk on a fragmented physical disk – fragmentation within fragmentation.

This data is especially important in light of the rapid growth of virtual environments. Organizations are particularly interested in virtualization due to its many benefits, which include reduced hardware costs, ease of adding capacity to existing infrastructure, ease of administration and maintenance, and simplified migration from one server to another.

Because of the complexity of I/O traffic in virtual environments, simple defragmentation is not enough to fully address the fragmentation issue. For that reason, Diskeeper Corporation has developed new technology for virtual environments, found in its V-locity™ 2.0 virtual platform disk optimizer. A recent product release for VMware and Hyper-V, V-locity 2.0 is the first optimizer that truly eliminates the barriers to full virtual efficiency. V-locity 2.0 employs IntelliWrite™ and InvisiTasking® technologies to both prevent a majority of fragmentation in the first place and to efficiently coordinate VM resources while defrag runs invisibly in the background. The complete white paper is located here.
