Condusiv Technologies Blog


Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

A Deep Dive Into The I/O Performance Dashboard

by Howard Butler 2. August 2018 08:36

While most users are familiar with the main Diskeeper®/V-locity®/SSDkeeper™ Dashboard view, which focuses on the number of I/Os eliminated and Storage I/O Time Saved, the I/O Performance Dashboard tab takes a deeper look into the performance characteristics of I/O activity.  The data shown here is similar in nature to that of other Windows performance monitoring utilities and provides a wealth of data on I/O traffic streams.

By default, the information displayed is from the time the product was installed. You can easily filter this down to a different time frame by clicking on the “Since Installation” picklist and choosing a different time frame such as Last 24 Hours, Last 7 Days, Last 30 Days, Last 60 Days, Last 90 Days, or Last 180 Days.  The data displayed will automatically be updated to reflect the time frame selected.

 

The first section of the display, labeled “I/O Performance Metrics”, shows Average, Minimum, and Maximum values for I/Os Per Second (IOPS), throughput measured in Megabytes per Second (MB/Sec), and application I/O latency measured in milliseconds (msecs). Diskeeper, V-locity and SSDkeeper use the Windows high-performance system counters to gather this data, measured down to the microsecond (1/1,000,000 second).

While most people are familiar with IOPS and throughput expressed in MB/Sec, I will give a short description just to make sure. 

IOPS is the number of I/Os completed in 1 second of time, counting both read and write operations.  MB/Sec reflects the amount of data being worked on and passed through the system.  Taken together they represent speed and throughput efficiency.  One thing I want to point out is that the Latency value shown in the above report is not measured at the storage device; it is a much more accurate reflection of I/O response time at the application level.  This is where the rubber meets the road.  Each I/O that passes through the Windows storage driver has a start and a completion time stamp.  The difference between these two values is the real-world elapsed time for an I/O to complete and be handed back to the application for further processing.  Measurements at the storage device do not account for network, host, and hypervisor congestion, so our Latency value is much more meaningful than typical hardware counters for I/O response time or latency.  This display also shows what percentage of the I/O traffic is reads and what percentage is writes, which helps gauge which of our technologies (IntelliMemory® or IntelliWrite®) is likely to provide the greatest benefit.
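The relationship between those three metrics can be reproduced in miniature at the application level. The sketch below is purely illustrative Python, not the product's kernel-level instrumentation: it timestamps each read at its start and completion, just as described above, and derives average latency, IOPS, and MB/Sec from those records.

```python
import os
import tempfile
import time

def measure_reads(path, block_size=4096, count=100):
    """Timestamp each I/O at start and completion, then derive
    average latency, IOPS, and throughput from the records."""
    latencies = []
    total_bytes = 0
    wall_start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(count):
            start = time.perf_counter()   # I/O start timestamp
            data = f.read(block_size)
            done = time.perf_counter()    # I/O completion timestamp
            if not data:
                break
            latencies.append(done - start)  # elapsed time for this I/O
            total_bytes += len(data)
    elapsed = time.perf_counter() - wall_start
    return {
        "avg_latency_ms": sum(latencies) / len(latencies) * 1000,
        "iops": len(latencies) / elapsed,
        "mb_per_sec": total_bytes / elapsed / (1024 * 1024),
    }

# Write a small scratch file and measure reads against it.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4096 * 100))
stats = measure_reads(tmp.name)
print(stats)
os.unlink(tmp.name)
```

Because the timestamps bracket the whole round trip from the application's point of view, the latency figure automatically includes any queuing or congestion below it, which is the same reason the dashboard's application-level Latency value is more meaningful than a device-side counter.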

The next section of the display measures the “Total Workload” in terms of the amount of data accessed for both reads and writes as well as any data satisfied from cache. 

 

Systems with higher workloads compared to other systems in your environment are likely to generate more I/O traffic, tend to contribute more to the I/O blender effect when connected to shared SAN storage or a virtualized environment, and are prime candidates for the extra I/O capacity relief that Diskeeper, V-locity and SSDkeeper provide.

Now moving into the third section of the display, labeled “Memory Usage”, we see measurements for the Total Memory in the system and the total amount of I/O data that has been satisfied from the IntelliMemory cache.  The purpose of our patented read caching technology is twofold: satisfy frequently repeated read requests from cache, and identify the small read operations that tend to cause excessive “noise” in the I/O stream to storage and satisfy those from cache as well.  So it’s not uncommon for “Data Satisfied from Cache” as a share of “Total Workload” to be a bit lower than with other types of caching algorithms.  Storage arrays tend to do quite well when handed large sequential I/O traffic but choke when small random reads and writes are part of the mix.  Eliminating I/O traffic from going to storage is what it’s all about: the fewer I/Os that reach storage, the faster your applications can access their data.

In addition, we show the average, minimum, and maximum values for free memory used by the cache.  For each of these values, the corresponding Total Free Memory in Cache for the system is shown (Total Free Memory is memory used by the cache plus memory reported by the system as free).  The memory values are displayed in yellow if the size of the cache is being severely restricted by the current memory demands of other applications, preventing our product from providing maximum I/O benefit.  The memory values are displayed in red if Total Memory is less than 3GB.
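The colour rules just described boil down to a simple decision, which can be sketched as follows. This is a hypothetical re-creation for illustration only; the dashboard's actual logic is internal to the product.

```python
def memory_indicator(total_memory_gb, cache_severely_restricted):
    """Mirror the dashboard's colour rules as described: red when the
    system has less than 3 GB of total memory, yellow when memory
    demand from other applications is squeezing the cache."""
    if total_memory_gb < 3:
        return "red"
    if cache_severely_restricted:
        return "yellow"
    return "normal"

print(memory_indicator(2.0, False))   # a 2 GB system is flagged red
print(memory_indicator(16.0, True))   # ample RAM, but cache squeezed: yellow
```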

Read I/O traffic, which is potentially cacheable, can receive an additional benefit by adding more DRAM for the cache and allowing the IntelliMemory caching technology to satisfy a greater amount of that read I/O traffic at the speed of DRAM (10-15 times faster than SSD), offloading it away from the slower back-end storage. This would have the effect of further reducing average storage I/O latency and saving even more storage I/O time.

Additional Note: For machines running SQL Server or Microsoft Exchange, you will likely need to cap the amount of memory that those applications can use (if you haven’t done so already), to prevent them from ‘stealing’ any additional memory that you add to those machines.

It should be noted that the IntelliMemory read cache is dynamic and self-learning.  This means you do not need to pre-allocate a fixed amount of memory to the cache or run a pre-assessment or discovery utility to determine what should be loaded into cache.  IntelliMemory will only use memory that is otherwise free, available, or unused, and will always leave plenty of memory untouched (1.5GB – 4GB, depending on total system memory) for Windows and other applications to use.  As demand for memory rises, IntelliMemory releases memory from its cache back to Windows so there will not be a memory shortage.  There is further intelligence in the IntelliMemory caching technology to know, in real time, precisely what data should be in cache at any moment and the relative importance of the entries already in the cache.  The goal is to ensure that the data maintained in the cache delivers the maximum possible reduction in read I/O traffic.
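The release-under-pressure behaviour described above can be sketched as a least-recently-used cache that gives entries back whenever free memory dips below a reserved floor. This is a toy illustration only; IntelliMemory's actual placement and eviction logic is proprietary and considerably more sophisticated.

```python
from collections import OrderedDict

class PressureAwareCache:
    """Toy LRU cache that releases entries when system free memory
    drops below a reserved floor, as the article describes."""

    def __init__(self, reserve_bytes):
        self.reserve = reserve_bytes     # memory always left untouched
        self.entries = OrderedDict()     # oldest (least recent) first
        self.size = 0

    def put(self, key, data, free_memory):
        self.entries[key] = data
        self.entries.move_to_end(key)
        self.size += len(data)
        self.shrink(free_memory)

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # mark as recently used
            return self.entries[key]
        return None                        # miss: caller goes to storage

    def shrink(self, free_memory):
        # Release least-recently-used entries until the system's free
        # memory is back above the reserved floor.
        while self.entries and free_memory < self.reserve:
            _, data = self.entries.popitem(last=False)
            self.size -= len(data)
            free_memory += len(data)  # released cache becomes free RAM

cache = PressureAwareCache(reserve_bytes=100)
cache.put("a", b"x" * 50, free_memory=500)  # plenty of headroom: kept
cache.put("b", b"y" * 50, free_memory=60)   # memory tight: evict LRU entry
print(sorted(cache.entries))                # → ['b']
```

The key design point mirrored here is that the cache, not Windows, yields first: when memory gets tight, cache entries are sacrificed so applications never see a shortage.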

So, there you have it.  I hope this deeper dive explanation provides better clarity to the benefit and internal workings of Diskeeper, V-locity and SSDkeeper as it relates to I/O performance and memory management.

You can download a free 30-day, fully functioning trial of our software and see the new dashboard here: www.condusiv.com/try

Solving the IO Blender Effect with Software-Based Caching

by Spencer Allingham 5. July 2018 07:30

First, let me explain exactly what the IO Blender Effect is, and why it causes a problem in virtualized environments such as those from VMware or Microsoft’s Hyper-V.



This is typically what storage IO traffic would look like when everything is working well. You have the least number of storage IO packets, each carrying a large payload of data down to the storage. Because the data is arriving in large chunks at a time, the storage controller has the opportunity to create large stripes across its media, using the least number of storage-level operations before being able to acknowledge that the write has been successful.



Unfortunately, all too often the Windows Write Driver is forced to split data that it’s writing into many more, much smaller IO packets. These split IO situations cause data to be transferred far less efficiently, and this adds overhead to each write and subsequent read. Now that the storage controller is only receiving data in much smaller chunks at a time, it can only create much smaller stripes across its media, meaning many more storage operations are required to process each gigabyte of storage IO traffic.


This is not only true when writing data, but also if you need to read that data back at some later time.

But what does this really mean in real-world terms?

It means that an average gigabyte of storage IO traffic that should take perhaps 2,000 or 3,000 storage IO packets to complete is now taking 30,000 or 40,000 storage IO packets instead. The data transfer has been split into many more, much smaller, fractured IO packets. Each storage IO operation takes a measurable amount of time and system resource to process, and so this is bad for performance! It will cause your workloads to run slower than they should, and this will worsen over time unless you perform some time- and resource-costly maintenance.
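That arithmetic is easy to check. Using the packet counts quoted above, the average payload carried by each storage IO shrinks by more than an order of magnitude:

```python
GIB = 1024 ** 3  # one gibibyte of storage IO traffic, in bytes

healthy_packets = 2500     # roughly the 2,000-3,000 packets per GiB quoted
fractured_packets = 35000  # roughly the 30,000-40,000 packets quoted

print(round(GIB / healthy_packets / 1024))    # ≈ 419 KiB per IO
print(round(GIB / fractured_packets / 1024))  # ≈ 30 KiB per IO
```

Each of those extra tens of thousands of operations carries its own fixed processing cost, which is where the performance penalty comes from.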

So, what about the IO Blender Effect?

Well, the IO Blender Effect can amplify the performance penalty (or Windows IO Performance Tax) in a virtualized environment. Here’s how it works…

 

As the small, fractured IO traffic from several virtual machines passes through the physical host hypervisor (Hyper-V server or VMware ESX server), the hypervisor acts like a blender. It mixes these IO streams, randomizing the storage IO packets, and then sends a chaotic mess of small, fractured and now very random IO streams out to the storage controller.

It doesn’t matter what type of storage you have on the back-end. Whether it is direct-attached disks in the physical host machine or a Storage Area Network (SAN), this type of storage IO profile couldn’t be less storage-friendly.

The storage now receives data only in small chunks and doesn’t understand the relationship between the packets, so it can only create very small stripes across its media. That unfortunately means many more storage operations are required before it can send an acknowledgement of the data transfer back up to the Windows operating system that originated it.

How can RAM caching alleviate the problem?

 

Firstly, to be truly effective the RAM caching needs to be done at the Windows operating system layer. This provides the shortest IO path for read IO requests that can be satisfied from server-side RAM, provisioned to each virtual machine. By satisfying as many “Hot Reads” from RAM as possible, you now have a situation where not only are those read requests being satisfied faster, but those requests are no longer going out to storage. That means fewer storage IO packets for the hypervisor to blend.
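The effect of satisfying “Hot Reads” from RAM can be shown with a minimal read-through cache. This is an illustrative sketch, not the V-locity implementation: the first read of a block goes to storage, and every repeated read of that block is served from memory, so it never reaches the hypervisor's blender at all.

```python
class ReadThroughCache:
    """Minimal read-through cache: the first read of a block goes to
    storage; repeated ('hot') reads of it are served from RAM."""

    def __init__(self, backing_store):
        self.backing = backing_store
        self.cache = {}
        self.storage_reads = 0   # count of IOs that reached storage

    def read(self, block):
        if block not in self.cache:
            self.cache[block] = self.backing[block]  # one storage IO
            self.storage_reads += 1
        return self.cache[block]

storage = {0: b"boot sector", 1: b"index page"}  # stand-in for back-end storage
cache = ReadThroughCache(storage)
for _ in range(1000):
    cache.read(1)            # a hot read, repeated 1,000 times
print(cache.storage_reads)   # → 1 (999 storage IOs never happened)
```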

Furthermore, the V-locity® caching software from Condusiv Technologies also employs a patented technology called IntelliWrite®. This intelligently helps the Windows Write Driver make better choices when writing data out to disk, which avoids many of the split IO situations that would then be made worse by the IO Blender Effect. You now get back to that ideal situation of healthy IO; large, sequential writes and reads.

Is RAM caching a disruptive solution?

 

No! Not at all, if done properly.

Condusiv’s V-locity software for virtualised environments is completely non-disruptive to live, running workloads such as SQL Server, Microsoft Dynamics, Business Intelligence (BI) solutions such as IBM Cognos, and other important workloads such as SAP and Oracle.

In fact, all you need to do to test this for yourself is download a free trialware copy from:

www.condusiv.com/try

Just install it! There are no reboots required, and it will start working in just a couple of minutes. If you decide that it isn’t for you, then uninstall it just as easily. No reboots, no disruption!


The Tsunami of Data is Swamping IT

by Jim D’Arezzo, CEO 29. September 2017 09:53

There’s a storm brewing and chances are, it’s going to hit your data center or cloud site.

It’s the tsunami of data that is washing over the IT world and it is just going to get worse. That’s because there is an insatiable demand for data: big data, ERP, CRM, BI, EHR, and of course all of that social media and video that is flooding global organizations like a bursting dam.

I’ll admit it, I’m a dinosaur. I got started in the IT industry in the late 1970s at IBM, so I can rightly say that I’ve seen a lot in the past 40 years. When I started, it was the heyday of the mainframe era. Data processing was still practically a priesthood back then. It wasn’t for everyone. Bill Gates would not proclaim “Information at your fingertips” for at least another dozen years.

Clearly, we’ve come a long way in four decades. The three driving factors – exponentially increased compute power, nearly unlimited storage, and the internet – have delivered computing nirvana. Or so it would seem. But with abundance comes challenges. Frankly, we are now awash in data. The tsunami of available information is swamping IT environments worldwide. And even with huge advances in those three driving factors, IT is experiencing performance bottlenecks on an increasing basis. Recently, we conducted a survey of 1,400 IT professionals, and fully 27% said they are experiencing performance problems that are causing user complaints and slowdowns.

As usual, our industry is innovating to keep up. Cloud based computing that takes advantage of huge data centers as well as innovation in storage, compute power and connectivity has kept up with much of the demand. However, most of these improvements focus on hardware – all flash storage arrays; 64-core servers; hyper-convergence. But there’s one other performance innovation that is often overlooked: Software.

Software innovation that takes advantage of existing and new hardware capabilities can significantly increase performance without the cost of additional hardware. Even with an all flash storage back-end, software like ours can significantly increase performance and extend the life of the hardware. The cost-benefit can be tremendous. Imagine getting a 50% boost in performance (or even 2-3X) without buying a single additional piece of hardware. CEOs and CFOs love that kind of benefit.  I know, because that is what we hear from our customers every day.

So, before you get swamped by the tsunami of data that’s lapping at your data center, consider a software solution to the problem. It can help you stay afloat without having to buy a new yacht!

 


Application Performance | Big Data | Cloud

The Gartner Cool Vendor Report in Storage Technologies: Vanity or Value

by Robert Woolery 22. April 2014 08:58

We all like lists that rank who is cool, best in class or top score in a buyer’s guide. Every year, Gartner releases their prized "Cool Vendor" selection. But is it just vanity for the vendor selected or is there actual, tangible value to the prospective customer that makes you care?

We believe one significant difference between the Cool Vendor Report and other reports is that Gartner does a deep-dive examination of compelling vendors across the technology landscape and then, upon selecting its "Cool Vendors" for the year, reveals its analysis: why the vendor is cool, the challenges the vendor faces, and who should care.

Of all the technology companies on the landscape, Gartner chose to highlight four this year in the area of storage technologies, providing research into their innovative products and/or services.

When we were brainstorming our flagship product V-locity, we spoke to hundreds of customers and heard a common theme – performance problems in virtual environments, where users were buying lots of hardware to solve what is inherently a software problem, the "I/O blender" effect.

As we dug in, a clearer picture emerged. We've become conditioned to medicating performance problems with hardware. And why not? In the past, performance gains grew by 4X to 8X every ten years. Hardware was cheap, and price performance improved every two years. And inertia made business as usual low risk – buy more hardware, because we’ve always done it that way and the financial folks understand the approach.

When we evangelized the problem of I/O growing faster than hardware could cost-effectively keep up with, and the need for a software-only approach to cure it, we found that both the problem and the solution resonated with many customers – webinar attendance ranged from 400 to 2,000 attendees. And while we are fast approaching 2,000 corporate installations, there are still customers wondering why they have not heard of the I/O problem we solve and our innovative way of solving it. They want some proof.

This is where the Gartner Cool Vendor report is helpful to IT users and their organizations. The reports help focus attention and reduce the learning curve on the relevant problems in IT, identify the innovative companies that warrant further investigation, and highlight interesting new products and services that address issues in emerging trends.

The Cool Vendor Report can be read in the time it takes to have a cup of coffee. Not surprisingly, the Cool Vendor Reports are one of two top reports Gartner clients download.

Now for our vanity plug, Condusiv is listed in the Cool Vendor Report titled "Cool Vendors in Storage Technologies, 2014." This is usually only available to Gartner clients, but we paid for distribution rights so you could read it for free. Download Gartner's Cool Vendors in Storage Technologies Report

When Dave Cappuccio of Gartner Speaks, People Listen

by Robert Woolery 10. April 2014 01:32

As others entered the stage, there was perfunctory applause. However, when Dave entered, a roar of applause erupted and the event ground to a halt for several minutes.

Sitting in the audience at Gartner Symposium 2013 over the next several days, I saw this reaction several times.  When Dave Cappuccio, Vice President and Chief of Research at Gartner, speaks, people listen.

He made the case for the “Infinite Data Center” concept – increasing data center performance and productivity without growing beyond the current facility.  He guided the audience through a strategy that allows IT organizations to grow revenue without increasing the IT budget. 

The strategies and concepts were not only thought-provoking but inspired and challenged the audience to think about their roles differently: how IT could provide leadership to organizations dealing with a nexus of forces, the convergence of social, mobility, cloud and information patterns that drive new business scenarios. These forces are innovative and disruptive on their own, but together they are revolutionizing business and society, disrupting old business models and creating new leaders.

It was clear he had insights and answers to many of the issues and he could help organizations develop strategies that would address questions like “What are the IT inhibitors to transforming the business into a growth engine?”  “What strategies can a CIO employ to overcome the challenge of transforming the business without increasing the IT budget?” and “How can IT leaders explain this value to the CEO or CFO in business terms?”

From this inspiration, this Gartner webcast was born.  As our featured guest, Dave tackles these issues and provides a construct that can be used to answer these questions for each customer’s unique circumstances.  Moreover, Dave guides IT leaders in how to manage reactions to thought leadership that pushes organizations out of their traditional comfort zones, so they can transform their business into a growth engine.

Take a look and let us know your thoughts.
http://event.on24.com/r.htm?e=767143&s=1&k=E7EA80EB703744884F03E72B1E91D455
