Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

A Deep Dive Into The I/O Performance Dashboard

by Howard Butler 2. August 2018 08:36

While most users are familiar with the main Diskeeper®/V-locity®/SSDkeeper™ Dashboard view, which focuses on the number of I/Os eliminated and Storage I/O Time Saved, the I/O Performance Dashboard tab takes a deeper look into the performance characteristics of I/O activity.  The data shown here is similar in nature to that of other Windows performance monitoring utilities and provides a wealth of detail on I/O traffic streams.

By default, the information displayed covers the time since the product was installed. You can easily filter this down by clicking the “Since Installation” picklist and choosing a different time frame: Last 24 Hours, Last 7 Days, Last 30 Days, Last 60 Days, Last 90 Days, or Last 180 Days.  The data displayed will automatically be updated to reflect the time frame selected.

 

The first section of the display above is labeled “I/O Performance Metrics” and shows the Average, Minimum, and Maximum values for I/Os Per Second (IOPS), throughput measured in Megabytes per Second (MB/Sec), and application I/O latency measured in milliseconds (msecs). Diskeeper, V-locity and SSDkeeper use the Windows high-performance system counters to gather this data, and it is measured down to the microsecond (1/1,000,000 second).

While most people are familiar with IOPS and throughput expressed in MB/Sec, I will give a short description just to make sure. 

IOPS is the number of I/Os completed in one second, counting both read and write operations.  MB/Sec reflects the amount of data being worked on and passed through the system.  Taken together they represent speed and throughput efficiency.  One thing I want to point out is that the Latency value shown in the above report is not measured at the storage device, but instead is a much more accurate reflection of I/O response time at the application level.  This is where the rubber meets the road.  Each I/O that passes through the Windows storage driver has a start and a completion time stamp.  The difference between these two values measures the real-world elapsed time for how long it takes an I/O to complete and be handed back to the application for further processing.  Measurements at the storage device do not account for network, host, and hypervisor congestion.  Therefore, our Latency value is much more meaningful than typical hardware counters for I/O response time or latency.  In this display, we also show the percentage of I/O traffic that is reads versus writes, which helps gauge which of our technologies (IntelliMemory® or IntelliWrite®) is likely to provide the greatest benefit.
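To make the arithmetic behind these metrics concrete, here is a minimal Python sketch that derives IOPS, MB/Sec, average application-level latency, and the read/write mix from timestamped I/O records. The record format is a hypothetical illustration, not Condusiv's internal telemetry.

```python
from dataclasses import dataclass

@dataclass
class IoRecord:
    start_us: int      # timestamp when the I/O entered the storage driver (microseconds)
    complete_us: int   # timestamp when the I/O was handed back to the application (microseconds)
    byte_count: int    # payload size of the I/O in bytes
    is_read: bool      # True for a read, False for a write

def summarize(records, window_seconds):
    """Derive IOPS, MB/Sec, average latency (ms), and read/write mix for one sample window."""
    count = len(records)
    iops = count / window_seconds
    mb_per_sec = sum(r.byte_count for r in records) / window_seconds / (1024 * 1024)
    # Application-level latency: completion minus start, per I/O, averaged.
    avg_latency_ms = sum(r.complete_us - r.start_us for r in records) / count / 1000.0
    read_pct = 100.0 * sum(1 for r in records if r.is_read) / count
    return {
        "IOPS": round(iops, 1),
        "MB/Sec": round(mb_per_sec, 2),
        "Avg latency (ms)": round(avg_latency_ms, 3),
        "Read %": round(read_pct, 1),
        "Write %": round(100.0 - read_pct, 1),
    }

# Example: two reads and one write observed during a 1-second window.
sample = [
    IoRecord(0, 850, 4096, True),
    IoRecord(1200, 2400, 65536, True),
    IoRecord(3000, 5100, 8192, False),
]
print(summarize(sample, window_seconds=1.0))
```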

The next section of the display measures the “Total Workload” in terms of the amount of data accessed for both reads and writes as well as any data satisfied from cache. 

 

Systems that have higher workloads compared to other systems in your environment are the ones that likely have higher I/O traffic, tend to cause more of the I/O blender effect when connected to shared SAN storage or a virtualized environment, and are prime candidates for the extra I/O capacity relief that Diskeeper, V-locity and SSDkeeper provide.

Now moving into the third section of the display, labeled “Memory Usage,” we see measurements that represent the Total Memory in the system and the total amount of I/O data that has been satisfied from the IntelliMemory cache.  The purpose of our patented read caching technology is twofold: satisfy frequently repeated read requests from cache, and catch the small read operations that tend to cause excessive “noise” in the I/O stream to storage and satisfy those from cache as well.  So, it’s not uncommon for the ratio of “Data Satisfied from Cache” to “Total Workload” to be a bit lower than with other types of caching algorithms.  Storage arrays tend to do quite well when handed large sequential I/O traffic but choke when small random reads and writes are part of the mix.  Eliminating I/O traffic from going to storage is what it’s all about.  The fewer I/Os that go to storage, the faster and more efficiently your applications will be able to access their data.

In addition, we show the average, minimum, and maximum amounts of free memory used by the cache.  For each of these values, the corresponding Total Free Memory in Cache for the system is shown (Total Free Memory is memory used by the cache plus memory reported by the system as free).  The memory values are displayed in a yellow font if the size of the cache is being severely restricted by the current memory demands of other applications, preventing our product from providing maximum I/O benefit.  The memory values are displayed in red if the Total Memory is less than 3GB.
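As a rough illustration of the logic just described, the sketch below computes Total Free Memory in Cache and applies the color coding. The red threshold (Total Memory under 3GB) comes straight from the text; the test for a "severely restricted" cache is a hypothetical placeholder, not the product's actual rule.

```python
def memory_status(total_memory_gb, cache_gb, system_free_gb, desired_cache_gb):
    """Classify the memory picture roughly the way the dashboard's color coding is described.

    total_memory_gb  -- total physical memory in the system
    cache_gb         -- memory currently used by the IntelliMemory cache
    system_free_gb   -- memory Windows currently reports as free
    desired_cache_gb -- hypothetical estimate of how much cache the workload could profitably use
    """
    # "Total Free Memory" = memory used by the cache + memory reported as free.
    total_free_in_cache_gb = cache_gb + system_free_gb

    if total_memory_gb < 3:
        color = "red"       # per the text: Total Memory under 3GB
    elif cache_gb < 0.25 * desired_cache_gb:
        color = "yellow"    # placeholder test for a severely restricted cache
    else:
        color = "normal"
    return color, total_free_in_cache_gb

print(memory_status(total_memory_gb=8, cache_gb=0.5, system_free_gb=1.0, desired_cache_gb=4.0))
```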

Potentially cacheable read I/O traffic can receive an additional benefit from adding more DRAM for the cache, allowing the IntelliMemory caching technology to satisfy a greater share of that read I/O traffic at the speed of DRAM (10-15 times faster than SSD) and offloading it from the slower back-end storage. This has the effect of further reducing average storage I/O latency and saving even more storage I/O time.

Additional Note: For machines running SQL Server or Microsoft Exchange, you will likely need to cap the amount of memory that those applications can use (if you haven’t done so already), to prevent them from ‘stealing’ any additional memory that you add to those machines.

It should be noted that the IntelliMemory read cache is dynamic and self-learning.  This means you do not need to pre-allocate a fixed amount of memory to the cache or run some pre-assessment tool or discovery utility to determine what should be loaded into cache.  IntelliMemory will only use memory that is otherwise free, available, or unused for its cache and will always leave plenty of memory untouched (1.5GB – 4GB depending on the total system memory) and available for Windows and other applications to use.  As demand for memory rises, IntelliMemory will release memory from its cache and give it back to Windows so there will not be a memory shortage.  There is further intelligence in the IntelliMemory caching technology to know in real time precisely what data should be in cache at any moment and the relative importance of the entries already in the cache.  The goal is to ensure that the data maintained in the cache delivers the maximum possible benefit in reducing read I/O traffic.
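For readers who like to see the general idea in code, here is a toy sketch of a pressure-aware read cache: it only grows when there is genuinely spare memory and evicts least-recently-used data whenever free memory dips below a reserve. This is a simplified illustration of the behavior described above, not IntelliMemory's actual algorithm; it assumes the psutil package for the free-memory check, and the 2GB reserve is an arbitrary example within the 1.5GB – 4GB range mentioned.

```python
import collections
import psutil  # assumed available for the free-memory check

RESERVE_BYTES = 2 * 1024**3  # example reserve; the product keeps 1.5GB - 4GB untouched

class PressureAwareReadCache:
    """Toy LRU read cache that shrinks itself whenever system free memory gets low."""

    def __init__(self):
        self.entries = collections.OrderedDict()  # key -> cached bytes
        self.used = 0

    def get(self, key):
        data = self.entries.get(key)
        if data is not None:
            self.entries.move_to_end(key)  # mark as recently used
        return data

    def put(self, key, data):
        # Only cache when there is genuinely spare memory beyond the reserve.
        if psutil.virtual_memory().available - len(data) < RESERVE_BYTES:
            return
        self.entries[key] = data
        self.entries.move_to_end(key)
        self.used += len(data)
        self._release_under_pressure()

    def _release_under_pressure(self):
        # Evict least-recently-used entries until free memory is back above the reserve.
        while self.entries and psutil.virtual_memory().available < RESERVE_BYTES:
            _, evicted = self.entries.popitem(last=False)
            self.used -= len(evicted)
```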

So, there you have it.  I hope this deeper dive explanation provides better clarity to the benefit and internal workings of Diskeeper, V-locity and SSDkeeper as it relates to I/O performance and memory management.

You can download a free 30-day, fully functioning trial of our software and see the new dashboard here: www.condusiv.com/try

Which Processes are Using All of My System Resources?

by Gary Quan 17. July 2018 05:50

Over time as more files and applications are added to your system, you notice that performance has degraded, and you want to find out what is causing it. A good starting point is to see how the system resources are being used and which processes and/or files are using them.

Both Diskeeper® and SSDkeeper® contain a lesser-known feature to assist you with this. It is called the System Monitoring Report, which can show you how the CPU and I/O resources are being utilized and then, digging down a bit deeper, which processes or files are using them.

Under Reports on the Main Menu, the System Monitoring Report provides you with data on the system’s CPU usage and I/O Activity.

 

The CPU Usage report takes the average CPU usage from the past 7 days, then provides a graph of the hourly usage on an average day. You can then see at which times the CPU resources are being hit the most and by how much.

Digging down some more, you can then see which processes utilized the most CPU resources.

 

The Disk I/O Activity report takes the average disk I/O activity from the past 7 days, then provides a graph of the hourly activity on an average day. You can then determine at which times the I/O activity is the highest.
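Both the CPU Usage and Disk I/O Activity reports described above boil a week of samples down to an hourly "average day." The sketch below shows one straightforward way such a profile can be computed; the sample format is assumed for illustration and is not the product's internal data.

```python
from collections import defaultdict
from datetime import datetime

def average_day_profile(samples):
    """Collapse a week of (timestamp, value) samples into a 24-slot "average day".

    samples -- iterable of (datetime, float) pairs, e.g. CPU % or disk transfers/sec.
    Returns a dict mapping hour-of-day (0-23) to the average value seen at that hour.
    """
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts.hour].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in sorted(buckets.items())}

# Example: noon samples from two different days average into the 12:00 slot.
profile = average_day_profile([
    (datetime(2018, 7, 10, 12, 0), 40.0),
    (datetime(2018, 7, 11, 12, 0), 60.0),
])
print(profile)  # {12: 50.0}
```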

Digging down some more, you can then see which processes utilized the I/O resources the most, plus what processes are causing the most split (extra) I/Os.

 

You can also see which file types have the highest I/O utilization as well as those causing the most split (extra) I/Os.  This can help indicate what files and related processes are causing this type of extra I/O activity.

 

So, if you are trying to see how your system is being used, perhaps to troubleshoot performance issues, this report gives you a quick and easy look at how the CPU and Disk I/O resources are being used on your system and which processes and file types are using them. This, along with other Microsoft utilities like Task Manager and Performance Monitor, can help you tune your system for optimum performance.

How to Improve Application Performance by Decreasing Disk Latency like an IT Engineer

by Spencer Allingham 13. June 2018 06:49

Think about the workloads that you care about the most. You might be responsible for a busy SQL server, for example, or a web server; perhaps a busy file and print server, the Finance Department's systems, documentation management, CRM, BI, or something else entirely.

Now, think about WHY these are the workloads that you care about the most.

 

Were YOU responsible for installing the application running the workload for your company? Is the workload business critical, or considered TOO BIG TO FAIL?

Or is it simply because users, or even worse, customers, complain about performance?

 

If the last question made you wince, because you know that YOU are responsible for some of the workloads running in your organisation that would benefit from additional performance, please read on. This article is just for you, even if you don't consider yourself a "Techie".

Before we get started, you should know that there are many variables that can affect the performance of the applications that you care about the most. The slowest, most restrictive of these is referred to as the "Bottleneck". Think of water being poured from a bottle. The water can only flow as fast as the neck of the bottle, the 'slowest' part of the bottle.

Don't worry though, in a computer the bottleneck will pretty much always fit into one of the following categories:

• CPU
• DISK
• MEMORY
• NETWORK

The good news is that if you're running Windows, it is usually very easy to find out which one the bottleneck is in, and here is how to do it (like an IT Engineer):

• Open Resource Monitor by clicking the Start menu, typing "resource monitor", and pressing Enter. Microsoft includes this as part of the Windows operating system, and it is already installed.

• Do you see the graphs in the right-hand pane? When your computer is running at peak load, or users are complaining about performance, which of the graphs are 'maxing out'?

This is a great indicator of where your workload's bottleneck is to be found.         
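If you prefer a scripted version of that check (handy on a server you are connected to remotely), the rough sketch below samples the same four resources over a short window and prints a snapshot so you can see which one looks busiest. It assumes the psutil package is installed; judging whether a number counts as 'maxing out' is still up to you.

```python
import psutil  # assumed available

def rough_bottleneck_check(interval=5):
    """Rough, scripted cousin of the Resource Monitor check: sample CPU, memory,
    disk and network over `interval` seconds and print how busy each one is."""
    disk_before = psutil.disk_io_counters()
    net_before = psutil.net_io_counters()
    cpu_pct = psutil.cpu_percent(interval=interval)   # blocks for the sample window
    disk_after = psutil.disk_io_counters()
    net_after = psutil.net_io_counters()

    mem_pct = psutil.virtual_memory().percent
    disk_mb_s = ((disk_after.read_bytes + disk_after.write_bytes)
                 - (disk_before.read_bytes + disk_before.write_bytes)) / interval / 1024**2
    net_mb_s = ((net_after.bytes_sent + net_after.bytes_recv)
                - (net_before.bytes_sent + net_before.bytes_recv)) / interval / 1024**2

    print(f"CPU: {cpu_pct:.0f}%  Memory: {mem_pct:.0f}%  "
          f"Disk: {disk_mb_s:.1f} MB/s  Network: {net_mb_s:.1f} MB/s")

rough_bottleneck_check()
```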

 

SO, now that you have identified the slowest part of your 'compute environment' (continue reading for more details), what can you do to improve it?

The traditional approach to solving computer performance issues has been to throw hardware at the solution. This could be treating yourself to a new laptop, or putting more RAM into your workstation, or on the more extreme end, buying new servers or expensive storage solutions.

BUT, how do you know when it is appropriate to spend money on new or additional hardware, and when it isn't? Well, the answer is: it isn't appropriate when you can get the performance that you need from the existing hardware infrastructure that you have already bought and paid for. You wouldn't replace your car just because it needed a service, would you?

Let's take disk speed as an example, and look at the Response Time column in Resource Monitor. Make sure you open the monitor full screen or large enough to see the data, then open the Disk Activity section so you can see the Response Time column.  Do it now on the computer you're using to read this. (You didn't close Resource Monitor yet, did you?) This shows the Disk Response Time, or put another way, how long the storage is taking to read and write data. Of course, slower disk speed = slower performance, but what is considered good disk speed and what is bad?

To answer that question, I will refer to a great blog post by Scott Lowe, which you can read here:

https://www.techrepublic.com/blog/the-enterprise-cloud/use-resource-monitor-to-monitor-storage-performance/

In it, the author perfectly describes what to expect from faster and slower Disk Response Times:

"Response Time (ms). Disk response time in milliseconds. For this metric, a lower number is definitely better; in general, anything less than 10 ms is considered good performance. If you occasionally go beyond 10 ms, you should be okay, but if the system is consistently waiting more than 20 ms for response from the storage, then you may have a problem that needs attention, and it's likely that users will notice performance degradation. At 50 ms and greater, the problem is serious."

Hopefully when you checked on your computer, the Disk Response Time was below 20 milliseconds. BUT, what about those other workloads that you were thinking about earlier? What are the Disk Response Times on that busy SQL server, the CRM or BI platform, or those Windows servers that the users complain about?
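If checking Resource Monitor on each of those servers by hand is impractical, the same metric can be sampled from the command line. The sketch below shells out to Windows' built-in typeperf tool for the \LogicalDisk(_Total)\Avg. Disk sec/Transfer counter (reported in seconds) and grades each sample against the 10/20/50 ms bands quoted above; treat it as a rough helper, not product tooling.

```python
import csv
import io
import subprocess

# Built-in Windows performance counter for average disk response time, reported in seconds.
COUNTER = r"\LogicalDisk(_Total)\Avg. Disk sec/Transfer"

def sample_disk_response_ms(samples=5):
    """Collect a few samples with typeperf and return them in milliseconds."""
    out = subprocess.run(
        ["typeperf", COUNTER, "-sc", str(samples)],
        capture_output=True, text=True, check=True,
    ).stdout
    values = []
    for row in csv.reader(io.StringIO(out)):
        if len(row) >= 2:
            try:
                values.append(float(row[1]) * 1000.0)  # seconds -> milliseconds
            except ValueError:
                continue  # skip the CSV header and status lines
    return values

def grade(ms):
    """Apply the 10/20/50 ms guidance quoted above."""
    if ms < 10:
        return "good"
    if ms < 20:
        return "okay"
    if ms < 50:
        return "needs attention"
    return "serious"

for ms in sample_disk_response_ms():
    print(f"{ms:.1f} ms -> {grade(ms)}")
```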

If the Disk Response Times are often higher than 20 milliseconds, and you need to improve the performance, then it's choice time and there are basically two options:

1. In my opinion as an IT Engineer, the most sensible option is to use storage workload reduction software like Diskeeper for physical Windows computers, or V-locity for virtualised Windows computers. These will reduce Disk Response Times by allowing a good percentage of the data that your applications need to read to come from a RAM cache, rather than slower disk storage. This works because RAM is much faster than the media in your disk storage. Best of all, the only thing you need to do to try it is download a free copy of the 30-day trial. You don't even have to reboot the computer; just check and see if it is able to bring the Disk Response Times down for the workloads that you care about the most.

2. If you have tried the Diskeeper or V-locity software and you STILL need faster disk access, then, I'm afraid, it's time to start getting quotations for new hardware. It does make sense though, to take a couple of minutes to install Diskeeper or V-locity first, to see if this step can be avoided. The software solution to remove storage inefficiencies is typically much more cost-effective than having to buy hardware!

Visit www.condusiv.com/try to download Diskeeper and V-locity now, for your free trial.

 

The Inside Story of Condusiv’s “No Reboot” Quest

by Rick Cadruvi, Chief Architect 17. April 2018 04:57

In a world of 24/7 uptime and rare reboot windows, one of our biggest challenges as a company has simply been getting our own customers upgraded to the latest version of our I/O reduction software.

In the last year, we have done dashboard review sessions with a substantial number of customers to demonstrate the power of our latest versions on hybrid and all-flash arrays, hyperconverged systems, Azure/AWS, local SSDs, and more. However, many upgrades remain undone simply because customers can’t find the time for the reboot windows needed to get to the latest versions with the most powerful engines and the new benefits dashboard. This has been particularly challenging for customers with hundreds to thousands of servers.

Even though we own the trademark term, “Set It and Forget It®,” there was always one aspect that wasn’t, and that’s the fact that it required a reboot to install or upgrade.

Herein lies the problem – important components of our software sit at the storage driver level. At least to the best of our knowledge, all other software vendors who sit at that layer also require a reboot to install or upgrade. So, consider our engineering challenge to take on a project most people wouldn’t know was even solvable.

Let’s start with an explanation as to why this barrier existed. Our software contains several filter drivers that allow us to implement leading-edge performance enhancing technologies.  Some of them act at the Windows File System level. Windows has long provided a Filter Manager that allows developers to create File System and Network filter drivers that can be loaded and unloaded without requiring a reboot.  You will quickly recognize that Anti-Malware and Data Backup/Recovery software tend to be the principal users of this Filter Manager. There are also products such as data encryption that benefit from the Windows Filter Manager. And, as it turns out, we benefit because some of our filter drivers run above the File System.

However, sometimes a software product needs to be closer to the physical hardware itself. This allows a much broader view of what is going on with the actual I/O to the physical device subsystem. There are quite a few software products that need this bigger view. It turns out that we do also.  One of the reasons is to allow our patented IntelliMemory® caching software to eliminate a huge amount of noisy I/O that creates substantial, yet preventable, bottlenecks for your application. This is I/O that your application wouldn’t even recognize as problematic to its performance, nor would you. Because we have a global view, we can eliminate a large percentage of I/Os from having to go to storage, while using very limited system resources.

We also have other technologies that benefit from our telemetry disk filter, which helps us see a more global picture of storage performance and what is actually causing bottlenecks. This allows us to focus our efforts on the true causes of those bottlenecks, giving our customers the greatest bang for their buck.  Because we collect excellent empirical data about what is causing the bottlenecks, we can apply very limited and targeted system resources to deliver very significant storage performance increases. Keep in mind, the limited CPU cycles we use operate at lowest priority and we only use resources that are otherwise idle, so the benefits of our engines are completely non-intrusive to overall server performance.

Why does the above matter? Well, the Microsoft Filter Manager doesn’t provide support for most driver stacks, and this includes the parts of the storage driver stack below the File System. That means that our disk filter drivers couldn’t actually start providing their benefits upon initial install until after a reboot. And if we added new functionality to provide even greater storage performance via a change to one of our disk filter drivers, a reboot was required after the update before the new functionality could be brought to bear.

Until now, we just lived with the restriction. We didn’t live with it because we couldn’t create a solution, but because we anticipated that the frequency of Windows updates, especially security-based updates, would start to increase the frequency of server reboot requirements and the problem would, for all intents and purposes, become manageable. Alas, our hopes and dreams in this area failed to materialize.

We’ve been doing Windows system and especially kernel software development for decades. I just attended Plugfest 30 for file system filter driver developers.  This is a Microsoft event to ensure high-quality standards for products with filter drivers like ours. We were also at the first Plugfest nearly two decades ago. In addition, we also wrote the Windows NTFS file system component to allow safe, live file defragmentation for Windows NT dating back to the Windows NT 3.51 release.  That by itself is an interesting story, but I’ll leave that for another time.

Anyway, we finally realized that our crystal ball prediction about an increase in the frequency of Windows Server reboots due to Windows Update cycles (Patch Tuesday?) was a little less clear than we had hoped. Accepting that this problem wasn’t going away, we set out to create our own Filter Manager to provide a mechanism that allows filter drivers on stacks not supported by the Microsoft Filter Manager to be inserted and removed without the reboot requirement. This was something we had been considering, had talked about with other software vendors in a similar situation, and had even prototyped before. The time had finally come to give our customers the significantly increased performance from our software immediately instead of waiting for reboot opportunities.

We took our decades of experience and knowledge of Windows Operating System internals and kernel software development and aimed it at giving our customers relief from this limitation. The result is in our latest release of V-locity® 7.0, Diskeeper® 18, and SSDkeeper™ 2.0.

We’d love to hear your stories about how this revolutionary enablement technology has made a difference for you and your organization.

Tags:

Diskeeper | V-Locity

Condusiv Smashes the I/O Performance Gap with New V-locity 7.0, Diskeeper 18, and SSDkeeper 2.0

by Brian Morin 6. April 2018 08:37

Condusiv is pleased to announce the release of V-locity® 7.0, Diskeeper® 18, and SSDkeeper 2.0 that smash the I/O Performance Gap on Windows servers and PCs as growing volumes of data continue to outpace the ability of underlying server and storage hardware to meet performance SLAs on mission critical workloads like MS-SQL.

The new 2018 editions of V-locity, Diskeeper, and SSDkeeper come with “no reboot” capabilities and enhanced reporting that offers a single pane view of all systems to show the exact benefit of I/O reduction software to each system in terms of number of noisy I/Os eliminated, percentage of read and write traffic offloaded from storage, and, most importantly, how much time is saved on each system as a result. It is also now easier than ever to quickly identify systems underperforming from a caching standpoint that could use more memory.

When a minimum of 30-40% of I/O traffic from any Windows server is completely unnecessary, nothing but mere noise chewing up IOPS and throughput, it needs to be easy to see the exact levels of inefficiency on individual systems and what it means in terms of I/O reduction and “time saved” when Condusiv software is deployed to eliminate those inefficiencies. Since many customers choose to add a little more memory on key systems like MS-SQL to get even more from the software, it is now clearly evident what a 50% or more reduction in I/O traffic actually means.

Our recent 4th annual I/O Performance Survey (no Condusiv customers included) found that MS-SQL performance problems are at their worst level in 4 years despite heavy investments in hardware infrastructure. 28% of mid-sized and large enterprises receive regular complaints from users regarding sluggish SQL-based applications. This is simply due to the growth of I/O outpacing the hardware stack’s ability to keep up. This is why it is more important than ever to consider I/O reduction software solutions that guarantee to solve performance issues instead of reactively throwing expensive new servers or storage at the problem.

Not only are the latest versions of V-locity, Diskeeper, and SSDkeeper easier to deploy and manage with “no reboot” capabilities, but reporting has been enhanced to enable administrators to quickly see the full value being provided to each system along with memory tuning recommendations for even more benefit.

A single pane view lists out all systems with associated workload data, memory data, and benefit data from I/O reduction software, and flags systems as red, yellow, or green according to caching effectiveness to help administrators quickly identify and prioritize systems that could use a little more memory to achieve a 50% or more reduction in I/O to storage.

Regarding the new “no reboot” capabilities, this is something that the engineering team has been attempting to crack for some time.  Per Rick Cadruvi, SVP, Engineering, “All storage filter drivers require a reboot, which is problematic for admins who manage software across thousands of servers. However, due to our extensive knowledge of Windows Kernel internals dating back to Windows NT 3.51, we were able to find a way to properly synchronize and handle the load/unload sequences of our driver transparently to other drivers in the storage stack so as not to require a reboot when deploying or updating Condusiv software.” 

For more on Condusiv’s quest for “no reboot” capabilities, see this blog by Rick Cadruvi, SVP Engineering: The Inside Story of Condusiv's No Reboot Quest

This means that customers who are currently on V-locity 5.3 and higher or Diskeeper 15 and higher are able to upgrade to the latest version without a reboot. Customers on older versions will have to uninstall, reboot, then install the new version.

"As much as Condusiv I/O reduction software has been a real benefit to our applications running across 2,500+ Windows servers, we are happy to see a no reboot version of the software released so it is now truly "Set It & Forget It"®. My team is happy they no longer have to wrestle down a reboot window for hundreds of servers in order to update or deploy Condusiv software," said Blake W. Smith, MSME, System Director, Enterprise Infrastructure, CHRISTUS Health.

 
