Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Which Processes are Using All of My System Resources?

by Gary Quan 17. July 2018 05:50

Over time, as more files and applications are added to your system, you may notice that performance has degraded, and you want to find out what is causing it. A good starting point is to see how the system resources are being used and which processes and/or files are using them.

Both Diskeeper® and SSDkeeper® contain a lesser-known feature to assist you with this: the System Monitoring Report. It shows you how the CPU and I/O resources are being utilized and then, digging down a bit deeper, which processes or files are using them.

Under Reports on the Main Menu, the System Monitoring Report provides you with data on the system’s CPU usage and I/O Activity.

 

The CPU Usage report takes the average CPU usage from the past 7 days, then provides a graph of the hourly usage on an average day. You can then see at which times the CPU resources are being hit the most and by how much.

Digging down some more, you can then see which processes utilized the most CPU resources.
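If you like to double-check this kind of data from a script, a rough equivalent is sketched below in Python using the third-party psutil package. This is only an illustrative spot-check of per-process CPU usage; it is not how the System Monitoring Report collects its data.

    # Sample per-process CPU usage over a short interval and list the top
    # consumers. Requires the third-party psutil package (pip install psutil);
    # this is an illustrative spot-check, not the product's data source.
    import time
    import psutil

    procs = list(psutil.process_iter(['pid', 'name']))
    for p in procs:
        try:
            p.cpu_percent(None)        # prime the per-process counter
        except psutil.Error:
            pass

    time.sleep(2)                      # sampling interval in seconds

    usage = []
    for p in procs:
        try:
            usage.append((p.cpu_percent(None), p.info['name'], p.pid))
        except psutil.Error:
            continue                   # process may have exited meanwhile

    for pct, name, pid in sorted(usage, key=lambda row: row[0], reverse=True)[:10]:
        print(f"{pct:6.1f}%  {name} (pid {pid})")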

 

The Disk I/O Activity report takes the average disk I/O activity from the past 7 days, then provides a graph of the hourly activity on an average day. You can then determine at which times the I/O activity is the highest.

Digging down some more, you can then see which processes utilized the I/O resources the most, plus which processes are causing the most split (extra) I/Os.

 

You can also see which file types have the highest I/O utilization, as well as those causing the most split (extra) I/Os. This can help indicate which files and related processes are causing this type of extra I/O activity.
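For a quick command-line cross-check of per-process I/O activity, here is a minimal Python sketch, again assuming the third-party psutil package. It lists cumulative read/write I/O counts per process; the split-I/O and per-file-type breakdowns are specific to the product's report and are not reproduced here.

    # List the processes that have issued the most read/write I/Os since they
    # started. Requires the third-party psutil package; the split-I/O and
    # per-file-type views in the report are product-specific and not shown here.
    import psutil

    rows = []
    for p in psutil.process_iter(['pid', 'name']):
        try:
            io = p.io_counters()
            rows.append((io.read_count + io.write_count,
                         io.read_bytes + io.write_bytes,
                         p.info['name'], p.pid))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue

    for total_ios, total_bytes, name, pid in sorted(rows, key=lambda r: r[0], reverse=True)[:10]:
        print(f"{total_ios:>12,} I/Os  {total_bytes / 2**20:10.1f} MiB  {name} (pid {pid})")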

 

So, if you are trying to see how your system is being used, perhaps to troubleshoot performance issues, this report gives you a quick and easy look at how the CPU and disk I/O resources are being used on your system and which processes and file types are using them. This, along with other Microsoft utilities like Task Manager and Performance Monitor, can help you tune your system for optimum performance.

Dashboard Analytics: 13 Metrics and Why They Matter

by Rick Cadruvi, Chief Architect 11. July 2018 09:12

 

Our latest V-locity®, Diskeeper® and SSDkeeper® products include a built-in dashboard that reports the benefits our software is providing.  There are tabs in the dashboard that allow users to view very granular data that can help them assess the impact of our software.  In the dashboard Analytics tab we display hourly data for 13 key metrics.  This document describes what those metrics are and why we chose them as key to understanding your storage performance, which directly translates to your application performance.

To start with, let's spend a moment trying to understand why 24-hour graphs matter. The times when you and/or your users really notice bottlenecks are generally peak usage periods. While some servers are truly at peak usage 24x7, most systems, including servers, have peak I/O periods. These almost always follow peak user activity.

Sometimes there will also be spikes in the overnight hours when you are doing backups, virus scans, large report/data maintenance jobs, etc. While these may not be your major concern, some of our customers find that they overlap with daytime production and can therefore easily be THE major source of concern. For some people, making these jobs finish before the deluge of daytime work starts is the single biggest factor they deal with.

Regardless of what causes the peaks, it is at those peak moments when performance matters most.  When little is happening, performance rarely matters.  When a lot is happening, it is key.  The 24-hour graphs allow you to visually see the times when performance matters to you.  You can also match metrics during specific hours to see where the bottlenecks are and what technologies of ours are most effective during those hours. 

Let’s move on to the actual metrics.

 

Total I/Os Eliminated

 

Total I/Os Eliminated measures the number of I/Os that would have had to go to storage if our technologies were not eliminating them before they ever got sent to storage. We eliminate I/Os in one of two ways. First, via our patented IntelliMemory® technology, we satisfy I/Os from memory without the request ever going out to the storage device. Second, several of our other technologies, such as IntelliWrite®, cause the data to be stored more efficiently and densely, so that when data is requested, it takes fewer I/Os to get the same amount of data than would otherwise be required. The net effect is that your storage subsystem sees fewer actual I/Os because we eliminated the need for the extra ones. That allows the I/Os that do go to storage to finish faster, because they aren't waiting on the eliminated I/Os to complete.

 

IOPS

IOPS stands for I/Os Per Second. It is the number of I/Os that you are actually requesting each second. During the times with the most activity, the I/Os we eliminate actually cause this number to be much higher than would be possible with your storage subsystem alone. It is also a measure of the total amount of work your applications/systems are able to accomplish.

 

Data from Cache (GB)

Data from Cache tells you how much of that total throughput was satisfied directly from cache. This can be deceiving. Our caching algorithms are aimed at eliminating a lot of the small, noisy I/Os that jam up the storage subsystem. By not having to process those, the data freeway is wide open. Think of a freeway with accidents: even after the cars have moved to the side, the traffic slows dramatically. Our cache is like accident avoidance. The data served from cache may be just a subset of the total throughput, but you process a LOT more data because you aren't waiting on those noisy but necessary I/Os that hold your applications/systems back.
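To make the bookkeeping behind these first few metrics concrete, here is a toy read-through cache in Python. It is emphatically not IntelliMemory, just an illustration of how every read satisfied from RAM counts as one storage I/O eliminated and adds to the data-from-cache total; the 4 KB block size and the fake backing store are assumptions for the example.

    # Toy read-through cache, only to illustrate the bookkeeping behind the
    # "Total I/Os Eliminated" and "Data from Cache" metrics. This is NOT the
    # IntelliMemory implementation; block_size and the backing read() are
    # illustrative assumptions.
    class ToyReadCache:
        def __init__(self, read_from_storage, block_size=4096):
            self.read_from_storage = read_from_storage   # callable(block_no) -> bytes
            self.block_size = block_size
            self.cache = {}
            self.ios_eliminated = 0
            self.bytes_from_cache = 0

        def read(self, block_no):
            if block_no in self.cache:
                # The request is satisfied from RAM; storage never sees it.
                self.ios_eliminated += 1
                self.bytes_from_cache += self.block_size
                return self.cache[block_no]
            data = self.read_from_storage(block_no)       # one real storage I/O
            self.cache[block_no] = data
            return data

    cache = ToyReadCache(lambda n: bytes(4096))           # fake backing store
    for block in [7, 7, 7, 42, 7, 42]:                    # repeated "noisy" reads
        cache.read(block)
    print(cache.ios_eliminated, "I/Os eliminated,",
          cache.bytes_from_cache / 2**20, "MiB served from cache")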

Throughput (GB Total)

Throughput is the total amount of data you process and is measured in GigaBytes.  Think of this like a freight train.  The more railcars, the more total freight being shipped.  The higher the throughput, the more work your system is doing.

 

Throughput (MB/Sec)

Throughput is a measure of the total volume of data flowing to/from your storage subsystem. This metric measures throughput in MegaBytes per second; think of it as your speedometer, whereas the GB total above is your odometer.
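Here is how IOPS and the two throughput figures relate arithmetically, shown as a small Python calculation. The counter values are made up purely for illustration; the dashboard derives the real numbers from its own hourly measurements.

    # How IOPS, Throughput (GB Total) and Throughput (MB/Sec) relate, using
    # hypothetical counters for one hourly dashboard bucket.
    interval_sec = 3600                      # one hourly bucket
    ios_in_interval = 9_000_000              # I/Os completed during the hour (made up)
    bytes_in_interval = 450 * 2**30          # 450 GB moved during the hour (made up)

    iops = ios_in_interval / interval_sec
    throughput_gb = bytes_in_interval / 2**30
    throughput_mb_per_sec = bytes_in_interval / 2**20 / interval_sec

    print(f"IOPS:               {iops:,.0f}")
    print(f"Throughput (GB):    {throughput_gb:,.1f}")
    print(f"Throughput (MB/s):  {throughput_mb_per_sec:,.1f}")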

I/O Time Saved (seconds)

The I/O Time Saved metric tells you how much time you didn't have to wait for I/Os to complete because of the physical I/Os we eliminated from going to storage. This can be extremely important during your busiest times. Because I/O requests overlap across multiple processes and threads, this time can actually be greater than elapsed clock time. What that means to you is that the total amount of work that gets done can experience a multiplier effect, because systems and applications tend to multitask. It's like having 10 people working on sub-tasks at the same time: the project finishes much faster than if 1 person had to do all the tasks by themselves. By allowing pieces to be done by different people and then just plugging them all together, you get more done faster. This metric measures that effect.
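A small worked example may help show why the summed savings can exceed wall-clock time. The worker count, I/O counts and per-I/O latency below are invented for illustration only.

    # Why "I/O Time Saved" can exceed wall-clock time: savings are summed across
    # I/Os that overlap in time. All numbers below are purely illustrative.
    workers = 10                  # threads/processes issuing I/O concurrently
    ios_per_worker = 1000
    ms_saved_per_io = 5           # latency avoided per eliminated I/O

    per_worker_saved_sec = ios_per_worker * ms_saved_per_io / 1000
    total_saved_sec = workers * per_worker_saved_sec

    print(f"Each worker waits {per_worker_saved_sec:.0f}s less; "
          f"summed across {workers} overlapping workers that is "
          f"{total_saved_sec:.0f}s of I/O time saved.")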

 

I/O Response Time

I/O Response time is sometimes referred to as Latency.  It is how long it takes for I/Os to complete.  This is generally measured in milliseconds.  The lower the number, the better the performance.

Read/Write %

Read/Write % is the percentage of I/Os that are Reads versus Writes. If it is at 75%, then 3 out of every 4 I/Os are Reads and 1 is a Write. If it were 25%, that would signify 3 Writes for each Read.

 

Read I/Os Eliminated

This metric tells you how many Read I/Os we eliminated. If your Read to Write ratio is very high, this may be one of the most important metrics for you. However, remember that eliminating Writes also helps Reads: the Reads that do go to storage do NOT have to wait for those eliminated Writes to complete, so they finish faster. And of course, the Reads we eliminate improve overall Read performance directly.

% Read I/Os Eliminated

 

% Read I/Os Eliminated tells you what percentage of your overall Reads were eliminated from having to be processed at all by your storage subsystem.

 

Write I/Os Eliminated

This metric tells you how many Write I/Os we eliminated.  This is due to our technologies that improve the efficiency and density of data being stored by the Windows NTFS file system.

% Write I/Os Eliminated 

 

% Write I/Os Eliminated tells you what percentage of your overall Writes were eliminated from having to be processed at all by your storage subsystem.
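The ratio metrics above are simple percentages over raw counts. The short Python calculation below shows the arithmetic with hypothetical numbers; the dashboard, of course, uses its own measured counts.

    # Hypothetical raw counts for one hour; the dashboard uses its own measurements.
    reads_requested = 750_000         # total Reads the applications issued
    writes_requested = 250_000        # total Writes the applications issued
    reads_eliminated = 300_000        # Reads satisfied without touching storage
    writes_eliminated = 50_000        # Writes avoided through denser storage

    read_pct = 100 * reads_requested / (reads_requested + writes_requested)
    pct_reads_eliminated = 100 * reads_eliminated / reads_requested
    pct_writes_eliminated = 100 * writes_eliminated / writes_requested

    print(f"Read/Write %:            {read_pct:.0f}% Reads / {100 - read_pct:.0f}% Writes")
    print(f"% Read I/Os Eliminated:  {pct_reads_eliminated:.0f}%")
    print(f"% Write I/Os Eliminated: {pct_writes_eliminated:.0f}%")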

Fragments Prevented and Eliminated

Fragments Prevented and Eliminated gives you an idea of how we are causing data to be stored more efficiently and densely, thus allowing Windows to process the same amount of data with far fewer actual I/Os.

If you have our latest versions of V-locity, Diskeeper or SSDkeeper installed, you can open the Dashboard now and select the Analytics tab and see all of these metrics.

If you don't have the latest version installed and you have a current maintenance agreement, log in to your online account to download and install the software.

Not a customer yet but want to check out these dashboard metrics? Download a free trial at www.condusiv.com/try.

How to Improve Application Performance by Decreasing Disk Latency like an IT Engineer

by Spencer Allingham 13. June 2018 06:49

You might be responsible for a busy SQL server, for example, or a Web Server; perhaps a busy file and print server, the Finance Department's systems, documentation management, CRM, BI, or something else entirely.

Now, think about WHY these are the workloads that you care about the most.

 

Were YOU responsible for installing the application running the workload for your company? Is the workload business critical, or considered TOO BIG TO FAIL?

Or is it simply because users, or even worse, customers, complain about performance?

 

If the last question made you wince, because you know that YOU are responsible for some of the workloads running in your organisation that would benefit from additional performance, please read on. This article is just for you, even if you don't consider yourself a "Techie".

Before we get started, you should know that there are many variables that can affect the performance of the applications that you care about the most. The slowest, most restrictive of these is referred to as the "Bottleneck". Think of water being poured from a bottle. The water can only flow as fast as the neck of the bottle, the 'slowest' part of the bottle.

Don't worry though, in a computer the bottleneck will pretty much always fit into one of the following categories:

• CPU

• DISK

• MEMORY

• NETWORK

The good news is that if you're running Windows, it is usually very easy to find out which one the bottleneck is in, and here is how to do it (like an IT Engineer):

• Open Resource Monitor by clicking the Start menu, typing "resource monitor", and pressing Enter. Microsoft includes this as part of the Windows operating system, and it is already installed.

• Do you see the graphs in the right-hand pane? When your computer is running at peak load, or users are complaining about performance, which of the graphs are 'maxing out'?

This is a great indicator of where your workload's bottleneck is to be found.
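If you prefer a scripted spot-check to complement Resource Monitor, the short Python sketch below samples the same four categories using the third-party psutil package. The two-second sample window is an arbitrary choice, and the output is only a rough indicator, not a substitute for watching the graphs under real load.

    # Rough spot-check of the four usual bottleneck candidates. Requires the
    # third-party psutil package; interpretation of the numbers is up to you.
    import time
    import psutil

    interval = 2                                  # seconds to sample counters over

    cpu_pct = psutil.cpu_percent(interval=interval)
    mem_pct = psutil.virtual_memory().percent

    d1, n1 = psutil.disk_io_counters(), psutil.net_io_counters()
    time.sleep(interval)
    d2, n2 = psutil.disk_io_counters(), psutil.net_io_counters()

    disk_mb_s = (d2.read_bytes + d2.write_bytes - d1.read_bytes - d1.write_bytes) / 2**20 / interval
    net_mb_s = (n2.bytes_sent + n2.bytes_recv - n1.bytes_sent - n1.bytes_recv) / 2**20 / interval

    print(f"CPU:     {cpu_pct:5.1f}% busy")
    print(f"MEMORY:  {mem_pct:5.1f}% in use")
    print(f"DISK:    {disk_mb_s:8.1f} MB/s transferred")
    print(f"NETWORK: {net_mb_s:8.1f} MB/s transferred")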

 

SO, now that you have identified the slowest part of your 'compute environment' (continue reading for more details), what can you do to improve it?

The traditional approach to solving computer performance issues has been to throw hardware at the problem. This could be treating yourself to a new laptop, putting more RAM into your workstation, or, on the more extreme end, buying new servers or expensive storage solutions.

BUT, how do you know when it is appropriate to spend money on new or additional hardware, and when it isn't? Well, the answer is: not while you can still get the performance that you need from the hardware infrastructure that you have already bought and paid for. You wouldn't replace your car just because it needed a service, would you?

Let's take disk speed as an example, and look at the Response Time column in Resource Monitor. Make sure you open the monitor full screen, or at least large enough to see the data, then open the Disk Activity section so you can see the Response Time column. Do it now on the computer you're using to read this. (You didn't close Resource Monitor yet, did you?) This shows the Disk Response Time, or put another way, how long the storage is taking to read and write data. Of course, slower disk speed = slower performance, but what is considered good disk speed and what is bad?

To answer that question, I will refer to a great blog post by Scott Lowe, which you can read here:

https://www.techrepublic.com/blog/the-enterprise-cloud/use-resource-monitor-to-monitor-storage-performance/

In it, the author perfectly describes what to expect from faster and slower Disk Response Times:

"Response Time (ms). Disk response time in milliseconds. For this metric, a lower number is definitely better; in general, anything less than 10 ms is considered good performance. If you occasionally go beyond 10 ms, you should be okay, but if the system is consistently waiting more than 20 ms for response from the storage, then you may have a problem that needs attention, and it's likely that users will notice performance degradation. At 50 ms and greater, the problem is serious."

Hopefully, when you checked on your computer, the Disk Response Time was below 20 milliseconds. BUT, what about those other workloads that you were thinking about earlier? What are the Disk Response Times on that busy SQL server, the CRM or BI platform, or those Windows servers that the users complain about?
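If you want a quick scripted estimate of average disk response time rather than watching the Resource Monitor column, the Python sketch below approximates it from cumulative disk counters using the third-party psutil package, then classifies the result against the thresholds quoted above. The 10-second window is arbitrary, and counter availability varies by platform, so treat the output as a rough guide only.

    # Approximate the average disk response time over a short window and classify
    # it against the thresholds quoted above. Requires the third-party psutil
    # package; read_time/write_time are cumulative milliseconds and their
    # availability varies by platform, so treat this as a rough estimate.
    import time
    import psutil

    interval = 10                                 # seconds to observe the disk

    before = psutil.disk_io_counters()
    time.sleep(interval)
    after = psutil.disk_io_counters()

    ios = (after.read_count - before.read_count) + (after.write_count - before.write_count)
    busy_ms = (after.read_time - before.read_time) + (after.write_time - before.write_time)
    avg_ms = busy_ms / ios if ios else 0.0

    if avg_ms < 10:
        verdict = "good"
    elif avg_ms <= 20:
        verdict = "probably okay"
    elif avg_ms < 50:
        verdict = "needs attention"
    else:
        verdict = "serious"

    print(f"Average response time over {ios} I/Os: {avg_ms:.1f} ms ({verdict})")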

If the Disk Response Times are often higher than 20 milliseconds, and you need to improve the performance, then it's choice time and there are basically two options:

• In my opinion as an IT Engineer, the most sensible option is to use storage workload reduction software like Diskeeper for physical Windows computers, or V-locity for virtualised Windows computers. These will reduce Disk Response Times by allowing a good percentage of the data that your applications need to read to come from a RAM cache, rather than slower disk storage. This works because RAM is much faster than the media in your disk storage. Best of all, the only thing you need to do to try it is download a free copy of the 30-day trial. You don't even have to reboot the computer; just check and see if it is able to bring the Disk Response Times down for the workloads that you care about the most.

• If you have tried the Diskeeper or V-locity software and you STILL need faster disk access, then, I'm afraid, it's time to start getting quotations for new hardware. It does make sense, though, to take a couple of minutes to install Diskeeper or V-locity first, to see if this step can be avoided. The software solution to remove storage inefficiencies is typically much more cost-effective than having to buy hardware!

Visit www.condusiv.com/try to download Diskeeper and V-locity now, for your free trial.

 

The Revolution of Our Technology

by Rick Cadruvi, Chief Architect 18. October 2017 12:38

I chose to use the word “Revolution” instead of “Evolution” because, with all due modesty, our patented technology has been more a series of leaps to stay ahead of performance-crushing bottlenecks. After all, our company purpose as stated by our Founder, Craig Jensen, is:

“The purpose of our company is to provide computer technology that enormously increases the production and income of an area.”

We have always been about improving your production. We know your systems are not about having really cool hardware but rather about maximizing your organization’s production. Our passion has been about eliminating the stops, slows and stalls to your application performance and instead, to jack up that performance and give you headroom for expansion. Now, most of you know us by our reputation for Diskeeper®. What you probably don’t know about us is our leadership in system performance software.

We’ve been at this for 35 years with a laser focus. As an example, for years hard drives were the common storage technology and they were slow and limited in size, so we invented numerous File System Optimization technologies such as Defragmentation, I-FAAST®1 and Directory Consolidation to remove the barriers to getting at data quickly. As drive sizes grew, we added new technologies and jettisoned those that no longer gave bang for the buck. Technologies like InvisiTasking® were invented to help maximize overall system performance, while removing bottlenecks.

As SSDs began to emerge, we worked with several OEMs to take advantage of SSDs to dramatically reduce data access times as well as reducing the time it took to boot systems and resume from hibernate. We created technologies to improve SSD longevity and even worked with manufacturers on hybrid drives, providing hinting information, so their drive performance and endurance would be world class.

As storage arrays were emerging we created technologies to allow them to better utilize storage resources and pre-stage space for future use. We also created technologies targeting performance issues related to file system inefficiencies without negatively affecting storage array technologies like snapshots.

When virtualization was emerging, we could see the coming VM resource contention issues that would materialize. We used that insight to create file system optimization technologies to deal with those issues before anyone coined the phrase “I/O Blender Effect”.

We have been doing caching for a very long time2. We have always targeted removal of the I/Os that get in your applications path to data along with satisfying the data from cache that delivers performance improvements of 50-300% or more. Our goal was not caching your application specific data, but rather to make sure your application could access its data much faster. That’s why our unique caching technology has been used by leading OEMs.

Our RAM-based caching solutions include dynamic memory allocation schemes to use resources that would otherwise be idle to maximize overall system performance. When you need those resources, we give them back. When they are idle, we make use of them without your having to adjust anything for the best achievable performance. “Set It and Forget It®” is our trademark for good reason.

We know that staying ahead of the problems you face now, with a clear understanding of what will limit your production in 3 to 5 years, is the best way we can realize our company purpose and help you maximize your production and thus your profitability. We take seriously having a clear vision of where your problems are now and where they will be in the future. As new hardware and software technologies roll out, we will be there removing the new barriers to your performance then, just as we do now.

1. I-FAAST stands for Intelligent File Access Acceleration Sequencing Technology, a technology designed to take advantage of different performing regions on storage to allow your hottest data to be retrieved in the fastest time.

2. If I can personally brag, I’ve created numerous caching solutions over a period of 40 years.

I Have Backups and Snapshots, So Why Do Condusiv Customers Use Undelete®?

by James Fields, Director of Customer Support 10. May 2017 09:57

Backups and snapshots are used by enterprises to recover data sets in the event of system failure. But what about individual files on file servers? Oftentimes they are accidentally deleted or overwritten by users.

Backups and snapshots can still be used to retrieve those files, but that can be akin to finding a needle in a haystack, and recovery is laborious. Which backup contains the most recent version? And what if the file was created or modified after the last backup or snapshot took place? In that case, the backup or snapshot isn't of any help.

Condusiv customers use Undelete as a first line of defense for data protection on file servers, so administrators don't have to dig through backups and snapshots for individual files or folders. If any user accidentally deletes a file over a network share, it goes into Undelete's recycle bin. This provides real-time protection for all files on a file server, so they can be quickly and immediately recovered. Many organizations let their HelpDesk team use Undelete to recover individual files instead of tasking IT staff with the tedious job of accessing backups for a single file.

Moreover, Undelete keeps prior versions of MS Office documents, so if your CEO accidentally overwrites his PowerPoint presentation, you can always recover prior versions of saved files that share the same file name. One of the features admins like most about Undelete is the ability to see who deleted a file, when it was deleted, and who created the file.  This is especially useful if you are concerned with the possible nefarious activities of staff. 

Last month, 53 Undelete customers participated in an Undelete product survey and told us what they like the most, and here are some of their answers:

“Ease of restoring files deleted from network locations without having to use backups.”

“We have backups on daily basis, but nothing that keeps the deleted network files available and reported on who deleted. Undelete covers this issue.”

“We needed a product that provided "Recycle Bin" functionality for network shares.  Undelete Server goes one step further and even tracks revisions. Great product.”

“Sometimes, human errors occur and files get overwritten or deleted, which means losing several hours of work since the last regular backup. We need to be able to instantly recover deleted files - this is not possible with scheduled backups or VSS.”

“Versioning control on modified or overwritten files.”

“Much quicker and simpler than reverting to any BACKUP/RESTORE software. Plus, our HelpDesk can use it.”

“On a number of occasions, users will "lose" or delete files. It is simpler to use Undelete than scour through backups.”

“Close the time gap between an incident and the last regular backup.”

“HelpDesk needed this tool to offload menial requests from IT staff to dig through backups for one file”

 

James Fields | DIRECTOR OF CUSTOMER SUPPORT

Tags:

Data Protection | File Recovery | General | Windows 7 | Windows 8 | Windows Server 2012
