Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Causes and Solutions for Latency

by Kim Amezcua 19. December 2019 04:14

Sometimes a Windows server slows down because the device or its operating system is outdated. Other times, the slowdown is due to physical constraints on retrieving, processing, or transmitting data. There are other causes as well, which we will cover. In any case, the delay between when a command is issued and a response is received is referred to as "latency."

Latency is a measure of time. For example, the latency of a command might be 0.02 seconds. To humans, this seems extraordinarily fast. However, computer processors can execute billions of instructions per second, so even small per-operation latencies, repeated across millions of operations, can cause visible delays in the operation of a computer or server.
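To make "latency is a measure of time" concrete, here is a small Python sketch that times one synchronous 4 KB write. It is only an illustration, not tied to any product; the helper name and file path are invented for the example.

```python
import os
import tempfile
import time

def measure_write_latency(path, data):
    """Time a single synchronous write and return the latency in seconds."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the data all the way to the device
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as tmp:
    latency = measure_write_latency(os.path.join(tmp, "probe.bin"), b"x" * 4096)
    print(f"4 KB write latency: {latency * 1000:.3f} ms")
```

Run it a few times: even on fast storage the number is never zero, and those fractions of a millisecond add up quickly under a heavy I/O load.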

To improve latency, you must first identify its source. There are many possible sources of latency and, for each one, a corresponding fix. Here are two common causes of latency, along with a brief explanation of how to address each. In both cases, the core problem is I/O latency: while a process waits for an I/O to complete so it can use that data, your computer's processing power is being wasted.

Data Fragments

Logical data fragments occur when files are written, deleted, and rewritten to a hard drive or solid-state drive.

When files are deleted from a drive, the data actually still exists on the drive. However, the logical address for those files in the Windows file system is freed up for reuse. This means that "deleted" files remain on the logical drive until another file is written over them by reusing the address. (This is also why it is possible to recover lost files.)

When an address is reused, the likelihood that the new file is exactly the same length as the "deleted" file is remote. As a result, little chunks of data left over from the "deleted" file remain on the logical drive. As a logical drive fills up, new files are sometimes broken up to fit into the available segments. At its worst, a fragmented logical drive contains both old fragments left over from deleted files (free space fragments) and new fragments that were intentionally created (data file fragments).

Logical data fragments can be a significant source of latency in a computer or server. Storing to, and retrieving from, a fragmented logical drive introduces additional steps in searching for and reassembling files around the fragments. For example, rather than reading a file in one or two I/Os, fragmentation can require hundreds, even thousands of I/Os to read or write that same data.
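The jump from "one or two I/Os" to "hundreds, even thousands" follows from simple arithmetic. This back-of-the-envelope Python sketch counts the reads a file needs given its fragment layout; the 1 MB maximum I/O size and the example layouts are illustrative assumptions, not measured values.

```python
def io_count(file_size, fragment_sizes, max_io=1024 * 1024):
    """Count the reads needed for a file stored in the given fragments.

    Each contiguous fragment needs ceil(size / max_io) reads, because a
    fragment boundary always forces a separate I/O.
    """
    assert sum(fragment_sizes) == file_size
    return sum(-(-size // max_io) for size in fragment_sizes)  # ceil division

one_mb = 1024 * 1024
# A contiguous 64 MB file: 64 large sequential reads.
print(io_count(64 * one_mb, [64 * one_mb]))        # 64
# The same file shattered into 16 KB fragments: 4096 small reads.
print(io_count(64 * one_mb, [16 * 1024] * 4096))   # 4096
```

Same data, sixty-four times the I/O traffic, purely because of where the pieces landed on the logical drive.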

One way to improve latency caused by logical data fragments is to defragment the logical drive, collecting the fragments and making them contiguous. The main disadvantages of defragmenting are that it must be repeated periodically, because the logical drive will inevitably fragment again, and that defragmenting SSDs can cause them to wear out prematurely.

A better method is to prevent the logical disk from becoming fragmented in the first place. Diskeeper® 18 manages writes so that large, contiguous segments are kept together from the very start, preventing fragments from ever developing.

Limited Resources

No matter how "fast" the components of a computer are, they are still finite and tasks must be scheduled and performed in order. Certain tasks must be put off while more urgent tasks are executed. Although the latency in scheduling is often so short that it is unnoticeable, there will be times when limited resources cause enough of a delay that it hampers the computer or server.

For example, two specifications that are commonly used to define the speed of a computer are processor clock speed and instructions per cycle. Although these numbers climb steadily as technology advances, there will always be situations where the processor has too many tasks to execute and must delay some of them to get them all done.

Similarly, data buses and RAM run at particular speeds, which limit how quickly data can be moved to the processor. These kinds of input/output (I/O) performance delays can reduce a system’s capacity by more than 50%.

One way to address this latency is the method used by Diskeeper® 18: idle, available DRAM is used to cache hot reads. Caching eliminates the trip all the way to the storage infrastructure to read the data; remember that DRAM can be 10x-15x faster than SSDs, and faster still compared to HDDs. The result is faster data retrieval; in fact, Windows systems can run faster than when new.
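The idea behind a DRAM read cache can be sketched in a few lines of Python. This toy LRU (least-recently-used) cache is only an analogy for what a product-grade cache does, and every name in it is invented for the example.

```python
from collections import OrderedDict

class ReadCache:
    """A toy LRU read cache standing in for a DRAM-backed block cache."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id, fetch_from_storage):
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)       # mark as recently used
            return self.blocks[block_id]
        self.misses += 1
        data = fetch_from_storage(block_id)         # slow path: go to storage
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)         # evict least recently used
        return data

cache = ReadCache(capacity=2)
for block in [1, 2, 1, 1, 3, 1]:                    # block 1 is the "hot" read
    cache.read(block, lambda b: f"data-{b}")
print(cache.hits, cache.misses)                     # 3 hits, 3 misses
```

Every hit is a read served at memory speed instead of a round trip to the storage infrastructure, which is exactly where the latency savings come from.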

Reducing latency is mostly a matter of identifying the source of latencies and addressing them. By being proactive and preventing fragmentation before it happens and by caching hot reads using idle & available DRAM, Diskeeper® 18 makes Windows computers faster and more reliable.

 

What Condusiv’s Diskeeper Does for Me

by Tim Warner, Microsoft Cloud & Datacenter MVP 4. November 2019 05:36

I'm a person who uses what he has. For example, my friend Roger purchased a fancy new keyboard, but only uses it on "special occasions" because he wants to keep the hardware in pristine condition. That isn't me--my stuff wears out because I rely on it and use it continuously.

To this point, the storage on my Windows 10 workstation computer takes a heavy beating because I read and write data to my hard drives every single day. I have quite an assortment of fixed drives on this machine:

- mechanical hard disk drive (HDD)

- "no moving parts" solid-state drive (SSD)

- hybrid SSD/HDD drive

Today I'd like to share with you some ways that Condusiv’s Diskeeper helps me stay productive. Trust me--I'm no salesperson. Condusiv isn't paying me to write this article. My goal is to share my experience with you so you have something to think about in terms of disk optimization options for your server and workstation storage.

Diskeeper® or SSDKeeper®?

I've used Diskeeper on my servers and workstations since 2000. How time flies! A few years ago it confused me when Condusiv released SSDkeeper, their SSD optimization tool that works by optimizing your data as it's written to disk.

Specifically, my confusion lay in the fact that you can't have Diskeeper and SSDkeeper installed on the same machine simultaneously. As you saw, I almost always have a mixture of HDD and SSD drives. What am I losing by installing either Diskeeper or SSDkeeper, but not both?

You lose nothing because Diskeeper and SSDkeeper share most of the same features. Diskeeper like SSDkeeper can optimize solid-state disks using IntelliMemory Caching and IntelliWrite, and SSDkeeper like Diskeeper can optimize magnetic disks using Instant Defrag. Both products can automatically determine the storage type and apply the optimal technology.

Thus, your decision of whether to purchase Diskeeper or SSDkeeper is based on which technology the majority of your disks use, either HDD or SSD.

Allow me to explain what those three product features mean in practice:

- IntelliMemory®: Uses unallocated system random access memory (RAM) for disk read caching

- IntelliWrite®: Prevents fragmentation in the first place by writing data sequentially to your hard drive as it's created

- Instant Defrag™: Uses a Windows service to perform "just in time" disk defragmentation

In Diskeeper, click Settings > System > Basic to verify you're taking advantage of these features. I show you the interface in Figure 1.

 

Figure 1. Diskeeper settings.

What about external drives?

In Condusiv's Top FAQs document you'll note that Diskeeper no longer supports external drives. Their justification for this decision is that their customers generally do not use external USB drives for high performance, input/output (I/O) intensive applications.

If you want to run optimization on external drives, you can do that graphically with the Optimize Drives Windows 10 utility, or you can run defrag.exe from an elevated command prompt.

For example, here I am running a fragmentation analysis on my H: volume, an external SATA HDD:

PS C:\users\tim> defrag H: /A

Microsoft Drive Optimizer

Copyright (c) Microsoft Corp.

Invoking analysis on TWARNER1 (H:)...

The operation completed successfully.

Post Defragmentation Report:

         Volume Information:

                Volume size                 = 1.81 TB

                Free space                  = 1.44 TB

                Total fragmented space      = 0%

                Largest free space size     = 1.43 TB

         Note: File fragments larger than 64MB are not included in the fragmentation statistics.

         You do not need to defragment this volume.

PS C:\users\tim>              

Let's look at the numbers!

All Condusiv products make it simple to perform benchmark analyses and run progress reports, and Diskeeper is no exception to this rule. Look at Figure 2--since I rebuilt my Windows 10 workstation and installed Diskeeper in July 2018, I've saved over 20 days of storage I/O time!

Figure 2. Diskeeper dashboard.

Those impressive I/O numbers don't strain credulity when you remember that Diskeeper aggregates I/O values across all my fixed drives, not only one. This time saving is the chief benefit Diskeeper gives me as a working IT professional. The tool gives me back seconds that otherwise I'd spend waiting on disk operations to complete; I then can use that time for productive work instead.

Even more to the point, Diskeeper does this work for me in the background, without my having to remember to run or even schedule defragmentation and optimization jobs. I'm a huge fan of Diskeeper, and I hope you will be.

Recommendation

Condusiv offers a free 30-day trial that you can download to see how much time it can save you:

Diskeeper 30-day trial

SSDkeeper 30-day trial

Note: If you have a virtual environment, you can download a 30-day trial of Condusiv’s V-locity (you can also see my review of V-locity 7).

 

Timothy Warner is a Microsoft Most Valuable Professional (MVP) in Cloud and Datacenter Management who is based in Nashville, TN. His professional specialties include Microsoft Azure, cross-platform PowerShell, and all things Windows Server-related. You can reach Tim via Twitter (@TechTrainerTim), LinkedIn or his website, techtrainertim.com.

  

Case Study: Non-Profit Eliminates Frustrating Help Desk calls, Boosts Performance and Extends Useful Hardware Lifecycle

by Marissa Newman 9. September 2019 11:47

When PathPoint was faced with user complaints and productivity issues related to slow performance, the non-profit organization turned to Condusiv’s I/O reduction software to not only optimize their physical and virtual infrastructure but to extend their hardware lifecycles, as well. 

As technology became more relevant to PathPoint’s growing organization and mission of providing people with disabilities and young adults the skills and resources to set them up for success, the IT team had to find a solution to make the IT infrastructure as efficient as possible. That’s when the organization looked into Diskeeper® as a solution for their physical servers and desktops.

“Now when we are configuring our workstations and laptops, the first thing we do is install Diskeeper. We have several lab computers that we don’t put the software on and the difference is obvious in day-to-day functionality. Diskeeper has essentially eliminated all helpdesk calls related to sluggish performance,” reported Curt Dennett, PathPoint’s VP of Technology and Infrastructure.

Curt also found that workstations with Diskeeper installed have a 5-year lifecycle, versus only 3 years for the lab computers without it, and he saw similar results on his physical servers running full production workloads. Curt observed, “We don’t need to re-format machines running Diskeeper nearly as often. As a result, we gained back valuable time for other important initiatives while securing peak performance and longevity out of our physical hardware assets. With limited budgets, that has truly put us at ease.”

When PathPoint expanded into the virtual realm, Curt looked at V-locity® for their VMs and, after reviewing the benefits, brought the software into the rest of their environment. The organization found that with the powerful capabilities of Diskeeper and V-locity, they were able to offload 47% of I/O traffic from storage, resulting in a much faster experience for their users.

The use of V-locity and Diskeeper is now the standard for PathPoint. Curt concluded, “The numbers are impressive but what’s more for me, is the gut feeling and the experience of knowing that the machines are actually performing efficiently. I wouldn’t run any environment without these tools.”

 

Read the full case study

 

Try V-locity FREE for yourself – no reboot is needed

SysAdmins Discover That Size Really Does Matter

by Spencer Allingham 25. April 2019 03:53

(...to storage transfer speeds...)

 

I was recently asked what could be done to maximize storage transfer speeds in physical and virtual Windows servers. Not the "sexiest" topic for a blog post, I know, but it should be interesting reading for any SysAdmin who wants to get the most performance from their IT environment, or for those IT Administrators who suffer from user or customer complaints about system performance.

 

As it happens, I had just completed some testing on this very subject and thought it would be helpful to share the results publicly in this article.

The crux of the matter comes down to storage I/O size and its effect on data transfer speeds. You can see in this set of results, using an NVMe-connected SSD (Samsung SM961, model MZVKW1T0HMLH), that read and write transfer speeds (put another way, how much data can be transferred each second) are MUCH lower when the storage I/O sizes are below 64 KB:

 

You can see that whilst the transfer rate maxes out at around 1.5 GB per second for writes and around 3.2 GB per second for reads, smaller storage I/O sizes get nowhere near that maximum rate. That's okay if you're only saving 4 KB or 8 KB of data, but it is definitely NOT okay if you are trying to write a larger amount of data, say 128 KB or a couple of megabytes, and the Windows OS is breaking that down into smaller I/O packets in the background and transferring to and from disk at those much slower rates. This happens far too often, and it means the Windows OS is dampening efficiency by transferring your data at a much slower rate than it could, or should. That can have a very negative impact on the performance of your most important applications, and yes, they are probably the ones that users access the most and are most likely to complain about.

 

The good news of course, is that the V-locity® software from Condusiv® Technologies is designed to prevent these split I/O situations in Windows virtual machines, and Diskeeper® will do the same for physical Windows systems. Installing Condusiv’s software is a quick, easy and effective fix as there is no disruption, no code changes required and no reboots. Just install our software and you are done!

You can even run this test for yourself on your own machine. Download a free copy of ATTO Disk Benchmark from the web and install it. You can then click its Start button to quickly benchmark how fast YOUR system transfers data at different I/O sizes. I bet you quickly see that when it comes to data transfer speeds, size really does matter!
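If you'd rather script a crude version of that experiment than install a GUI tool, the Python sketch below writes the same amount of data using different I/O sizes and reports the throughput of each. It is a rough software-level analogue of the ATTO test, not a substitute for it; the sizes and file names are arbitrary, and absolute numbers will vary wildly by machine and file-system caching.

```python
import os
import tempfile
import time

def throughput_mb_s(path, io_size, total=32 * 1024 * 1024):
    """Sequentially write `total` bytes in `io_size` chunks; return MB/s."""
    chunk = os.urandom(io_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total // io_size):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the device
    return (total / (1024 * 1024)) / (time.perf_counter() - start)

with tempfile.TemporaryDirectory() as tmp:
    for size in (4 * 1024, 64 * 1024, 1024 * 1024):
        rate = throughput_mb_s(os.path.join(tmp, "bench.bin"), size)
        print(f"{size // 1024:>5} KB I/Os: {rate:8.1f} MB/s")
```

Whatever the absolute numbers, the shape of the result should match the chart: the smallest I/O sizes deliver the lowest throughput.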

Out of interest, I enabled our Diskeeper software (I could have used V-locity instead) so that our RAM caching would assist the speed of the read I/O traffic, and the results were pretty amazing. Instead of the reads maxing out at around 3.2 GB per second, they were now maxing out at around a whopping 11 GB per second, more than three times faster. In fact, the ATTO Disk Benchmark software had to change the graph scale for the transfer rate (X-axis) from 4 GB/s to 20 GB/s, just to accommodate the extra GBs per second when the RAM cache was in play. Pretty cool, eh?

 

Of course, it is unrealistic to expect our software’s RAM cache to satisfy ALL of the read I/O traffic in a real live environment the way it did in this lab test, but even if you satisfied only 25% of the reads from RAM in this manner, it certainly wouldn’t hurt performance!

If you want to see this for yourself on one of your computers, download the ATTO Disk Benchmark tool from the web, if you haven’t already, and run it to get a benchmark for your machine. Then download and install a free trial copy of Diskeeper (for physical clients or servers) or V-locity (for virtual machines) from www.condusiv.com/try and run the ATTO Disk Benchmark tool several times. It will probably take a few runs, but you should easily see the point at which the telemetry in Condusiv’s software identifies the correct data to satisfy from the RAM cache, because the read transfer rates will increase dramatically. They are no longer confined to the speed of your disk storage; they now happen at the speed of RAM. That is much faster, even if that disk storage IS an NVMe-connected SSD. And yes, if you’re wondering, this works with SAN storage and all levels of RAID too!

NOTE: Before testing, make sure you have enough “unused” RAM to cache with. A minimum of 4 GB to 6 GB of Available Physical Memory is perfect.

Whether you have spinning hard drives or SSDs in your storage array, the boost in read data transfer rates can make a real difference. Whatever storage serves YOUR Windows computers, it just doesn’t make sense to let the Windows operating system keep transferring data at a slower speed than it should. Now, with easy-to-install, “Set It and Forget It®” software from Condusiv Technologies, you can be sure that you’re getting all of the speed and performance you paid for when you purchased your equipment, through larger, more sequential storage I/O and the benefit of intelligent RAM caching.

If you’re still not sure, run the tests for yourself and see.

Size DOES matter!

Thinking Outside the Box - How to Dramatically Improve SQL Performance, Part 1

by Howard Butler 3. April 2019 04:10

If you are reading this article, then most likely you are about to evaluate V-locity® or Diskeeper® on a SQL Server (or already have our software installed on a few servers) and have some questions about why it is a best practice recommendation to place a memory limit on SQL Servers in order to get the best performance from that server once you’ve installed one of our solutions.

To give our products a fair evaluation, there are certain best practices we recommend you follow.  Now, while it is true most servers already have enough memory and need no adjustments or additions, a select group of high I/O, high performance, or high demand servers, may need a little extra care to run at peak performance.

This article is specifically focused on those servers and the best-practice recommendations below for available memory. They are precisely targeted to those “work-horse” servers.  So, rest assured you don’t need to worry about adding tons of memory to your environment for all your other servers.

One best practice we won’t dive into here, which will be covered in a separate article, is the idea of deploying our software solutions to other servers that share the workload of the SQL Server, such as App Servers or Web Servers that the data flows through.  However, in this article we will shine the spotlight on best practices for SQL Server memory limits.

We’ve sold over 100 million licenses in over 30 years of providing Condusiv® Technologies’ patented software.  As a result, we take a longer term and more global view of improving performance, especially with the IntelliMemory® caching component that is part of V-locity and Diskeeper. We care about maximizing overall performance knowing that it will ultimately improve application performance.  We have a significant number of different technologies that look for I/Os we can eliminate from the stream to the actual storage infrastructure.  Some of them look for inefficiencies caused at the file system level.  Others take a broader look at the disk level to optimize I/O that wouldn’t normally be visible as performance robbing.  We use an analytical approach to look for the I/O reductions that give the most bang for the buck.  This has evolved over the years as technology changes.  What hasn’t changed is our global and long-term view of actual application usage of the storage subsystem and maximizing performance, especially in ways that are not obvious.

Our software solutions eliminate I/Os to the storage subsystem that the database engine is not directly concerned with and as a result we can greatly improve the speed of I/Os sent to the storage infrastructure from the database engine.  Essentially, we dramatically lessen the number of competing I/Os that slow down the transaction log writes, updates, data bucket reads, etc.  If the I/Os that must go to storage anyway aren’t waiting for I/Os from other sources, they complete faster.  And, we do all of this with an exceptionally small amount of idle, free, unused resources, which would be hard pressed for anyone to even detect through our self-learning and dynamic nature of allocating and releasing resources depending on other system needs.

It’s common knowledge that SQL Server has specialized caches for the indexes, transaction logs, etc.  At a basic level the SQL Server cache does a good job, but it is also common knowledge that it’s not very efficient.  It uses up way too much system memory, is limited in scope of what it caches, and due to the incredible size of today’s data stores and indexes it is not possible to cache everything.  In fact, you’ve likely experienced that out of the box, SQL Server will grab onto practically all the available memory allocated to a system.

It is true that if SQL Server memory usage is left uncapped, there typically wouldn’t be enough memory for Condusiv’s software to create a cache with.  Hence, why we recommend you place a maximum memory usage in SQL Server to leave enough memory for IntelliMemory cache to help offload more of the I/O traffic.  For best results, you can easily cap the amount of memory that SQL Server consumes for its own form of caching or buffering.  At the end of this article I have included a link to a Microsoft document on how to set Max Server Memory for SQL as well as a short video to walk you through the steps.

A general rule of thumb for busy SQL database servers would be to limit SQL memory usage to keep at least 16 GB of memory free.  This would allow enough room for the IntelliMemory cache to grow and really make that machine’s performance 'fly' in most cases.  If you can’t spare 16 GB, leave 8 GB.  If you can’t afford 8 GB, leave 4 GB free.  Even that is enough to make a difference.  If you are not comfortable with reducing the SQL Server memory usage, then at least place a maximum value of what it typically uses and add 4-16 GB of additional memory to the system.  
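That rule of thumb can be expressed as a tiny helper. To be clear, this is my own illustrative sketch, not a Condusiv formula: the "reserve at most half the box" tiering threshold is an assumption added so the helper degrades sensibly on small machines.

```python
def sql_max_memory_mb(total_ram_gb):
    """Suggest a SQL Server 'max server memory' cap (in MB) that leaves
    headroom for a read cache: 16 GB free when possible, else 8, else 4."""
    for reserve_gb in (16, 8, 4):
        if total_ram_gb > 2 * reserve_gb:   # assumption: reserve at most half
            return (total_ram_gb - reserve_gb) * 1024
    return (total_ram_gb // 2) * 1024       # very small box: split it evenly

print(sql_max_memory_mb(64))   # 49152 MB -> leaves 16 GB free
print(sql_max_memory_mb(12))   # 8192 MB  -> leaves 4 GB free
```

The cap itself is applied inside SQL Server (its "max server memory" setting); the Microsoft document and video linked at the end of this article walk through those steps.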

We have intentionally designed our software so that it can’t compete for system resources with anything else that is running.  This means our software should never trigger a memory starvation situation.  IntelliMemory will only use some of the free or idle memory that isn’t being used by anything else, and will dynamically scale our cache up or down, handing memory back to Windows if other processes or applications need it.

Think of our IntelliMemory caching strategy as complementary to what SQL Server caching does, but on a much broader scale.  IntelliMemory caching is designed to eliminate the type of storage I/O traffic that tends to slow the storage down the most.  While that tends to be the smaller, more random read I/O traffic, there are often times many repetitive I/Os, intermixed with larger I/Os, which wreak havoc and cause storage bandwidth issues.  Also keep in mind that I/Os satisfied from memory are 10-15 times faster than going to flash.  

So, what’s the secret sauce?  We use a very lightweight storage filter driver to gather telemetry data.  This allows the software to learn useful things like:

- What are the main applications in use on a machine?
- What type of files are being accessed and what type of storage I/O streams are being generated?
- And, at what times of the day, the week, the month, the quarter? 

IntelliMemory is aware of the 'hot blocks' of data that need to be in the memory cache, and more importantly, when they need to be there.  Since we only load data we know you’ll reference in our cache, IntelliMemory is far more efficient in terms of memory usage versus I/O performance gains.  We can also use that telemetry data to figure out how best to size the storage I/O packets to give the main application the best performance.  If the way you use that machine changes over time, we automatically adapt to those changes, without you having to reconfigure or 'tweak' any settings.
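As a loose analogy for the telemetry-driven part (the real product's heuristics are far richer and time-aware), deciding which "hot blocks" deserve cache slots can be as simple as a frequency count over an access log:

```python
from collections import Counter

def hot_blocks(access_log, cache_slots):
    """Choose the most frequently read block IDs from telemetry."""
    counts = Counter(access_log)
    return {block for block, _ in counts.most_common(cache_slots)}

log = [7, 3, 7, 9, 7, 3, 1, 7, 3, 2]   # block IDs seen by a filter driver
print(sorted(hot_blocks(log, 2)))      # [3, 7]: the two hottest blocks
```

Only the blocks that keep getting re-read earn a place in memory, which is why a telemetry-driven cache can be small yet still absorb a large share of the read traffic.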


Stay tuned for the next in the series: Thinking Outside the Box, Part 2 – Test vs. Real World Workload Evaluation.

 

Main takeaways:

- Most of the servers in your environment already have enough free and available memory and will need no adjustments of any kind.
- Limit SQL memory so that there is a minimum of 8 GB free for any server with more than 40 GB of memory and a minimum of 6 GB free for any server with 32 GB of memory.  If you have the room, leave 16 GB or more memory free for IntelliMemory to use for caching.
- Another best practice is to deploy our software to all Windows servers that interact with the SQL Server.  More on this in a future article.

 

 

Microsoft Document – Server Memory Server Configuration Options

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options?view=sql-server-2017

 

Short video – Best Practices for Available Memory for V-locity or Diskeeper

https://youtu.be/vwi7BRE58Io

Capping SQL memory is demonstrated at around the 3:00 mark.
