Condusiv Technologies Blog


Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

What Condusiv’s Diskeeper Does for Me

by Tim Warner, Microsoft Cloud & Datacenter MVP 4. November 2019 05:36

I'm a person who uses what he has. For example, my friend Roger purchased a fancy new keyboard, but only uses it on "special occasions" because he wants to keep the hardware in pristine condition. That isn't me--my stuff wears out because I rely on it and use it continuously.

To this point, the storage on my Windows 10 workstation computer takes a heavy beating because I read and write data to my hard drives every single day. I have quite an assortment of fixed drives on this machine:

·         mechanical hard disk drive (HDD)

·         "no moving parts" solid state drive (SSD)

·         hybrid SSD/HDD drive

Today I'd like to share with you some ways that Condusiv’s Diskeeper helps me stay productive. Trust me--I'm no salesperson. Condusiv isn't paying me to write this article. My goal is to share my experience with you so you have something to think about in terms of disk optimization options for your server and workstation storage.

Diskeeper® or SSDKeeper®?

I've used Diskeeper on my servers and workstations since 2000. How time flies! A few years ago, I was confused when Condusiv released SSDkeeper, their SSD optimization tool that works by optimizing your data as it's written to disk.

Specifically, my confusion lay in the fact that you can't have Diskeeper and SSDkeeper installed on the same machine simultaneously. As you saw above, I almost always have a mixture of HDD and SSD drives. What am I losing by installing either Diskeeper or SSDkeeper, but not both?

You lose nothing, because Diskeeper and SSDkeeper share most of the same features. Diskeeper, like SSDkeeper, can optimize solid-state disks using IntelliMemory caching and IntelliWrite, and SSDkeeper, like Diskeeper, can optimize magnetic disks using Instant Defrag. Both products can automatically determine the storage type and apply the optimal technology.

Thus, your decision of whether to purchase Diskeeper or SSDkeeper comes down to which technology the majority of your disks use: HDD or SSD.

Allow me to explain what those three product features mean in practice:

·         IntelliMemory®: Uses unallocated system random access memory (RAM) for disk read caching

·         IntelliWrite®: Prevents fragmentation in the first place by writing data sequentially to your hard drive as it's created

·         Instant Defrag™: Uses a Windows service to perform "just in time" disk defragmentation

In Diskeeper, click Settings > System > Basic to verify you're taking advantage of these features. I show you the interface in Figure 1.

 

Figure 1. Diskeeper settings.
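By the way, if you're a PowerShell person like me, you can also confirm that the Diskeeper background service (the engine behind Instant Defrag) is up and running. Here's a minimal sketch; I'm using a wildcard on the display name because the exact service name can vary by product version:

# Sketch: confirm the Diskeeper background service is installed and running.
# The '*Diskeeper*' display-name match is an assumption; adjust it for your version.
Get-Service -DisplayName '*Diskeeper*' |
    Select-Object Status, Name, DisplayName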

What about external drives?

In Condusiv's Top FAQs document you'll note that Diskeeper no longer supports external drives. Their justification for this decision is that their customers generally do not use external USB drives for high-performance, input/output (I/O)-intensive applications.

If you want to optimize external drives, you can do so graphically with the Windows 10 Optimize Drives utility, or you can run defrag.exe from an elevated command prompt.

For example, here I am running a fragmentation analysis on my H: volume, an external SATA HDD:

PS C:\users\tim> defrag H: /A
Microsoft Drive Optimizer
Copyright (c) Microsoft Corp.

Invoking analysis on TWARNER1 (H:)...

The operation completed successfully.

Post Defragmentation Report:

         Volume Information:
                Volume size                 = 1.81 TB
                Free space                  = 1.44 TB
                Total fragmented space      = 0%
                Largest free space size     = 1.43 TB

         Note: File fragments larger than 64MB are not included in the fragmentation statistics.

         You do not need to defragment this volume.

PS C:\users\tim>
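If you'd rather stay in PowerShell, the built-in Optimize-Volume cmdlet (part of the Storage module on Windows 8/Server 2012 and later) performs the same analysis. Here's a minimal sketch, assuming the external drive is still mounted as drive H:

# Analyze fragmentation on the external H: volume from an elevated PowerShell session
Optimize-Volume -DriveLetter H -Analyze -Verbose

# If the analysis recommends it, defragment the volume:
# Optimize-Volume -DriveLetter H -Defrag -Verbose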

Let's look at the numbers!

All Condusiv products make it simple to perform benchmark analyses and run progress reports, and Diskeeper is no exception to this rule. Look at Figure 2--since I rebuilt my Windows 10 workstation and installed Diskeeper in July 2018, I've saved over 20 days of storage I/O time!

Figure 2. Diskeeper dashboard.

Those impressive I/O numbers don't strain credulity when you remember that Diskeeper aggregates I/O values across all my fixed drives, not only one. This time saving is the chief benefit Diskeeper gives me as a working IT professional. The tool gives me back seconds that otherwise I'd spend waiting on disk operations to complete; I then can use that time for productive work instead.

Even more to the point, Diskeeper does this work for me in the background, without my having to remember to run or even schedule defragmentation and optimization jobs. I'm a huge fan of Diskeeper, and I hope you will be, too.

Recommendation

Condusiv offers free 30-day trials that you can download to see how much time the software can save you:

Diskeeper 30-day trial

SSDkeeper 30-day trial

Note: If you have a virtual environment, you can download a 30-day trial of Condusiv’s V-locity (you can also see my review of V-locity 7).

 

Timothy Warner is a Microsoft Most Valuable Professional (MVP) in Cloud and Datacenter Management who is based in Nashville, TN. His professional specialties include Microsoft Azure, cross-platform PowerShell, and all things Windows Server-related. You can reach Tim via Twitter (@TechTrainerTim), LinkedIn or his website, techtrainertim.com.

  

Do I Really Need V-locity on All VMs?

by Rick Cadruvi, Chief Architect 15. August 2019 04:12

V-locity® customers may wonder, “How many VMs do I need to install V-locity on for optimal results? What kind of benefits will I see with V-locity on one or two VMs versus all the VMs on a host?” 

As a refresher…

It is true that V-locity will likely provide significant benefit on that one VM.  It may even be extraordinary.  But loading V-locity on just one VM on a host that may have dozens of VMs won't give you the biggest bang for your buck. V-locity includes many technologies that address storage performance issues in an extremely intelligent manner.  Part of the underlying design is to learn about the specific loads your system has and intelligently adapt to each specific environment presented to it.  That's why we created V-locity especially for virtual environments in the first place. 

As you have experienced, the beauty of V-locity is its ability to deal with the I/O Blender Effect.  When there are multiple VMs on a host, or multiple hosts with VMs that use the same back-end storage system (e.g., a SAN), a "blender" effect occurs as all these VMs send I/O requests up and down the stack.  As you can guess, it can create huge performance bottlenecks. In fact, perhaps the most significant issue that virtualized environments face is the fact that there are MANY performance chokepoints in the ecosystem, especially the storage subsystem.  These chokepoints are robbing 30-50% of your throughput.  This is the dark side of virtualized systems. 

Look at it this way.  VM “A” may have different resource requirements than VM “B” and so on.  Besides performing different tasks with different workloads, they may have different peak usage periods.  What happens when those peaks overlap?  Worse yet, what happens if several of your VMs have very similar resource requirements and workloads that constantly overlap? 

 

The answer is that the I/O Blender Effect takes over and now VM “A” is competing directly with VM “B” and VM “C” and so on.  The blender pours all those resource desires into a funnel, creating bottlenecks with unpredictable performance results.  What is predictable is that performance will suffer, and likely a LOT.

V-locity was designed from the ground up to intelligently deal with these core issues.  The guiding question in front of us as it was being designed and engineered was: 

Given your workload and resources, how can V-locity help you overcome the I/O Blender Effect? 

By making sure that V-locity will adapt to your specific workload and having studied what kinds of I/Os amplify the I/O Blender Effect, we were able to add intelligence to specifically go after those I/Os.  We take a global view.  We aren’t limited to a specific application or workload.  While we do have technologies that shine under certain workloads, such as transactional SQL applications, our goal is to optimize the entire ecosystem.  That’s the only way to overcome the I/O Blender Effect.

So, while we can indeed give you great gains on a single VM, V-locity truly gets to shine and show off its purpose when it can intelligently deal with the chokepoints that create the I/O Blender Effect.  That means you should add V-locity to ALL your VMs.  With our no-reboot installation and a V-locity Management Console, it’s fast and easy to cover and manage your environment.

If you have V-locity on all the VMs on your host(s), let us know how it is going! If you don’t yet, contact your account manager who can get you set up!

For an in-depth refresher, watch our 10-min whiteboard video.

 

Thinking Outside the Box - How to Dramatically Improve SQL Performance, Part 1

by Howard Butler 3. April 2019 04:10

If you are reading this article, most likely you are about to evaluate V-locity® or Diskeeper® on a SQL Server (or already have our software installed on a few servers) and have some questions about why we recommend, as a best practice, placing a memory limit on SQL Server in order to get the best performance from that server once you've installed one of our solutions.

To give our products a fair evaluation, there are certain best practices we recommend you follow.  Now, while it is true that most servers already have enough memory and need no adjustments or additions, a select group of high-I/O, high-performance, or high-demand servers may need a little extra care to run at peak performance.

This article is specifically focused on those servers; the best-practice recommendations below for available memory are targeted precisely at those "work-horse" servers.  So, rest assured you don't need to worry about adding tons of memory to your environment for all your other servers.

One best practice we won’t dive into here, which will be covered in a separate article, is the idea of deploying our software solutions to other servers that share the workload of the SQL Server, such as App Servers or Web Servers that the data flows through.  However, in this article we will shine the spotlight on best practices for SQL Server memory limits.

We’ve sold over 100 million licenses in over 30 years of providing Condusiv® Technologies' patented software.  As a result, we take a longer-term and more global view of improving performance, especially with the IntelliMemory® caching component that is part of V-locity and Diskeeper. We care about maximizing overall performance, knowing that it will ultimately improve application performance.  We have a significant number of different technologies that look for I/Os that we can eliminate from the stream to the actual storage infrastructure.  Some of them look for inefficiencies caused at the file system level.  Others take a broader look at the disk level to optimize I/O that wouldn't normally be visible as performance-robbing.  We use an analytical approach to look for the I/O reduction that gives the most bang for the buck.  This has evolved over the years as technology changes.  What hasn't changed is our global and long-term view of actual application usage of the storage subsystem and maximizing performance, especially in ways that are not obvious.

Our software solutions eliminate I/Os to the storage subsystem that the database engine is not directly concerned with and as a result we can greatly improve the speed of I/Os sent to the storage infrastructure from the database engine.  Essentially, we dramatically lessen the number of competing I/Os that slow down the transaction log writes, updates, data bucket reads, etc.  If the I/Os that must go to storage anyway aren’t waiting for I/Os from other sources, they complete faster.  And, we do all of this with an exceptionally small amount of idle, free, unused resources, which would be hard pressed for anyone to even detect through our self-learning and dynamic nature of allocating and releasing resources depending on other system needs.

It’s common knowledge that SQL Server has specialized caches for the indexes, transaction logs, etc.  At a basic level the SQL Server cache does a good job, but it is also common knowledge that it’s not very efficient.  It uses up way too much system memory, is limited in scope of what it caches, and due to the incredible size of today’s data stores and indexes it is not possible to cache everything.  In fact, you’ve likely experienced that out of the box, SQL Server will grab onto practically all the available memory allocated to a system.

It is true that if SQL Server memory usage is left uncapped, there typically wouldn't be enough memory for Condusiv's software to create a cache with.  Hence why we recommend placing a maximum memory limit on SQL Server, leaving enough memory for the IntelliMemory cache to help offload more of the I/O traffic.  For best results, you can easily cap the amount of memory that SQL Server consumes for its own form of caching or buffering.  At the end of this article I have included a link to a Microsoft document on how to set Max Server Memory for SQL Server, as well as a short video to walk you through the steps.

A general rule of thumb for busy SQL database servers would be to limit SQL memory usage to keep at least 16 GB of memory free.  This would allow enough room for the IntelliMemory cache to grow and really make that machine’s performance 'fly' in most cases.  If you can’t spare 16 GB, leave 8 GB.  If you can’t afford 8 GB, leave 4 GB free.  Even that is enough to make a difference.  If you are not comfortable with reducing the SQL Server memory usage, then at least place a maximum value of what it typically uses and add 4-16 GB of additional memory to the system.  
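To make the arithmetic concrete, here is a minimal PowerShell sketch of one way to apply that rule of thumb: it computes a cap that leaves 16 GB free and applies it with sp_configure. It assumes the SqlServer PowerShell module is installed and a default local instance; adjust the headroom value and instance name for your environment.

# Sketch: cap SQL Server's max server memory so roughly 16 GB stays free
# for the OS and the IntelliMemory cache. Assumes the SqlServer module and
# a default local instance; tune $headroomMB (e.g., 8192 or 4096) as needed.
$headroomMB = 16 * 1024
$totalMB    = [int]((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1MB)
$maxSqlMB   = $totalMB - $headroomMB

$query = @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', $maxSqlMB; RECONFIGURE;
"@

Invoke-Sqlcmd -ServerInstance 'localhost' -Query $query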

We have intentionally designed our software so that it can’t compete for system resources with anything else that is running.  This means our software should never trigger a memory starvation situation.  IntelliMemory will only use some of the free or idle memory that isn’t being used by anything else, and will dynamically scale our cache up or down, handing memory back to Windows if other processes or applications need it.

Think of our IntelliMemory caching strategy as complementary to what SQL Server caching does, but on a much broader scale.  IntelliMemory caching is designed to eliminate the type of storage I/O traffic that tends to slow the storage down the most.  While that tends to be the smaller, more random read I/O traffic, there are often many repetitive I/Os, intermixed with larger I/Os, which wreak havoc and cause storage bandwidth issues.  Also keep in mind that I/Os satisfied from memory are 10-15 times faster than going to flash.  

So, what’s the secret sauce?  We use a very lightweight storage filter driver to gather telemetry data.  This allows the software to learn useful things like:

- What are the main applications in use on a machine?
- What type of files are being accessed and what type of storage I/O streams are being generated?
- And, at what times of the day, the week, the month, the quarter? 

IntelliMemory is aware of the 'hot blocks' of data that need to be in the memory cache, and more importantly, when they need to be there.  Since we only load data we know you’ll reference in our cache, IntelliMemory is far more efficient in terms of memory usage versus I/O performance gains.  We can also use that telemetry data to figure out how best to size the storage I/O packets to give the main application the best performance.  If the way you use that machine changes over time, we automatically adapt to those changes, without you having to reconfigure or 'tweak' any settings.


Stay tuned for the next in the series: Thinking Outside The Box Part 2 – Test vs. Real World Workload Evaluation.

 

Main takeaways:

- Most of the servers in your environment already have enough free and available memory and will need no adjustments of any kind.
- Limit SQL memory so that there is a minimum of 8 GB free for any server with more than 40 GB of memory, and a minimum of 6 GB free for any server with 32 GB of memory.  If you have the room, leave 16 GB or more of memory free for IntelliMemory to use for caching (a quick way to check what a server has free is sketched below).
- Another best practice is to deploy our software to all Windows servers that interact with the SQL Server.  More on this in a future article.
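Here is a minimal PowerShell sketch of that quick check; it simply reports a server's total and currently free physical memory so you can compare against the guidelines above.

# Sketch: report total vs. free physical memory on this server.
# Win32_OperatingSystem reports these values in kilobytes; dividing by 1MB (1,048,576) yields GB.
$os      = Get-CimInstance Win32_OperatingSystem
$totalGB = [math]::Round($os.TotalVisibleMemorySize / 1MB, 1)
$freeGB  = [math]::Round($os.FreePhysicalMemory / 1MB, 1)
"Total: $totalGB GB   Free: $freeGB GB"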

 

 

Microsoft Document – Server Memory Server Configuration Options

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options?view=sql-server-2017

 

Short video – Best Practices for Available Memory for V-locity or Diskeeper

https://youtu.be/vwi7BRE58Io

At around the 3:00 minute mark, capping SQL Memory is demonstrated.

SQL Server Database Performance

by Dawn Richcreek 15. March 2019 07:42

How do I get the most performance from my SQL Server?

SQL Server applications are typically the most I/O-intensive applications in any enterprise and thus are prone to suffer performance degradation. Anything a database administrator can do to reduce the amount of I/O necessary to complete a task will improve the application's performance on that server.

Excess and noisy I/O has typically been found to be the root cause of numerous SQL performance problems such as:

  • SQL query timeouts
  • SQL crashes
  • SQL latency
  • Slow data transfers
  • Slow or sluggish SQL-based applications
  • Reports taking too long
  • Back office batch jobs bleeding over into production hours
  • User complaints; users having to wait for data

 

Some of the most common actions DBAs often resort to are:

  • Tuning queries to minimize the amount of data returned
  • Adding extra spindles or flash for performance
  • Increasing RAM
  • Index maintenance to improve read and/or write performance

 

Most performance degradation is a software problem that can be solved by software

None of these actions will prevent hardware bottlenecks that occur due to the FACT that 30-40% of performance is being robbed by small, fractured, random I/O generated by the Windows operating system (that is, any Windows operating system, including Windows 10 and Windows Server 2019).

 

Two Server I/O Inefficiencies



As the storage layer has been logically separated from the compute layer and more systems are being virtualized, Windows handles I/O logically rather than physically, which means it breaks down reads and writes to their lowest common denominator. This creates tiny, fractured, random I/O and a "noisy" environment that becomes even worse in a virtual environment due to the "I/O blender effect".



This is what a healthy I/O stream SHOULD look like in order to get optimum performance from your hardware infrastructure. With a nice, healthy relationship between I/O and data, you get clean, contiguous writes and reads with every I/O operation.

Return Optimum Performance – Solve the Root Cause, Instantly

 

Condusiv®'s patented solutions address root-cause performance issues at the point of origin where I/O is created, by ensuring large, clean, contiguous writes from Windows to eliminate the "death by a thousand cuts" scenario of many small writes and reads that chew up performance. Condusiv solutions electrify the performance of Windows servers even further with the addition of DRAM caching – using idle, unused DRAM to serve hot reads without creating memory contention or resource starvation. Condusiv's "Set It and Forget It" software optimizes both writes and reads to solve your toughest application performance challenges. Video: Condusiv I/O Reduction Software Overview.

 

Lab Test Results with V-locity I/O reduction software installed


 

Best Practice Tips to Boost SQL Performance with V-locity

 

 

By following the best practices outlined here, users can achieve a 2X or faster boost in MS-SQL performance with Condusiv’s V-locity® I/O reduction software.

-Provision an additional 4-16 GB of memory to the SQL Server if you have additional memory to give.

-Cap MS-SQL memory usage, leaving the additional memory for the OS and our software. Note – Condusiv software will leverage whatever is unused by the OS.

-If you have no additional memory to add, cap SQL memory usage leaving 8 GB for the OS and our software. Note – this may not achieve 2X gains, but it will likely boost performance 30-50%, as SQL is highly inefficient with its memory usage.

-Download and install the software – condusiv.com/try. No SQL code changes needed. No reboot required. Note – allow 24 hours for the algorithms to adjust.

-After a few days in production, pull up the dashboard and look for a 50% reduction in I/O traffic to storage. Note – if you are offloading less than 50% of I/O traffic, consider adding more memory for the software to leverage (a quick way to check current memory headroom is sketched below) and watch the benefit rise on read-heavy apps.
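As a rough way to check that memory headroom, the PowerShell sketch below samples the standard Available MBytes performance counter for about a minute; the interval and sample count are just examples.

# Sketch: sample available physical memory every 5 seconds, 12 times,
# to get a feel for how much idle memory is typically free for read caching.
Get-Counter -Counter '\Memory\Available MBytes' -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples[0].CookedValue }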

The Challenge of IT Cost vs Performance

by Jim D’Arezzo, CEO 19. February 2019 06:26

In over 30 years in the IT business, I can count on one hand the number of times I’ve heard an IT manager say, “The budget is not a problem. Cost is no object.”

It is as true today as it was 30 years ago.  That is, increasing pressure on the IT infrastructure, rising data loads and demands for improved performance are pitted against tight budgets.  Frankly, I’d say it’s gotten worse – it’s kind of a good news/bad news story. 

The good news is there is far more appreciation of the importance of IT management and operations than ever before.  CIOs now report to the CEO in many organizations; IT and automation have become an integral part of business; and of course, everyone is a heavy tech user on the job and in private life as well. 

The bad news is the demand for end-user performance has skyrocketed; the amount of data processed has exploded; and the growing number of uses (read: applications) of data is like a rising tide threatening to swamp even the most well-staffed and richly financed IT organizations.

The balance between keeping IT operations up and continuously serving the end-user community while keeping costs manageable is quite a trick these days.  Capital expenditures on new hardware and infrastructure, and operational expenditures on personnel, subscriptions, cloud-based services or managed service providers, can become a real dilemma for IT management. 

An IT executive must be attuned to changes in technology, changes in his/her own business and the changing nature of the existing infrastructure as the manager tries to extend the maximum life of equipment. 

Performance demands keep IT professionals awake at night.  The hard truth is the dreaded 2:00 a.m. call regarding a crashed server or network operation, or the halt of operations during a critical business period (think end of year closing, peak sales season, or inventory cycle) reveals that in many IT organizations, they’re holding on by the skin of their teeth.

Condusiv has been in the business of improving the performance of Windows systems for 30 years.  We've seen it all.  One of the biggest mistakes an IT decision-maker can make is to go along with the "common wisdom" (primarily pushed by hardware manufacturers) that the only way to improve system and application performance is to buy new hardware.  Certainly, at some point hardware upgrades are necessary, but the fact is, some 30-40% of performance is being robbed by small, fractured, random I/O generated by the Windows operating system (that is, any Windows operating system, including Windows 10 and Windows Server 2019; see also the earlier article Windows is Still Windows).  Don't get me wrong, Windows is an amazing solution used by some 80% of all systems on the planet.  But as the storage layer has been logically separated from the compute layer and more systems are being virtualized, Windows handles I/O logically rather than physically, which means it breaks down reads and writes to their lowest common denominator, creating tiny, fractured, random I/O that makes for a "noisy" environment.  Add a growing number of virtualized systems into the mix and you really create overhead (you may have even heard of the "I/O blender effect").

The bottom line: much of performance degradation is a software problem that can be solved by software.  So, rather than buying a "forklift upgrade" of new hardware, our customers are offloading 30-50% or more of their I/O, which dramatically improves performance.  By simply adding our patented software, our customers avoid the disruption of migrating to new systems, rip-and-replace projects, end-user training and the rest of that challenge. 

Yes, the above paragraph could be considered a pitch for our software, but the fact is, we’ve sold over 100 million copies of our products to help IT professionals get some sleep at night.  We’re the world leader in I/O reduction. We improve system performance an average of 30-50% or more (often far more).  Our products are non-disruptive to the point that we even trademarked the term “Set It and Forget It®”.  We’re proud of that, and the help we’re providing to the IT community.

 

 

To try for yourself, download a free, 30-day trial version (no reboot required) at www.condusiv.com/try
