Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Thinking Outside the Box - How to Dramatically Improve SQL Performance, Part 1

by Howard Butler 3. April 2019 04:10

If you are reading this article, then most likely you are about to evaluate V-locity® or Diskeeper® on a SQL Server (or already have our software installed on a few servers) and have some questions about why it is a best practice recommendation to place a memory limit on SQL Servers in order to get the best performance from that server once you’ve installed one of our solutions.

To give our products a fair evaluation, there are certain best practices we recommend you follow.  Now, while it is true that most servers already have enough memory and need no adjustments or additions, a select group of high-I/O, high-performance, or high-demand servers may need a little extra care to run at peak performance.

This article is focused specifically on those servers, and the best-practice recommendations below for available memory are targeted precisely at those “work-horse” servers.  So, rest assured you don’t need to worry about adding tons of memory to all the other servers in your environment.

One best practice we won’t dive into here, which will be covered in a separate article, is the idea of deploying our software solutions to other servers that share the workload of the SQL Server, such as App Servers or Web Servers that the data flows through.  However, in this article we will shine the spotlight on best practices for SQL Server memory limits.

We’ve sold over 100 million licenses in over 30 years of providing Condusiv® Technologies’ patented software.  As a result, we take a longer-term and more global view of improving performance, especially with the IntelliMemory® caching component that is part of V-locity and Diskeeper.  We care about maximizing overall performance, knowing that it will ultimately improve application performance.  We have a significant number of different technologies that look for I/Os we can eliminate from the stream to the actual storage infrastructure.  Some of them look for inefficiencies caused at the file system level.  Others take a broader look at the disk level to optimize I/O that wouldn’t normally be visible as performance-robbing.  We use an analytical approach to look for the I/O reduction that gives the most bang for the buck.  This has evolved over the years as technology changes.  What hasn’t changed is our global, long-term view of how applications actually use the storage subsystem and how to maximize performance, especially in ways that are not obvious.

Our software solutions eliminate I/Os to the storage subsystem that the database engine is not directly concerned with, and as a result we can greatly improve the speed of I/Os sent to the storage infrastructure from the database engine.  Essentially, we dramatically lessen the number of competing I/Os that slow down the transaction log writes, updates, data bucket reads, etc.  If the I/Os that must go to storage anyway aren’t waiting for I/Os from other sources, they complete faster.  And we do all of this using an exceptionally small amount of idle, free, unused resources, an amount so small it would be hard for anyone to even detect, thanks to the self-learning, dynamic way we allocate and release resources depending on other system needs.

It’s common knowledge that SQL Server has specialized caches for the indexes, transaction logs, etc.  At a basic level the SQL Server cache does a good job, but it is also widely known that it’s not very efficient.  It uses up way too much system memory, is limited in the scope of what it caches, and, given the incredible size of today’s data stores and indexes, it cannot cache everything.  In fact, you’ve likely experienced that out of the box, SQL Server will grab onto practically all the available memory allocated to a system.

It is true that if SQL Server memory usage is left uncapped, there typically wouldn’t be enough memory for Condusiv’s software to create a cache with.  Hence, we recommend you set a maximum memory limit in SQL Server to leave enough memory for the IntelliMemory cache to help offload more of the I/O traffic.  For best results, simply cap the amount of memory that SQL Server consumes for its own form of caching or buffering.  At the end of this article I have included a link to a Microsoft document on how to set Max Server Memory for SQL, as well as a short video to walk you through the steps.

A general rule of thumb for busy SQL database servers is to limit SQL memory usage so that at least 16 GB of memory stays free.  This allows enough room for the IntelliMemory cache to grow and, in most cases, really makes that machine’s performance 'fly'.  If you can’t spare 16 GB, leave 8 GB.  If you can’t afford 8 GB, leave 4 GB free.  Even that is enough to make a difference.  If you are not comfortable with reducing the SQL Server memory usage, then at least cap it at the value it typically uses and add 4-16 GB of additional memory to the system.
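
For illustration only (the host size here is a hypothetical example, not a figure from this article), on a server with 64 GB of RAM you could leave roughly 16 GB free by capping SQL Server at 48 GB with sp_configure:

-- Hypothetical example: cap SQL Server at 48 GB (49152 MB) on a 64 GB host,
-- leaving roughly 16 GB free for the OS and the IntelliMemory cache.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 49152;
RECONFIGURE;

The same value can also be set on the Memory page of the server properties dialog in SQL Server Management Studio; the Microsoft document linked at the end of this article covers both approaches.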

We have intentionally designed our software so that it can’t compete for system resources with anything else that is running.  This means our software should never trigger a memory starvation situation.  IntelliMemory will only use some of the free or idle memory that isn’t being used by anything else, and will dynamically scale our cache up or down, handing memory back to Windows if other processes or applications need it.

Think of our IntelliMemory caching strategy as complementary to what SQL Server caching does, but on a much broader scale.  IntelliMemory caching is designed to eliminate the type of storage I/O traffic that tends to slow the storage down the most.  While that tends to be the smaller, more random read I/O traffic, there are often many repetitive I/Os, intermixed with larger I/Os, that wreak havoc and cause storage bandwidth issues.  Also keep in mind that I/Os satisfied from memory are 10-15 times faster than going to flash.

So, what’s the secret sauce?  We use a very lightweight storage filter driver to gather telemetry data.  This allows the software to learn useful things like:

- What are the main applications in use on a machine?
- What type of files are being accessed and what type of storage I/O streams are being generated?
- And, at what times of the day, the week, the month, the quarter? 

IntelliMemory is aware of the 'hot blocks' of data that need to be in the memory cache and, more importantly, when they need to be there.  Since we only load data into our cache that we know you’ll reference, IntelliMemory is far more efficient in terms of the memory used for the I/O performance gained.  We can also use that telemetry data to figure out how best to size the storage I/O packets to give the main application the best performance.  If the way you use that machine changes over time, we automatically adapt to those changes, without you having to reconfigure or 'tweak' any settings.


Stay tuned for the next in the series: Thinking Outside the Box, Part 2 – Test vs. Real-World Workload Evaluation.

 

Main takeaways:

- Most of the servers in your environment already have enough free and available memory and will need no adjustments of any kind.
- Limit SQL memory so that there is a minimum of 8 GB free for any server with more than 40 GB of memory, and a minimum of 6 GB free for any server with 32 GB of memory.  If you have the room, leave 16 GB or more of memory free for IntelliMemory to use for caching (a quick way to check current memory settings follows this list).
- Another best practice is to deploy our software to all Windows servers that interact with the SQL Server.  More on this in a future article.
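
To check where a given server stands before choosing a limit, a small illustrative query (not from the original article) can report the host's total physical memory and the current max server memory setting:

-- Illustrative only: total physical memory on the host, in MB
SELECT total_physical_memory_kb / 1024 AS total_physical_memory_mb
FROM sys.dm_os_sys_memory;

-- Current 'max server memory' cap, in MB; the default of 2147483647
-- effectively means SQL Server is uncapped
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';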

 

 

Microsoft Document – Server Memory Server Configuration Options

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options?view=sql-server-2017

 

Short video – Best Practices for Available Memory for V-locity or Diskeeper

https://youtu.be/vwi7BRE58Io

At around the 3:00 mark, capping SQL memory is demonstrated.

The Challenge of IT Cost vs Performance

by Jim D’Arezzo, CEO 19. February 2019 06:26

In over 30 years in the IT business, I can count on one hand the number of times I’ve heard an IT manager say, “The budget is not a problem. Cost is no object.”

It is as true today as it was 30 years ago.  That is, increasing pressure on the IT infrastructure, rising data loads and demands for improved performance are pitted against tight budgets.  Frankly, I’d say it’s gotten worse – it’s kind of a good news/bad news story. 

The good news is there is far more appreciation of the importance of IT management and operations than ever before.  CIOs now report to the CEO in many organizations; IT and automation have become an integral part of business; and of course, everyone is a heavy tech user on the job and in private life as well. 

The bad news is the demand for end-user performance has skyrocketed; the amount of data processed has exploded; and the growing number of uses (read: applications) of data is like a rising tide threatening to swamp even the most well-staffed and richly financed IT organizations.

Balancing the need to keep IT operations up and continuously serving the end-user community against keeping costs manageable is quite a trick these days.  Weighing capital expenditures on new hardware and infrastructure against operational expenditures on personnel, subscriptions, and cloud-based or managed service providers can become a real dilemma for IT management.

An IT executive must be attuned to changes in technology, changes in his/her own business, and the changing nature of the existing infrastructure, all while trying to extend the maximum life of equipment.

Performance demands keep IT professionals awake at night.  The hard truth is that the dreaded 2:00 a.m. call about a crashed server or network operation, or a halt in operations during a critical business period (think end-of-year closing, peak sales season, or inventory cycle), reveals that many IT organizations are holding on by the skin of their teeth.

Condusiv has been in the business of improving the performance of Windows systems for 30 years.  We’ve seen it all.  One of the biggest mistakes an IT decision-maker can make is to go along with the “common wisdom” (primarily pushed by hardware manufacturers) that the only way to improve system and application performance is to buy new hardware.  Certainly, at some point hardware upgrades are necessary, but the fact is that some 30-40% of performance is being robbed by small, fractured, random I/O generated by the Windows operating system (that is, any Windows operating system, including Windows 10 or Windows Server 2019; also see the earlier article Windows is Still Windows).

Don’t get me wrong, Windows is an amazing solution used by some 80% of all systems on the planet.  But as the storage layer has been logically separated from the compute layer and more systems are virtualized, Windows handles I/O logically rather than physically, which means it breaks reads and writes down to their lowest common denominator, producing tiny, fractured, random I/O and a “noisy” environment.  Add a growing number of virtualized systems into the mix and you really create overhead (you may have even heard of the “I/O blender effect”).

The bottom line: much of performance degradation is a software problem that can be solved by software.  So, rather than buying a “forklift upgrade” of new hardware, our customers are offloading 30-50% or more of their I/O, which dramatically improves performance.  By simply adding our patented software, our customers avoid the disruption of migrating to new systems, rip-and-replace, end-user training and the rest of that challenge.

Yes, the above paragraph could be considered a pitch for our software, but the fact is, we’ve sold over 100 million copies of our products to help IT professionals get some sleep at night.  We’re the world leader in I/O reduction. We improve system performance an average of 30-50% or more (often far more).  Our products are non-disruptive to the point that we even trademarked the term “Set It and Forget It®”.  We’re proud of that, and the help we’re providing to the IT community.

 

 

To try for yourself, download a free, 30-day trial version (no reboot required) at www.condusiv.com/try

Can you relate? 906 IT Pros Talk About I/O Performance, Application Troubles and More

by Dawn Richcreek 8. January 2019 03:44

We just completed our 5th annual I/O Performance Survey that was conducted with 906 IT Professionals. This is the industry’s largest study of its kind and the research highlights the latest trends in applications that are driving performance demands and how IT Professionals are responding.

I/O Growth Continues to Outpace Expectations

The results show that organizations are struggling to get the full lifecycle from their backend storage as the growth of I/O continues to outpace expectations. The research also shows that IT Pros continue to struggle with user complaints related to sluggish performance from their I/O intensive applications, especially citing MS-SQL applications.

Comprehensive Research Data

The survey consists of 27 detailed questions designed to identify the impact of I/O growth in the modern IT environment. In addition to multiple choice questions, the survey included optional open responses, allowing a respondent to provide commentary on why they selected a particular answer.  All the individual responses have been included to help readers dive deeply into any question. The full report is available at https://learn.condusiv.com/2019-IO-Performance-Study.html

Summary of Key Findings 

1.    I/O Performance is important to IT Pros: The vast majority of IT Pros consider I/O Performance an important part of their job responsibilities. Over a third of these note that growth of I/O from applications is outpacing the useful lifecycle they expect from their underlying storage. 

2.    Application performance is suffering: Half of the IT Pros responsible for I/O performance say they currently have applications that are tough to support from a systems performance standpoint. The toughest applications cited were: SQL, SAP, Custom/Proprietary apps, Oracle, ERP, Exchange, Database, Data Warehouse, Dynamics, SharePoint, and EMR/EHR. See page 20 for a word cloud graphic. 

3.    SQL is the top troublesome application: The survey confirms that SQL databases are the top business-critical application platform and also the environment that generates the most storage I/O traffic. Nearly a third of the IT Pros responsible for I/O performance state that they are currently experiencing staff/customer complaints due to sluggish applications running on SQL. 

4.    Buying hardware has not solved the performance problems: Nearly three-fourths of IT Pros have added new hardware to improve I/O performance. They have purchased new servers with more cores, new all-flash arrays, new hybrid arrays, server-side SSDs, etc. and yet they still have concerns. In fact, a third have performance concerns that are preventing them from scaling their virtualized infrastructures.  

5.    Still planning to buy hardware: About three-fourths of IT Pros are still planning to continue to invest in hardware to improve I/O performance. 

6.    Lack of awareness: Over half of respondents were unaware of the fact that Windows write inefficiencies generate increasingly smaller writes and reads that dampen performance and that this is a software problem that is not solved by adding new hardware. 

7.    Improve performance via software to avoid expensive hardware purchase: The vast majority of respondents felt it would be urgent/important to improve the performance of their applications via an inexpensive I/O reduction software and avoid an expensive forklift upgrade to their compute, network or storage layers. 

Most Difficult to Support Applications

Below is a word cloud representing hundreds of answers, visually showing the application environments IT Pros are having the most trouble supporting from a performance standpoint. I think you can see the big ones that pop out!

The full report is available at https://learn.condusiv.com/2019-IO-Performance-Study.html

 

The Simple Software Answer

As much as organizations continue to reactively respond to performance challenges by purchasing expensive new server and storage hardware, our V-locity® I/O reduction software offers a far more efficient path by guaranteeing to solve the toughest application performance challenges on I/O intensive systems like MS-SQL. Organizations are able to offload 50% of the I/O traffic hitting their storage, traffic that is nothing but noise chewing up IOPS and dampening performance. As soon as that 50% of bandwidth to storage opens up, sluggish performance disappears and far more storage IOPS are available for other things.

In just 2 minutes, learn more about how V-locity I/O reduction software eliminates the two big I/O inefficiencies in a virtual environment: 2-min Video: Condusiv® I/O Reduction Software Overview

Try it for yourself, download our free 30-day trial – no reboot required

 

This is Why a Top Performing School Recommended Condusiv Software and Doubled Performance

by Dawn Richcreek 6. December 2018 05:04

Lawnswood School is one of the top performing educational institutions in the UK. Their IT environment supports a workload that is split between approximately 1200 students and 200 staff.  With 1400 people and various programs and files to support, they were in search of something to help extend the life of their hardware and increase performance. They turned to Condusiv®’s V-locity® and Diskeeper® to extend the life of their storage hardware and maintain performance for their students and staff.

"Condusiv's V-locity software eliminated almost 50% of all storage I/O requests from having to be dealt with by the disk storage (SAN) layer, and that meant that when replacing the old SAN storage with a 'like-for-like' HP MSA 2040, V-locity gave me the confidence to make the purchase without having to over-spend to over-provision the storage in order to cope with all the excess unnecessary storage I/O traffic that V-locity efficiently eliminates,"said Noel Reynolds, IT Manager at Lawnswood School. Before upgrading his SAN, Noel was able to extend the life of the HP MSA 2000 SAN for 8 years “thanks to Condusiv’s I/O reduction software”.

Knowing how well the IT environment was performing at Lawnswood School, another school reached out to Noel for help, as their IT environment was almost identical, but suffering from slow and sluggish performance. They also had three VMware hosts of the same specification, the older HP MSA 2000 SAN storage and workloads that were pretty much identical. Noel Reynolds noted that: "They were almost a 'clone' school."

He continued: "I did the usual checks to discover why it wasn't working well, such as upgrading the firmware, checking the disks for errors and found nothing wrong other than bad storage performance. After comparing the storage latency, I found that Lawnswood School's disk storage was 20 times faster, even though the hardware, software and workload types were pretty much identical."

"We identified six of the 'most hit' servers and installed Condusiv's software on them. Within 24 hours, we saw a 50% boost in performance. Visibly improved performance had been returned to the users, and this really helped the end user experience.

A great example of a real-world solution." Noel concluded

 

Read the full case study                        Download 30-day trial

Tags:

Defrag | Diskeeper | Disruption, Application Performance, IOPS | EHR | General | SAN | Success Stories | virtualization | V-Locity

Finance Company Deploys V-locity I/O Reduction Software for Blazing Fast VDI

by Dawn Richcreek 27. November 2018 05:27

When the New Mexico Mortgage Finance Authority decided to better support their users by moving away from using physical PCs and migrating to a virtual desktop infrastructure, the challenge was to ensure the fastest possible user experience from their Horizon View VDI implementation.

“Anytime an organization starts talking about VDI, the immediate concern in the IT shop is how well we will be able to support it from a performance standpoint to ensure a pristine end user experience. Although supported by EMC VNXe flash storage with high IOPS, one of our primary concerns had to do with Windows write inefficiencies that chew up a large percentage of flash IOPS unnecessarily. When you’re rolling out a VDI initiative, the one thing you can’t afford to waste is IOPS,” said Joseph Navarrete, CIO, MFA.

After Joseph turned to Condusiv’s “Set-It-and-Forget-It®” V-locity® I/O reduction software and bumped up the memory allocation for his VDI instances, V-locity was able to offload 40% of I/O from storage, resulting in a much faster VDI experience for his users. When he demo’d V-locity on his MS-SQL server instances, V-locity eliminated 39% of his read I/O traffic from storage through DRAM read caching and another 40% of write I/O operations by solving Windows write inefficiencies at the source.

After seeing the performance boost and increased efficiency to his hardware stack, Joseph ensured V-locity was running across all his systems like MS-Exchange, SharePoint, and more.

“With V-locity I/O reduction software running on our VDI instances, users no longer have to wait extra time. The same is now true for our other mission critical applications like MS-SQL. The dashboard within the V-locity UI provides all the necessary analytics about our environment and a view into what the software is actually doing for us. The fact that all of this runs quietly in the background with near-zero overhead impact and no longer requires a reboot to install or upgrade makes the software truly ‘set and forget’,” said Navarrete.

 

Read the full case study                        Download 30-day trial
