Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

How to Achieve 2X Faster MS-SQL Applications

by Brian Morin 8. November 2017 05:31

By following the best practices outlined here, we can virtually guarantee a 2X or greater boost in your MS-SQL performance with our I/O reduction software.

  1) Don’t run our I/O reduction software only on the SQL Server instances; run it on the application servers that sit on top of MS-SQL as well

- It’s not just SQL performance that needs improvement but also the performance of the associated application servers that communicate with SQL. Our software will eliminate a minimum of 30-40% of the I/O traffic from those systems.

  2) Run our I/O reduction software on all the non-SQL systems on the same host/hypervisor

- Sometimes a customer is only concerned with improving SQL performance, so they install our I/O reduction software only on the SQL Server instances. Keep in mind, the other VMs on the same host/hypervisor interfere with the performance of your SQL instances because their chatty I/O contends for the same storage resources. Our software eliminates a minimum of 30-40% of completely unnecessary I/O traffic from those systems, so they no longer interfere with your SQL performance.

- Any customer on the core or host pricing model can deploy the software to an unlimited number of guest machines on the same host. If you are on per-system pricing, consider migrating to a host model if your VM density is 7 or greater.

  3) Cap MS-SQL memory usage, leaving at least 8GB free

- Perhaps the largest SQL inefficiency is how it uses memory. SQL is a memory hog. It takes everything you give it and then does very little with it to actually boost performance, which is why customers see such big gains with our software once memory has been tuned properly. If SQL is left uncapped, our software will not see any memory available to be used for cache, so only our write optimization engine will be in effect. Most DB admins already cap SQL, leaving 4GB for the OS per Microsoft’s own best practice.

- However, when using our software, it is best to begin by capping SQL a little more aggressively, leaving 8GB free (see the sketch below). That gives plenty to the OS, and whatever remains idle will be dynamically leveraged by our software for cache. If just 4GB is available for our software to use as cache, we commonly see customers achieve 50% cache hit rates. It doesn’t take much capacity for our software to drive big gains.
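For reference, capping SQL Server’s memory is a one-time configuration change. Below is a minimal sketch, not part of our product, assuming a Windows-authenticated connection through pyodbc with ODBC Driver 17 installed; the 32GB total, the connection string, and the 8GB reserve are illustrative assumptions you should size for your own server.

```python
# Minimal sketch: cap SQL Server's max memory so idle RAM is left for caching.
# Assumptions: local instance, Windows authentication, ODBC Driver 17 installed.
# The 32GB total and 8GB reserve are illustrative -- size them for your server.
import pyodbc

TOTAL_RAM_MB = 32 * 1024          # example server with 32GB of physical RAM
RESERVE_MB = 8 * 1024             # leave 8GB for the OS and the caching engine
SQL_CAP_MB = TOTAL_RAM_MB - RESERVE_MB

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes;",
    autocommit=True,              # RECONFIGURE must not run inside an open transaction
)
cur = conn.cursor()
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute(f"EXEC sp_configure 'max server memory (MB)', {SQL_CAP_MB}; RECONFIGURE;")
print(f"SQL Server capped at {SQL_CAP_MB} MB, leaving {RESERVE_MB} MB free")
```

The same change can be made interactively in SQL Server Management Studio under Server Properties > Memory; the script simply makes the cap repeatable across instances.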

  4) Consider adding more memory to the SQL Server

- Some customers will add more memory and then cap SQL memory usage at its original level, which leaves the extra RAM for our software to use as cache. This also alleviates concerns about capping SQL aggressively if you feel that doing so may leave the application memory starved. Our software can use up to 128GB of DRAM. Customers who are generous with this approach on read-heavy applications see gains far beyond 2X, with >90% of I/O served from DRAM. Remember, DRAM is 15X faster than SSD and sits next to the CPU. A quick way to check how much RAM is currently sitting idle is sketched below.
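If you are unsure whether the server already has idle RAM worth leaving for cache, the host’s memory counters can tell you. Here is a minimal sketch, assuming the same pyodbc connection style as above and VIEW SERVER STATE permission; the 4GB threshold is an illustrative assumption, not a product requirement.

```python
# Minimal sketch: check how much physical RAM on the SQL host is idle and
# therefore available to a caching engine, before deciding whether to add more.
# Assumptions: pyodbc with ODBC Driver 17, Windows authentication, VIEW SERVER
# STATE permission for sys.dm_os_sys_memory; the 4GB threshold is illustrative.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes;"
)
row = conn.cursor().execute(
    "SELECT total_physical_memory_kb, available_physical_memory_kb "
    "FROM sys.dm_os_sys_memory;"
).fetchone()

total_gb = row.total_physical_memory_kb / (1024 * 1024)
idle_gb = row.available_physical_memory_kb / (1024 * 1024)
print(f"Physical RAM: {total_gb:.1f} GB, currently idle: {idle_gb:.1f} GB")
if idle_gb < 4:
    print("Little idle RAM for caching: add memory or cap SQL more aggressively")
```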

  5) Monitor the dashboard for a 50% reduction in I/O traffic to storage

- When our dashboard shows a 50% reduction in I/O to storage, that’s when you know you have properly tuned your system to be in the range of 2X faster gains to the user, barring any network congestion or delivery issues.

- Capping SQL to leave 8GB free is a good place to start, but it may not always get you to the desired 50% I/O reduction. Monitor the dashboard to see how much I/O is being offloaded and tweak memory usage by capping SQL a little more aggressively. If you believe the system is already memory constrained, add a little more memory so you can cap more aggressively. For every 1-2GB of memory added, expect another 10-25% of read traffic to be offloaded. If you want an independent check outside the dashboard, see the sketch below.
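The dashboard is the authoritative view of how much I/O our software is offloading, but you can also sample the read/write traffic actually reaching the disk with standard Windows performance counters before and after tuning. The following minimal sketch does that with typeperf; the counter paths, sample interval, and output file name are illustrative assumptions.

```python
# Minimal sketch: sample disk read/write rates with Windows performance counters
# (typeperf) so a before/after comparison can corroborate the dashboard's
# reported I/O reduction. Counter paths, interval, and file name are illustrative.
import subprocess

COUNTERS = [
    r"\LogicalDisk(_Total)\Disk Reads/sec",
    r"\LogicalDisk(_Total)\Disk Writes/sec",
]

# Collect 30 one-second samples into a CSV; rerun after tuning and compare averages.
subprocess.run(
    ["typeperf", *COUNTERS,
     "-si", "1", "-sc", "30", "-f", "CSV", "-o", "baseline_io.csv", "-y"],
    check=True,
)
print("Saved samples to baseline_io.csv; repeat after tuning and compare the averages.")
```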

 

Not a customer yet? Download a free trial of Condusiv I/O reduction software and apply these best practice steps at www.condusiv.com/try

 

The Revolution of Our Technology

by Rick Cadruvi, Chief Architect 18. October 2017 12:38

I chose to use the word “Revolution” instead of “Evolution” because, with all due modesty, our patented technology has been more a series of leaps to stay ahead of performance-crushing bottlenecks. After all, our company purpose as stated by our Founder, Craig Jensen, is:

“The purpose of our company is to provide computer technology that enormously increases the production and income of an area.”

We have always been about improving your production. We know your systems are not about having really cool hardware but rather about maximizing your organization’s production. Our passion has been about eliminating the stops, slows and stalls to your application performance and instead, to jack up that performance and give you headroom for expansion. Now, most of you know us by our reputation for Diskeeper®. What you probably don’t know about us is our leadership in system performance software.

We’ve been at this for 35 years with a laser focus. As an example, for years hard drives were the common storage technology and they were slow and limited in size, so we invented numerous File System Optimization technologies such as Defragmentation, I-FAAST®1 and Directory Consolidation to remove the barriers to getting at data quickly. As drive sizes grew, we added new technologies and jettisoned those that no longer gave bang for the buck. Technologies like InvisiTasking® were invented to help maximize overall system performance, while removing bottlenecks.

As SSDs began to emerge, we worked with several OEMs to take advantage of SSDs to dramatically reduce data access times as well as reducing the time it took to boot systems and resume from hibernate. We created technologies to improve SSD longevity and even worked with manufacturers on hybrid drives, providing hinting information, so their drive performance and endurance would be world class.

As storage arrays were emerging, we created technologies to allow them to better utilize storage resources and pre-stage space for future use. We also created technologies targeting performance issues related to file system inefficiencies without negatively affecting storage array technologies like snapshots.

When virtualization was emerging, we could see the VM resource contention issues that would materialize. We used that insight to create file system optimization technologies to deal with those issues before anyone had coined the phrase “I/O Blender Effect”.

We have been doing caching for a very long time2. We have always targeted removing the I/Os that get in your application’s path to data, along with satisfying data from cache, which delivers performance improvements of 50-300% or more. Our goal was not to cache your application-specific data, but rather to make sure your application could access its data much faster. That’s why our unique caching technology has been used by leading OEMs.

Our RAM-based caching solutions include dynamic memory allocation schemes to use resources that would otherwise be idle to maximize overall system performance. When you need those resources, we give them back. When they are idle, we make use of them without your having to adjust anything for the best achievable performance. “Set It and Forget It®” is our trademark for good reason.

We know that staying ahead of the problems you face now, with a clear understanding of what will limit your production in 3 to 5 years, is the best way we can realize our company purpose and help you maximize your production and thus your profitability. We take seriously having a clear vision of where your problems are now and where they will be in the future. As new hardware and software technologies roll out, we will be there removing the new barriers to your performance then, just as we do now.

1. I-FAAST stands for Intelligent File Access Acceleration Sequencing Technology, a technology designed to take advantage of differently performing regions of storage so your hottest data can be retrieved in the fastest time.

2. If I can personally brag, I’ve created numerous caching solutions over a period of 40 years.

Microsoft SQL Team Puts V-locity to the Test

by Brian Morin 15. September 2017 09:12

In a testament to Condusiv's longstanding 20+ year relationship with Microsoft® as a Gold Partner and provider of technologies to Microsoft, Condusiv® became the first software vendor awarded the stringent MS-SQL Server I/O Reliability certification, joining a very short list that includes the likes of Dell® / EMC®, IBM® and HPE®.

Microsoft developed the SQL Server I/O Reliability Program to ensure the reliability, integrity, and availability of vendor products with SQL Server. The program includes a set of requirements that, when complied with and approved by a Microsoft committee of engineers, ensure the product is fully reliable and highly available for SQL Server systems. The certification applies to SQL Server running on Windows Server 2008R2 and later (the most current 2016 release included).

V-locity® Certified for SQL I/O Reliability and Demonstrates Significant SQL Performance Gains

The program itself does not require performance characteristics of products, but it does require I/O testing to exhibit the reliability and integrity of the product. To that end, the full report links to a summary of before/after performance results from a HammerDB test (the preferred load test for measuring MS-SQL performance) on Azure to demonstrate the gains of using V-locity I/O reduction software for SQL Server 2016 on Azure’s Windows Server 2016 Data Center Edition. While transactions per minute increased 28.5% and new orders per minute increased by 28.7%, the gains were considered modest by Condusiv’s standards since only a limited amount of memory was available to be leveraged by V-locity’s patented DRAM caching engine. The typical V-locity customer sees a 50% or better performance improvement to SQL applications. The Azure test system configured by Microsoft did not boost available memory, so it could not showcase the full power of what V-locity can do with as little as 2-4GB of memory.

To read the full report CLICK HERE

 

Case Management Solutions Turns to V-locity I/O Reduction Software to Solve Slow MS-SQL Performance

by Brian Morin 27. July 2017 05:21

A little more than a year ago, Case Management Solutions reached out to Dealflow, a Condusiv® Authorized Reseller, about getting help in finding a solution for what had become a notoriously slow application sitting on MS-SQL supported by NAS storage.

“If a file was 50 pages long, I would sit and watch the page count loading all 50 pages before printing the report. Some of the files we process are more than 500 pages, so I think you can imagine the pain,” said Hal Brooks, Managing Partner, Case Management Solutions.

The problem wasn’t just the time it took to process files; employees would also sit and wait to log into the system and experience delays when going from page to page within the application. Hal reached out to Dealflow, which specializes in building IT solutions that solve customer pain points.

“Case Management Solutions shared with us the pain they were experiencing and the need to find the most cost-effective solution possible. We quickly spotted some necessary server upgrades, but after having seen what V-locity® I/O reduction software had done for our other clients, we knew that was likely the only missing ingredient to tackle their performance issues,” said Lee Owens, VP of Sales, Dealflow.

“After we completed the server upgrades, we installed Condusiv’s V-locity I/O reduction software and Case Management Solutions saw exactly the kind of performance they were hoping to see. Even their backup times dropped,” said Owens.

“Query times improved by 4X. Employees no longer had to wait to login or process files, and no longer experienced delays when going from page to page within the app. Instead of watching all 50 pages count up before printing, it’s almost instantaneous,” said Brooks.

“From our perspective, when Condusiv says they guarantee to fix application performance issues on Windows servers, that’s exactly what they do. We can attest to everything they claim as being true. It has helped all of our customers remove sluggishness from their most important applications,” said Owens.

To read the full story on how V-locity I/O reduction software boosted their MS-SQL performance, read here: http://learn.condusiv.com/rs/246-QKS-770/images/CS-CaseManagement.pdf

 

Tags:

Application Performance | Performance | V-Locity

National Mortgage Lender Eliminates Sluggish MS-SQL Applications with V-locity I/O Reduction Software

by Brian Morin 25. July 2017 06:01

I first chatted with Chuck Keith (Dir of Infrastructure, Supreme Lending) a couple years ago at VMworld. They had a custom-built loan application that sat on MS-SQL that was the most mission critical application to the business. The loan officers relied on it daily to do their jobs effectively. One problem – it was slow. 

 “Our common workloads are supported by our older Dell Compellent arrays, but all of our MS-SQL workloads are supported by newer Nimble storage arrays. As great as Nimble performs from a ‘cost per performance’ standpoint, it simply wasn’t enough with the growth of data and users we had been experiencing. Loan officers were complaining about slow queries taking five to ten minutes to run reports, and up to five seconds to advance from screen to screen within the loan origination system,” said Chuck Keith, Director of Infrastructure, Supreme Lending.

“At the time, we thought our only solution was to invest hundreds of thousands of dollars into new all-flash arrays to get the performance we needed. No matter how well your business is doing, no one wants to have a million-dollar conversation that’s not in budget. We needed to see how we could maximize performance on the hardware we already had, which led to a conversation with Condusiv,” said Keith.

“We examined a couple server-side caching players including Pernix Data and Infinio, but since Condusiv’s V-locity® I/O reduction software does so much more than server-side caching, its proof of concept came out the clear winner on both performance and price. And we didn’t have to add any additional hardware, since we already had a good amount of DRAM,” said Keith.

Keith continued, “Right off the bat, MS-SQL reporting that used to take five to ten minutes dropped to 30 seconds with V-locity, and all user complaints about sluggishness disappeared. We couldn’t believe it.” Moreover, the typical 30-second load times to log into the software package dropped to ten seconds – a 3X improvement. The five-second wait to advance from screen to screen also disappeared, so users could load each screen almost instantly.

“Not only did V-locity solve our ‘death by a thousand cuts’ issue of excessively small writes and reads from Windows VMs, but their DRAM caching engine provided huge gains on top of that by offloading a good percentage of hot reads from our underlying Nimble hybrid storage. It made a huge difference and gave back a big chunk of IOPS to our Nimble storage to be used for other things. We saved hundreds of thousands of dollars by being able to squeeze significantly more performance from the hardware stack we already have,” said Keith.

To read the full story on how V-locity I/O reduction software boosted their MS-SQL performance, read here: http://learn.condusiv.com/rs/246-QKS-770/images/CS_SupremeLending.pdf

Tags:

Performance
