Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

I’m a MEDITECH Hospital with SSDs, Is FAL Growth Still an Issue that Risks Downtime?

by Brian Morin 4. December 2017 07:34

Now that many MEDITECH hospitals have gone all-flash for their backend storage, one of the most common questions we field is whether or not there is still downtime risk from the File Attribute List (FAL) growth issue if the data physically lives on solid-state drives (SSDs).

The main reason this question comes up is that MEDITECH requires “defragmentation,” which most admins assume is only a requirement for a spinning-disk backend. That misconception couldn’t be further from the truth, as the FAL issue has nothing to do with the backend media but rather the file system itself. Moreover, defragmentation processes are damaging to solid-state media, which is why MEDITECH hospitals turn to Condusiv’s V-locity® I/O reduction software, which prevents fragmentation from occurring in the first place and has special engines designed for MEDITECH environments to keep the FAL from reaching its size limit and causing unscheduled downtime.

The File Attribute List is a Windows NTFS file metadata structure referred to as the FAL. The FAL structure can point to different types of file attributes, such as security attributes or standard information like creation and modification dates, and, most importantly, the actual data contained within the file. For example, the FAL keeps track of where all the data is for the file: it contains pointers to file records that indicate the location of the file data on the volume. If that data has to be stored at different logical allocations on the volume (i.e., fragmentation), more pointers are required, which in turn increases the size of the FAL. Herein lies the problem: the FAL size has an upper limit of 256KB, which is comprised of 8192 attribute entries. When that limit is reached, no more pointers can be added, which means NO more data can be added to the file. And if it is a folder file, which keeps track of all the files that reside under that folder, NO more files can be added under that folder. Once this occurs, the application crashes, leading to a best-case scenario of several hours of unscheduled downtime to resolve.
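To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python. The 32-bytes-per-entry figure falls straight out of the limits above (256KB divided by 8192 entries); the assumption that each file fragment consumes exactly one attribute entry is illustrative only, since real NTFS attribute entries vary in size:

```python
# Back-of-the-envelope model of FAL growth under fragmentation.
# Illustrative assumption: each fragment costs one attribute entry;
# real NTFS entries vary in size, so treat the output as a rough guide.

FAL_LIMIT_BYTES = 256 * 1024   # NTFS upper bound on FAL size
MAX_ENTRIES = 8192             # attribute entries at that limit
BYTES_PER_ENTRY = FAL_LIMIT_BYTES // MAX_ENTRIES  # = 32 bytes each

def fragments_until_full(base_attributes: int = 16) -> int:
    """Fragments a file can accumulate before no more data can be
    added, given some fixed entries (security, dates, etc.)."""
    return MAX_ENTRIES - base_attributes

print(f"{BYTES_PER_ENTRY} bytes per attribute entry")
print(f"Limit reached after ~{fragments_until_full():,} fragments")
```

Under this simple model, a folder file, which needs entries to track each file beneath it, runs into the same 8192-entry wall, which is why heavily populated, heavily fragmented folders are the classic failure case.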

Although this blog points out MEDITECH customers experiencing this issue, we have seen this FAL problem occur within non-MEDITECH environments like MS-Exchange and MS-SQL, with varying types of backend storage media from HDDs to all-flash arrays. So, what can be done about it?

The logical solution would seem to be: why not just defragment the volume? Wouldn’t that decrease the number of pointers and decrease the FAL size? The problem is that traditional defragmentation actually causes the FAL to grow in size! While it can decrease the number of pointers, it will not decrease the FAL size; in fact, it can cause the FAL to grow even larger, making the problem worse even though you are attempting to remediate it.

The only solution to this problem is Condusiv’s V-locity® for virtual servers or Diskeeper® Server for physical servers. Both include a special technology called MediWrite®, which suppresses this issue from occurring in the first place and provides special handling if it has already occurred. MediWrite includes:

> Unique FAL handling: As indicated above, traditional methods of defragmentation will cause the FAL to grow even further in size. MediWrite will detect when files have FAL size issues and will use proprietary methods to prevent FAL growth. This is the only engine of its kind in the industry.

> Unique FAL-safe file movement: V-locity and Diskeeper’s free space consolidation engines automatically detect FAL size issues and deploy the MediWrite feature to resolve them.

> Unique FAL growth prevention: Along with MediWrite, V-locity and Diskeeper contain another very important technology called IntelliWrite®, which automatically prevents new fragmentation from occurring. By preventing new fragmentation, IntelliWrite minimizes any further FAL size growth.

> Unique offline FAL consolidation tool: Any MEDITECH hospital that already has a FAL issue can use the embedded offline tool to shrink the FAL-IN-USE size in a very short time (~5 minutes), as opposed to manual processes that take several hours.

> V-locity and Diskeeper have been endorsed by MEDITECH. Click Here to view.

 

 

How to Achieve 2X Faster MS-SQL Applications

by Brian Morin 8. November 2017 05:31

By following the best practices outlined here, we can virtually guarantee a 2X or faster boost in your MS-SQL performance with our I/O reduction software.

  1) Run our I/O reduction software not just on the SQL Server instances but also on the application servers that run on top of MS-SQL

- It’s not just SQL performance that needs improvement, but also the associated application servers that communicate with SQL. Our software will eliminate a minimum of 30-40% of the I/O traffic from those systems.

  2) Run our I/O reduction software on all the non-SQL systems on the same host/hypervisor

- Sometimes a customer is only concerned with improving SQL performance, so they only install our I/O reduction software on the SQL Server instances. Keep in mind, the other VMs on the same host/hypervisor interfere with the performance of your SQL instances due to chatty I/O contending for the same storage resources. Our software eliminates a minimum of 30-40% of that completely unnecessary I/O traffic from those systems, so it doesn’t interfere with your SQL performance.

- Any customer on the core or host pricing model can deploy the software to an unlimited number of guest machines on the same host. If you are on per-system pricing, consider migrating to a host model if your VM density is 7 or greater.

  3) Cap MS-SQL memory usage, leaving at least 8GB free

- Perhaps the largest SQL inefficiency is related to how it uses memory. SQL is a memory hog. It takes everything you give it, then does very little with it to actually boost performance, which is why customers see such big gains with our software once memory has been tuned properly. If SQL is left uncapped, our software will not see any memory available to be used for cache, so only our write optimization engine will be in effect. Note that most DB admins already cap SQL, leaving 4GB for the OS to use, per Microsoft’s own best practice.

- However, when using our software, it is best to begin by capping SQL a little more aggressively, leaving 8GB. That gives plenty to the OS, and whatever is left over as idle will be dynamically leveraged by our software for cache; see the sketch after this step. If 4GB is available to be used as cache by our software, we commonly see customers achieve 50% cache hit rates. It doesn’t take much capacity for our software to drive big gains.
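For admins who prefer to script the cap, here is a minimal sketch using Python with pyodbc to issue SQL Server’s standard sp_configure commands. The server name, the total RAM figure, and the 8GB reserve are assumptions to adjust for your own environment:

```python
# Hedged sketch: cap SQL Server's 'max server memory' so ~8GB stays
# free for the OS and for I/O reduction software to use as cache.
# Assumes pyodbc, a sysadmin login, and a 64GB host -- adjust all three.
import pyodbc

TOTAL_RAM_MB = 64 * 1024        # example host: 64GB of RAM
RESERVE_MB = 8 * 1024           # leave 8GB outside SQL Server
CAP_MB = TOTAL_RAM_MB - RESERVE_MB

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=YOUR_SQL_HOST;Trusted_Connection=yes;",  # placeholder server
    autocommit=True)
cur = conn.cursor()
# 'show advanced options' must be enabled before 'max server memory'
# can be changed; RECONFIGURE applies each setting immediately.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute(
    f"EXEC sp_configure 'max server memory (MB)', {CAP_MB}; RECONFIGURE;")
print(f"SQL Server capped at {CAP_MB} MB; {RESERVE_MB} MB left free")
```

The same two sp_configure calls can of course be run directly in SSMS; the script simply makes the cap repeatable across instances.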

  4) Consider adding more memory to the SQL Server

- Some customers will add more memory and then limit SQL memory usage to what it was using originally, which leaves the extra RAM for our software to use as cache. This also alleviates concerns that capping SQL aggressively may leave the application memory-starved. Our software can use up to 128GB of DRAM. Customers who are generous in this approach on read-heavy applications see otherworldly gains far beyond 2X, with >90% of I/O served from DRAM. Remember, DRAM is 15X faster than SSD and sits next to the CPU.

  5) Monitor the dashboard for a 50% reduction in I/O traffic to storage

- When our dashboard shows a 50% reduction in I/O to storage, that’s when you know you have properly tuned your system to be in the range of 2X faster gains to the user, barring any network congestion issues or delivery issues.

- As much as capping SQL at 8GB may be a good place to start, it may not always get you to the desired 50% I/O reduction number. Monitor the dashboard to see how much I/O is being offloaded, and simply tweak memory usage by capping SQL a little more aggressively. If you feel you may already be memory constrained, add a little more memory so you can cap more aggressively. For every 1-2GB of memory added, another 10-25% of read traffic will be offloaded; the sketch below walks through the arithmetic.
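The tuning arithmetic is simple enough to sanity-check by hand. Here is a small worked example in Python using the conservative end of the rule of thumb above (10% additional read offload per extra GB); the IOPS figures are hypothetical:

```python
# Worked example of the step-5 tuning loop: measure current I/O
# offload, then estimate the extra cache memory needed to hit 50%.
# Uses the conservative end of the 10-25% per 1-2GB rule of thumb.

def offload_pct(io_before: float, io_after: float) -> float:
    """Percent of I/O traffic no longer hitting storage."""
    return 100.0 * (io_before - io_after) / io_before

def extra_gb_needed(current_pct: float, target_pct: float = 50.0,
                    pct_per_gb: float = 10.0) -> float:
    """GB of additional cache to close the gap to the target,
    treating 10% offload per GB as a floor, not a promise."""
    return max(0.0, (target_pct - current_pct) / pct_per_gb)

before, after = 12_000, 7_800          # IOPS to storage, hypothetical
pct = offload_pct(before, after)       # -> 35.0
print(f"Currently offloading {pct:.0f}% of I/O")
print(f"Need roughly {extra_gb_needed(pct):.1f} GB more cache for 50%")
```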

 

Not a customer yet? Download a free trial of Condusiv I/O reduction software and apply these best practice steps at www.condusiv.com/try

 

The Tsunami of Data is Swamping IT

by Jim D’Arezzo, CEO 29. September 2017 09:53

There’s a storm brewing and chances are, it’s going to hit your data center or cloud site.

It’s the tsunami of data that is washing over the IT world and it is just going to get worse. That’s because there is an insatiable demand for data: big data, ERP, CRM, BI, EHR, and of course all of that social media and video that is flooding global organizations like a bursting dam.

I’ll admit it, I’m a dinosaur. I got started in the IT industry in the late 1970s at IBM. So I can rightly say that I’ve seen a lot in the past 40 years. When I started, it was the heyday of the mainframe era. Data processing was still practically a priesthood back then. It wasn’t for everyone. Bill Gates would not proclaim “Information at your fingertips” for at least another dozen years.

Clearly, we’ve come a long way in four decades. The three driving factors (exponentially increased compute power, nearly unlimited storage, and the internet) have delivered computing nirvana. Or so it would seem. But with abundance come challenges. Frankly, we are now awash in data. The tsunami of available information is swamping IT environments worldwide. And even with huge advances in those three driving factors, IT is experiencing performance bottlenecks on an increasing basis. We recently conducted a survey of 1,400 IT professionals, and fully 27% said they are experiencing performance problems that are causing user complaints and slowdowns.

As usual, our industry is innovating to keep up. Cloud-based computing that takes advantage of huge data centers, along with innovation in storage, compute power, and connectivity, has kept up with much of the demand. However, most of these improvements focus on hardware: all-flash storage arrays, 64-core servers, hyper-convergence. But there’s one other performance innovation that is often overlooked: software.

Software innovation that takes advantage of existing and new hardware capabilities can significantly increase performance without the cost of additional hardware. Even with an all-flash storage backend, software like ours can significantly increase performance and extend the life of the hardware. The cost-benefit can be tremendous. Imagine getting a 50% boost in performance (or even 2-3X) without buying a single additional piece of hardware. CEOs and CFOs love that kind of benefit. I know, because that is what we hear from our customers every day.

So, before you get swamped by the tsunami of data that’s lapping at your data center, consider a software solution to the problem. It can help you stay afloat without having to buy a new yacht!

 


Application Performance | Big Data | Cloud

Microsoft SQL Team Puts V-locity to the Test

by Brian Morin 15. September 2017 09:12

In a testament to Condusiv’s longstanding 20+ year relationship with Microsoft® as a Gold Partner and provider of technologies to Microsoft over the years, Condusiv® became the first software vendor awarded the stringent MS-SQL Server I/O Reliability certification, joining a very short list that includes the likes of Dell®/EMC®, IBM®, and HPE®.

Microsoft developed the SQL Server I/O Reliability Program to ensure the reliability, integrity, and availability of vendor products with SQL Server. The program includes a set of requirements that, when complied with and approved by a Microsoft committee of engineers, ensure the product is fully reliable and highly available for SQL Server systems. The certification applies to SQL Server running on Windows Server 2008R2 and later (the most current 2016 release included).

V-locity® Certified for SQL I/O Reliability and Demonstrates Significant SQL Performance Gains

The program itself does not require performance characteristics of products, but it does require I/O testing to exhibit the reliability and integrity of the product. To that end, the full report links to a summary of before/after performance results from a HammerDB test (the preferred load test for measuring MS-SQL performance) on Azure, demonstrating the gains of using V-locity I/O reduction software for SQL Server 2016 on Azure’s Windows Server 2016 Datacenter Edition. While transactions per minute increased 28.5% and new orders per minute increased 28.7%, the gains were considered modest by Condusiv’s standards since only a limited amount of memory was available to be leveraged by V-locity’s patented DRAM caching engine. The typical V-locity customer sees 50% or better performance improvement to SQL applications. The Azure test system configured by Microsoft did not boost available memory to showcase the full power of what V-locity can do with as little as 2-4GB of memory.

To read the full report, CLICK HERE.

 

Case Management Solutions Turns to V-locity I/O Reduction Software to Solve Slow MS-SQL Performance

by Brian Morin 27. July 2017 05:21

A little more than a year ago, Case Management Solutions reached out to Dealflow, a Condusiv® Authorized Reseller, for help finding a solution to what had become a notoriously slow application sitting on MS-SQL backed by NAS storage.

“If a file was 50 pages long, I would sit and watch the page count loading all 50 pages before printing the report. Some of the files we process are more than 500 pages, so I think you can imagine the pain,” said Hal Brooks, Managing Partner, Case Management Solutions.

The problem wasn’t just the time it took to process files; employees would also sit and wait to log into the system and experience delays when going from page to page within the application. Hal reached out to Dealflow, which specializes in building IT solutions to solve customer pain points.

“Case Management Solutions shared with us the pain they were experiencing and the need to find the most cost-effective solution possible. We quickly spotted some necessary server upgrades, but after having seen what V-locity® I/O reduction software had done for our other clients, we knew that was likely the only missing ingredient to tackle their performance issues,” said Lee Owens, VP of Sales, Dealflow.

“After we completed the server upgrades, we installed Condusiv’s V-locity I/O reduction software and Case Management Solutions saw exactly the kind of performance they were hoping to see. Even their backup times dropped,” said Owens.

“Query times improved by 4X. Employees no longer had to wait to login or process files, and no longer experienced delays when going from page to page within the app. Instead of watching all 50 pages count up before printing, it’s almost instantaneous,” said Brooks.

“From our perspective, when Condusiv says they guarantee to fix application performance issues on Windows servers, that’s exactly what they do. We can attest to everything they claim as being true. It has helped all of our customers remove sluggishness from their most important applications,” said Owens.

To read the full story on how V-locity I/O reduction software boosted their MS-SQL performance, read here: http://learn.condusiv.com/rs/246-QKS-770/images/CS-CaseManagement.pdf

 


Application Performance | Performance | V-Locity
