Condusiv Technologies Blog


Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

I’m a MEDITECH Hospital with SSDs, Is FAL Growth Still an Issue that Risks Downtime?

by Brian Morin 4. December 2017 07:34

Now that many MEDITECH hospitals have gone all-flash for their backend storage, one of the most common questions we field is whether or not there is still downtime risk from the File Attribute List (FAL) growth issue if the data physically lives on solid-state drives (SSDs).

The main reason this question comes up is because MEDITECH requires “defragmentation,” which most admins assume is only a requirement for a spinning-disk backend. That misconception couldn’t be further from the truth, as the FAL issue has nothing to do with the backend media but rather the file system itself. Of course, traditional defragmentation processes are damaging to solid-state media, which is why MEDITECH hospitals turn to Condusiv’s V-locity® I/O reduction software, which prevents fragmentation from occurring in the first place and has special engines designed for MEDITECH environments to keep the FAL from reaching its size limit and causing unscheduled downtime.

The File Attribute List is a Windows NTFS file metadata structure referred to as the FAL. The FAL structure can point to different types of file attributes, such as security attributes or standard information like creation and modification dates, and, most importantly, the actual data contained within the file. The FAL keeps track of where all the data resides for the file: it contains pointers to file records that indicate the location of the file data on the volume. If that data has to be stored at different logical allocations on the volume (i.e., fragmentation), more pointers are required, which in turn increases the size of the FAL. Herein lies the problem: the FAL size has an upper limit of 256KB, which is comprised of 8192 attribute entries. When that limit is reached, no more pointers can be added, which means NO more data can be added to the file. And if it is a folder file, which keeps track of all the files that reside under that folder, NO more files can be added under that folder. Once this occurs, the application crashes, leading to a best-case scenario of several hours of unscheduled downtime to resolve.
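To put that ceiling in concrete terms, here is a minimal back-of-the-envelope sketch. The only inputs are the figures stated above (the 256KB limit and the 8192 attribute entries); the per-entry size is simply their ratio, not an official NTFS constant, and the in-use count is hypothetical.

```python
# Back-of-the-envelope math for the FAL ceiling described above.
# Inputs come straight from this post (256KB limit, 8192 attribute
# entries); the per-entry size is just their ratio, not a documented
# NTFS constant, and entries_in_use is a hypothetical illustration.

FAL_LIMIT_BYTES = 256 * 1024        # upper limit on FAL size
MAX_ATTRIBUTE_ENTRIES = 8192        # entries that fit within that limit

bytes_per_entry = FAL_LIMIT_BYTES // MAX_ATTRIBUTE_ENTRIES
print(f"Implied size per attribute entry: {bytes_per_entry} bytes")   # 32 bytes

entries_in_use = 7900               # hypothetical count for a badly fragmented folder file
print(f"Entries left before writes start failing: {MAX_ATTRIBUTE_ENTRIES - entries_in_use}")
```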

Although this blog points out MEDITECH customers experiencing this issue, we have seen this FAL problem occur within non-MEDITECH environments like MS-Exchange and MS-SQL, with varying types of backend storage media from HDDs to all-flash arrays. So, what can be done about it?

The logical solution would seem to be: why not just defragment the volume? Wouldn’t that decrease the number of pointers and decrease the FAL size? The problem is that traditional defragmentation actually causes the FAL to grow in size! While it can decrease the number of pointers, it will not decrease the FAL size; in fact, it can cause the FAL size to grow even larger, making the problem worse even though you are attempting to remediate it.

The only proprietary solution to this problem is Condusiv’s V-locity® for virtual servers or Diskeeper® Server for physical servers. Both include a special technology called MediWrite®, which helps suppress this issue from occurring in the first place and provides special handling if it has already occurred. MediWrite includes:

>Unique FAL handling: As indicated above, traditional methods of defragmentation will cause the FAL to grow even further in size. MediWrite will detect when files have FAL size issues and will use proprietary methods to prevent FAL growth. This is the only engine of its kind in the industry.

>Unique FAL safe file movement: V-locity and Diskeeper’s free space consolidation engines automatically detect FAL size issues and deploy the MediWrite feature to resolve them.

>Unique FAL growth prevention: Along with MediWrite, V-locity and Diskeeper contain another very important technology called IntelliWrite® which automatically prevents new fragmentation from occurring. By preventing new fragmentation from occurring, IntelliWrite minimizes any further FAL size growth issues.

>Unique Offline FAL Consolidation tool: Any MEDITECH hospital that already has an FAL issue can use the embedded offline tool to shrink the FAL-IN-USE size in a very short time (~5 min) as opposed to manual processes that take several hours.

>V-locity and Diskeeper have been endorsed by MEDITECH. Click Here to view.

 

 

How to Achieve 2X Faster MS-SQL Applications

by Brian Morin 8. November 2017 05:31

By following the best practices outlined here, we can virtually guarantee a 2X or faster boost in your MS-SQL performance with our I/O reduction software.

  1) Run our I/O reduction software not just on the SQL Server instances but also on the application servers that run on top of MS-SQL

- It’s not just SQL performance that needs improvement, but also that of the associated application servers that communicate with SQL. Our software will eliminate a minimum of 30-40% of the I/O traffic from those systems.

  2) Run our I/O reduction software on all the non-SQL systems on the same host/hypervisor

- Sometimes a customer is only concerned with improving their SQL performance, so they only install our I/O reduction software on the SQL Server instances. Keep in mind, the other VMs on the same host/hypervisor interfere with the performance of your SQL instances due to chatty I/O that contends for the same storage resources. Our software eliminates a minimum of 30-40% of the completely unnecessary I/O traffic from those systems, so they don’t interfere with your SQL performance.

- Any customer that is on the core or host pricing model is able to deploy the software to an unlimited number of guest machines on the same host. If you are on per system pricing, consider migrating to a host model if your VM density is 7 or greater.

  3) Cap MS-SQL memory usage, leaving at least 8GB free

- Perhaps the largest SQL inefficiency is related to how it uses memory. SQL is a memory hog. It takes everything you give it then does very little with it to actually boost performance, which is why customers see such big gains with our software when memory has been tuned properly. If SQL is left uncapped, our software will not see any memory available to be used for cache, so only our write optimization engine will be in effect. Moreover, most DB admins cap SQL, leaving 4GB for the OS to use according to Microsoft’s own best practice.

- However, when using our software, it is best to begin by capping SQL a little more aggressively, leaving 8GB (see the configuration sketch after this list for one way to set the cap). That will give plenty to the OS, and whatever is left over and idle will be dynamically leveraged by our software for cache. If 4GB is available to be used as cache by our software, we commonly see customers achieve 50% cache hit rates. It doesn’t take much capacity for our software to drive big gains.

  4) Consider adding more memory to the SQL Server

- Some customers will add more memory, then limit SQL memory usage to what it was using originally, which leaves the extra RAM for our software to use as cache. This also alleviates concerns about capping SQL aggressively if you feel that it may result in the application being memory starved. Our software can use up to 128GB of DRAM. Those customers who are generous in this approach on read-heavy applications see otherworldly gains far beyond 2X, with >90% of I/O served from DRAM. Remember, DRAM is 15X faster than SSD and sits next to the CPU.

  5) Monitor the dashboard for a 50% reduction in I/O traffic to storage

- When our dashboard shows a 50% reduction in I/O to storage, that’s when you know you have properly tuned your system to be in the range of 2X faster gains to the user, barring any network congestion or delivery issues.

- Although capping SQL at 8GB is a good place to start, it may not always get you to the desired 50% I/O reduction number. Monitor the dashboard to see how much I/O is being offloaded and simply tweak memory usage by capping SQL a little more aggressively. If you feel you are already memory constrained, then add a little more memory so you can cap more aggressively. For every 1-2GB of memory added, another 10-25% of read traffic will be offloaded.
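For step 3, a minimal sketch of one way to set the cap is shown below. It assumes the pyodbc Python package, a placeholder connection string, and an example 64GB server; `sp_configure 'max server memory (MB)'` is the standard SQL Server setting, but every number and connection detail here is illustrative rather than prescriptive.

```python
# Minimal sketch: cap SQL Server's "max server memory" so ~8GB stays free
# for the OS and for cache. Assumptions: pyodbc is installed, the connection
# string below is a placeholder, and total_ram_gb matches your actual server.
import pyodbc

total_ram_gb = 64      # example server size; substitute your own
leave_free_gb = 8      # headroom suggested in step 3
sql_cap_mb = (total_ram_gb - leave_free_gb) * 1024

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=YOUR_SQL_SERVER;Trusted_Connection=yes;",
    autocommit=True,   # sp_configure/RECONFIGURE should not run inside a transaction
)
cur = conn.cursor()
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute(f"EXEC sp_configure 'max server memory (MB)', {sql_cap_mb}; RECONFIGURE;")
print(f"Capped SQL Server at {sql_cap_mb} MB, leaving ~{leave_free_gb}GB for the OS and cache.")
conn.close()
```

The same cap can, of course, be set interactively in SQL Server Management Studio; the point is simply to leave a predictable amount of memory idle for the OS and the caching engine.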

 

Not a customer yet? Download a free trial of Condusiv I/O reduction software and apply these best practice steps at www.condusiv.com/try

 

How to Recover Lost or Deleted Files BEFORE Resorting to Outsourced Data Recovery

by Gary Quan 1. November 2017 05:46

Here’s a nightmare scenario…a user accidentally deletes irreplaceable or valued files from a network share, and there is no way to recover the data because:

>The file was created or modified then deleted AFTER the last valid backup/snapshot was taken.

>There is NO valid backup or snapshot to recover the data.

>There was NO real-time recovery software like Condusiv’s Undelete® already installed on the file server.

>Sending the disk to a professional data recovery center is COSTLY and TIME-CONSUMING.

What do you do? Well, you may be in luck with a little-known feature in Condusiv’s Undelete® software product known as “Emergency Undelete.” On NTFS (New Technology File System) formatted volumes, the default file system used by Windows, there is a characteristic that can be leveraged to recover your lost data.

When a file gets deleted from a Windows volume, the data has not yet been physically removed from the drive. The space where that file data was residing is merely marked as “deleted” or available for use. The original data is there and will remain there until that space is overwritten by new data. That may or may not happen for quite a while. By taking the correct steps, there is an extremely good chance that this ‘deleted’ file can still be recovered. This is where Emergency Undelete comes in.

Emergency Undelete can find deleted files that have not yet been over-written by other files and allow you to recover them. To increase your chances of recovering lost data, here are some best practices to follow as soon as the files have been accidentally deleted.

1. Immediately reduce or eliminate any write activity on the volume(s) you are trying to recover the deleted files from. This will improve your chances of recovering the deleted files.

2. Get Condusiv’s Undelete to leverage its Emergency Undelete feature.  Emergency Undelete is part of the Undelete product package.

3. REMEMBER: You want to prevent any write activity on the volume(s) you are trying to recover the deleted files from, so if you are trying to recover lost files from your system volume, then do one of the following:

a. Copy the Undelete product package to that system, but to a different volume than the one you are recovering lost files from. Run the Undelete install package and it will allow you to run Emergency Undelete directly to recover the lost files.

  

b. If you do not have an extra volume on that system, place the Undelete product package on a different system and run it; Emergency Undelete will allow you to place the Emergency Undelete package onto a CD or a USB memory stick. You can then insert the CD/memory stick into the system you need to recover from and run it to recover the lost files.

 

 

Now, if the lost files do not reside on the system volume, you can just place the Undelete product package on the system volume, run it, and select Emergency Undelete directly to recover the lost files.

4. When recovering the lost files, recover them to a different volume.

These same steps will also work on FAT (File Allocation Table) formatted storage, which is used in many of the memory cards in cameras and phones. So, if some irreplaceable photos or videos were accidentally deleted, you can use these same steps to recover them, too. Insert the memory card into your Windows system, then use Emergency Undelete to recover the lost photos.

Emergency Undelete has saved highly valuable Microsoft Office documents and priceless photos for thousands of users. It can help in your next emergency, too.

 

Tags:

Data Protection | Data Recovery | Undelete

New Dashboard Finally Answers the Big Question

by Brian Morin 25. October 2017 04:38

After surveying thousands of IT professionals, we’ve found that the vast majority agree that Windows performance degrades over time; they just don’t agree on how much. Unbeknownst to most is what the problem actually is: I/O degradation, as writes and reads become excessively smaller than they should be. This inefficiency is akin to moving a gallon of water across a room with Dixie cups instead of a single gallon jug. Even if you have all-flash storage and can move those Dixie cups quickly, you are still not processing data nearly as fast as you could.
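To make the analogy concrete, here is a minimal back-of-the-envelope sketch; the 4KB and 64KB transfer sizes are illustrative assumptions, not figures measured by the dashboard.

```python
# Illustrative arithmetic for the "Dixie cup vs. gallon jug" analogy:
# the same volume of data costs far more I/O operations when each
# transfer is undersized. Transfer sizes are assumptions chosen only
# to show the shape of the problem.

data_to_move_kb = 1024 * 1024          # 1GB of application data
small_io_kb, healthy_io_kb = 4, 64     # undersized vs. healthier transfer size

small_ios = data_to_move_kb // small_io_kb
healthy_ios = data_to_move_kb // healthy_io_kb

print(f"{small_ios:,} I/Os at {small_io_kb}KB vs. {healthy_ios:,} I/Os at {healthy_io_kb}KB")
# -> 262,144 I/Os vs. 16,384 I/Os: 16x the operations to move the same data
```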

In the same surveys, we’ve also found that the vast majority of IT professionals are aware of the performance penalty of the “I/O blender” effect in a virtual environment, which is the mixing and randomizing of I/O streams from the disparate virtual machines on the same host. What they don’t agree on is how much. And, they are not aware of how the issue is compounded by Windows write inefficiencies.

Now that the latest Condusiv in-product dashboard has been deployed across thousands of customer systems running the latest version of our I/O reduction software, customers are getting their first-ever granular view into what the software is doing for them: the exact percentage and number of read and write I/O operations eliminated from storage, and how much I/O time that saves any given system or group of systems. Ultimately, it’s a picture of the size of the problem: all the I/O traffic that is mere noise, all the unnecessary I/O that dampens system performance.

In our surveys, we found IT professionals all over the map on the size of the performance penalty from these inefficiencies. Some are quite positive the performance penalty is no more than 10%. More put that range at 20%. Most put it at 30%. Then it dips back down, with fewer believing in a 40% penalty and the fewest throwing the dart at 50%.

As it turns out, our latest version has been able to drop a pin on that.

There are variables that affect the extent of the penalty on any given workload, such as system configuration and workload behavior. Some systems might be memory constrained, some workloads might be too light to matter, etc.

However, after thousands of installs over the last several months, we see a very consistent range on the vast majority of systems in which 30-40% of all I/O traffic is being offloaded from underlying storage with our software. Not only does that represent an immediate performance boost for users, but it also means 30-40% of I/O headroom is handed back to the storage subsystem that can now use those IOPS for other things.

The biggest factor to consider is that the 30-40% improvement number represents systems where memory has not been increased beyond the typical configuration that most administrators use. Customers who offload 50% or more of I/O traffic from storage are the ones with read-heavy workloads who beef up memory server-side to get more from the software. For every additional 1-2GB of memory added, another 10-25% of read traffic is offloaded. Some customers are more aggressive and leverage as much memory as possible server-side to offload 90% or more of I/O traffic on read-heavy applications.
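As a rough illustration of that rule of thumb (a sketch only; the 10-25% per 1-2GB figures come straight from this post and will vary by workload):

```python
# Rough estimator for the rule of thumb stated above: every additional
# 1-2GB of memory made available to the software offloads roughly another
# 10-25% of read traffic. These figures are this post's generalizations,
# not guarantees, and actual results depend on the workload.

def additional_read_offload_pct(extra_cache_gb):
    conservative = extra_cache_gb / 2.0 * 10.0   # 10% per 2GB added
    optimistic = extra_cache_gb * 25.0           # 25% per 1GB added
    return min(conservative, 100.0), min(optimistic, 100.0)

for gb in (2, 4, 8):
    low, high = additional_read_offload_pct(gb)
    print(f"+{gb}GB of cache -> roughly {low:.0f}%-{high:.0f}% more read I/O offloaded")
```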

As expensive as new all-flash systems are, how much sense does it make to pay for all those IOPS only to allow 30-40% of those IOPS to be chewed up by unnecessary, noisy I/O? By addressing the two biggest penalties that dampen performance (Windows write inefficiencies compounded by the “I/O blender” effect), Condusiv I/O reduction software ensures optimal performance and protects the CapEx investments made into servers and storage by extending their useful life.

Tags:

Disruption, Application Performance, IOPS | virtualization | V-Locity

The Revolution of Our Technology

by Rick Cadruvi, Chief Architect 18. October 2017 12:38

I chose to use the word “Revolution” instead of “Evolution” because, with all due modesty, our patented technology has been more a series of leaps to stay ahead of performance-crushing bottlenecks. After all, our company purpose as stated by our Founder, Craig Jensen, is:

“The purpose of our company is to provide computer technology that enormously increases the production and income of an area.”

We have always been about improving your production. We know your systems are not about having really cool hardware but rather about maximizing your organization’s production. Our passion has been about eliminating the stops, slows and stalls to your application performance and instead, to jack up that performance and give you headroom for expansion. Now, most of you know us by our reputation for Diskeeper®. What you probably don’t know about us is our leadership in system performance software.

We’ve been at this for 35 years with a laser focus. As an example, for years hard drives were the common storage technology and they were slow and limited in size, so we invented numerous File System Optimization technologies such as Defragmentation, I-FAAST®1 and Directory Consolidation to remove the barriers to getting at data quickly. As drive sizes grew, we added new technologies and jettisoned those that no longer gave bang for the buck. Technologies like InvisiTasking® were invented to help maximize overall system performance, while removing bottlenecks.

As SSDs began to emerge, we worked with several OEMs to take advantage of SSDs to dramatically reduce data access times as well as reducing the time it took to boot systems and resume from hibernate. We created technologies to improve SSD longevity and even worked with manufacturers on hybrid drives, providing hinting information, so their drive performance and endurance would be world class.

As storage arrays were emerging we created technologies to allow them to better utilize storage resources and pre-stage space for future use. We also created technologies targeting performance issues related to file system inefficiencies without negatively affecting storage array technologies like snapshots.

When virtualization was emerging, we could see the coming VM resource contention issues that would materialize. We used that insight to create file system optimization technologies to deal with those issues before anyone coined the phrase “I/O Blender Effect”.

We have been doing caching for a very long time2. We have always targeted removing the I/Os that get in your application’s path to data, along with satisfying data from cache, which delivers performance improvements of 50-300% or more. Our goal was not caching your application-specific data, but rather making sure your application could access its data much faster. That’s why our unique caching technology has been used by leading OEMs.

Our RAM-based caching solutions include dynamic memory allocation schemes to use resources that would otherwise be idle to maximize overall system performance. When you need those resources, we give them back. When they are idle, we make use of them without your having to adjust anything for the best achievable performance. “Set It and Forget It®” is our trademark for good reason.

We know that staying ahead of the problems you face now, with a clear understanding of what will limit your production in 3 to 5 years, is the best way we can realize our company purpose and help you maximize your production and thus your profitability. We take seriously having a clear vision of where your problems are now and where they will be in the future. As new hardware and software technologies roll out, we will be there removing the new barriers to your performance then, just as we do now.

1. I-FAAST stands for Intelligent File Access Acceleration Sequencing Technology, a technology designed to take advantage of different performing regions on storage to allow your hottest data to be retrieved in the fastest time.

2. If I can personally brag, I’ve created numerous caching solutions over a period of 40 years.
