Condusiv Technologies Blog


FAL Remediation and Improved Performance for MEDITECH

by Gary Quan 28. November 2016 12:11

When someone mentions heavy fragmentation on a Windows NTFS volume, the first thing that usually comes to mind is performance degradation. While performance degradation is certainly bad, what’s worse is outright application failure when the application hits this error:

 

Windows Error - "The requested operation could not be completed due to a file system limitation"

 

That is exactly what happens in severely fragmented environments. It is a show-stopper that can halt a business in its tracks until the problem is remediated. Users have reported this issue to us on SQL databases, Exchange Server databases, and MEDITECH EMR systems.

Some refer to this problem as the “FAL Size Issue,” and here is why. In the Windows NTFS file system, as files grow in size and complexity (i.e., become more and more fragmented), they can be assigned additional metadata structures. One of these metadata structures is called the File Attribute List (FAL). The FAL can point to different types of file attributes, such as security attributes, standard information like creation and modification dates, and, most importantly, the actual data contained within the file. For a severely fragmented file, the FAL keeps track of where all of that fragmented data resides: it contains pointers indicating the location of each file fragment on the volume. As more fragments accumulate in a file, more pointers are required, which in turn increases the size of the FAL. Herein lies the problem: the FAL has an upper size limit of 256KB. When that limit is reached, no more pointers can be added, which means NO more data can be added to the file. And if it is a folder file, NO more files can be added under that folder.
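As a rough, back-of-envelope illustration of why that ceiling matters (the entry and record sizes below are assumptions for illustration only, not exact NTFS on-disk values), the 256KB limit translates into a fragment count on the order of a million or so:

```python
# Back-of-envelope sketch of the FAL ceiling. The sizes are assumed values
# for illustration; real NTFS attribute-list entries and extension records
# vary in size.
FAL_LIMIT_BYTES = 256 * 1024            # the 256KB FAL limit cited above
FAL_ENTRY_BYTES = 32                    # assumed size of one attribute-list entry
RUNS_PER_EXTENSION_RECORD = 150         # assumed fragment runs mapped per extra MFT record

max_entries = FAL_LIMIT_BYTES // FAL_ENTRY_BYTES              # ~8,192 entries
approx_max_fragments = max_entries * RUNS_PER_EXTENSION_RECORD

print(f"~{max_entries:,} FAL entries -> very roughly {approx_max_fragments:,} "
      "fragments before the file can no longer grow")
```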

If a FAL reaches this size limit, the only resolution has been to take the volume offline (which can mean bringing the system down), copy the file to a different location (a different volume is recommended), delete or rename the original file, make sure there is sufficient contiguous free space on the original volume, reboot the system to reset the free space cache, and then copy the file back. This is not a quick cycle, and if the file is large, the process can take hours to complete, which means the system stays offline for hours while the problem is resolved.
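For illustration only, that manual workaround might look something like the sketch below (the paths are hypothetical, and the offline and reboot steps are left as comments because they cannot be safely automated from a script):

```python
# Illustrative sketch of the manual copy-off/copy-back workaround described
# above. Paths are hypothetical; this is not a supported Condusiv tool.
import shutil

SOURCE  = r"D:\data\huge_fragmented_file.dat"     # hypothetical file at the FAL limit
STAGING = r"E:\staging\huge_fragmented_file.dat"  # a different volume is recommended

# 1. Take the application/volume offline first (manual step).
# 2. Copy the file to the other volume.
shutil.copy2(SOURCE, STAGING)
# 3. Rename (or delete) the original so its space can be released.
shutil.move(SOURCE, SOURCE + ".old")
# 4. Confirm sufficient *contiguous* free space on the original volume and
#    reboot to reset the free space cache (manual steps).
# 5. Copy the file back; written into contiguous free space it needs far
#    fewer fragments, so its FAL shrinks.
shutil.copy2(STAGING, SOURCE)
```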

You would think the logical solution would be: why not just defragment those files? The problem is that traditional defragmentation utilities can cause the FAL size to grow. While defragmentation can decrease the number of pointers, it will not decrease the FAL size. In fact, due to limitations within the file system, traditional methods of defragmenting files cause the FAL to grow even larger, making the problem worse even though you are attempting to remediate it. This is true of all other defragmenters, including the built-in defragmenter that comes with Windows. So what can be done about it?

 

The Solution

Condusiv Technologies has introduced a new technology to address this FAL size issue, available only in the latest versions of V-locity® for virtual servers and Diskeeper® for physical servers. This technology, called MediWrite™, contains features that help prevent the issue from occurring in the first place, give sufficient warning if it is occurring or has already occurred, and provide tools to quickly and efficiently reduce the FAL size. It includes the following:

Unique FAL handling: As indicated above, traditional methods of defragmentation can cause the FAL size to grow even further. MediWrite will detect when files are having FAL size issues and will use a proprietary method of defragmentation that keeps the FAL from growing in size. An industry first!

Unique FAL growth prevention: Along with MediWrite, V-locity and Diskeeper contain another very important, patented technology called IntelliWrite® which automatically prevents new fragmentation from occurring. By preventing fragmentation from occurring, IntelliWrite minimizes any further FAL size growth issues.

Unique Offline FAL Consolidation tools: The above technologies help stop the FAL from growing any larger, but due to file system restrictions, the FAL size cannot be shrunk or reduced online. To do this, Condusiv developed proprietary offline tools that reduce the FAL-IN-USE size in minutes. This is extremely helpful for companies that already have a FAL size issue before installing our software. With these tools, the user can reduce the FAL-IN-USE size back down to 100KB, 50KB, or smaller and stay safely clear of the maximum FAL size limit. The reduction process itself takes less than 5 minutes, which means the system only needs to be taken offline for minutes rather than the hours required by the manual Windows copy method described above.

FAL size Alerts: MediWrite dynamically scans the volumes for any FAL that has reached a certain size (the default threshold is a conservative 50% of the maximum) and creates an Alert indicating this has occurred. The Alert is also recorded in the Windows Event log, and the user can optionally be notified by email when it happens.
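As a hypothetical illustration of this kind of alerting (MediWrite reads the FAL directly, which is not exposed through a public API, so this sketch uses the extent count reported by "fsutil file queryextents" on Windows 8.1+ as a rough stand-in; the threshold and path are made up):

```python
# Hypothetical fragmentation alert, using extent count as a stand-in for FAL
# growth. "fsutil file queryextents" prints roughly one line per extent;
# the exact output format may vary slightly between Windows versions.
import subprocess

EXTENT_ALERT_THRESHOLD = 500_000   # made-up threshold standing in for "50% of the FAL limit"

def extent_count(path: str) -> int:
    out = subprocess.run(
        ["fsutil", "file", "queryextents", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(1 for line in out.splitlines() if line.strip())

def check_and_alert(path: str) -> None:
    count = extent_count(path)
    if count >= EXTENT_ALERT_THRESHOLD:
        # A real implementation would write a Windows Event log entry and/or
        # send an email notification here.
        print(f"ALERT: {path} has about {count:,} extents -- investigate FAL growth")

check_and_alert(r"D:\data\big_file.dat")   # hypothetical path
```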

 

For more information, see our MEDITECH Solution Brief

 

 

 

Tags:

Diskeeper | Disruption, Application Performance, IOPS | General | IntelliMemory | IntelliWrite | MEDITECH | V-Locity

Everything You Need to Know about SSDs and Fragmentation in 5 Minutes

by Howard Butler 17. November 2016 05:42

When reading articles, blogs, and forum posts by well-respected (or at least well-intentioned) people on the subject of fragmentation and SSDs, you see many statements along the lines of: (1) SSDs don’t fragment; (2) there are no moving parts, so there’s no problem; or (3) an SSD is so fast, why bother? We all know and agree that SSDs shouldn’t be “defragmented” in the traditional sense, since that shortens lifespan, so is there a problem after all?

The truth of the matter is that applications running on Windows do not talk directly to the storage device. Data is referenced as an abstracted layer of logical clusters rather than physical tracks/sectors or specific NAND-flash memory cells. Before a storage unit (HDD or SSD) can be recognized by Windows, a file system must be prepared for the volume. This takes place when the volume is formatted, and in most cases it is set with a 4KB cluster size. The cluster size is the smallest unit of space that can be allocated. Too large a cluster size results in wasted space due to over-allocation beyond the actual data needed; too small a cluster size causes many file extents or fragments. After formatting is complete and a volume is first written to, almost all of the free space is in just one or two very large sections. Over the course of time, as files of various sizes are written, modified, re-written, copied, and deleted, the individual sections of free space as seen from the NTFS logical file system become smaller and smaller. I have seen both HDD and SSD storage devices with over 3 million free space extents. Since Windows lacks file size intelligence when writing a file, it never chooses the best allocation at the logical layer, only the next available, even if the next available is 4KB. That means 128KB worth of data could wind up as 32 extents or fragments, each 4KB in size. Therefore, SSDs do fragment at the logical Windows NTFS file system level. This happens not as a function of the storage media, but of the design of the file system.
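To make the arithmetic concrete, here is a trivial sketch of the worst case just described (purely illustrative; the 4KB cluster size is the common NTFS default):

```python
# Worst-case extent count for the example above: every free-space hole Windows
# picks is a single 4KB cluster, so every cluster becomes its own fragment.
def worst_case_extents(file_size_kb: int, cluster_size_kb: int = 4) -> int:
    return -(-file_size_kb // cluster_size_kb)   # ceiling division

print(worst_case_extents(128))   # -> 32 extents for a 128KB file
```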

Let’s examine how this impacts performance. Each extent of a file requires its own separate I/O request. In the example above, that means 32 I/O operations for a file that could have been handled with a single I/O if Windows were smarter about managing free space and finding the best logical clusters instead of the next available. Since each I/O request takes a measurable amount of time to complete, the problem this creates for SSDs is one of I/O overhead.
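A rough sketch of that overhead, using an assumed fixed cost per I/O request (real figures depend on the device, driver stack, and queue depth):

```python
# Fixed per-request overhead dominates when a small file is split into many
# extents; the latency figure below is an assumption for illustration.
PER_IO_OVERHEAD_US = 50.0   # assumed cost of issuing and completing one I/O (microseconds)

def io_overhead_us(extents: int) -> float:
    return extents * PER_IO_OVERHEAD_US

print("contiguous 128KB file:", io_overhead_us(1), "us of request overhead")
print("32-extent 128KB file :", io_overhead_us(32), "us of request overhead")
```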

Even with no moving parts and multi-channel I/O capability, the more I/O requests needed to complete a given workload, the longer it is going to take your SSD to access the data. This performance loss occurs on initial file creation and carries forward with each subsequent read of the same data. But wait… the performance loss doesn’t stop there. Once data is written to a memory cell on an SSD and the file space is later marked for deletion, the cell must first be erased before new data can be written to it. This is a rather time-consuming process, and memory cells cannot be erased individually; instead, a group of adjacent memory cells (referred to as an erase block) is processed together. Unfortunately, some of those memory cells may still contain valid data, and that information must first be copied to a different set of memory cells before the block can be erased and made ready to accept new data. This is known as write amplification, and it is one of the reasons why writes are so much slower than reads on an SSD. Another problem unique to SSDs is that each memory cell can only be written a limited number of times before it is no longer usable; when too many memory cells become unusable, the whole unit becomes unusable. While TRIM, wear-leveling technologies, and garbage collection routines have been developed to help with this behavior, they are not able to run in real time and are therefore only playing catch-up instead of providing the preventative measures that are needed most. In fact, these advanced technologies offered by SSD manufacturers (and within Windows) do not prevent or reverse the effects of file and free space fragmentation at the NTFS file system level.
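A tiny, simplified model of that copy-before-erase cost (the block geometry and the number of still-valid pages are assumptions for illustration):

```python
# Simplified write-amplification model: rewriting one page forces the SSD to
# copy the still-valid pages out of the erase block before erasing it.
PAGES_PER_BLOCK = 64        # assumed erase-block geometry
still_valid_pages = 60      # pages in the block that still hold live data

logical_writes = 1                                    # what the host asked for
physical_writes = still_valid_pages + logical_writes  # copies + the new page

print(f"write amplification for this rewrite: {physical_writes / logical_writes:.0f}x")
```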

The only way to eliminate this surplus of small, tiny writes and reads that (1) chew up performance and (2) shorten lifespan from all the wear and tear is to take a preventative approach that makes Windows “smarter” about how it writes files and manages free space, so more payload is delivered with every I/O operation. That’s exactly why more users run Condusiv’s Diskeeper® (for physical servers and workstations) or V-locity® (for virtual servers) on systems with SSD storage. For anyone who questions how much value this approach adds to their systems, the easiest way to find out is to download a free 30-day trial and watch the “time saved” dashboard for yourself. Since the fastest I/O is the one you don’t have to write, Condusiv software understands exactly how much time is saved by replacing multiple, fractured writes with fewer, larger contiguous writes. It even has an additional feature to cache reads from idle, available DRAM (15X faster than SSD), which further offloads I/O bandwidth from SSD storage. Especially for businesses with many users accessing a multitude of applications across hundreds or thousands of servers, the time savings are enormous.

 

ATTO Benchmark Results with and without Diskeeper 16 running on a 120GB Samsung SSD Pro 840. The read data caching shows a 10X improvement in read performance.

First-ever “Time Saved” Dashboard = Holy Grail for ROI

by Brian Morin 2. November 2016 10:03

If you’ve ever wondered about the exact business value that Condusiv® I/O reduction software provides to your systems, the latest “time saved” reporting answers exactly that question.

Prior to V-locity® v6.2 for virtual servers and Diskeeper® 16 for physical servers and endpoints, customers would conduct extensive before/after tests to demonstrate the intrinsic performance value, but struggled to quantify the ongoing business benefit over time. This has been especially true during annual maintenance renewal cycles, when key stakeholders need to be “re-sold” in order to allocate budget for ongoing maintenance or to push new licenses to new servers.

The number one request from customers has been to better understand the ongoing business benefit of I/O reduction in terms that are easily relatable to senior management and that make justifying the ROI painless. This “holy grail” search on the part of our engineering team has led to the industry’s first-ever “time saved” dashboard for an I/O optimization software platform.

When Condusiv software proactively eliminates the surplus of small, fractured writes and reads and ensures more “payload” with every I/O operation, the net effect is fewer write and read operations for any given workload, which saves time. When Condusiv software caches hot reads within idle, available DRAM, the net effect is fewer reads traversing the full stack down to storage and back, which saves time.

In terms of benefits, the new dashboard shows:

    1. How many write I/Os are eliminated by ensuring large, clean, contiguous writes from Windows

    2. How many read I/Os are cached from idle DRAM

    3. What percentage of write and read traffic is offloaded from underlying SSD or HDD storage

    4. Most importantly – the dashboard relates I/O reduction to the business benefit of … “time saved”

This reporting approach makes the software fully transparent about the benefit being delivered to any individual system or group of systems. Since the software itself sits within the Windows operating system, it is aware of latency to storage and understands just how much time is saved by serving an I/O from DRAM instead of the underlying SSD or HDD. And, most importantly, since the fastest I/O is the one you don’t have to write, Condusiv software understands how much time is saved by replacing multiple small, fractured writes with fewer, larger contiguous writes.
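As a simplified sketch of how I/O counts could translate into a “time saved” figure (the latencies and counters below are assumptions for illustration, not Condusiv’s internal formula):

```python
# Illustrative "time saved" arithmetic from assumed latencies and counters.
STORAGE_WRITE_MS = 0.5    # assumed average write latency to storage
STORAGE_READ_MS  = 0.3    # assumed average read latency from storage
DRAM_READ_MS     = 0.02   # assumed latency of serving a read from the DRAM cache

writes_eliminated      = 4_000_000   # write I/Os avoided via larger contiguous writes
reads_served_from_dram = 9_000_000   # read I/Os satisfied from idle DRAM

time_saved_ms = (writes_eliminated * STORAGE_WRITE_MS
                 + reads_served_from_dram * (STORAGE_READ_MS - DRAM_READ_MS))
print(f"time saved: about {time_saved_ms / 3_600_000:.1f} hours")
```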

Have you ever wondered how much time V-locity will save a VDI deployment? Or an application supported by all-flash storage? Or a hyperconverged environment? Rather than wonder, just install a 30-day trial version of the software and watch the “time saved” dashboard to find out. The benefits are fully transparent and easily quantified.

Have you ever needed to justify Diskeeper’s endpoint solution across a fleet of corporate laptops with SSDs? Now you can see the “time saved” on individual systems or all systems and quantify the cost of labor against the number of hours that Diskeeper saved in I/O time across any time period. The “no brainer” benefit will be immediately obvious.

Customers will be pleasantly surprised to find that the latest dashboard doesn’t just show granular benefits but also granular performance metrics and other important information to assist with memory tuning. See the average, minimum, and maximum idle memory used for cache over any time period (even by the hour) to make quick assessments of which systems could use more memory to take better advantage of the caching engine for greater application performance. Customers have found that maintaining at least 2GB for cache puts them into the sweet spot of what the product can do. If even more can be maintained to establish a tier-0 cache strategy, performance rises further still. Systems with at least 4GB of idle memory for cache will invariably serve 60% of reads or more.
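For example, a hypothetical helper reflecting that tuning guidance might look like this (the thresholds come straight from the paragraph above; the function name is made up):

```python
# Hypothetical memory-tuning check based on the 2GB / 4GB guidance above.
def cache_sizing_advice(avg_idle_cache_gb: float) -> str:
    if avg_idle_cache_gb < 2:
        return "below the sweet spot -- consider adding memory to this system"
    if avg_idle_cache_gb < 4:
        return "in the sweet spot"
    return "tier-0 territory -- expect 60% or more of reads served from DRAM"

print(cache_sizing_advice(1.5))
```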

 

 

Lou Goodreau, IT Manager, New England Fishery:

“32% of my write traffic has been eliminated and 64% of my read traffic has been cached within idle memory. This saved over 20 hours in I/O time after 24 days of testing!”

David Bruce, Managing Partner, David Bruce & Associates:

“Over 50% of my reads are now served from DRAM and over 30% of write traffic has been eliminated by ensuring large, contiguous writes. Now everything is more responsive!”

 
