Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Fragmentation and Data Corruption

by Michael 31. March 2011 04:54

Diskeeper (data performance for physical systems) and V-locity (optimization for virtual systems) are designed to deliver performance, reliability, longer life and energy savings. Increased performance and saved energy from our software are relatively easy to empirically test and validate. Longer life is a matter of minimizing wear and tear on hard drives (MTTF) and providing an all around better experience for users so they can continue to be productive with aging equipment (rather than frequent hardware refreshes).

Reliability is far more difficult to pinpoint as the variables involved are difficult, if not impossible, to isolate in test cases. We have overwhelming anecdotal evidence from customers in surveys, studies, and success stories that application hangs, freezes, crashes, and the sort are all remedied or reduced with Diskeeper and/or V-locity.

However, there is a reliability "hard ceiling" in the NTFS file system: a point at which file fragments and file attributes become so numerous that reliability is jeopardized. In NTFS, files that hit the proverbial "fan" and spray out into hundreds of thousands or even millions of fragments leave a mess that is, well... stinky.

In short, fragmentation can become so severe that it ultimately ends up in data loss/corruption. A Microsoft Knowledge Base article describes this phenomenon. I've posted it below for reference:

A heavily fragmented file in an NTFS file system volume may not grow beyond a certain size caused by an implementation limit in structures that are used to describe the allocations.

In this scenario, you may experience one of the following issues:

When you try to copy a file to a new location, you receive the following error message:
In Windows Vista or in later versions of Windows
The requested operation could not be completed due to a file system limitation
In versions of Windows that are earlier than Windows Vista
insufficient system resources exist to complete the requested service
When you try to write to a sparse file, Microsoft SQL Server may log an event in the Application log that resembles the following:
In Windows Vista or in later versions of Windows
Event Type: Information

Description: ...
665(The requested operation could not be completed due to a file system limitation.) to SQL Server during write at 0x000024c8190000, in filename...
In versions of Windows that are earlier than Windows Vista
Event Type: Information

Description: ...
1450(Insufficient system resources exist to complete the requested service.) to SQL Server during write at 0x000024c8190000, in file with handle 0000000000000FE8 ...
When a file is very fragmented, NTFS uses more space to save the description of the allocations that is associated with the fragments. The allocation information is stored in one or more file records. When the allocation information is stored in multiple file records, another structure, known as the ATTRIBUTE_LIST, stores information about those file records. The number of ATTRIBUTE_LIST_ENTRY structures that the file can have is limited.

We cannot give an exact file size limit for a compressed or a highly fragmented file. An estimate would depend on using certain average sizes to describe the structures. These, in turn, determine how many structures fit in other structures. If the level of fragmentation is high, the limit is reached earlier. When this limit is reached, you receive the following error message:

Windows Vista or later versions of Windows:
STATUS_FILE_SYSTEM_LIMITATION The requested operation could not be completed due to a file system limitation

Versions of Windows that are earlier than Windows Vista:
STATUS_INSUFFICIENT_RESOURCES insufficient system resources exist to complete the requested service

Compressed files are more likely to reach the limit because of the way the files are stored on disk. Compressed files require more extents to describe their layout. Also, decompressing and compressing a file increases fragmentation significantly. The limit can be reached when write operations occur to an already compressed chunk location. The limit can also be reached by a sparse file. This size limit is usually between 40 gigabytes (GB) and 90 GB for a very fragmented file.  

For files that are not compressed or sparse, the problem can be lessened by running Disk Defragmenter. Running Disk Defragmenter will not resolve this problem for compressed or sparse files.
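The 40–90 GB figure quoted above follows from simple arithmetic: the file's metadata can only describe so many extents, so the size ceiling is roughly the number of describable extents multiplied by the average extent size. The constants below are illustrative assumptions for the sake of the sketch, not values from the NTFS on-disk specification:

```python
def estimated_size_limit_gb(avg_extent_kb, max_extents=1_500_000):
    """Rough ceiling for a fragmented NTFS file: the total bytes the
    allocation metadata can describe is (describable extents) x
    (average extent size). max_extents is an assumed figure."""
    return max_extents * avg_extent_kb / (1024 * 1024)

# A badly fragmented file with ~32 KB average extents hits the wall sooner
# than one with ~64 KB extents:
print(round(estimated_size_limit_gb(32)))   # ~46 GB
print(round(estimated_size_limit_gb(64)))   # ~92 GB
```

This is why the KB article can only give a range: the worse the fragmentation (smaller average extents), the lower the ceiling.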


Defrag | Diskeeper | Success Stories | V-Locity

Finding Latencies in your VM/SAN Infrastructure

by Michael 30. March 2011 11:10

Okay, so you've bought, installed, connected, configured, and then tuned/optimized your new storage virtualization solution, but somehow there are still latencies with apps (e.g. SQL).

You've run the Storage Area Network (SAN) vendor utilities that:

  • did not see any contention on the disks in the RAID group(s). 
  • noted that the average I/O to physical disks did not exceed a reasonable number of I/O's per second on each volume in the meta device.
  • checked the utilization of the port that the Host Bus Adapter (HBA) is zoned to and did not see any performance issues.
  • noted the switch port that the HBA is connected to is not saturated or reporting any errors.

And basically surmised "at this time we do not see any issue on the array or with the SAN in reference to this server."


When running PerfMon within Windows, it continues to uncover latencies in the 100ms+ range. What the hayel!

This is when it's important to consider what those SAN optimization and reporting tools are actually measuring. SANs can optimize storage from HBA to spindle. Above the HBA, other factors introduce latencies outside the scope or control of the SAN, and ultimately it is the app/user experience that needs to be addressed.

So, it's time to look further up the storage stack.

Here is a great chart (borrowed from VMware here):

The chart helps illustrate that SAN-based and even VM-based latency monitoring and storage optimization do not account for latencies that may exist in the Guest Operating System (GOS). They are only aware of, and able to optimize, I/O from the point they receive the traffic down to the physical storage.

Monitoring performance in Windows does not go away simply because you've left direct attached storage (DAS) and physical servers to go virtual. There are numerous causes for poor performance on the GOS side, from poorly written apps, to incorrect configurations, to bad partitioning strategies, file system fragmentation, and more. Pretty much all the issues that could cause poor Windows I/O performance on physical servers with DAS still exist.

It's important to continue to use GOS based solutions to determine application latency such as PerfMon, which can support counters for popular apps (like SQL).

To evaluate whether file fragmentation is a potential cause, track the LogicalDisk counters with PerfMon; fragmentation will show up in statistics such as Split IO/Sec and Avg. Disk sec/Transfer. You can also use a freeware tool from Diskeeper Corporation called Disk Performance Analyzer for Networks (DPAN) to collect file fragmentation statistics from any Windows system (physical or virtual) on your LAN/WAN. You can download DPAN here.
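As a rough sketch of how you might triage exported PerfMon samples, the logic below flags volumes whose counters point at fragmentation-driven latency. The counter names are real PerfMon LogicalDisk counters; the sample data and threshold values are illustrative assumptions, not official guidance:

```python
# Flag volumes whose (exported) PerfMon LogicalDisk counters suggest
# fragmentation-driven latency. Sample values and thresholds are
# illustrative assumptions for this sketch.

SAMPLES = [  # (volume, Avg. Disk sec/Transfer, Split IO/Sec)
    ("C:", 0.004, 3.0),     # healthy: sub-5 ms latency, little split I/O
    ("E:", 0.120, 85.0),    # 120 ms latency plus heavy split I/O
]

LATENCY_THRESHOLD_S = 0.025  # ~25 ms is a common rule-of-thumb ceiling
SPLIT_IO_THRESHOLD = 20.0    # sustained split I/O hints at fragmentation

def suspect_volumes(samples):
    """Return volumes exceeding both the latency and split-I/O thresholds."""
    return [vol for vol, latency, split_io in samples
            if latency > LATENCY_THRESHOLD_S and split_io > SPLIT_IO_THRESHOLD]

print(suspect_volumes(SAMPLES))  # ['E:']
```

High latency alone can have many causes; it is the combination with elevated split I/O that points toward fragmentation rather than, say, a saturated path.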

Sample DPAN Report:


Defrag | SAN

Best Practices for Storage Area Network (SAN) Defragmentation

by Michael 29. March 2011 02:30


As high-performing storage solutions based on block protocols (e.g. iSCSI, FC), SANs excel at optimizing block access. SANs work at a storage layer underneath the operating system's file system; usually NTFS when discussing Microsoft Windows®. This means a SAN is unaware of “file” fragmentation and unable to solve the issue.

Fig 1.0: Diagram of Disk I/O as it travels from Operating System to SAN LUN.

Because file fragmentation causes the host operating system to generate additional, unnecessary disk I/Os (more overhead on CPU and RAM), performance suffers. In most cases, given the randomness of I/O requests due to fragmentation and concurrent data requests, the blocks that make up a file will be physically scattered in uneven stripes across a SAN LUN/aggregate. This causes even greater degradation in performance.
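The extra-I/O effect can be sketched numerically: each contiguous extent caps how much data a single sequential read can pull in, so a shattered file needs at least one I/O per extent. The transfer-size cap below is an assumed value for illustration:

```python
import math

def read_ops(file_mb, extents, max_io_kb=64):
    """I/Os needed to read the whole file: at least one per extent, and no
    single I/O larger than max_io_kb (an assumed transfer-size cap)."""
    file_kb = file_mb * 1024
    return max(extents, math.ceil(file_kb / max_io_kb))

# A 10 MB file: contiguous vs. shattered into 2,000 fragments
print(read_ops(10, extents=1))      # 160 I/Os
print(read_ops(10, extents=2000))   # 2000 I/Os
```

In this sketch the fragmented file costs over 12x the I/O operations of the contiguous one, before even counting the extra seek penalty of scattered placement.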

Fig 1.1: Sample Windows Performance Monitor Report from fragmented SAN-attached NTFS volume.

Fortunately there are simple solutions to NTFS file system fragmentation; fragmentation prevention and defragmentation. Both approaches solve file fragmentation at the source, the local disk file system.

IntelliWrite® “The only way to prevent fragmentation before it happens™”

IntelliWrite is an advanced file system driver that leverages and improves upon modern Windows’ file system “Best Fit” file write design in order to write a file in a non-fragmented state on the initial write. Intelligently writing contiguous files to the disk provides four principal benefits above and beyond defragmentation, including:

  • Prevents most fragmentation before it happens
  • Better file write performance
  • An energy friendly approach to improving performance, as defragmentation is not required for files handled by IntelliWrite
  • 100% compatibility with copy-on-write technologies used in advanced storage management solutions (e.g. snapshots)

While eliminating fragmentation improves performance, it is important to properly configure and account for advanced SAN features.

With the increasing popularity of SANs, we've included instructions in the Diskeeper installation to ensure users properly configure Diskeeper:

We suggest reading this full document before executing any of the recommended configurations. These instructions apply to V-locity (used on VMs as well).

Best Practices:


Implementing Diskeeper on a SAN is simple and straightforward. There are two principal concepts to ensuring proper configuration and optimal results:

  • Ensure IntelliWrite is enabled for all volumes.
  • Find a time to schedule Automatic Defragmentation (more details below).

If you are implementing SAN-based technologies such as Thin Provisioning, Replication, Snapshots, Continuous Data Protection (CDP), or Deduplication, it is recommended to follow these guidelines.

Defragmentation can cause unwanted side effects when any of the above referenced technologies are employed. These side effects include:

With SAN replication:
Likelihood of additional data replication traffic.

With Snapshots/CDP:
Likelihood of additional storage requirements for data that is defragmented/moved, and snapshot-related performance lag.

With Thin Provisioning:
Likelihood of additional storage requirements for data that is defragmented/moved.

With Deduplication:
Potential for additional deduplication overhead. Also note that deduplication can be used to remove duplicate blocks incorrectly allocated due to defragmentation. This process can therefore be used to reclaim over-provisioned space.

This is why it is important to enable the fragmentation prevention (IntelliWrite) and change the Automatic Defragmentation to occur during non-production periods to address the pre-existing fragmentation:

During Installation, disable Automatic Defragmentation;

Uncheck the “Enable Automatic Defragmentation” option during installation.

Upon installation ensure IntelliWrite is enabled on all volumes (default). IntelliWrite was specifically designed to be 100% compatible with all advanced SAN features, and should be enabled on all SAN LUNs. IntelliWrite configuration is enabled or disabled per volume, and can be used in conjunction with Automatic Defragmentation, or exclusively.

To ensure IntelliWrite is enabled, right click a volume(s) and select the feature.

Then confirm “Prevent Fragmentation on this volume” is selected, and click “OK” to complete.

Once installed, enable Automatic Defragmentation for any volumes that are not mapped to a SAN LUN. This may include the System Partition (e.g. C:\).

To enable Automatic Defragmentation, right click a volume(s) and select the feature.

Then check “Enable Automatic Defragmentation on the selected volumes” and click “OK” to complete.

If you are not using any advanced SAN features, it is recommended to enable Automatic Defragmentation for all days/times. However, note that pre-existing fragmentation will require significant effort from Diskeeper to clean up. This effort will generate disk I/O activity within the SAN.

Therefore, if existing fragmentation is significant, initially schedule Diskeeper to run during off-peak hours. As Diskeeper has robust scheduling capability, this is easily configured.

To enable Automatic Defragmentation during non-production periods, right click a volume(s) and select the feature.

Then check “Enable Automatic Defragmentation on the selected volumes”. Diskeeper is then scheduled by using your mouse to highlight 30-minute blocks in the interactive weekly calendar.

The above example disables defragmentation Monday through Friday. It also disables defragmentation on Saturdays and Sundays except between 7 pm and 3:30 am the following morning. This affords 17 hours of defragmentation availability per week. Immediately following these scheduled defragmentation periods is when SAN maintenance for advanced features should be addressed (e.g. thin reclamation, deduplication).
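The weekly total works out as two nightly windows of 8.5 hours each:

```python
from datetime import timedelta

# Two weekend nights, 7:00 pm to 3:30 am the next morning
nightly_window = timedelta(hours=8, minutes=30)
weekly_hours = 2 * nightly_window.total_seconds() / 3600

print(weekly_hours)  # 17.0
```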

Should accommodating SAN maintenance with a weekly optimization process prove difficult (e.g. limited maintenance windows), very granular scheduling is also available with Diskeeper. Note that maintenance windows are not required in order to implement and benefit from IntelliWrite.

To schedule specific non-recurring dates and times in the future, select the “Turn Automatic Defragmentation on or off based on specific dates” option. Click any number of dates and times using Shift-Select or Ctrl-Select. Once done, click OK to complete.

If you are implementing the above mentioned advanced technologies and your SAN provides hot block optimization / data tiering, it is also recommended to disable I-FAAST® (Intelligent File Access Acceleration Sequencing technology). I-FAAST sequences hot “files” (not blocks) in a Windows volume, after determining hardware performance characteristics. The sequencing process creates additional movement of data for those advanced SAN features, and is therefore generally recommended to disable when similar SAN solutions are in place.

To disable I-FAAST, right click a volume(s) and select the feature.

Note that I-FAAST requires Automatic Defragmentation to be enabled, and that I-FAAST is disabled by default in Diskeeper 2011 in certain cases. Because I-FAAST generates additional disk I/Os, it will also increase the aforementioned Automatic Defragmentation side effects.

Once pre-existing fragmentation has been removed, increase the periods in which Diskeeper actively optimizes the Windows file systems. With real-time defragmentation and InvisiTasking® technology, Diskeeper immediately cleans up fragmentation (that is not prevented by IntelliWrite). This minimal ongoing optimization generates only invisible, negligible I/O activity.

New features in Diskeeper 2011 to improve SAN performance:

Diskeeper 2011 introduces SAN specific solutions. These default solutions automate many of the configurations required for SAN-attached servers.

Diskeeper 2011’s new Instant Defrag™ technology dramatically minimizes I/O activity, and exponentially speeds up defragmentation. The Instant Defrag engine is provided fragmentation information, in real-time, by the IntelliWrite file system filter driver (those fragments that it does not prevent). Without the traditional need to run a time and resource intensive whole-volume fragmentation analysis, Instant Defrag can address the recently fragmented files as they occur. This dynamic approach prevents a buildup of fragmentation, which could incur additional I/O overhead to solve at a later date/time.
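The real-time hand-off described above is essentially a producer/consumer design: the filter driver produces fragmentation events as writes land, and the defrag engine consumes them without ever scanning the whole volume. The sketch below illustrates that shape in Python; the names and structure are hypothetical, not Diskeeper's actual implementation:

```python
from queue import Queue

# Hypothetical producer/consumer sketch of the real-time hand-off: a
# (simulated) file system filter reports fragmented writes as they occur,
# and a defrag engine services only those files, skipping any volume scan.

fragment_events = Queue()

def filter_driver_report(path, extents):
    """Producer: called from the write path when a file lands on disk."""
    if extents > 1:                      # contiguous files need no work
        fragment_events.put((path, extents))

def instant_defrag_pass(handled):
    """Consumer: drain the queue, 'defragmenting' each reported file."""
    while not fragment_events.empty():
        path, _extents = fragment_events.get()
        handled.append(path)             # a real engine would consolidate extents here

filter_driver_report("pagefile.sys", 1)   # contiguous: ignored
filter_driver_report("data.mdf", 412)     # fragmented: queued
done = []
instant_defrag_pass(done)
print(done)  # ['data.mdf']
```

The design point is that work is proportional to the fragmentation that actually occurred, not to volume size, which is why it avoids the buildup (and later cleanup cost) a periodic whole-volume analysis incurs.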

Diskeeper 2011’s new Efficiency Mode (default) maximizes performance while minimizing disk I/O activity. By focusing on efficiency and performance, and not on presenting a “pretty disk” visual display, Diskeeper 2011 minimizes negative side effects (e.g. snapshot storage requirements or thin LUN growth) while maximizing performance benefits. It is a SAN-optimized defrag mode and our recommended solution for SAN-attached Windows volumes.

By default, Efficiency Mode also disables proprietary file placement features such as I-FAAST.

Also, by default, Diskeeper 2010/2011 moves data to lower NTFS clusters, and hence generally “forward” on SAN LUNs.

Best Practices Summary:
  • Ensure IntelliWrite is enabled for all volumes.
  • Automatic Defragmentation should be enabled at all times for all direct attached storage volumes.
  • Use Efficiency Mode of Diskeeper 2011.
  • Schedule Automatic Defragmentation on SAN LUNs, based on use of advanced SAN features.
  • Run SAN processes such as space reclamation and/or deduplication on recently defragmented LUNs using advanced SAN features.

Want this in PDF form? Get it here: Best Practices for using Diskeeper on Storage Area Networks.pdf (3.00 mb)


Defrag | Diskeeper | SAN

Best Practices for CSV defrag in Hyper-V (Windows Server 2008R2)

by Michael 28. March 2011 04:33

One of the most significant features in Windows 2008R2 (for Hyper-V) is Cluster Shared Volumes (CSV) for virtual disks (VHDs). This allows NTFS to behave similarly to a clustered file system, addressing many limitations found in Hyper-V storage in the original release (Windows 2008).

There are three online modes/states for CSV:
  • Direct Access: In this state, the CSV is available to all nodes in the cluster (i.e. all your VMs) for direct high performance storage access. This is the state you want in production.  
  • Redirected Access: In this state, the CSV is still available to all nodes in the cluster, but all I/O is redirected through a single "coordinator" node. Redirected access is used in planned situations where you need to perform certain disk actions that can't have multiple nodes accessing and locking files concurrently, such as a VSS backup or defrag. Channeling all I/O through a coordinator slows I/O and is more likely to cause bottlenecks for production demands.
  • Maintenance mode: enabling this mode is a safe means to get to a state where processes that require exclusive access to a volume can be used, such as a maintenance routine like chkdsk.

Best Practice: 

  • On the Hyper-V system volume, pass-through volumes, and any other non-CSV volumes, leave Automatic Defragmentation on at all times.
  • Given the performance benefits of Direct Access for cluster shared volumes, leave IntelliWrite on and run an occasional scheduled defrag. This is because of the requirement to use the coordinator node and place the volume into a Redirected Access state. Automatically changing from direct to redirected and back is all part of the file system control (kernel code we co-wrote with MS in the mid 90’s – as a Windows source code licensee), and the mechanism all defragmenters use today - you do not need to do anything special.
  • Correction (June 30, 2011): In the process of testing for the V-locity 3.0 release, we discovered that defragmentation does NOT cause a state change to Redirected Access. This is true for any defragmenter. So, defragment CSVs as you would any other volume. [Apologies for making this statement without validation - we should know better :-)]

Diskeeper and V-locity are fully compatible with CSVs, as confirmed by Windows IT Pro here. The file system control built into Windows is used to defrag, but is not used for prevention in the design of IntelliWrite, which is a CSV-compatible file system filter driver (it's very important for drivers to be CSV-compatible) residing at a low altitude, except on XP (where its altitude is much higher). You can view all file system minifilters and their allocated altitudes here.

IntelliWrite is “DKRtWrt” (its code names during development were WriteRight and later RightWrite, hence "RtWrt"). To see, load, or unload filter drivers, use the Filter Manager Control Program (fltmc):


Defrag | Hyper-V | IntelliWrite | V-Locity

Delivering System Efficiency on a Whole New Level

by Colleen Toumayan 22. March 2011 08:25

Diskeeper Corporation today announced the release of Diskeeper® 2011 data performance software, which includes new technology to deliver optimum system performance at all times. Diskeeper Corporation has long been the leader in performance and reliability technologies, and this new version marks another milestone in bringing together technology to take any Windows® system up to optimum performance.

“Having this instant defrag is shortening the time it takes to place the machines back in use. I have always been impressed with Diskeeper, but this version shows the dedication your company has with making each PC run at optimal speed.”

Efficient Mode, new in Diskeeper 2011, minimizes the time and resources used by Diskeeper to restore and maintain peak performance and reliability. The new Efficient Mode is smart enough to detect the fragmentation that is a problem and target it for priority handling. The software further contains two unique technologies closely married together to achieve the ultimate in system performance: IntelliWrite® technology to prevent up to 85% of fragmentation before it ever happens—an industry first—and new Instant Defrag™, which understands how files are used and immediately defrags the ones that will be used right now.

Brandon Butler of Professional Medical Services stated, “The instant defrag is a huge thing for us in this IT department,” and further said, “Having this instant defrag is shortening the time it takes to place the machines back in use. I have always been impressed with Diskeeper, but this version shows the dedication your company has with making each PC run at optimal speed.”

“I've used Diskeeper since the early 90's and with each version, the improvements have made my job much easier and have saved the county many thousands of dollars as our systems have lasted longer than the county's old and new rotation mandate for systems. The new feature, Efficient Mode, helps to keep the system in constant maintenance, which keeps it from being fragmented with sluggish responses. Benefits are savings in both personnel time and the cost of replacing a system that now can operate for even more years,” said Gary McDonald, San Joaquin County Public Health Services.

The need for more data and faster processing is growing enormously for everybody, which is evident in new storage technologies. Wasted system resources, such as disk space and I/O, and unnecessary hardware expenditures also magnify throughout networks, driving up operating costs. As the IT environment becomes more complex, performance bottlenecks in the underlying infrastructure magnify throughout a site and seriously impact productivity. With Diskeeper 2011, major improvements in system efficiency and a significant reduction in the entire site’s operating costs are achieved.

Features in Diskeeper 2011

Exclusive IntelliWrite technology. The O/S activity of writing files in pieces to the disk (fragmentation) can be prevented before it happens – up to 85% and more. Extremely valuable for thin provisioned storage and SAN as well as all other systems.

New Instant Defrag. Instant Defrag, new in Diskeeper 2011, works in conjunction with IntelliWrite to quickly eliminate any fragments not prevented during the initial write. For those fragments, IntelliWrite passes along information in real-time to the Instant Defrag engines for immediate handling.

New Efficient Mode. Efficient Mode, also new in Diskeeper 2011, uses the minimum disk input/output (I/O) to restore and maintain maximum performance. Efficient Mode is smart enough to detect the fragmentation that is a problem and target it for priority handling. It addresses only problem fragmentation; by eliminating the unnecessary extra effort to reach a state of zero total fragments, peak performance is rapidly restored.

New Performance Report. The new performance report in Diskeeper 2011 overlays the main User Interface (UI) to provide the user with an instant view of gains they experience. Users will see:

  • System Configuration Status
  • Read and Write Access Time – % improvement
  • How much fragmentation was prevented and eliminated
  • Cumulative Number of I/Os Saved

Exclusive InvisiTasking® technology has been redesigned in Diskeeper 2011 to be more assertive in I/O-active environments while still maintaining invisible processing. The enhancements allow Diskeeper to accomplish more defragmentation and resolve it faster (e.g. Instant Defrag) during typical production workloads.

HyperFast® solid state drive optimizer. Included in the server editions and available as an add-on to Diskeeper Home, Professional, and ProPremier editions, HyperFast is proven optimization technology for Solid State Drives (SSDs), providing faster performance and a longer lifespan.

Diskeeper 2011 takes enterprise network performance and efficiency far beyond what defrag-only can achieve. The advanced features in Diskeeper 2011 combined with IntelliWrite technology make previously unapproachable levels of system efficiency a reality. Diskeeper 2011 makes systems faster, more reliable, longer lived, and energy efficient.

Free 30 day trialware and further information at or call 800-829-6468. Diskeeper 2011 includes editions tailored from the enterprise to home environments.

