Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Best Practices for CSV defrag in Hyper-V (Windows Server 2008R2)

by Michael 28. March 2011 04:33

One of the most significant features in Windows Server 2008 R2 (for Hyper-V) is Cluster Shared Volumes (CSV) for virtual hard disks (VHDs). CSV allows NTFS to behave similarly to a clustered file system, addressing many of the limitations found in Hyper-V storage in the original release (Windows Server 2008).

There are three online modes/states for CSV:
  • Direct Access: In this state, the CSV is available to all nodes in the cluster (i.e. all your VMs) for direct high performance storage access. This is the state you want in production.  
  • Redirected Access: In this state, the CSV is still available to all nodes in the cluster, but all I/O is redirected through a single "coordinator" node. Redirected access is used in planned situations where you need to perform certain disk actions that can't have multiple nodes accessing and locking files concurrently, such as a VSS backup or defrag. Channeling all I/O through a coordinator slows I/O and is more likely to cause bottlenecks for production demands.
  • Maintenance mode: enabling this mode is a safe means to reach a state where processes that require exclusive access to the volume, such as a maintenance routine like chkdsk, can run.

Best Practice: 

  • On the Hyper-V system volume, pass-through volumes, and any other non-CSV volumes, leave Automatic Defragmentation on at all times.
  • Given the performance benefits of Direct Access for cluster shared volumes, leave IntelliWrite on and run an occasional scheduled defrag. This is because of the requirement to use the coordinator node and place the volume into a Redirected Access state. Automatically switching from Direct to Redirected Access and back is all part of the file system control (kernel code we co-wrote with Microsoft in the mid '90s as a Windows source code licensee), and it is the mechanism all defragmenters use today - you do not need to do anything special.
  • Correction (June 30, 2011): In the process of testing for the V-locity 3.0 release, we discovered that defragmentation does NOT cause a state change to Redirected Access. This is true for any defragmenter. So, defragment CSVs as you would any other volume. [Apologies for making this statement without validation - we should know better :-)]

Diskeeper and V-locity are fully compatible with CSVs, as confirmed by Windows IT Pro here. The file system control built into Windows is used to defrag, but it is not used for prevention in the design of IntelliWrite, which is a CSV-compatible file system filter driver (it's very important for drivers to be CSV-compatible) residing at a low altitude, except on XP (where its altitude is much higher). You can view all file system minifilters and their allocated altitudes here.

IntelliWrite is “DKRtWrt” (its code name during development was WriteRight and later RightWrite - hence "RtWrt"). To view filter drivers, or to load and unload them, use the Filter Manager Control Program (fltmc).
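For example, from an elevated command prompt, `fltmc filters` lists the loaded minifilters and their altitudes, and `fltmc unload`/`fltmc load` detach and reattach a filter by name. The output below is an abridged sketch - the altitude shown for DKRtWrt and the instance counts are illustrative and will vary by system:

```
C:\> fltmc filters

Filter Name                     Num Instances    Altitude    Frame
------------------------------  -------------  ------------  -----
DKRtWrt                                 2         137010        0
luafv                                   1         135000        0

C:\> fltmc unload DKRtWrt
C:\> fltmc load DKRtWrt
```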


Defrag | Hyper-V | IntelliWrite | V-Locity

How NTFS Reads a File

by Michael 17. March 2011 11:38

When Windows NT 4.0 was released, Diskeeper 2.0 hit the market. NT 4.0 had limitations on the types of data that could be safely moved online. So, a market-first innovation Diskeeper introduced with Diskeeper 3.0 was what we called Boot Time Defragmentation, which addressed these special data types during the computer boot process, when it was safe to do so. The files that Diskeeper optimized included metadata (data which "lives above" your data), directories (folders), and the paging file.

Metadata are special files that the NTFS file system driver uses to manage an NTFS volume. The most famous piece of metadata is the MFT (Master File Table), which is a special file typically consisting of 1024-byte records. Each file or directory on the volume is described by at least one of these MFT records. It may take several MFT records to fully describe a file, especially if it is badly fragmented or compressed: a 271MB compressed file can require over 450 MFT records!

Defragmenting the MFT, data files, and folders is vital for optimal performance. The example below, showing what occurs when NTFS reads the one-cluster file \Flintstone\Barney.txt, makes that case.
1. The volume's boot record is read to get the cluster address of the first cluster of the MFT.
2. The first cluster of the MFT is read, which is used to locate all of the pieces of the MFT.
3. MFT record 5 is read, as it is predefined to be the MFT record of the root directory.
4. Data cluster 0 of the root directory is read in and searched for "Flintstone".
5. If "Flintstone" is not found, at least one other data cluster of the root directory needs to be read to find it.
6. The MFT record for the "Flintstone" directory is read in.
7. Data cluster 0 of the "Flintstone" directory is read in and searched for "Barney.txt".
8. If "Barney.txt" is not found, at least one other data cluster of the "Flintstone" directory needs to be read to find it.
9. The MFT record for the "Barney.txt" file is read in.
10. Data cluster 0 of the "Barney.txt" file is read in.
This is a worst-case scenario: it presumes the volume is not yet mounted, so the NTFS cache is empty at step 1 and the MFT itself needs to be located. But it shows how many I/Os are required to get at a file that is only one level removed from the root directory: 10. Each of those 10 I/Os requires a head movement, and any fragmentation along that path only increases the number of disk I/Os required to access the data - slowing the whole process down.

And, if you follow the step-by-step I/O sequence outlined above, you'll see that every new directory encountered in the path adds another two or three I/Os. For obvious performance reasons it is beneficial to keep the depth of your directory structure to a minimum. It also makes the importance of defragmenting these special file types quite obvious.
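The I/O count in the walkthrough above can be captured in a small model. This is a rough sketch, not real NTFS behavior (it ignores caching, index B-trees, and fragmentation of the file itself); the function name and the clusters-searched parameter are inventions for illustration:

```python
def file_read_ios(path, clusters_searched_per_dir=2):
    """Worst-case I/O count for a cold-cache NTFS file read, per the
    10-step walkthrough.  clusters_searched_per_dir models how many
    directory data clusters must be read to find each name."""
    components = [c for c in path.strip("\\").split("\\") if c]
    *directories, _filename = components
    ios = 2  # steps 1-2: boot record, then the first cluster of the MFT
    # steps 3-5: root directory - MFT record 5 plus its data cluster(s)
    ios += 1 + clusters_searched_per_dir
    # steps 6-8: each intermediate directory - its MFT record plus data cluster(s)
    ios += len(directories) * (1 + clusters_searched_per_dir)
    # steps 9-10: the file itself - its MFT record plus data cluster 0
    ios += 2
    return ios

print(file_read_ios(r"\Flintstone\Barney.txt"))  # 10, matching the walkthrough
```

With two data clusters searched per directory, each extra level of directory depth costs three more I/Os - the "two or three I/Os" per directory noted above.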

As Windows progressed through newer iterations, many of the files that required offline defragmentation became supported in online defragmentation (including the MFT and directories), so while Boot Time defrag still exists today, the need to run it has diminished. As a great deal of metadata is typically cached from boot to shutdown, perhaps the last remaining system file that is vital to defragment "offline" is the paging file. We've heard arguments over the years that, due to the random nature of the data in the paging file, defragmenting it was not valuable - but anyone who has cleaned up a badly shredded paging file will tell you otherwise.


Defrag | Diskeeper

Diskeeper Receives U.S. Army Certificate of Networthiness

by Colleen Toumayan 9. March 2011 04:10

Diskeeper Corporation, Innovators in Performance and Reliability Technologies, announced that its Diskeeper 2010 performance software has received the Certificate of Networthiness (CoN) from the U.S. Army Network Enterprise Technology Command. The Certificate of Networthiness signifies that Diskeeper performance software is configured to the current Army Golden Master baseline and complies with all U.S. Army and Department of Defense (DoD) standards for security, compatibility and sustainability. A CoN is required for all enterprise software products deployed in the Army Enterprise Infrastructure Network and used by the U.S. Army, all National Guard, Army Reserve and DoD organizations that use the Army Enterprise Infrastructure.


Defrag | Diskeeper | SAN

IBM's Watson would get this one right too...

by Michael 18. February 2011 07:54

On April 10, 2002, Diskeeper enjoyed fifteen seconds of game-show fame on Jeopardy! with the answer: "Diskeeper is software to do this, reorganize your computer’s files." The contestant won $2,000 by correctly providing the question, "What is defragment?"



Defrag | Diskeeper | Diskeeper TV

Helping Customers All Across the Globe

by Colleen Toumayan 17. February 2011 08:54

“The initial feedback we're getting from our end users is that the performance improved after we implemented Diskeeper. End users had been suffering from slow access to the databases and the huge systems images. This is much better lately due to using the Diskeeper EnterpriseServer. Our main goal for using the Diskeeper has been achieved and people are feeling the difference."

“We are mainly a Health Sector solutions provider for the whole Middle East and soon North Africa, for variety vendors with high focus on Philips solutions. We are vendor free when it comes to IT products, so we use different brands like IBM, HP with Microsoft platforms only, and on top of all that the medical and health solutions like Cardiac Pictures and Archiving solutions, CPACK, and patients monitoring.



“Please feel free posting my comments. This is the least thing we can afford in paying back your reputable company for availing this fine product.” 

Ayman A. Nimer

Technology Services Segment Manager

Al Faisaliah Medical Systems


Defrag | Success Stories

