Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

The Mystery of the Disappearing MFT Reserved Zone!

by Michael 7. November 2006 22:50

The Master File Table (MFT) is perhaps the best-known metadata file in an NTFS file system. Essentially it works as a "table of contents" for all the files on your volume, describing attributes such as the file name or the location of a file's extents on the volume, and in some cases part (or even all) of a file's data. For every file there exists at least one record in the MFT, 1KB in size by default. When a file's attributes, which can include the actual data itself and not just descriptors, cannot fit in that 1KB record, they are written to the disk outside the MFT. Such attributes, typically the file's data, are known as non-resident attributes. Non-resident describes the fact that they do not reside wholly within the MFT (resident attributes* being the exact opposite).
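If it helps to picture the resident/non-resident decision, here is a tiny Python sketch. The record size matches the 1KB standard mentioned above, but the overhead figure is a made-up assumption for illustration; real NTFS record internals are considerably more involved:

```python
# Toy model of an NTFS-style MFT record. Sizes are illustrative, not NTFS's real layout.
MFT_RECORD_SIZE = 1024      # the standard 1KB record size mentioned above
HEADER_AND_ATTRS = 360      # assumed overhead for file name, timestamps, etc.

def is_resident(data_size: int) -> bool:
    """Data stays inside the MFT record only if it fits in the leftover space."""
    return data_size <= MFT_RECORD_SIZE - HEADER_AND_ATTRS

print(is_resident(500))    # small file: data can remain resident in the record
print(is_resident(4096))   # larger file: data becomes a non-resident attribute
```

The exact cutoff varies per file, since the more attributes a record carries, the less room remains for resident data.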

As more files are added to an NTFS volume, additional records are created within the MFT to define those files. As files are deleted from a volume, the space they occupied in the Master File Table (the file's former record) is marked as available for reuse, but it is not removed. Unlike some tools designed for databases or virtual disks, there is no tool or native mechanism to shrink the MFT down to only the actual "in use" records. In essence, this means the MFT can grow but will never shrink.
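Here is a toy Python model (not actual NTFS code, just my illustration) of that grow-but-never-shrink behavior: deleted records are marked free and get reused, but the table's length never decreases:

```python
# Toy model of MFT record allocation: freed records are reused, the table never shrinks.
class ToyMFT:
    def __init__(self):
        self.records = []          # None marks a free (reusable) record slot

    def create(self, name):
        # Reuse a freed record first; only extend the table if none is free.
        for i, rec in enumerate(self.records):
            if rec is None:
                self.records[i] = name
                return i
        self.records.append(name)
        return len(self.records) - 1

    def delete(self, index):
        self.records[index] = None  # marked available for reuse, never removed

mft = ToyMFT()
a = mft.create("a.txt")
b = mft.create("b.txt")
mft.delete(a)
c = mft.create("c.txt")            # reuses a.txt's old slot
print(c == a, len(mft.records))    # slot reused; table length unchanged
```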

Microsoft's file system developers thought ahead when they initially created the file system. Wanting to mitigate fragmentation of this key metadata file, they implemented an extension to the MFT for future growth. The design they devised is known as the MFT reserved zone: a reserved area of free disk space located at the end of the currently allocated MFT space. As new records need to be created, the design was that they would expand into this reserved area, not randomly elsewhere on the disk, and hence the expanded MFT file would not fragment.

Up through Windows 2000, this reserved area was relatively fixed, though in Win2K you could edit its size. In almost all cases the reserved zone would go unused, and it would eventually be released for allocation of non-resident attributes once all other free space on the volume, other than the MFT reserved space, was filled up. If that occurred, fragmentation of the MFT became almost a sure bet.

In Windows XP the behavior of the reserved zone changed. It went from being a hard-coded percentage of the volume (12.5% by default) to a dynamic extension Microsoft calls an "NTFS internal hint". While Diskeeper received information on this change early in XP's development, little has been published on it by Microsoft. Given that we are talking about file system minutiae that few people care about or even need to be aware of, I can't blame them. I have attached the following as one of the few sources from "the horse's mouth" on the subject.

That takes me, finally, to the purpose of this blog!

As I mention all too frequently, I frequent online technical forums and help out PC enthusiasts by sharing what I have learned or researched about Windows file systems. In numerous forums I have noticed questions regarding the "disappearance of the reserved zone" on Windows XP clients.

Many users, well familiar with Windows 2000 or closely attuned to XP over a period of time, notice that the reserved space on XP volumes shrinks or disappears from the view in a Diskeeper analysis (or that of the built-in defragmenter). While almost all note that it has not affected their performance (it would not), there is understandably concern, as this is a change from what they are familiar with.

As the empirical evidence presented by these users suggests, this is not a performance issue, and it is perfectly normal.

If you are curious about the current size of this reserved zone on your NTFS volumes, you can use fsutil.exe (file system utility), or, for easier-to-read information (i.e. not in hexadecimal!), NTFSInfo.exe from Microsoft Sysinternals.
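For instance, here is a small Python sketch that pulls the MFT zone boundaries out of `fsutil fsinfo ntfsinfo`-style output. The sample text and field labels below are illustrative only; the exact labels vary by Windows version, and the values fsutil reports are hexadecimal:

```python
# Sketch: extract the MFT zone boundaries from `fsutil fsinfo ntfsinfo C:` style output.
# The sample text and field labels are illustrative; actual labels vary by Windows version.
sample_output = """\
Mft Valid Data Length :         0x0000000004440000
Mft Start Lcn  :                0x00000000000c0000
Mft Zone Start :                0x00000000000c4760
Mft Zone End   :                0x00000000000cc700
"""

def parse_field(text, label):
    for line in text.splitlines():
        if line.startswith(label):
            return int(line.split(":")[1].strip(), 16)   # values are hexadecimal
    raise KeyError(label)

zone_start = parse_field(sample_output, "Mft Zone Start")
zone_end = parse_field(sample_output, "Mft Zone End")
print(zone_end - zone_start)   # reserved zone size, in clusters
```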

*Accessing resident attributes is much faster than accessing non-resident attributes. That is because the MFT record has to be accessed regardless when accessing a file. If the request does not have to extend past the file record in the MFT to non-resident extents, that translates to less seek time (mapping extents) and less disk head movement (retrieving data from the disk).



Idle Resources Graph - requirement

by Michael 4. November 2006 01:07
This is a brief update. Diskeeper 2007 uses Windows system counters to enhance awareness of other activities on the system. The Idle Resource Graph (also known as the InvisiTasking graph) requires access to performance counters, which are enabled by default on Windows 2000 and later. A skillful Diskeeper user discovered that disabling paging (running without a paging file) on a 32-bit system and running Diskeeper causes the required performance libraries to be disabled (disabled performance counters). This causes the Diskeeper graph to show a blank screen and manual defragmentation jobs to error out. Perhaps the only legitimate reason for removing a paging file is to conserve battery life/power consumption, so this is obviously a very rare circumstance. So, for the time being, if you are running Diskeeper 2007, be sure to keep a paging file on one of your system's volumes.



Comparing I-FAAST

by Michael 3. November 2006 23:55

One of the topics a reader requested be covered in the Diskeeper blog was "what makes I-FAAST different from other file placement/sequencing strategies available on the market?" Don't let the title of this entry mislead you; this blog entry is not intended to be a head-to-head/which-is-better comparison. I think that would be unprofessional, and obviously biased (hmmm... I wonder who I would pick?), so it would amount to nothing more than self-serving benchmarketing. I will also steer clear of making "assumptions" (for those familiar with the expression).

What I will do is present the facts as I understand them, and let you make the decisions. I'm admittedly not the expert on the file-ordering strategies other products use, so I suggest you confirm behavior/design with those vendors. Diskeeper's number one competitor is the product we gave to Microsoft for inclusion in Windows back in the late '90s. The Diskeeper product has enjoyed market dominance akin to that of Microsoft's Windows OS in the desktop space. The biggest challenge we face is public education about what fragmentation is and what it does to file system performance (much like what many of you probably do regularly for co-workers, friends, relatives, etc. about computers in general). Therefore, most of my knowledge is centered on Windows and Diskeeper and how they relate to performance, not on what some other file system tool is doing. On that matter, there are numerous other Windows file system experts, but very few are expert on I-FAAST or Diskeeper, so please, if you ever have a question, just ask me!

For the record, I'm a very careful buyer, so I need proof that product claims are legitimate. For example, I never believe a car manufacturer's reported miles-per-gallon estimates. Maybe I drive like a maniac, but my dear sweet old grandma apparently had more of a lead foot than the drivers hired to gather those MPG numbers!

As I mentioned, while I do know I-FAAST, I'm also fairly well versed in NTFS. I'll present added depth on these topics and how they relate to the topic at hand.

As a prerequisite, I strongly suggest reading the following brief and relatively easy-to-read paper to better understand the file write behavior of NTFS:

You can watch a flash video of this document in the FAQ section of the Diskeeper Multimedia Tour in the chapter titled "How does fragmentation occur?".

The suggested reading/viewing will provide you the necessary background to better gauge the value of given file arrangement strategies.

It is also very important to note that NTFS makes an effort to write new iterations of a given file near its previous allocation. That is not directly covered in the above documents, but it is key when discussing file placement for the express purpose of reducing future fragmentation.


Placing files into certain logical regions of the volume by modification date or usage is done by several vendors, including Diskeeper (based on frequency of use, not on a file attribute). Claims by some vendors that this minimizes future defragmentation time and effort, as resources can be focused elsewhere, are probably quite valid. However, this brings up another point: defrag algorithms need not rely on placing files logically on a volume in order to ignore unchanged data and concentrate on new fragmentation. That one vendor accomplishes this by moving files around does not mean another cannot do so without having to move files "out of the way". If the speed of non-manual defragmentation is deemed important, does that suggest that whatever form of automation is offered must complete quickly to "get out of the way" because it interferes with system use? Ah, but I digress...

Pertaining to the positioning of free space into some geographic location: perhaps the craziest thing I've read about defragmentation strategy is that every file on a disk is moved around to defragment one file with one extent (fragment) located at the front of the disk, tightly packed in with other files, and another extent somewhere else in a pool of free space. A really, really, REALLY bad algorithm might do that, but I've yet to see one that ridiculous.

Now, with the understanding that new iterations of existing files are likely to be written near the original version of the same file: if all the files that change frequently are grouped together, that region of the disk would incur dramatic and constant file and free space fragmentation. I could then argue that because all the files that regularly change are intermingled, all the "small" free spaces left behind by other changed files would be deemed "nearby" and therefore be even more likely to be considered best-fit free space candidates for a newly modified file (i.e. a file the defrag program deems frequently used). That is hypothetical, in the same way that placing a large chunk of free space near this region to reduce re-fragmentation is. The point is that I can make a reasonable argument why it might not work. I may well be wrong, but we do know that a defragmenter cannot control where the OS decides to write files. The proof is in the pudding, so ask the vendor for a test case, independent verification/analysis, etc.
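To make the "nearby best-fit" hypothetical concrete, here is a toy Python chooser. This is purely my illustration of the argument above, not NTFS's actual allocation algorithm (which, again, no defragmenter controls):

```python
# Toy "nearby best-fit" free-space chooser, illustrating the hypothetical above.
# This is NOT NTFS's actual allocation logic.
def choose_free_extent(free_extents, needed, previous_lcn):
    """free_extents: list of (start_cluster, length). Pick the closest extent that fits."""
    candidates = [(start, length) for start, length in free_extents if length >= needed]
    if not candidates:
        return None
    return min(candidates, key=lambda e: abs(e[0] - previous_lcn))

free = [(100, 8), (5000, 64), (10500, 16)]
# A modified 10-cluster file previously near cluster 10000: the small hole at 10500
# wins over the big pool at 5000, which is how a tightly packed "hot" region could
# keep re-fragmenting its own free space.
print(choose_free_extent(free, 10, 10000))
```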

Remember that files are written to disk via a write cache, using a lazy-write method. That cache can be flushed to the disk either by filling up or by a forced command from the application writing the file. The lazy writer will routinely, once per second, queue an amount of data to be written to disk, throttling the write operation when it determines it may negatively affect performance. The write-back cache with lazy write allows for relatively consolidated and unobtrusive file writes, but can consequently still create fragmentation. It is, and always has been, a trade-off.
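Here is a simplified Python sketch of that write-back behavior. The size threshold and explicit flush below stand in for the real lazy writer's once-per-second timing and throttling, which I have simplified away:

```python
# Toy write-back cache with lazy flush: writes accumulate, then hit "disk" in batches.
# The size threshold and explicit flush are simplified stand-ins for the real lazy
# writer's once-per-second timing and throttling.
class LazyWriteCache:
    def __init__(self, flush_threshold=4):
        self.buffer = []
        self.flush_threshold = flush_threshold
        self.disk = []                      # each flush lands as one consolidated batch

    def write(self, data):
        self.buffer.append(data)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()                    # cache filled up

    def flush(self):
        if self.buffer:
            self.disk.append(list(self.buffer))   # one batched, consolidated write
            self.buffer.clear()

cache = LazyWriteCache()
for block in ["a", "b", "c", "d", "e"]:
    cache.write(block)
cache.flush()                               # forced flush from the application
print(len(cache.disk))                      # two batches instead of five separate writes
```

The batching is what makes the writes relatively consolidated; the trade-off is that where each batch lands on disk is still out of the application's hands, which is how fragmentation can still creep in.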

Other blog entries on I-FAAST have described what I-FAAST is and how it works, so I won't duplicate that information here, but I will clear up a few other confusions I've heard about the technology. I-FAAST maintains consistency with the XP boot optimization zone (a technology Diskeeper co-developed with Microsoft). It also optimizes a large chunk of free space, adjacent to where the frequently accessed files reside, near the front of the volume. That free space chunk is specifically located so that new file writes can be accelerated. Note I use the word "can", as no defrag vendor can control or wholly predict the NTFS algorithms for new file writes. However, if the file being written is a modification of an existing file deemed frequently used (by I-FAAST), then there is an increased likelihood that it will be written in that free space segment and not towards the back of the volume. And not to beat a dead horse, but claiming that fragmentation of new file writes will be reduced, while possible to say (after all, it might happen), is impossible to guarantee.

Arranging files alphabetically is another strategy. Read for a good overview of how NTFS accesses a file. If you want a good idea of what files Windows accesses in the boot up process, and in what relative order, open the layout.ini file on any XP system with any text editor. That should provide better insight.

The "last modified" file attribute is another best-guess approach. Given that the Indianapolis Colts of the NFL are undefeated eight weeks into the season, I could make a reasonable assertion that they will win the Super Bowl. You could certainly raise valid arguments against that assumption, and be very right. It's a guess based on some data, but it is imperfect. A single data point, i.e. the fact that a file changed or was created or accessed recently, does not provide any reasonable indicator that it will change or be accessed again (e.g. many of the files associated with a Windows service pack). Perhaps the only benefit here is to move files that have not recently changed or been accessed elsewhere. But again, to what end? So that a future defrag can run quicker?

What about file strategy patents, you may ask? Simply because a product employs patented technology does not guarantee it is valuable; it just means it does something unique. I could invent octagonal tires, but I don't think anyone would want to drive to work on them :). Keep in mind that patents are published and publicly available to read, and you should investigate them. You can find out when the patent was written (e.g. the '80s), and learn whether it was patented on the Windows NT platform or designed for the current version of NTFS (NTFS has changed dramatically since NT4). You may also want to research whether what is defined in the patent is still what is done in the product today. Is it still relevant? If the patent used the rote attribute of last-accessed time, is that still the current design? All are good points to investigate.

A cool-sounding theory captures buyer interest, but it still has to prove itself in practice, or it is relegated to an undelivered promise or some idea that should have stayed on a napkin. If a vendor makes a claim, ask them to provide tangible proof! It's your money, so I think that's a fair request.

And now for the differences:

Now let's give the benefit of the doubt and assume (OK, I am making an assumption after all) that strategies that minimize future defragmentation run times work, and that future fragmentation is somehow mitigated. Now what? Well, I-FAAST increases file access speed. That is very different from the expressed purpose(s) of the other technologies, so you can't really compare them anyway. And remember that I-FAAST and Diskeeper's standard defragmentation run in real time, so they address fragmentation near-immediately (right after/soon after a file leaves memory). The standard real-time defragmentation is aware of I-FAAST file sequencing and will not undo its efforts.

You may hear that defrag is an I/O-intensive process. While it is true that I/O activity must occur (in order to prevent excess I/O to file fragments in the future), that operation need not be intrusive. That is what InvisiTasking solves. While it addresses all major computer resources, it also removes the interference of defrag overhead with respect to disk I/O. Yes, I-FAAST will move some files, but it does not shuffle them around regularly; it is an intelligent technology. Sequenced files are moved only if their usage frequency changes in relation to other data on the volume.

I-FAAST is one of the technologies that allows Diskeeper to call itself more than a defragmenter and raises the bar to a file system performance application. As I regularly mention, I-FAAST delivers what is promised; admittedly it is sometimes only a few percent faster file access, but it is genuine. There are no best-guess efforts with this technology: it either works or it doesn't, and it tells you exactly what it will provide.

I'll end this blog with the comment, for the third time [I think I've made my point :)], that you should always go to the manufacturer/developer to learn more about how a technology works. The manufacturers are there to help you and are the best resource to answer your questions. Hopefully I've provided some data that will help you make informed decisions about Diskeeper, and things to look for when evaluating other technologies. I believe, as I stated in a previous blog, that there are many good options on the market these days. Almost any defragmenter is going to improve your computer's performance, and choosing a third-party solution is likely to offer additional benefits/performance and reduced overhead (especially in a business network). It's up to you to determine which strategy is more valuable. It's great, in our mostly free-trade world economy, to have a choice. And while I'm partial to Diskeeper, ultimately the decision rests with you, the customer. And, as I firmly believe, the customer is always right. You made, or will make, a decision for your own very valid reasons, whatever they may be.



Performance Counters and Diskeeper

by Michael 1. November 2006 15:03
Windows operating systems include performance monitoring. The information collected by specific counters can be used by applications to make informed decisions. Diskeeper has been using certain counters for several versions, and leverages this information significantly in v2007. There are separate counters for the major components: processor, disk, memory, and network. Windows employs a filter driver to collect data on and report about these components. If you've ever run perfmon, you have seen the data generated by these drivers; v2007 uses that same data. It will programmatically
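As a rough sketch of the general idea, an application can sample a counter and gate its background work on the reading. The counter name, threshold, and sample values below are my own illustrative assumptions, not Diskeeper's actual decision logic:

```python
# Sketch: throttle background work based on sampled performance-counter readings.
# The 70% threshold and the mocked samples are assumptions for illustration only,
# not Diskeeper's actual logic.
def should_run_background_work(disk_busy_percent, threshold=70.0):
    """Run the low-priority task only while the disk counter shows headroom."""
    return disk_busy_percent < threshold

samples = [12.0, 55.0, 88.0, 40.0]     # mocked "% Disk Time" counter readings
decisions = [should_run_background_work(s) for s in samples]
print(decisions)                        # work pauses during the 88% spike
```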



What Does Diskeeper Mean By "RealTime"?

by Michael 27. October 2006 00:22
With Diskeeper 2007's introduction of real-time file system performance, some understandable questions arise, or are reborn. Exactly how "Johnny-on-the-spot" is Diskeeper about file fragmentation and free space consolidation? Or, for that matter, about I-FAAST file sequencing or directory consolidation? One concern might be defragmenting temporary files that won't hang around very long and are likely to be maintained in cache for their entire existence. How about excess power consumption, and the resultant heat, from being busy all the time? What about wear and tear on the drives? Won't a regularly active process cause the drive to work more? These are great questions!



