Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Setting the Record Straight - Windows 7 Fragmentation, SSDs, and You

by Howard Butler 21. January 2012 14:50

In today’s well-connected world of electronics and instant communications, I received a text from a friend asking if I had seen the recent PC World magazine (February 2012). He said it had some tidbits on one of my favorite subjects: system performance, defragmentation, and SSDs. I located a copy here at the office and found the article. As I read the first line I realized the debate on the virtues of defragmentation, especially on SSDs, will go on indefinitely, because no one really backs up the discussion with hard facts and numbers. Most articles rehash ideas and opinions long since debunked. They continue to surface because very few people truly understand the intricacies of the Windows NTFS file system and of the underlying storage media, whether rotating magnetic hard disks or electronic solid state disks.

So let’s set the record straight: fragmentation is exponentially more of a problem with today’s data explosion. Defragmenting once a week still leaves the user exposed to slowdowns as fragmentation builds up between passes, and it does nothing about the problem at the moment files are first written. And yes, never run a traditional defrag on SSDs.

NTFS file and free space fragmentation happens far more frequently than you might guess. It can begin as soon as you install the operating system. It can happen when you install applications or system updates, access the internet, download and save photos, create e-mail and office documents, and so on. It is a normal occurrence and behavior of the computer system, but it has a negative effect on overall application and system performance. As fragmentation occurs, the computer system and underlying storage perform more work than necessary. Each I/O request takes a measurable amount of time; even in SSD environments there is no such thing as an “instant” I/O request. Any time an application requests to read or write data and that request is split into additional I/O requests, more work has to be done, and that extra work causes a delay at that very moment. Whoever decided that defragmenting weekly or monthly was good enough simply didn’t understand fragmentation.

Disk drives have gotten faster over the years, but so have CPUs. In fact, the gap in speed between hard disks and CPUs has actually widened. This means applications can get plenty of CPU cycles, but they are still starving for data from storage. What’s more, the amount of data being stored has increased dramatically. Just think of all those digital photos taken and shared over the holidays. Each photo used to be roughly 1MB in size; now they exceed 15MB, and some go well beyond that. Video editing, rendering, and storage of digital movies have also become popular, and as a result applications are manipulating hundreds of gigabytes of data. With a typical disk cluster size of 4KB, a 15MB file could potentially be fragmented into nearly 4,000 extents, which means nearly 4,000 separate disk I/O requests to read or write that one file. No matter what type of storage, it will simply take longer to complete the operation.
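
As a back-of-the-envelope illustration of that math, here is a minimal sketch in Python. The 15MB file size and 4KB cluster size are simply the figures quoted above, and the every-cluster-in-its-own-fragment case is the worst case, not a typical one:

    # Worst-case extent count, assuming each 4KB cluster of a 15MB file
    # lands in its own fragment (the pathological case described above).
    file_size_mb = 15        # example file size from the paragraph above
    cluster_size_kb = 4      # typical NTFS cluster (allocation unit) size

    clusters = (file_size_mb * 1024) // cluster_size_kb
    print(f"A {file_size_mb}MB file occupies {clusters} clusters")
    print(f"Worst case: about {clusters} extents, i.e. about {clusters} separate I/O requests")
    # Prints 3840 clusters -- roughly the "nearly 4,000 extents" cited above.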

Suppose I choose to do some editing of my family videos on a Tuesday evening. Even the built-in defragmentation tool in Windows 7 doesn’t do me much good, because it isn’t scheduled to run until Wednesday morning at 1:00am. That also means quite a bit of fragmentation has built up since it last ran the previous week. Maybe I’ll run it manually, but that can take quite a while, and I’ve wasted time I would rather have spent on my project. Unfortunately, the Windows built-in defragmentation utility doesn’t prevent fragmentation, so even after running it manually I will still wind up with fragmentation and slow access to my newly created files.

I’ve often wondered why Wednesday at 1:00am was chosen as the time to schedule defragmentation. Why isn’t it scheduled to run all the time? It is because there could be system resource conflicts that either interfere with getting the task done or make it difficult for the defragmentation process to throttle back under a variety of conditions. Regardless, this wait-a-week approach to cleaning up fragmentation doesn’t really help me when I need it most.

As pointed out in the article, the built-in defragmenter does not have the technology to properly deal with fragmentation on SSDs. The physical placement of data on an SSD doesn’t matter the way it does on magnetic HDDs; with an SSD there is no rotational latency or seek time to contend with. Many experts therefore assume fragmentation is no longer a problem, but application data access speed isn’t defined only in those terms. Each and every I/O request takes a measurable amount of time. SSDs are fast, but they are not instantaneous. The Windows NTFS file system does not behave any differently because the underlying storage is an SSD rather than an HDD, and therefore fragmentation still occurs. Preventing and eradicating fragmentation cuts down the number of unnecessary I/O requests, which speeds up application data response time and improves the overall lifespan of the SSD. In essence, it results in more sequential I/O operations, which generally outperform random writes.

In addition, SSDs require that old data be erased before new data is written over it, rather than simply overwriting the old information as HDDs do. This doubles the wear and tear and can cause major issues with the speed and lifespan of the SSD. Most SSD manufacturers have sophisticated wear-leveling technologies to help with this. The principal issue is write speed degradation due to free space fragmentation: small free spaces scattered across the SSD cause the NTFS file system to write a file in fragmented pieces into those small available free spaces. This produces more random I/O traffic, which is slower than sequential operations.

I think I have clearly made my point: the built-in defragmenter in Windows 7 is not a solution for the consumer/home user or for the enterprise business user. Data access speeds are far more critical in the business world, where time is money. In the enterprise environment there are generally many more files, used by a larger number of users, accessing data across shared storage such as a SAN. Even virtual platforms benefit from the same points covered here. This is why robust solutions such as Diskeeper exist. More information about Diskeeper and the superior technology it offers can be found at http://www.diskeeper.com.

Diskeeper 2011 - Software So Evolutionary Where Can They Go From Here?

by Colleen Toumayan 26. April 2011 04:34
Diskeeper 2011 was covered on Wugnet.  Howard Sobel stated, “They introduced technology that slowed down and prevented fragmentation in Diskeeper 2010 so I thought it was impossible to improve on the concept of defrag much more. Not so! By increasing the efficiency of their algorithms, they have decreased the wear and tear on your hard disk and your computing performance while it works in the background. Decreasing the overall disk activity also decreases your electrical consumption. This may not be a huge savings on "your" electric bill but consider how much savings this amounts to in datacenters where thousands of hard disks are used. So being GREEN doesn't mean paying a penalty. In fact, the opposite is true for Diskeeper. I would go as far as declaring this the software utility "Product of the Year" if we didn't have 8 more months to go in 2011.” The full article is located here: http://www.wugnet.com/tips/this_week.asp  

 

Tags:

Defrag | Diskeeper | IntelliWrite

Best Practices for CSV defrag in Hyper-V (Windows Server 2008R2)

by Michael 28. March 2011 04:33

One of the most significant features in Windows Server 2008 R2 (for Hyper-V) is Cluster Shared Volumes (CSV) for virtual disks (VHDs). This allows NTFS to behave much like a clustered file system, addressing many limitations found in Hyper-V storage with the original release (Windows Server 2008).

There are three online modes/states for CSV:
  • Direct Access: In this state, the CSV is available to all nodes in the cluster (and therefore to all the VMs they host) for direct, high-performance storage access. This is the state you want in production.
  • Redirected Access: In this state, the CSV is still available to all nodes in the cluster, but all I/O is redirected through a single "coordinator" node. Redirected access is used in planned situations where you need to perform certain disk actions that can't have multiple nodes accessing and locking files concurrently, such as a VSS backup or defrag. Channeling all I/O through a coordinator slows I/O and is more likely to cause bottlenecks for production demands.
  • Maintenance mode: enabling this mode is a safe way to reach a state where processes that require exclusive access to the volume, such as a maintenance routine like chkdsk, can be run.

Best Practice: 

  • On the Hyper-V system volume, pass-through volumes, and any other non-CSV volumes, leave Automatic Defragmentation on at all times.
  • Given the performance benefits of Direct Access for Cluster Shared Volumes, leave IntelliWrite on and run only an occasional scheduled defrag. This is because of the requirement to use the coordinator node and place the volume into a Redirected Access state. Automatically changing from direct to redirected access and back is handled by the file system control (kernel code we co-wrote with Microsoft in the mid-'90s as a Windows source code licensee), the mechanism all defragmenters use today - you do not need to do anything special.
  • Correction (June 30, 2011): In the process of testing for the V-locity 3.0 release, we discovered that defragmentation does NOT cause a state change to Redirected Access. This is true for any defragmenter. So, defragment CSVs as you would any other volume. [Apologies for making this statement without validation - we should know better :-)]

Diskeeper and V-locity are fully compatible with CSVs, as confirmed by Windows IT Pro here. The file system control built into Windows is used to defragment, but it is not used for prevention in the design of IntelliWrite, which is a CSV-compatible file system filter driver (it's very important for drivers to be CSV-compatible) residing at a low altitude, except on XP (where its altitude is much higher). You can view all file system minifilters and their allocated altitudes here.

IntelliWrite is “DKRtWrt” (its code names during development were WriteRight and later RightWrite, hence "RtWrt"). To see or load/unload filter drivers, use the Filter Manager Control Program (fltmc):
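
For example, from an elevated command prompt (a brief sketch of the standard fltmc subcommands; the exact list of filters and altitudes shown will vary by system, and DKRtWrt is the IntelliWrite driver name noted above):

    fltmc filters            (list all loaded minifilter drivers and their altitudes)
    fltmc instances          (show which volumes each filter is attached to)
    fltmc load DKRtWrt       (load the IntelliWrite filter driver)
    fltmc unload DKRtWrt     (unload it)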

Tags:

Defrag | Hyper-V | IntelliWrite | V-Locity

All Around the World (part deux)

by Colleen Toumayan 18. February 2011 07:07

 

“I am the person who proposed Diskeeper a few years ago in our company because we had some people who were complaining about slow machines. Most of the times the problem was related to the hard disks that were non-stop reading/writing. I tried a few times the internal defragmenter; it helped in reducing the slowness of the machine but it was always for a short time. So I looked for a better product and found Diskeeper.  

I made contact with Diskeeper UK and we had the pleasure to deal with an employee who arranged for an evaluation version of Diskeeper and Diskeeper Administrator to test in our company. 

We have a high number of computers, 16000, which are now running smoothly. The IntelliWrite does a good job preventing fragmentation. The number of calls for slow machines have dropped but we never had real measurements about the performance of Diskeeper. I am very curious about Diskeeper 2011 and what it can bring more so than this version. Diskeeper works and it is a good product. The price is also very good." 

Marc Vanderhaegen, SNCB (Société nationale des Chemins de fer belges)

Desktop Management

Brussels, Belgium

Tags:

IntelliWrite | Success Stories

Defragmenting IT Healthcare

by Michael 20. December 2010 05:18

Joe Marion is founder and Principal of Healthcare Integration Strategies, specializing in the integration of imaging technologies with the overall healthcare IT landscape. His blog (at Healthcare Informatics) covers challenges and opportunities specifically relevant to optimizing Healthcare IT initiatives.

Medical images are a significant percentage of the world's storage requirements, and have been predicted to encompass an even greater percentage of future storage demand. In Joe's recent blog post he posed the question "Is Defragmentation a Boon to Healthcare IT Performance?"

In his post he includes personal observations and insight into the performance implications fragmentation can have for IT as healthcare departments consolidate and standardize application use:

"With departmental solutions, there very likely was less emphasis on system tools such as defragmentation applications.  Now that PACS technology is becoming more intertwined with the rest of IT, there should be greater emphasis on inclusion of these tools.  In addition, server virtualization can mean that previously independent applications are now part of a virtual server farm."

He also makes the astute observation that centralizing computing and storage magnifies bottlenecks, making a solution such as defragmentation increasingly vital:

"The addition of disk-intensive applications such as speech recognition and imaging could potentially impact the overall performance of these applications.  As data storage requirements within healthcare grow, the problem will potentially get worse.  Think of the consequence of managing multiple 3000-slice CT studies and performing multiple 3D analyses.  As more advanced visualization applications go the client-server route, the performance of a central server doing the 3D processing could be significantly impacted."

You can read Joe's blog here.

  

Tags:

Defrag | Diskeeper | IntelliWrite | V-Locity
