Condusiv Technologies Blog


Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Windows 8 Released

by Alex Klein 29. October 2012 05:35

Microsoft officially released the next version of Windows last week – Windows 8. While this new release contains various technological advancements, issues with I/O performance and their effect on Windows systems still remain.

Every I/O operation that occurs takes a measurable amount of time. There’s no such thing as an instant I/O request, and simply put, the more I/Os necessary, the longer it takes Windows to complete a particular task.

To understand why this is still an issue on Windows 8 and even Windows Server 2012, let’s explore a bit deeper. When data is written within the Windows file system, it is naturally written in a non-optimized way. When an application later requests that data, the initial I/O request generally gets broken down into many additional requests (called split I/Os), which increases the time needed to retrieve the information. Because this happens naturally every day, more and more I/O requests are required, and the performance of your servers and workstations is increasingly impacted.
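If you want to watch this happening on your own system, Windows exposes it directly as the “Split IO/Sec” performance counter under the LogicalDisk and PhysicalDisk objects. Here is a minimal sketch that simply wraps the built-in typeperf tool; the counter instance and sample counts are only illustrative, so adjust them for the volume you care about:

```python
# Minimal sketch: sample the "Split IO/Sec" PerfMon counter for ten seconds
# using the built-in typeperf utility (Windows only).
import subprocess

COUNTER = r"\LogicalDisk(_Total)\Split IO/Sec"   # or a specific volume, e.g. (C:)

# -si 1 = one-second sample interval, -sc 10 = collect ten samples
result = subprocess.run(
    ["typeperf", COUNTER, "-si", "1", "-sc", "10"],
    capture_output=True, text=True, check=False)

print(result.stdout)   # CSV output: one timestamped counter value per sample
```

On a busy, fragmented volume you will see this counter climbing while ordinary file copies and saves are going on.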

The Windows built-in optimization tool, which is set to run on a weekly basis, attempts to handle the mounting I/O traffic, but that’s after you’ve already experienced the performance impact in the first place. For example, say I’m working on a project on a Tuesday afternoon – how is running the built-in optimization utility on Wednesday going to address this concern?

Proactive Windows I/O acceleration is the key to successful operations and improved response time for users, and this is why Condusiv Technologies created our Diskeeper product. Diskeeper’s InvisiTasking and IntelliWrite technologies help prevent the vast majority of extra I/O requests from occurring, and do so without taking additional resources from the system or other applications. This ensures that the fewest possible I/Os go to storage and allows your applications to run that much faster.

 
In fact, recent independent testing by openBench Labs shows up to 98% fewer I/O requests, server throughput increased by 130%, and data throughput up to 5X faster on workstations. You can read more of this report here.

Evaluating IntelliWrite In Your Environment

by Damian 1. March 2012 10:18

IntelliWrite technology has been around for about two years now, optimizing literally millions of systems worldwide. It seamlessly integrates with Windows, delivering optimized writes upon initial I/O (no need for additional, after-the-fact file movement). What does that translate to? Actual fragmentation prevention.

Interestingly, we do occasionally get asked how it holds up against modern storage technologies:

“Don’t the latest SANs optimize themselves?”

“Do I really need this on my VMs? They aren’t physical hard drives, you realize…”

Or even…

“I don’t need to defragment my SAN-hosted VMs.”

Now, there are some factors which must be considered when you’re looking at optimizing I/O in your infrastructure:

  • I/O from Windows is just abstracted reads and writes issued from a higher layer, even when Windows runs directly on a bare-metal disk.
  • Due to the way current Windows file systems are structured, I/O can be greatly constrained by file fragmentation—no matter what storage lies underneath it.
  • Fragmentation in Windows means more I/O requests from Windows. Even if files are stored perfectly contiguously at the SAN level, Windows still has to issue one request per fragment it sees at its own level (see the sketch after this list).
  • File fragmentation is not the same as block-level (read: SAN-level) fragmentation. Many SAN utilities resolve issues of block-level fragmentation admirably; these do not address file fragmentation.
  • Finally, and as noted above, IntelliWrite prevents fragmentation in real time by improving Windows’ “Best Fit” file write logic. This means file fragmentation is solved without additional writes that could interfere with SAN de-duplication or various copy-on-write data redundancy measures.

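To make the “fragmentation that Windows sees” point concrete, here is a rough sketch (not Condusiv code) that asks NTFS how many extents a file occupies, using the documented FSCTL_GET_RETRIEVAL_POINTERS control code. Whatever the SAN does with the blocks underneath, this is the number of pieces Windows itself has to request:

```python
# Sketch: count the extents (fragments) NTFS reports for a file.
# Windows only; requires read access to the file.
import ctypes
import struct
import sys
from ctypes import wintypes

GENERIC_READ = 0x80000000
FILE_SHARE_READ_WRITE = 0x00000003
OPEN_EXISTING = 3
FSCTL_GET_RETRIEVAL_POINTERS = 0x00090073
ERROR_MORE_DATA = 234
ERROR_HANDLE_EOF = 38

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value

def count_extents(path):
    handle = kernel32.CreateFileW(path, GENERIC_READ, FILE_SHARE_READ_WRITE,
                                  None, OPEN_EXISTING, 0, None)
    if handle in (None, INVALID_HANDLE_VALUE):
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        total, next_vcn = 0, 0
        while True:
            start = ctypes.c_longlong(next_vcn)    # STARTING_VCN_INPUT_BUFFER
            out = ctypes.create_string_buffer(64 * 1024)
            returned = wintypes.DWORD(0)
            ok = kernel32.DeviceIoControl(
                handle, FSCTL_GET_RETRIEVAL_POINTERS,
                ctypes.byref(start), ctypes.sizeof(start),
                out, ctypes.sizeof(out), ctypes.byref(returned), None)
            err = ctypes.get_last_error()
            if not ok and err == ERROR_HANDLE_EOF:
                break                  # tiny file resident in the MFT: no extents
            if not ok and err != ERROR_MORE_DATA:
                raise ctypes.WinError(err)
            # RETRIEVAL_POINTERS_BUFFER: DWORD ExtentCount, padding, StartingVcn,
            # then ExtentCount pairs of (NextVcn, Lcn), 16 bytes per pair.
            extent_count = struct.unpack_from("<L", out, 0)[0]
            total += extent_count
            if ok:
                break                  # buffer held everything that was left
            # more extents remain: resume from the last NextVcn we received
            next_vcn = struct.unpack_from("<q", out, 16 + (extent_count - 1) * 16)[0]
        return total
    finally:
        kernel32.CloseHandle(handle)

if __name__ == "__main__":
    target = sys.argv[1]
    print(f"{target}: {count_extents(target)} extent(s)")
```

A contiguous file reports one extent; a badly fragmented one reports hundreds or thousands, and each of those is a separate request Windows must issue regardless of the storage underneath.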
We performed testing with a customer recently in order to validate the benefits of IntelliWrite over cutting-edge storage. This customer’s SAN array is less than a year old, and while we don’t want to go into specifics in order to avoid seeming partial, it’s from one of today’s leading SAN vendors.

Testing involved an apples-to-apples comparison on a production VM hosted on the SAN. A non-random workload was generated 3 times, recording Windows-level file fragmentation, several PerfMon metrics, and the time to complete the workload. The test was then repeated 3 times with IntelliWrite enabled on the same VM’s test volume.
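For readers who want to reproduce something similar in their own environment, the sketch below shows the general shape of such a run. It is not the actual harness used in the customer test above; the workload script, counter instances, and file names are placeholders.

```python
# Rough harness sketch: time a workload while typeperf samples disk counters
# in the background (Windows only; workload.bat is a placeholder).
import subprocess
import time

COUNTERS = [r"\LogicalDisk(_Total)\Split IO/Sec",
            r"\PhysicalDisk(_Total)\Avg. Disk Queue Length"]
WORKLOAD = ["cmd", "/c", "workload.bat"]     # substitute your own workload

# start background counter collection, one sample per second, into a CSV
collector = subprocess.Popen(
    ["typeperf", *COUNTERS, "-si", "1", "-o", "counters.csv", "-y"])

start = time.time()
subprocess.run(WORKLOAD, check=True)         # run the workload to completion
elapsed = time.time() - start

collector.terminate()                        # stop sampling (abrupt is fine here)
print(f"workload finished in {elapsed:.1f} s; counter samples in counters.csv")
```

Run it once with IntelliWrite disabled and once with it enabled on the same volume, then compare the elapsed times and the counter averages.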

Here were the results:


The breakdown:

Fragmentation reduction with IntelliWrite: 89%

Split IO/sec reduction with IntelliWrite: 81%

Avg. Disk Queue Length reduction with IntelliWrite: 71%

…and with the improvement to these disk performance metrics, the overall time to complete the same actual file operations was reduced by: 48%

The conclusion? If you were asking the same sorts of questions posed earlier, evaluate IntelliWrite for yourself. Remember, the results above were measured on contemporary storage hardware; the older your storage equipment, the greater the improvement in application performance you can expect from investing in optimization. Can you afford not to see maximum performance from your infrastructure and application investments?

The evaluation is quick and fully transparent. Call today to speak with a representative about evaluating Diskeeper or V-locity in your environment.

Tags: Diskeeper | IntelliWrite | SAN | V-locity

Webinar: Physical vs. Virtual Bottlenecks: What You Really Need To Know

by Damian 20. February 2012 07:05

Diskeeper Corporation recently delivered a live webinar hosted by Ziff Davis Enterprise. The principal topics covered were:

  • Measuring performance loss in Windows over SAN
  • Identifying client-side performance bottlenecks in private clouds
  • Expanding performance awareness to the client level
  • The greatest and often-overlooked performance issue in a virtual ecosystem

The webinar was co-hosted by:

  • Stephen Deming, Microsoft Partner Solution Advisor
  • Damian Giannunzio, Diskeeper Corporation Field Sales & Application Engineer

Don't miss out on this critical data! If you missed the webinar, you can view the recorded version online here.

Here are some additional, relevant resources:

White Paper: Diskeeper 2011: Improving the Performance of SAN Storage

White Paper: Increasing Efficiency in the IT Environment

White Paper: Inside Diskeeper 2011 with IntelliWrite

White Paper: Running Diskeeper and V-locity on SAN Devices 

Setting the Record Straight - Windows 7 Fragmentation, SSDs, and You

by Howard Butler 21. January 2012 14:50

In today’s well-connected world of electronics and instant communications, I received a text from a friend asking if I had seen the recent PC World magazine (February 2012). He said it had some tidbits of information concerning a few of my favorite subjects: system performance, defragmentation, and SSDs. I located a copy here at the office and found the article. As I read the first line, I realized the debate on the virtues of defragmentation, especially on SSDs, will go on indefinitely, because no one really talks about the issue with supporting hard facts and numbers. Most articles rehash ideas and opinions long since debunked. They continue to surface because very few truly understand the intricacies of the Windows NTFS file system and of the storage media, whether rotating magnetic hard disks or electronic solid state disks.

So let’s set the record straight: fragmentation is exponentially more of a problem with today’s data explosion. Defragmenting once a week still leaves the user experiencing slowdowns from the degradation that builds up in between, and it doesn’t address the issue when files are initially written. And yes, never run a traditional defrag on SSDs.

NTFS file and free space fragmentation happens far more frequently than you might guess. It has the potential to begin as soon as you install the operating system. It can happen when you install applications or system updates, access the internet, download and save photos, create e-mail and office documents, etc. It is a normal occurrence and behavior of the computer system, but it does have a negative effect on overall application and system performance. As fragmentation happens, the computer system and underlying storage perform more work than necessary. Each I/O request takes a measurable amount of time. Even in SSD environments there is no such thing as an “instant” I/O request. Any time an application requests to read or write data and that request is split into additional I/O requests, more work must be done, and that extra work causes a delay at that very moment in time. Whoever thought that defragmenting once a month or once a week was good enough simply didn’t understand fragmentation.

Disk drives have gotten faster over the years, but so have CPUs. In fact, the gap in speed between hard disks and CPUs has actually widened. This means that applications can get plenty of CPU cycles, but they are still starving to get data from the storage. What’s more, the amount of data being stored has increased dramatically. Just think of all those digital photos taken and shared over the holidays. Each photo used to be approximately 1MB in size; now they exceed 15MB, and some go well beyond that. Video editing, rendering, and storage of digital movies have also become quite popular, and as a result applications are manipulating hundreds of gigabytes of data. With typical disk cluster sizes of 4K, a 15MB file could potentially be fragmented into nearly 4,000 extents, which could require nearly 4,000 separate disk I/O requests to read or write the file. No matter what type of storage, it will simply take longer to complete the operation.
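The arithmetic behind that figure is easy to check; in the worst case a fully fragmented file needs roughly one I/O per extent:

```python
# Back-of-the-envelope check of the 15MB / 4K-cluster example above.
file_size_kb = 15 * 1024    # 15MB file
cluster_kb = 4              # typical NTFS cluster size

worst_case_extents = file_size_kb // cluster_kb
print(f"worst case: {worst_case_extents} extents "
      f"-> up to {worst_case_extents} separate I/O requests")
# prints 3840 -- versus a handful of large sequential transfers when contiguous
```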

Suppose I choose to do some editing of my family videos on a Tuesday evening. Even the built-in defragmentation tool in Windows 7 doesn’t do me much good, because it isn’t scheduled to run until Wednesday morning at 1:00am. That also means quite a bit of fragmentation has built up since the previous week when it last ran. Maybe I’ll run it manually, but that can take quite a while, and I’ve wasted time I would rather have spent on my project. Unfortunately, the Windows built-in defragmentation utility doesn’t prevent fragmentation, so even after running it manually I will still wind up with fragmentation and slow access to my newly created files.

I’ve often wondered why Wednesday at 1:00am was chosen as the time to schedule defragmentation. Why isn’t it scheduled all the time? Because there could be system resource conflicts that interfere with getting the task done, or the defragmentation process may have difficulty throttling back under a variety of conditions. Regardless, waiting a week to clean up fragmentation doesn’t really help me when I need it most.
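If you’re curious what your own machine is doing, the built-in defragmenter runs as a scheduled task. On Windows 7 and 8 it normally lives at \Microsoft\Windows\Defrag\ScheduledDefrag, though the exact path can vary by edition, so treat that name as an assumption. A quick sketch to see its last and next run times:

```python
# Sketch: query the built-in defrag task's schedule with schtasks (Windows only).
# The task path is the usual one on Windows 7/8 and may differ on other editions.
import subprocess

TASK = r"\Microsoft\Windows\Defrag\ScheduledDefrag"

result = subprocess.run(
    ["schtasks", "/Query", "/TN", TASK, "/V", "/FO", "LIST"],
    capture_output=True, text=True, check=False)

print(result.stdout or result.stderr)   # look for "Last Run Time" / "Next Run Time"
```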

As pointed out in the article, the built-in defragmenter does not have the technological advancements to properly deal with fragmentation on SSDs. The physical placement of data on an SSD doesn’t matter the way it does on magnetic HDDs; with an SSD there is no rotational latency or seek time to contend with. Many experts therefore assume that fragmentation is no longer a problem, but application data access speed isn’t defined by those terms alone. Each and every I/O request performed takes a measurable amount of time. SSDs are fast, but they are not instantaneous. The Windows NTFS file system does not behave any differently because the underlying storage is an SSD rather than an HDD, and therefore fragmentation still occurs. Preventing and eradicating fragmentation reduces the number of unnecessary I/O requests, which speeds up application data response time and improves the overall lifespan of the SSD. In essence, it makes for more sequential I/O operations, which are generally faster and outperform random writes.

In addition, SSDs require that old data be erased before new data is written in its place, rather than simply writing over the old information as with HDDs. This doubles the wear and tear and can cause major issues with both the speed and the lifespan of the SSD. Most SSD manufacturers have very sophisticated wear-leveling technologies to help with this. The principal issue is write speed degradation due to free space fragmentation. Small free spaces scattered across the SSD cause the NTFS file system to write a file in fragmented pieces to those small available free spaces. This results in more random I/O traffic, which is slower than sequential operations.

I think I have clearly made my point. The built-in defragmenter in Windows 7 is not a solution for either the consumer/home user or the enterprise business user. Data access speeds are far more critical in the business world, where time is money. In the enterprise environment there are generally many more files, used by a larger number of users, accessing data across shared storage such as a SAN. Even virtual platforms benefit from the same points covered here. This is why robust solutions such as Diskeeper exist. More information about Diskeeper and the superior technology it offers can be found at http://www.diskeeper.com.

Diskeeper 2011 - Software So Evolutionary Where Can They Go From Here?

by Colleen Toumayan 26. April 2011 04:34

Diskeeper 2011 was covered on Wugnet. Howard Sobel stated, “They introduced technology that slowed down and prevented fragmentation in Diskeeper 2010 so I thought it was impossible to improve on the concept of defrag much more. Not so! By increasing the efficiency of their algorithms, they have decreased the wear and tear on your hard disk and your computing performance while it works in the background. Decreasing the overall disk activity also decreases your electrical consumption. This may not be a huge savings on "your" electric bill but consider how much savings this amounts to in datacenters where thousands of hard disks are used. So being GREEN doesn't mean paying a penalty. In fact, the opposite is true for Diskeeper. I would go as far as declaring this the software utility "Product of the Year" if we didn't have 8 more months to go in 2011.” The full article is located here: http://www.wugnet.com/tips/this_week.asp

 

Tags: Defrag | Diskeeper | IntelliWrite
