Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

The Secret to Optimizing the Big Virtual Data Explosion

by Alex Klein 29. May 2012 09:21
In today’s day and age, many SMBs and enterprise-level businesses are “taking to the skies” with cloud computing. These companies realize that working in the cloud comes with many benefits, including reduced cost of in-house hardware, ease of implementation and seamless scalability. However, as you will discover below, performance-impacting file fragmentation and the need for defragmentation still exist in these environments; in fact, the problem is amplified. For that reason, it must now be addressed with a two-fold solution that is both proactive and preventative.

Let’s face it – we generate a tremendous amount of data, and this is only the beginning. In fact, findings in a recent IDC study titled “Extracting Value from Chaos” predict that over the next ten years we will create 50 times more information and 75 times more files. Regardless of its final destination, most of this data is generated on Windows-based computers, which are known to fragment files. In other words, files become fragmented before they ever reach the cloud: as they are worked with, they get broken up into pieces and scattered across numerous locations on the hard disk. The result is longer access times for those files and degraded system performance.
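
The fragmentation Windows sees is easy to observe directly. The sketch below, which assumes a Windows machine and uses a placeholder file path, reads a file’s NTFS extent map through the documented FSCTL_GET_RETRIEVAL_POINTERS control code: one extent means the file is contiguous, while dozens or thousands mean it has been scattered as described above.

    # Minimal sketch: count the NTFS extents (fragments) of one file on Windows.
    # The file path at the bottom is only a placeholder.
    import ctypes
    import ctypes.wintypes as wt

    FSCTL_GET_RETRIEVAL_POINTERS = 0x00090073
    ERROR_MORE_DATA = 234
    GENERIC_READ = 0x80000000
    SHARE_READ_WRITE = 0x00000001 | 0x00000002
    OPEN_EXISTING = 3

    class _Extent(ctypes.Structure):
        _fields_ = [("NextVcn", ctypes.c_longlong), ("Lcn", ctypes.c_longlong)]

    class _RetrievalPointers(ctypes.Structure):
        # Mirrors RETRIEVAL_POINTERS_BUFFER, with room for 512 extents per call.
        _fields_ = [("ExtentCount", wt.DWORD),
                    ("StartingVcn", ctypes.c_longlong),
                    ("Extents", _Extent * 512)]

    def count_extents(path):
        k32 = ctypes.WinDLL("kernel32", use_last_error=True)
        handle = k32.CreateFileW(path, GENERIC_READ, SHARE_READ_WRITE,
                                 None, OPEN_EXISTING, 0, None)
        if handle == -1:
            raise OSError(ctypes.get_last_error())
        total = 0
        try:
            start_vcn = ctypes.c_longlong(0)
            while True:
                out = _RetrievalPointers()
                returned = wt.DWORD(0)
                ok = k32.DeviceIoControl(handle, FSCTL_GET_RETRIEVAL_POINTERS,
                                         ctypes.byref(start_vcn),
                                         ctypes.sizeof(start_vcn),
                                         ctypes.byref(out), ctypes.sizeof(out),
                                         ctypes.byref(returned), None)
                err = ctypes.get_last_error()
                total += out.ExtentCount
                if ok:                      # whole extent map retrieved
                    return total
                if err != ERROR_MORE_DATA:  # e.g. a resident or empty file
                    return total
                # Resume where the previous call left off.
                start_vcn = ctypes.c_longlong(
                    out.Extents[out.ExtentCount - 1].NextVcn)
        finally:
            k32.CloseHandle(handle)

    if __name__ == "__main__":
        path = r"C:\Users\Public\example.dat"   # placeholder path
        print(path, "->", count_extents(path), "extent(s)")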

So how does the above scenario affect the big picture? To understand this, let’s take a closer look at your cloud environment. Your data, and in many cases, much of your infrastructure, has “gone virtual”. Users are able to access applications and work with their data basically anywhere in the world. In such an atmosphere, where the amount of RAM and CPU power available is dramatically increased and files are no longer stored locally, how can the need for defragmentation still be an issue?

Well, what do you think happens when all this fragmented data comes together? The answer is an alarming amount of fragmented Big Data now sitting on the hard drives of your cloud solution. This creates bottlenecks that can severely impact your mission-critical applications, because large numbers of unnecessary I/O cycles are needed to process the broken-up information.

At the end of the day, traditional approaches to defragmentation just aren’t going to cut it anymore, and it’s going to take the latest software technology, implemented on both sides of the cloud, to resolve these issues. It starts with software such as Diskeeper 12, installed on every local workstation and server, to prevent fragmentation at its source. Added to this is deploying V-locity software across your virtualized network. This one-two punch addresses I/O performance concerns, optimizes productivity and will push cloud computing further than you ever thought possible. In these exciting times of emerging technologies, cloud computing can send your business soaring or keep it grounded – the choice is up to you.

Tags:

Big Data | Cloud | Defrag | Diskeeper | virtualization | V-Locity

Evaluating IntelliWrite In Your Environment

by Damian 1. March 2012 10:18

IntelliWrite technology has been around for about two years now, optimizing literally millions of systems worldwide. It seamlessly integrates with Windows, delivering optimized writes upon initial I/O (no need for additional, after-the-fact file movement). What does that translate to? Actual fragmentation prevention.

Interestingly, we do occasionally get asked how it bears up against modern storage technologies:

“Don’t the latest SANs optimize themselves?”

“Do I really need this on my VMs? They aren’t physical hard drives, you realize…”

Or even…

“I don’t need to defragment my SAN-hosted VMs.”

Now, there are some factors that must be considered when you’re looking at optimizing I/O in your infrastructure:

  • I/O from Windows consists of reads and writes abstracted from a higher layer, even when issued directly over a bare-metal disk.
  • Because of the way current Windows file systems are structured, I/O can be greatly constrained by file fragmentation, no matter what storage lies underneath.
  • Fragmentation in Windows means more I/O requests from Windows. Even if files are stored perfectly contiguously at the SAN level, Windows still has to send a separate request for every fragment it sees at its own level (a toy sketch of this follows the list below).
  • File fragmentation is not the same as block-level (read: SAN-level) fragmentation. Many SAN utilities resolve block-level fragmentation admirably; they do not address file fragmentation.
  • Finally, and as noted above, IntelliWrite prevents fragmentation in real time by improving Windows’ “Best Fit” file write logic. This means solving file fragmentation with no additional writes that could interfere with SAN de-dup or various copy-on-write data redundancy measures.
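
To make the third bullet concrete, here is a toy sketch (illustrative only, with made-up numbers, not anything from our products): it counts the read requests Windows must issue for a file based solely on the run list NTFS keeps for it, which is all Windows can see, regardless of how contiguously the SAN has laid the blocks out underneath.

    # Toy illustration: the number of read requests Windows issues for a file
    # is driven by the NTFS run list it sees, not by how the SAN places blocks.

    def windows_read_requests(run_list):
        """run_list: list of (starting_cluster, cluster_count) runs recorded by
        NTFS for the file. Windows issues one I/O request per run."""
        return len(run_list)

    # The same 1,000-cluster file seen two ways at the Windows level
    # (hypothetical run lists):
    contiguous_file = [(100_000, 1_000)]                           # one run
    fragmented_file = [(100_000 + i * 37, 4) for i in range(250)]  # 250 runs

    print("Contiguous in NTFS:", windows_read_requests(contiguous_file), "request")
    print("Fragmented in NTFS:", windows_read_requests(fragmented_file), "requests")
    # Even if the SAN stores both files on perfectly contiguous physical blocks,
    # the fragmented run list still forces 250 separate requests (split I/O).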

We performed testing with a customer recently in order to validate the benefits of IntelliWrite over cutting-edge storage. This customer’s SAN array is less than a year old, and while we don’t want to go into specifics in order to avoid seeming partial, it’s from one of today’s leading SAN vendors.

Testing involved an apples-to-apples comparison on a production VM hosted on the SAN. A non-random workload was generated three times, recording Windows-level file fragmentation, several PerfMon metrics, and the time to complete the workload. The test was then repeated three times with IntelliWrite enabled on the same VM’s test volume.
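
For reference, the PerfMon counters involved can also be captured from the command line while a workload runs. Below is a hedged sketch using Windows’ built-in typeperf utility; the volume letter, sampling interval, sample count and output path are placeholders to adjust for your own environment.

    # Sketch: sample the disk counters discussed below while the workload runs.
    # Volume letter, interval, sample count and output path are placeholders.
    import subprocess

    COUNTERS = [
        r"\LogicalDisk(C:)\Split IO/Sec",
        r"\LogicalDisk(C:)\Avg. Disk Queue Length",
    ]

    subprocess.run(
        ["typeperf", *COUNTERS,
         "-si", "5",       # sample every 5 seconds
         "-sc", "120",     # collect 120 samples (10 minutes)
         "-o", r"C:\perflogs\intelliwrite_baseline.csv",
         "-y"],            # overwrite the output file without prompting
        check=True)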

Here were the results:

[Charts: before/after IntelliWrite comparison of Windows-level file fragmentation, Split IO/sec, Avg. Disk Queue Length, and time to complete the workload]

The breakdown:

  • Fragmentation reduction with IntelliWrite: 89%
  • Split IO/sec reduction with IntelliWrite: 81%
  • Avg. Disk Queue Length reduction with IntelliWrite: 71%
  • Time to complete the same file operations, with these disk metrics improved: reduced by 48%

The conclusion? If you have been asking the same sorts of questions posed earlier, evaluate IntelliWrite for yourself. Remember, the graphs above were produced on contemporary storage hardware; the older your storage equipment, the greater the improvement in application performance you can expect from investing in optimization. Can you afford not to get maximum performance out of your infrastructure and application investments?

The evaluation is quick and fully transparent. Call today to speak with a representative about evaluating Diskeeper or V-locity in your environment.

Tags:

Diskeeper | IntelliWrite | SAN | V-Locity

Webinar: Physical vs. Virtual Bottlenecks: What You Really Need To Know

by Damian 20. February 2012 07:05

Diskeeper Corporation recently delivered a live webinar hosted by Ziff Davis Enterprise. The principal topics covered were:

  • Measuring performance loss in Windows over SAN
  • Identifying client-side performance bottlenecks in private clouds
  • Expanding performance awareness to the client level
  • The greatest and often-overlooked performance issue in a virtual ecosystem

The webinar was co-hosted by:

  • Stephen Deming, Microsoft Partner Solution Advisor
  • Damian Giannunzio, Diskeeper Corporation Field Sales & Application Engineer

Don't miss out on this critical data! If you missed the webinar, you can view the recorded version online here.

Here are some additional, relevant resources:

White Paper: Diskeeper 2011: Improving the Performance of SAN Storage

White Paper: Increasing Efficiency in the IT Environment

White Paper: Inside Diskeeper 2011 with IntelliWrite

White Paper: Running Diskeeper and V-locity on SAN Devices 

Setting the Record Straight - Windows 7 Fragmentation, SSDs, and You

by Howard Butler 21. January 2012 14:50

In today’s well-connected world of electronics and instant communications, I received a text from a friend asking if I had seen the recent issue of PC World magazine (February 2012). He said it had a tidbit of information concerning some of my favorite subjects: system performance, defragmentation, and SSDs. I located a copy here at the office and found the article. As I read the first line, I realized that the debate on the virtues of defragmentation, especially on SSDs, will go on indefinitely, because no one really discusses the issue with supporting hard facts and numbers. Most articles rehash ideas and opinions long since debunked. They continue to surface because very few people truly understand the intricacies of the Windows NTFS file system or of the storage media underneath it, whether rotating magnetic hard disks or electronic solid state disks.

So let’s set the record straight: fragmentation is exponentially more of a problem with today’s data explosion. Defragmenting once a week still leaves the user experiencing slowdowns from the degradation that builds up in between, and it doesn’t address the issue at the moment files are initially written. And yes, never run a traditional defrag on SSDs.

NTFS file and free space fragmentation happens far more frequently than you might guess. It can begin as soon as you install the operating system. It can happen when you install applications or system updates, access the internet, download and save photos, create e-mail and office documents, and so on. It is a normal behavior of the computer system, but it has a negative effect on overall application and system performance. As fragmentation occurs, the computer system and underlying storage perform more work than necessary. Each I/O request takes a measurable amount of time; even in SSD environments there is no such thing as an “instant” I/O request. Any time an application requests to read or write data and that request is split into additional I/O requests, more work has to be done, and that extra work causes a delay at that very moment in time. Whoever decided that defragmenting weekly or monthly was good enough simply didn’t understand fragmentation.

Disk drives have gotten faster over the years, but so have CPUs. In fact, the speed gap between hard disks and CPUs has actually widened, which means applications can get plenty of CPU cycles but are still starved for data from storage. What’s more, the amount of data being stored has increased dramatically. Just think of all those digital photos taken and shared over the holidays. Each photo used to be approximately 1MB in size; now they exceed 15MB, and some go way beyond that. Video editing, rendering and storage of digital movies have also become quite popular, so applications are manipulating hundreds of gigabytes of data. With a typical disk cluster size of 4KB, a 15MB file could potentially be fragmented into nearly 4,000 extents, which means nearly 4,000 separate disk I/O requests to read or write the file instead of a handful. No matter what type of storage, it will simply take longer to complete the operation.
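
The arithmetic behind that worst case is easy to reproduce; here is a quick sketch (the file sizes are just examples):

    # Worked arithmetic for the worst case described above: with 4KB clusters,
    # a fully fragmented file needs one I/O request per cluster-sized extent.
    CLUSTER_SIZE = 4 * 1024          # typical NTFS cluster size (4KB)

    def worst_case_extents(file_bytes, cluster_size=CLUSTER_SIZE):
        # Ceiling division: every cluster becomes its own extent at worst.
        return (file_bytes + cluster_size - 1) // cluster_size

    for label, size in [("15MB photo", 15 * 1024 * 1024),
                        ("300MB video clip", 300 * 1024 * 1024)]:
        extents = worst_case_extents(size)
        print(f"{label}: up to {extents:,} extents, so up to {extents:,} "
              f"I/O requests instead of a handful for a contiguous file")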

Suppose I choose to do some editing of my family videos on Tuesday evening. Even the built-in defragmentation tool in Windows 7 doesn’t do me much good, because it isn’t scheduled to run until Wednesday morning at 1:00am. This also means that quite a bit of fragmentation has built up since it last ran the previous week. Maybe I’ll run it manually, but that can take quite a while, and I’ve wasted time I would rather have spent on my project. Unfortunately, the Windows built-in defragmentation utility doesn’t prevent fragmentation, so even after running it manually I will still wind up with fragmentation and slow access to my newly created files.

I’ve often wondered why Wednesday at 1:00am was chosen as the time to schedule defragmentation. Why isn’t it scheduled all the time? Because there could be system resource conflicts that interfere with getting the task done, or the defragmentation process may have difficulty throttling back under a variety of conditions. Regardless, waiting a week to clean up fragmentation doesn’t help me when I need it most.
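
If you are curious what your own machine’s built-in schedule looks like, it lives in Task Scheduler and can be queried as sketched below. The task path shown is how it appears on Windows 7 and is an assumption that may vary on other Windows versions.

    # Sketch: query the built-in defrag schedule via Task Scheduler. The task
    # path is how it appears on Windows 7 and may differ on other versions.
    import subprocess

    result = subprocess.run(
        ["schtasks", "/Query",
         "/TN", r"\Microsoft\Windows\Defrag\ScheduledDefrag",
         "/V", "/FO", "LIST"],
        capture_output=True, text=True)

    # The verbose listing includes Last Run Time, Next Run Time and Status.
    print(result.stdout or result.stderr)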

As pointed out in the article, the built-in defragmenter does not have the technology to properly deal with fragmentation on SSDs. The physical placement of data on an SSD doesn’t really matter the way it does on regular magnetic HDDs; with an SSD there is no rotational latency or seek time to contend with. Many experts therefore assume that fragmentation is no longer a problem, but application data access speed isn’t defined by those factors alone. Each and every I/O request performed takes a measurable amount of time. SSDs are fast, but they are not instantaneous. The Windows NTFS file system does not behave any differently because the underlying storage is an SSD rather than an HDD, so fragmentation still occurs. Preventing and eradicating fragmentation reduces the number of I/O requests, which speeds up application data response time and improves the overall lifespan of the SSD. In essence, it makes I/O operations more sequential, and sequential I/O generally outperforms random I/O.

In addition, SSDs require that old data be erased before new data is written in its place, rather than simply writing over the old information as HDDs do. This doubles the wear and tear and can cause major issues with the speed and lifespan of the SSD. Most SSD manufacturers have very sophisticated wear-leveling technologies to help with this. The principal issue is write speed degradation due to free space fragmentation. Small free spaces scattered across the SSD cause the NTFS file system to write a file in fragmented pieces into those small available spaces. This results in more random I/O traffic, which is slower than sequential operations.

I think I have clearly made my point. The built-in defragmenter in Windows 7 is not a solution for the consumer/home user or for the enterprise business user. Data access speeds are far more critical in the business world, where time is money. In the enterprise environment there are generally many more files, used by a far greater number of users, accessed across shared storage such as a SAN, and even virtual platforms benefit from the same points covered above. This is exactly why robust solutions such as Diskeeper exist. More information about Diskeeper and the superior technology it offers can be found at http://www.diskeeper.com.

10 things you can do to boost PC performance (by TechRepublic)

by Michael 20. September 2011 08:43

IT professional and author Justin James of TechRepublic published a top-10 list of ways to speed up your PC. Number 9 was one very familiar to us:

"9: Defrag. Defragging your hard drives is a great way to get some more performance. While modern Windows systems automatically defrag on a regular basis, I’ve found that the Windows defragging is fairly unaggressive. We’ve reviewed a lot of different defrag apps here at TechRepublic. I suggest that you check out your alternatives and find one that does a better job for you."

Their findings mirror what we see with many of our business customers seeking to maximize Windows 7 performance. The built-in defragmenter sounds like an attractive option at first, but closer inspection and testing clearly demonstrate significant value (better ROI) in advanced third-party optimization technology.

Read the whole article here: http://www.techrepublic.com/blog/10things/10-things-you-can-do-to-boost-pc-performance/2712?tag=nl.e101 

Tags:

Diskeeper | Windows 7
