Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Four Reasons to Migrate from Diskeeper Server to V-locity Server

by Robert Woolery 30. July 2013 08:19

Still on Diskeeper Server? Here are four reasons to consider migrating to V-locity Server:

1. High performance. Whereas Diskeeper® Server, highlighted by IntelliWrite® technology, keeps Windows servers running like new, V-locity® Server goes a step beyond split I/O elimination with the inclusion of a server-side caching engine (IntelliMemory) for performance boosts of 50% or more. With frequently accessed data dynamically cached within available server resources, hot data no longer trudges the full distance from server to storage and back, consuming unnecessary bandwidth.

With IntelliWrite preventing split I/Os on write requests, and IntelliMemory caching active data on reads, this holistic approach to I/O optimization accelerates the entire IT infrastructure, since unnecessary I/O traffic is eliminated before it is ever pushed through server, network and storage (see the caching sketch after this list).

2. Network storage. Whereas Diskeeper Server is ideal for local server storage or direct-attached storage (DAS), V-locity Server is designed for network storage (SAN/NAS) since all I/O optimization occurs at the Windows OS layer, leaving the storage device untouched. With IntelliWrite, V-locity Server proactively eliminates split I/Os as close to the application as possible, and by caching active data within available server memory, IntelliMemory eliminates even more unnecessary I/O—preventing I/O traffic from traveling the full distance to storage and back. Since the storage subsystem is now processing considerably less I/O, bottlenecks are eliminated and more bandwidth is available. 

3. Solid-state storage. Already running solid-state in your storage arrays or PCIe flash in your servers? V-locity sits at the top of the technology stack at the Windows OS layer, so the entire infrastructure—regardless of vendor—reaps the benefit of I/O optimization downstream. V-locity is proactive—meaning it prevents the surplus of unnecessary I/O from ever being created in the first place. This way, your SSD or HDD media isn't dealing with the I/O mess after it has already wreaked havoc on your environment.

4. Benefit analysis. Unlike Diskeeper, V-locity comes with an embedded performance benchmark that allows users to see the before/after benefit of V-locity in their real-world environment and share the outcome with stakeholders prior to any kind of purchase commitment. This single-page report provides metrics like workload comparison, I/Os per second, latency, and more. 
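
To make the caching idea in reason 1 concrete, here is a minimal sketch of how a server-side read cache short-circuits repeat reads. It is illustrative only, not the IntelliMemory implementation; the ReadCache class and the read_from_storage callback are invented for the example.

```python
from collections import OrderedDict

class ReadCache:
    """Tiny LRU block cache: hot blocks are served from server RAM,
    so repeat reads never travel to the SAN/NAS and back."""

    def __init__(self, capacity, read_from_storage):
        self.capacity = capacity                    # max blocks held in RAM
        self.read_from_storage = read_from_storage  # the slow path to storage
        self.blocks = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)       # mark as recently used
            return self.blocks[block_id]
        self.misses += 1
        data = self.read_from_storage(block_id)     # full trip to storage
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)         # evict the coldest block
        return data
```

Every hit is an I/O that never touches the network or the array; that is where the bandwidth savings described above come from.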

For high performance in environments that leverage advanced storage technologies, V-locity Server is the best bet to maximize your existing hardware investment and eliminate performance bottlenecks overnight.

Setting the Record Straight - Windows 7 Fragmentation, SSDs, and You

by Howard Butler 21. January 2012 14:50

In today’s well-connected world of electronics and instant communications, I received a text from a friend asking if I had seen the recent PC World magazine (February, 2012). He said it had some tidbit of information concerning one of my favorite subjects: system performance, defragmentation, and SSDs. I located a copy here at the office and found the article. As I read the first line, I realized the debate on the virtues of defragmentation, especially on SSDs, will go on indefinitely, as no one really talks about the issue with supporting hard facts and numbers. Most articles rehash ideas and opinions long since debunked. They continue to surface because very few truly understand the intricacies of the Windows NTFS file system and of the storage media, whether rotating magnetic hard disks or electronic solid state disks.

So let’s set the record straight… Fragmentation is exponentially more of a problem with today’s data explosion. Defragmenting once a week still leaves the user experiencing slowdowns from fragmentation’s degradation effects, and it doesn’t address the issue when files are initially being written. And yes, never do a traditional defrag on SSDs.

NTFS file and free space fragmentation happens far more frequently than you might guess. It has the potential to happen as soon as you install the operating system. It can happen when you install applications or system updates, access the internet, download and save photos, create e-mail and office documents, and so on. It is a normal occurrence and behavior of the computer system, but it does have a negative effect on overall application and system performance. As fragmentation happens, the computer system and underlying storage perform more work than necessary. Each I/O request takes a measurable amount of time. Even in SSD environments there is no such thing as an “instant” I/O request. Any time an application requests to read or write data and that request is split into additional I/O requests, more work has to be done, and that extra work causes a delay at that very moment in time. Whoever thought that defragmenting once a month or once a week was good enough simply didn’t understand fragmentation.

Disk drives have gotten faster over the years, but so have CPUs. In fact, the speed gap between hard disks and CPUs has actually widened. This means that applications can get plenty of CPU cycles, but they are still starving to get data from storage. What’s more, the amount of data being stored has increased dramatically. Just think of all those digital photos taken and shared over the holidays. Each photo used to be approximately 1MB in size; now they exceed 15MB each, and some go way beyond that. Video editing, rendering, and storage of digital movies have also become quite popular, and as a result applications are manipulating hundreds of gigabytes of data. With a typical disk cluster size of 4KB, a 15MB file could potentially be fragmented into nearly 4,000 extents, which means nearly 4,000 separate disk I/O requests to read or write that one file. No matter what type of storage, the operation will simply take longer to complete.
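
The extent arithmetic above is easy to verify. A quick sketch; the 5 ms average service time is an assumed, round figure for a rotating disk, purely for illustration:

```python
file_size = 15 * 1024 * 1024      # a 15 MB photo
cluster   = 4 * 1024              # typical NTFS cluster size
extents   = file_size // cluster  # worst case: every cluster is a fragment
print(extents)                    # -> 3840, i.e. nearly 4,000 I/O requests

avg_service_ms = 5                # assumed average HDD service time per I/O
print(extents * avg_service_ms / 1000, "seconds")  # -> 19.2 s in pieces,
                                  # versus a fraction of a second sequentially
```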

Suppose I choose to do some editing of my family videos on Tuesday evening. Even the built-in defragmentation tool in Windows 7 doesn’t do me much good, because it isn’t scheduled to run until Wednesday morning at 1:00am. This also means that quite a bit of fragmentation has built up since it last ran the previous week. Maybe I’ll run it manually, but that can take quite a while, and I’ve wasted time I would rather have spent on my project. Unfortunately, the Windows built-in defragmentation utility doesn’t prevent fragmentation, so even after running it manually, I will still wind up with fragmentation and slow access speeds for my newly created files.

I’ve often thought about why Wednesday at 1:00am was chosen as the time to schedule defragmentation. Why isn’t it scheduled all the time? It is because there could be system resource conflicts that either interfere with getting the task done, or the defragmentation process has difficulty throttling back under a variety of conditions. Regardless, this wait-a-week approach to cleaning up fragmentation doesn’t really help me when I need it most.
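
If you are curious what the schedule looks like on your own machine, the built-in defragmenter registers a Task Scheduler entry you can query. A small sketch, assuming the task name a stock Windows 7 install uses:

```python
import subprocess

# Query the built-in defragmenter's scheduled task (stock Windows 7).
result = subprocess.run(
    ["schtasks", "/Query", "/FO", "LIST", "/V",
     "/TN", r"\Microsoft\Windows\Defrag\ScheduledDefrag"],
    capture_output=True, text=True)
print(result.stdout)   # the trigger shows the weekly Wednesday 1:00 AM run
```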

As pointed out in the article, the built-in defragmenter does not have the technology to properly deal with fragmentation and SSDs. The physical placement of data on an SSD doesn’t really matter like it does on regular magnetic HDDs. With an SSD there is no rotational latency or seek time to contend with. Many experts therefore assume that fragmentation is no longer a problem, but application data access speed isn’t defined only in those terms. Each and every I/O request performed takes a measurable amount of time. SSDs are fast, but they are not instantaneous. The Windows NTFS file system does not behave any differently because the underlying storage is an SSD rather than an HDD, and therefore fragmentation still occurs. Preventing and eradicating fragmentation reduces the number of I/O requests, which speeds up application data response time and improves the overall lifespan of the SSD. In essence, it makes for more sequential I/O operations, which are generally faster and outperform random ones.
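
You can observe this for yourself at the NTFS layer. The sketch below is illustrative only (not part of any Condusiv product, minimal error handling, Windows-only); it counts a file’s extents through the documented FSCTL_GET_RETRIEVAL_POINTERS control code. Any result above 1 means the file is fragmented, SSD or not.

```python
import ctypes
import struct
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE

GENERIC_READ = 0x80000000
SHARE_READ_WRITE = 0x00000003
OPEN_EXISTING = 3
FSCTL_GET_RETRIEVAL_POINTERS = 0x00090073
ERROR_MORE_DATA = 234
INVALID_HANDLE = wintypes.HANDLE(-1).value

def count_extents(path):
    """Return the number of extents (fragments) a file occupies on disk."""
    handle = kernel32.CreateFileW(path, GENERIC_READ, SHARE_READ_WRITE,
                                  None, OPEN_EXISTING, 0, None)
    if handle == INVALID_HANDLE:
        raise ctypes.WinError(ctypes.get_last_error())
    total, next_vcn = 0, 0
    out_buf = ctypes.create_string_buffer(64 * 1024)
    returned = wintypes.DWORD(0)
    try:
        while True:
            ok = kernel32.DeviceIoControl(
                handle, FSCTL_GET_RETRIEVAL_POINTERS,
                struct.pack("q", next_vcn), 8,   # STARTING_VCN_INPUT_BUFFER
                out_buf, len(out_buf),
                ctypes.byref(returned), None)
            if not ok and ctypes.get_last_error() != ERROR_MORE_DATA:
                break                            # resident/empty file: no extents
            count = struct.unpack_from("I", out_buf, 0)[0]   # ExtentCount
            total += count
            if ok or count == 0:
                break                            # whole mapping retrieved
            # More extents remain: resume at the last extent's NextVcn
            # (extent array starts at offset 16; each entry is 16 bytes).
            next_vcn = struct.unpack_from("q", out_buf, 16 + (count - 1) * 16)[0]
    finally:
        kernel32.CloseHandle(handle)
    return total

# Hypothetical usage with an invented path:
# print(count_extents(r"C:\Users\Public\Videos\holiday.mp4"))
```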

In addition, SSDs require that old data be erased before new data is written over it, rather than just writing over the old information as with HDDs. This doubles the wear and tear and can cause major issues with the performance and lifespan of the SSD. Most SSD manufacturers have very sophisticated wear-leveling technologies to help with this. The principal issue is write speed degradation due to free space fragmentation. Small free spaces scattered across the SSD cause the NTFS file system to write a file in fragmented pieces to those small available free spaces. This results in more random I/O traffic, which is slower than sequential operations.
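
The sequential-versus-random gap is easy to demonstrate with a toy measurement. A rough sketch; the file names are arbitrary, and results vary widely by device and by how the operating system caches writes:

```python
import os
import random
import time

BLOCK = 4096                       # one cluster-sized write
COUNT = 4096                       # 16 MB total per test file

def timed_writes(path, order):
    """Write COUNT blocks to `path` in the given order; return seconds."""
    with open(path, "wb") as f:
        f.truncate(COUNT * BLOCK)          # preallocate the file
        start = time.perf_counter()
        for i in order:
            f.seek(i * BLOCK)
            f.write(b"x" * BLOCK)
        f.flush()
        os.fsync(f.fileno())               # push the writes to the device
        return time.perf_counter() - start

sequential = timed_writes("seq.bin", range(COUNT))
shuffled   = random.sample(range(COUNT), COUNT)
scattered  = timed_writes("rnd.bin", shuffled)
print(f"sequential: {sequential:.2f}s   scattered: {scattered:.2f}s")
```

On an HDD the scattered run is dramatically slower; an SSD narrows the gap but does not erase it, which mirrors the write-speed degradation described above.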

I think I have clearly made my point… The built-in defragmenter in Windows 7 is not a solution for either the consumer/home user or the enterprise business user. Data access speeds are far more critical in the business world, where time is money. In the enterprise environment there are generally many more files, used by a higher number of users, accessing data across shared storage such as a SAN. Even virtual platforms benefit from the same points covered here. This is the reason robust solutions such as Diskeeper exist. More information about Diskeeper and the superior technology it offers can be found at http://www.diskeeper.com.

We want your feedback!

by Michael 17. December 2009 06:35

Our products have largely been built from customers (and trialware users) telling us what they need and want in new products and new versions of existing solutions. We get mountains of valuable user feedback through our employees who work directly with customers, including our sales reps, our customer service staff, and our tech support team. That all channels back to product management, and eventually over to the developers to build into new technologies.

If you have ideas or requests you'd like to see us build for you in the future, there is an easily accessible way to tell us.

A few versions back, we introduced a feedback feature in Diskeeper. You can access it from the Menu Bar [Action - Diskeeper Feedback].

That selection will take you to our online feedback form.

There are a few drop-down selections to help categorize your suggestion, and then some open fields to share your idea(s). Everyone is welcome to submit, and all ideas are reviewed.

Please do keep in mind that this is for feedback on future development, and is not a support line for assistance. Please use the standard support lines if you need immediate help.

We look forward to hearing from you. 

Tags:

General

You CAN have your cake and eat it too

by Michael 30. October 2009 10:39

Diskeeper 2010 RTM'ed (Release To Manufacturing) earlier this week, so we celebrated with cake: a Diskeeper cake in the image of the new DVD case, that is.


Tags:

General

Cool Customer Quote: Up to 40% System Performance Increase

by Colleen Toumayan 2. September 2009 12:18

Diskeeper has been one of our main implementation tools for any new Windows Server-based product. HBR Solutions has been using Diskeeper for nearly 8 years and has found it to be a critical part of our system configurations.

Our clients understand the importance of highly efficient data access, and Diskeeper Corporation has always kept its promise of making sure that drives are always set to peak performance. Whether we run the product with NAS, SAN or DAS, Diskeeper has proved that it can manage any type of hardware design and configuration, including virtual servers.
 
In the majority of our clients' install base, Diskeeper has shown a substantial increase in performance once installed and configured on existing systems. The hardware is a mix of IBM Shark SAN, EMC, Dell (DAS and internal) and HP (internal).

As for the NAS, DAS and SAN results, our analysis pertains to clients who had the hardware in place and were looking at upgrading it. However, once we installed and configured Diskeeper on those systems, we saw performance improve by up to nearly 40% on the same hardware platform.

We are very pleased with Diskeeper and will continue to use their product in our future endeavors.

 
Steven Bond
HBR Solutions Inc
Aberdeen, NJ

Tags:

General
