Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

First-ever “Time Saved” Dashboard = Holy Grail for ROI

by Brian Morin 2. November 2016 10:03

If you’ve ever wondered about the exact business value that Condusiv® I/O reduction software provides to your systems, the latest “time saved” reporting answers exactly that question.

Prior to V-locity® v6.2 for virtual servers and Diskeeper® 16 for physical servers and endpoints, customers would conduct extensive before/after tests to capture the intrinsic performance value, but struggled to quantify the ongoing business benefit over time. This has been especially true during annual maintenance renewal cycles, when key stakeholders need to be “re-sold” before they will allocate budget for ongoing maintenance or extend licenses to new servers.

The number one request from customers has been to better understand the ongoing business benefit of I/O reduction in terms that are easily relatable to senior management and make justifying the ROI painless. This “holy grail” search on the part of our engineering team has led to the industry’s first-ever “time saved” dashboard for an I/O optimization software platform.

When Condusiv software proactively eliminates the surplus of small, fractured writes and reads and ensures more “payload” with every I/O operation, the net effect is fewer write and read operations for any given workload, which saves time. When Condusiv software caches hot reads within idle, available DRAM, the net effect is fewer reads traversing the full stack down to storage and back, which saves time.

In terms of benefits, the new dashboard shows:

    1. How many write I/Os are eliminated by ensuring large, clean, contiguous writes from Windows

    2. How many read I/Os are served from cache in idle DRAM

    3. What percentage of write and read traffic is offloaded from underlying SSD or HDD storage

    4. Most importantly – the dashboard relates I/O reduction to the business benefit of … “time saved”

This reporting approach makes the software fully transparent about the benefit being delivered to any individual system or group of systems. Since the software sits within the Windows operating system, it is aware of latency to storage and understands exactly how much time is saved by serving an I/O from DRAM instead of the underlying SSD or HDD. And, most importantly, since the fastest I/O is the one you never have to issue, Condusiv software understands how much time is saved by replacing multiple small, fractured writes with fewer, larger contiguous writes.
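To make the arithmetic concrete, here is a minimal sketch of how such a “time saved” figure could be computed. The latency figures and counter names below are illustrative assumptions, not Condusiv’s actual instrumentation:

```python
# Minimal sketch of the "time saved" arithmetic described above.
# All latency figures are illustrative assumptions, not measured values
# from Condusiv's actual instrumentation.

DRAM_READ_LATENCY_MS = 0.05      # assumed cost of serving a read from cache
STORAGE_READ_LATENCY_MS = 5.0    # assumed average read latency to SSD/HDD
STORAGE_WRITE_LATENCY_MS = 4.0   # assumed average write latency to SSD/HDD

def time_saved_hours(reads_served_from_dram: int, write_ios_eliminated: int) -> float:
    """Cached reads avoid the round trip to storage; eliminated writes
    avoid the I/O entirely."""
    read_savings_ms = reads_served_from_dram * (STORAGE_READ_LATENCY_MS - DRAM_READ_LATENCY_MS)
    write_savings_ms = write_ios_eliminated * STORAGE_WRITE_LATENCY_MS
    return (read_savings_ms + write_savings_ms) / 1000 / 3600

# Example: 2 million reads served from DRAM, 500,000 writes eliminated
print(f"{time_saved_hours(2_000_000, 500_000):.1f} hours saved")  # ~3.3 hours
```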

Have you ever wondered how much time V-locity will save a VDI deployment? Or an application supported by all-flash? Or a Hyperconverged environment? Rather than wonder, just install a 30-day version of the software and monitor the “time saved” dashboard to find out. Benefits are fully transparent and easily quantified.

Have you ever needed to justify Diskeeper’s endpoint solution across a fleet of corporate laptops with SSDs? Now you can see the “time saved” on individual systems or all systems and quantify the cost of labor against the number of hours that Diskeeper saved in I/O time across any time period. The “no brainer” benefit will be immediately obvious.

Customers will be pleasantly surprised to find that the latest dashboard doesn’t just show granular benefits but also granular performance metrics and other important information to assist with memory tuning. See the average, minimum, and maximum idle memory used for cache over any time period (even by the hour) to make quick assessments on which systems could use more memory to take better advantage of the caching engine for greater application performance. Customers have found that maintaining at least 2GB for cache puts a system in the sweet spot of what the product can do; see the sketch below. If even more can be maintained to establish a tier-0 cache strategy, performance rises further still. Systems with at least 4GB of idle memory for cache will invariably serve 60% of reads or more.
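As a rough illustration of that tuning guidance, here is a hypothetical helper that flags systems by where their average idle-memory cache falls relative to those thresholds. The data format and system names are invented for the example:

```python
# Hypothetical helper illustrating the memory-tuning guidance above.
# The thresholds come from the post; the record format is invented.

SWEET_SPOT_GB = 2.0    # minimum cache size customers found worthwhile
TIER0_TARGET_GB = 4.0  # size at which 60%+ of reads tend to be served

def assess_cache_sizing(avg_cache_gb_by_system: dict[str, float]) -> None:
    for system, avg_gb in avg_cache_gb_by_system.items():
        if avg_gb < SWEET_SPOT_GB:
            print(f"{system}: {avg_gb:.1f}GB cache - consider adding memory")
        elif avg_gb < TIER0_TARGET_GB:
            print(f"{system}: {avg_gb:.1f}GB cache - in the sweet spot")
        else:
            print(f"{system}: {avg_gb:.1f}GB cache - tier-0 territory, "
                  "likely serving 60%+ of reads from DRAM")

assess_cache_sizing({"SQL01": 1.2, "FILE02": 2.8, "VDI03": 5.5})
```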

 

 

“32% of my write traffic has been eliminated and 64% of my read traffic has been cached within idle memory. This saved over 20 hours in I/O time after 24 days of testing!” - Lou Goodreau, IT Manager, New England Fishery

“Over 50% of my reads are now served from DRAM and over 30% of write traffic has been eliminated by ensuring large, contiguous writes. Now everything is more responsive!” - David Bruce, Managing Partner, David Bruce & Associates

 

New! Diskeeper 16 Guarantees “Faster than New” Performance for Physical Servers and PCs

by Brian Morin 26. September 2016 09:56

The world’s most popular defragmentation software for physical servers and PCs makes “defrag” a thing of the past and delivers “faster than new” performance by dynamically caching hot reads in idle DRAM. As a result, Diskeeper® 16 guarantees to solve the toughest application performance issues on physical servers running workloads like MS-SQL, and to fix sluggish PCs with faster-than-new performance, or your money back for 90 days – no questions asked.

The market is still catching up to the fact that Diskeeper’s newest patented engine no longer “defrags” but rather proactively prevents fragmentation by ensuring large, sequential writes from Windows to underlying HDDs, SSDs, and SAN storage systems. This eliminates the “death by a thousand cuts” scenario of small, fractured writes and reads that inflates I/Os per second, robs throughput, and shortens the lifespan of HDDs and SSDs alike. However, the biggest news is the addition of DRAM caching – putting idle DRAM to good use by serving hot reads without memory contention or resource starvation.

“Diskeeper 16 with DRAM caching served over 50% of my reads from DRAM and eliminated over 30% of write traffic by preventing fragmentation. Now everything is more responsive!” - David Bruce, Managing Partner, David Bruce & Associates

“Diskeeper 16 with DRAM caching doubled our throughput, so we could backup in half the time.  Our Dell Rapid Recovery backup server is running smoother than ever.” - Curtis Jackson, Network Admin, School City of Hammond

“WOW! Watch it go! I have 44GB of memory in the physical server and Diskeeper is using around 20GB of it to cache!! I can’t imagine having a server without it! Diskeeper 16 is a vastly improved version of Diskeeper!” - Andy Vabulas, Vabulas Enterprises

“Our Symantec app running on a physical server has been notoriously slow for as long as I can remember, but since adding Diskeeper 16 it has improved significantly.” - Josh Currier, Network Infrastructure Manager, Munters Corporation

 “With Diskeeper 16 I can tell my workstation is more responsive with no lag or any type of hesitation. Truly SMART Technology.” - William Krasulak, Systems/Network Admin, Nacci Printing, Inc.

“Our most I/O intensive applications on physical servers needed some help, so we installed Diskeeper 16 with DRAM caching and were amazed by the performance boost!” - Victor Grandmaiter, IT Director, Fort Bend Central Appraisal District

“Diskeeper eliminated 32% of my write traffic by preventing fragmentation and cached 64% of my read traffic within idle memory. This saved my workstation over 20 hours in I/O time after 24 days of testing!” - Lou Goodreau, IT Manager, New England Fishery

“Installed Diskeeper 16 on our worst performing physical servers running ERP with a SQL database and saw an immediate 50% boost!" - Hamid Bouhassoune, Systems Engineer, Global Skincare Company

A top New York clothing brand tried Diskeeper 16 with DRAM caching on their physical servers and saw backup times with Veeam and Backup Exec drop by more than half!

Before Diskeeper install:

    Date   Size   Throughput   Duration
    8/7    10GB   14MB/s       1:38
    8/8    11GB   13MB/s       1:54

After Diskeeper install:

    Date   Size   Throughput   Duration
    8/12   13GB   21MB/s       1:30
    8/13   14GB   30MB/s       0:58
    8/14   13GB   33MB/s       0:55
    8/15   11GB   36MB/s       0:44
    8/19   17GB   30MB/s       1:06

 

A large Illinois non-profit tested Diskeeper 16 with DRAM caching on Windows Server 2012 R2 physical servers running CRM and accounting software with an MS-SQL backend. Note – these improvements came almost exclusively from Diskeeper 16’s write optimization engine, since idle memory was not available to engage the new caching engine.

 

See a screenshot of the new dashboard reporting that shows “time saved” from using Diskeeper 16 to eliminate fragmentation and cache reads with idle DRAM.

 

Try Diskeeper 16 with DRAM caching for 30 days.

Teaser: Coming Soon! Intelligent Caching and Fragmentation Prevention = IO Heaven

by Brian Morin 19. September 2016 04:53

Sometimes the performance of physical servers, PCs, and laptops slows to a crawl. No matter what you do, it takes half an eternity to open some files. The cause is tied to the architecture of the Windows operating system: the OS becomes progressively slower the longer it is used and the more it is burdened with added software and large volumes of data.

In the old days, the solution was easy – defragment the hard drive. However, many production servers can’t be taken offline to defragment, and many laptops only have solid state drives (SSDs), which shouldn’t be defragmented in the traditional way. So is there any hope?

Condusiv has solved these dilemmas in the soon-to-be-released version of Diskeeper®. With over 100 million licenses sold, Diskeeper has been the undisputed leader for decades when it comes to keeping Windows systems fragment-free and performing well. And with Diskeeper 16 coming out soon, feedback from beta testers suggests it goes well beyond a mere incremental release with a few added bells and whistles. The consensus among them is that it is a “next generation” release that doesn’t just keep Windows systems running like new but actually makes them faster than new.

How is this being achieved? The company has been perfecting two technologies within its portfolio and is now bringing them together – fragmentation prevention and DRAM caching.

On the one side, the idea is to prevent fragmentation before data is written to a production server. This is a lifesaver for IT administrators who need to immediately boost the performance of critical applications like MS-SQL running on physical servers. Diskeeper keeps systems running optimally with its patented fragmentation prevention engine, which ensures large, clean, contiguous writes from Windows and eliminates the small, fractured writes that rob performance with “death by a thousand cuts” by inflating IOPS and stealing throughput.

But that’s only the half of it. A little-known fact about Condusiv is that it is also a world leader in caching. In addition to its work on Diskeeper, the Condusiv development team has evolved a unique DRAM caching approach that has been implemented via OEM partners for several years. The technology has proven so popular that the company has sold over 5 million caching licenses, largely bundled with ultrabooks, and it is now being made available commercially.

The DRAM caching in the soon-to-be-released Diskeeper 16 electrifies performance:

·         Benchmark tests show MS-SQL workload performance boosts of up to 6X

·         An average of 40% latency reduction across hundreds of servers

·         No hint of memory contention or resource starvation

·         Fleets of laptops suddenly running like a dream

·         PCMark MS Office productivity tests show an increase of 73% on Windows 10 machines

·         Huge leaps in SSD write speed and extended SSD lifespan

·         Solves even the worst-performing physical servers or Windows PCs, backed by a money-back guarantee

Could it be, then, that there really is hope of getting PCs and physical servers running faster than new?

 

You’ll have to wait until Diskeeper 16 is unveiled to hear the full story. 

Great 5 Star Review from a Microsoft MVP

by Colleen Toumayan 12. April 2011 09:36

Diskeeper 2011 received a 5 star review on Bright Hub.

The reviewer states,

"It is recommended to defrag the hard disk to improve its performance. The built-in defrag tool in Windows is not enough if you prefer using a program that will provide more options and features. Diskeeper is offering the latest version with new features and improvements. Diskeeper 2011 will not only automate but also provide an Instant Defrag feature. You'll find out more in the next section of this Diskeeper 2011 review on what else to expect in the new version."

For the complete review read more: http://www.brighthub.com/computing/windows-platform/reviews/111500.aspx#ixzz1JLbwxrjD


Do you need to defragment a Mac?

by Michael 2. February 2011 05:54

The purpose of this blog post is to provide some data about fragmentation on the Mac that I've not seen researched or published elsewhere.

Mac OS X has a defragmenter built into the file system itself. Since the relevant file system code is open source (published as part of Darwin), we looked at it.

When a file is opened, it gets defragmented on the fly if all of the following conditions are met (sketched in code after the list):

1. The file is less than 20MB in size

2. The file has more than 7 fragments

3. The system has been up for more than 3 minutes

4. The file is a regular file

5. The file system is journaled

6. The file system is not read-only
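For illustration, the whole decision reduces to a simple predicate. Here is a paraphrased sketch in Python; the real logic is C code in the HFS+ driver inside Darwin's xnu source, and the names below are invented for readability, not the kernel's actual identifiers:

```python
# Paraphrased sketch of the on-open defragmentation check described above.
# Field and function names are invented for illustration; the real check
# is C code in the HFS+ driver in Darwin's xnu source.
from dataclasses import dataclass

MAX_SIZE_BYTES = 20 * 1024 * 1024  # file must be under 20MB
MIN_FRAGMENTS = 8                  # more than 7 fragments
MIN_UPTIME_SECS = 3 * 60           # system up more than 3 minutes

@dataclass
class FileInfo:
    size_bytes: int
    fragment_count: int
    is_regular_file: bool

@dataclass
class FsInfo:
    is_journaled: bool
    is_read_only: bool

def should_defragment_on_open(f: FileInfo, fs: FsInfo, uptime_secs: int) -> bool:
    return (f.size_bytes < MAX_SIZE_BYTES
            and f.fragment_count >= MIN_FRAGMENTS
            and uptime_secs > MIN_UPTIME_SECS
            and f.is_regular_file
            and fs.is_journaled
            and not fs.is_read_only)
```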

So what's Apple's take on the subject? An Apple technical article states this:

Do I need to optimize?

You probably won't need to optimize at all if you use Mac OS X. Here's why:

  • Hard disk capacity is generally much greater now than a few years ago. With more free space available, the file system doesn't need to fill up every "nook and cranny." Mac OS Extended formatting (HFS Plus) avoids reusing space from deleted files as much as possible, to avoid prematurely filling small areas of recently-freed space.
  • Mac OS X 10.2 and later includes delayed allocation for Mac OS X Extended-formatted volumes. This allows a number of small allocations to be combined into a single large allocation in one area of the disk.
  • Fragmentation was often caused by continually appending data to existing files, especially with resource forks. With faster hard drives and better caching, as well as the new application packaging format, many applications simply rewrite the entire file each time. Mac OS X 10.3 Panther can also automatically defragment such slow-growing files. This process is sometimes known as "Hot-File-Adaptive-Clustering."
  • Aggressive read-ahead and write-behind caching means that minor fragmentation has less effect on perceived system performance.

For these reasons, there is little benefit to defragmenting.

Note: Mac OS X systems use hundreds of thousands of small files, many of which are rarely accessed. Optimizing them can be a major effort for very little practical gain. There is also a chance that one of the files placed in the "hot band" for rapid reads during system startup might be moved during defragmentation, which would decrease performance.

If your disks are almost full, and you often modify or create large files (such as editing video, but see the Tip below if you use iMovie and Mac OS X 10.3), there's a chance the disks could be fragmented. In this case, you might benefit from defragmentation, which can be performed with some third-party disk utilities. 
 

Here is my take on that information:

While I have no problem with the lead-in, which states "probably," the reasons given are theoretical. Expressing a theory and then an opinion on that theory is fine, so long as you properly indicate it is an opinion. The problem I do have is with the last sentence before the note – "For these reasons, there is little benefit to defragmenting." – which passes off theory as fact.

Theories, and therefore "reasons," need to be substantiated by actual scientific processes that apply the theory and then either validate or invalidate it. A common example of theory-as-fact is the statement "SSDs don't have moving parts and don't need to be defragmented." Given that our primary business is with large enterprise corporations, we hear a lot of theory about the need (or lack thereof) to defragment complex and expensive storage systems. In all those cases, testing proves fragmentation (of files, free space, or both) slows computers down. The reasons sound logical, which dupes readers and listeners into believing the statements are true.

On that note, while the first three reasons are logical, the last one is most likely wrong. Block-based read-ahead caching is predicated on files being sequentially located/interleaved on the same disk "tracks". File-based read-ahead would still have to issue additional I/Os due to fragmentation. Fragmentation of data essentially breaks read-ahead efforts. Could the Mac be predicting file access and pre-loading files into memory well in advance of use? Sure. If that's the case I could agree with the last point (i.e., "perceived system performance"), but I find it unlikely (anyone reading this is welcome to comment).
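A toy model of that argument: under block-based read-ahead, a contiguous file can be streamed in a few large sequential reads, while each discontiguity in a fragmented file breaks the stream and forces extra I/Os. The read-ahead window size and the extent layouts below are invented assumptions for illustration:

```python
# Toy model of the read-ahead argument above. The read-ahead window
# and the extent layouts are invented assumptions for illustration.

READAHEAD_BLOCKS = 32  # assumed read-ahead window, in blocks

def ios_needed(extent_lengths: list[int]) -> int:
    """Each contiguous extent is read in read-ahead-window-sized chunks;
    every discontiguity starts a fresh stream of I/Os."""
    total = 0
    for length in extent_lengths:
        total += -(-length // READAHEAD_BLOCKS)  # ceiling division
    return total

contiguous = [256]        # one 256-block extent
fragmented = [8] * 32     # the same 256 blocks in 32 small extents

print(ios_needed(contiguous))  # 8 I/Os
print(ios_needed(fragmented))  # 32 I/Os
```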

They do also qualify the reason by stating "minor fragmentation," to which I would add that minor fragmentation on Windows may not have a "perceived" impact either.

I do agree with the final statement that "you might benefit from defragmentation" when using large files, although I think "might" is too indecisive.

Where my opinion comes from:

A few years ago (spring/summer of 2009) we did a research project to understand how much fragmentation existed on Apple Macs. We wrote and sent a fragmentation/performance analysis tool to select customers who also had Macs at their homes or businesses. We collected data from 198 volumes on 82 Macs (OS X 10.4.x and 10.5.x). Thirty of those systems had been in use for between one and two years.

While system specifics are confidential (testers provided us the data under non-disclosure agreements), we found that free space fragmentation was particularly bad in many cases (worse than on Windows). We also found an average of a little over 26,000 fragments per Mac, with an average expected performance gain from defrag of about 8%. Our research also found that the more severe cases of fragmentation, where we saw 70k–100k+ fragments, were low on available free space (substantiating the last paragraph in the Apple tech article).

This article also provides some fragmentation studies as well as performance tests. Its data also validates Apple's last paragraph and makes the "might benefit" statement a bit understated.

Your Mileage May Vary (YMMV): 

So, in summary, I would recommend defragmenting your Mac. As with Windows, the benefit from defragmenting is proportionate to the amount of fragmentation. Defrag will help. The question is "does defrag help enough to be worth the time and money?" The good thing is that most Mac defragmenters, just like Diskeeper for Windows, have free demo versions you can try to see if it's worth spending the money.

 

Here are some options: 

+ iDefrag (used by a former Diskeeper Corp employee who did graphic design on a Mac)

+ Drive Genius suite (a company we have spoken with in the past)

+ Stellar Drive Defrag (relatively new)

Perhaps this article raises the question (or rumor) "will there be a Diskeeper for Mac?", to which I would answer "unlikely, but not impossible." The reason is that we already have a very full development schedule, with opportunities in other areas that we plan to pursue.

We are keeping an "i" on it though ;-).
