Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Big Data Boom Brings Promises, Problems

by Dawn Richcreek 7. September 2018 04:40

By 2020, an estimated 43 trillion gigabytes of data will have been created—300 times the amount of data in existence fifteen years earlier. The benefits of big data, in virtually every field of endeavor, are enormous. We know more, and in many ways can do more, than ever before. But what of the challenges posed by this “data tsunami”? Will simply managing—or even physically housing—all this information become a problem?

Condusiv CEO Jim D’Arezzo, in a recent discussion with Supply Chain Brain, commented that “As it has over the past 40 years, technology will become faster, cheaper, and more expansive; we’ll be able to store all the data we create. The challenge, however, is not just housing the data, but moving and processing it. The components are storage, computing, and network. All three need to be optimized; I don’t see any looming insurmountable problems, but there will be some bumps along the road.”

One example is healthcare. Speaking with Healthcare IT News, D’Arezzo noted that there are many new solutions open to healthcare providers today. “But with all the progress,” he said, “come IT issues. Improvements in medical imaging, for instance, create massive amounts of data; as the quantity of available data balloons, so does the need for processing capability.”

Giving health-care providers—and professionals in other areas—the benefits of the data they collect is not always easy. In an interview with Transforming Data with Intelligence, D’Arezzo said, “Data center consolidation and updating is a challenge. We run into cases where organizations do consolidation on a ‘forklift’ basis, simply dumping new storage and hardware into the system as a solution. Shortly thereafter, they often discover that performance has degraded. A bottleneck has been created that needs to be handled with optimization.”

The news is all over it. You are experiencing it. Big data. Big problems. At Condusiv®, we get it.  We’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware. The tsunami of data—we’ve got you covered.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

If you want to hear why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

How to make NVMe storage even faster

by Spencer Allingham 4. September 2018 07:21

This blog post complements a vlog that I posted a few weeks ago, in which I demonstrated how to use the intelligent RAM caching technology found in the V-locity® software from Condusiv® Technologies to improve the performance that a computer can get from NVMe flash storage. You can view the video here:

 

A question arose from a couple of long-term customers: was the V-locity software still relevant if they started using very fast flash storage solutions? This was a fair question!

The V-locity software is designed to reduce the amount of unnecessary storage I/O traffic that actually has to go out and be processed by the underlying disk storage layer. It not only reduces the amount of I/O traffic, but it optimizes that which DOES have to go out to disk, and moreover, it further reduces the workload on the storage layer by employing a very intelligent RAM caching strategy.

So, given that flash storage is not only becoming more prevalent in today’s compute environments, but can also process storage I/O traffic VERY fast compared to its spinning disk counterparts and handle more I/Os per second (IOPS) than ever before, the very sensible question was this:


"Can the use of Condusiv's V-locity software provide a significant performance increase when using very fast flash storage?"


As I was fortunate to have recently implemented some flash storage in my workstation, I was keen to run an experiment to find out.


SPOILER ALERT: For those of you who just want to have the question answered, the answer is a resounding YES!

The test showed beyond doubt that with Condusiv’s V-locity software installed, your Windows computer can process significantly more I/Os per second, sustain a much higher data throughput, and give storage I/O-heavy workloads the opportunity to get significantly more work done in the same amount of time – even when using very fast flash storage.

 

For those true ‘techies’ who are as geeky as me, read on, and I will cover the testing methodology and the results in more detail.

The storage that I now had in my workstation (and am still happily using!) was a 1 terabyte SM961 Polaris M.2-2280 PCIe 3.0 x4 NVMe solid state drive (SSD).

 

 Is it as fast as it’s made out to be? Well, in this engineer’s opinion – OMG YES!

 

It makes one hell of a difference when compared to spinning disk drives. This is in part because it’s connected to the computer via a PCI Express (PCIe) bus, as opposed to a SATA bus. The bus is what you connect your disk to in the computer, and different types of buses have different capabilities, such as the speed at which data can be transferred. SATA-connected disks are significantly slower than today’s PCIe-connected storage using an NVMe device interface. There is a great Wikipedia article about this if you want to read more:

https://en.wikipedia.org/wiki/NVM_Express

 

To give you an idea of the improvement, consider that the Advanced Host Controller Interface (AHCI) used with SATA-connected disks has a single command queue that can hold 32 commands. That’s up to 32 storage requests at a time, which was fine for spinning disk technology, because the disks themselves could only cope with a limited number of storage requests at a time.

NVMe, on the other hand, doesn’t have one command queue; it has 65,535 queues, and each of those command queues can itself accommodate 65,536 commands. That’s a lot more storage requests that can be processed at the same time! This is really important, because flash storage is capable of processing MANY more storage requests in parallel than its spinning disk cousins. Quite simply, NVMe was needed to really make the most of what flash disk hardware can do. You wouldn’t put a kitchen tap (faucet) on the end of a fire hose and expect the same amount of water to flow through it, right? Same principle!
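To put those numbers side by side, here is a quick back-of-the-envelope calculation in Python (a sketch only; these are specification ceilings, and real controllers and drivers expose far fewer queues than this):

    # Theoretical maximum outstanding commands, using the limits quoted above.
    # Treat these as spec ceilings, not what any real drive or driver will expose.
    ahci_queues, ahci_queue_depth = 1, 32
    nvme_queues, nvme_queue_depth = 65_535, 65_536

    print(ahci_queues * ahci_queue_depth)   # 32 outstanding storage requests
    print(nvme_queues * nvme_queue_depth)   # 4,294,901,760 outstanding storage requests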

As you can probably tell, I’m quite excited by this boost in storage performance. (I’m strange like that!) And, I know I’m getting a little off topic (apologies), so back to the point!

I had this SUPER-FAST storage solution and needed to prove one way or another if Condusiv’s V-locity software could increase the ability of my computer to process even more workload.

Would my computer be able to process more storage I/Os per Second?

Would my computer be able to process a larger amount of storage I/O traffic (megabytes) every second?

 

Testing Methodology

To answer these questions, I took a virtual machine, and cloned it so that I had two virtual machines that were as identical as I could make them. I then installed Condusiv’s V-locity software on both and disabled V-locity on one of the machines, so that it would process storage I/O traffic, just as if V-locity wasn’t installed.

To generate a storage I/O traffic workload, I turned to my old friend IOMETER. For those of you who might not know IOMETER, it is a software utility originally designed by Intel, but now open source and available at SourceForge.net. It is designed as an I/O subsystem measurement tool and is great for generating I/O workloads of different types (very customizable!) and measuring how quickly that I/O workload can be processed. Great for testing networks or, in this case, how fast you can process storage I/O traffic.

I configured IOMETER on both machines with the type of workload that one might find on a typical SQL database server. I KNOW, I know, there is no such thing as a ‘typical’ SQL database, but I wanted a storage I/O profile that was as meaningful as possible, rather than a workload that would just make V-locity look good. Here is the actual IOMETER configuration:

Worker 1 – 16 kilobyte I/O requests, 100% random, 33% Write / 67% Read

Worker 2 – 64 kilobyte I/O requests, 100% random, 33% Write / 67% Read
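If you want to approximate this access pattern yourself without IOMETER, here is a minimal Python sketch. The file path, file size, and request count are my own placeholder values, and it uses ordinary buffered I/O, so it only illustrates the 67% read / 33% write mix and the 16KB/64KB request sizes rather than serving as a proper benchmark:

    import os
    import random

    PATH = "testfile.dat"                  # placeholder: pre-create this file at FILE_SIZE bytes
    FILE_SIZE = 1 * 1024**3                # 1 GB test area
    BLOCK_SIZES = [16 * 1024, 64 * 1024]   # Worker 1 (16 KB) and Worker 2 (64 KB) request sizes
    READ_RATIO = 0.67                      # 67% reads / 33% writes, as in the IOMETER profile

    with open(PATH, "r+b") as f:
        for _ in range(10_000):                        # number of I/O requests to issue
            size = random.choice(BLOCK_SIZES)          # pick a 16 KB or 64 KB request
            offset = random.randrange(0, FILE_SIZE - size)
            f.seek(offset)                             # 100% random access
            if random.random() < READ_RATIO:
                f.read(size)                           # read request
            else:
                f.write(os.urandom(size))              # write request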

Test Results

[Screenshots: IOMETER results with V-locity disabled, IOMETER results with V-locity enabled, and a summary comparison]

Conclusion

 

In this lab test, the presence of V-locity reduced the average amount of time required to process storage I/O requests by around 65%, allowing a greater number of storage I/O requests to be processed per second and a greater amount of data to be transferred.
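As a rough illustration of why cutting per-I/O time boosts IOPS, here is a sketch based on Little's Law (IOPS is roughly outstanding I/Os divided by average I/O time). The queue depth and the 1 ms baseline latency are made-up placeholder numbers; only the roughly 65% reduction comes from the test above:

    # Little's Law sketch: IOPS ~= outstanding I/Os / average I/O time.
    # Placeholder values; only the ~65% latency reduction is taken from the test.
    outstanding_ios = 32                      # assumed constant queue depth
    baseline_latency_s = 0.001                # hypothetical 1 ms average I/O time
    optimized_latency_s = baseline_latency_s * (1 - 0.65)

    baseline_iops = outstanding_ios / baseline_latency_s      # 32,000 IOPS
    optimized_iops = outstanding_ios / optimized_latency_s    # ~91,400 IOPS
    print(optimized_iops / baseline_iops)                     # ~2.9x more I/Os per second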

To prove beyond doubt that it was indeed V-locity that caused the additional storage I/O traffic to be processed, I stopped the V-locity service. This immediately ‘turned off’ all of the RAM caching and other optimization engines that V-locity was providing, and the net result was that the IOPS and throughput dropped back to their unoptimized levels as the underlying storage had to start processing ALL of the storage traffic that IOMETER was generating.

What value is there to reducing storage I/O traffic?

The more you can reduce storage I/O traffic that has to go out and be processed by your disk storage, the more storage I/O headroom you are handing back to your environment for use by additional workloads. It means that your current disk storage can now cope with:

• More computers sharing the storage. Great if you have a Storage Area Network (SAN) underpinning your virtualized environment, for example. More VMs running!

• More users accessing and manipulating the shared storage. The more users you have, the more storage I/O traffic is likely to be generated.

• Greater CPU utilization. CPU speeds and processing capacity keep increasing, and processing power now typically exceeds what most workloads need. V-locity can help your applications become more productive and use more of that processing power by not having to wait so much on the disk storage layer.

 

If you can achieve this without having to replace or upgrade your storage hardware, it not only increases the return on your current storage hardware investment, but also might allow you to keep that storage running for a longer period of time (if you’re not on a fixed refresh cycle).

Sweat the storage asset!

(I hate that term, but you get the idea)

When you do finally need to replace your current storage, perhaps it won’t be as costly as you thought because you’re not having to OVER-PROVISION the storage as much, to cope with all of the excess, unnecessary storage traffic that Condusiv’s V-locity software can eliminate.

I typically see a storage traffic reduction of at least 25% at customer sites.

AND, I haven’t even mentioned the performance boost that many workloads receive from the RAM caching technology provided by Condusiv’s V-locity software. It is worth remembering that as fast as today’s flash storage solutions are, the RAM that you have in your computers is faster! The greater the percentage of read I/O traffic that you can satisfy from RAM instead of the storage layer, the better performing those storage I/O-hungry applications are likely to be.
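To illustrate the general principle of serving hot reads from RAM, here is a toy least-recently-used read cache in Python. This is purely an illustration of the concept, not Condusiv's actual caching engine:

    from collections import OrderedDict

    class ReadCache:
        """Toy LRU read cache: hot blocks are served from RAM, misses go to storage."""
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()          # block number -> cached data
            self.hits = self.misses = 0

        def read(self, block_no, read_from_storage):
            if block_no in self.blocks:
                self.blocks.move_to_end(block_no)    # keep recently used blocks "hot"
                self.hits += 1
                return self.blocks[block_no]         # served from RAM, no storage I/O
            self.misses += 1
            data = read_from_storage(block_no)       # only misses reach the storage layer
            self.blocks[block_no] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)      # evict the least recently used block
            return data

The higher the hit rate a cache like this achieves on your workload’s read traffic, the fewer reads ever have to reach the flash or spinning disks underneath.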

What type of applications benefit the most?

In the real world, V-locity is not a silver-bullet for all types of workloads, and I wouldn’t insult your intelligence by saying that it was. If you have some workloads that don’t generate a great deal of storage I/O traffic, perhaps a DNS server, or DHCP server, well, V-locity isn’t likely to make a huge difference. That’s my honest opinion as an IT Engineer.

HOWEVER, if you are using storage I/O-hungry applications, then you really should give it a try.

Here are just some examples of the workloads that thousands of V-locity customers are ‘performance-boosting’ with Condusiv’s I/O reduction and RAM caching technologies:

  • Database solutions such as Microsoft SQL Server, Oracle, MySQL, SQL Express, and others.
  • Virtualization solutions such as Microsoft Hyper-V and VMware.
  • Enterprise Resource Planning (ERP) solutions like Epicor.
  • Business Intelligence (BI) solutions like IBM Cognos.
  • Finance and payroll solutions like Sage Accounting.
  • Electronic Health Records (EHR) solutions, such as MEDITECH.
  • Customer Relationship Management (CRM) solutions, such as Microsoft Dynamics.
  • Learning Management System (LMS) solutions.
  • Not to mention email servers like Microsoft Exchange AND busy file servers.

 

 

Do you use any of these in your IT environment?

 

There are case studies on the Condusiv web site for all of these workload types (and more), here:

http://www.condusiv.com/knowledge-center/case-studies/default.aspx

 

Try it for yourself

You can experience the full power of Condusiv’s V-locity software for yourself, in YOUR Windows environment, within a couple of minutes. Just go to www.condusiv.com/try and get a copy of the fully-featured 30-day trialware. You can check the dashboard reporting after a week or two and see just how much storage I/O traffic has been eliminated and, more importantly, how much storage time has been saved by doing so.

It really is that simple!

You don’t even need to reboot to make the software work. There is no disruption to live running workloads; you can just install and uninstall at will, and it only takes a minute or so.


You will typically start seeing results just minutes after installing.

I hope that this has been interesting and helpful. If you have any questions about the technologies within V-locity or have any questions about testing, feel free to email me directly at sallingham@condusiv.co.uk.

 

I will be delighted to hear from you!

 

 

Financial Sector Battered by Rising Compliance Costs

by Dawn Richcreek 15. August 2018 08:39

Finance is already an outlier in terms of IT costs. The industry devotes 10.5% of total revenue to IT—and on average, each financial industry IT staffer supports only 15.7 users, the fewest of any industry.

All over the world, financial services companies are facing skyrocketing compliance costs. Almost half the respondents to a recent Accenture survey of compliance officers in 13 countries said they expected 10% to 20% increases, and nearly one in five are expecting increases of more than 20%.

Much of this is driven by international banking regulations. At the beginning of this year, the Common Reporting Standard went into effect. An anti-tax-evasion measure signed by 142 countries, the CRS requires financial institutions to provide detailed account information to the home governments of virtually every sizeable depositor.

Just to keep things exciting, the U.S. government hasn’t signed on to CRS; instead we require banks doing business with Americans to comply with the Foreign Account Tax Compliance Act of 2010. Which requires—surprise, surprise—pretty much the same thing as CRS, but reported differently.

And these are just two examples of the compliance burden the financial sector must deal with. Efficiently, and within a budget. In a recent interview by ValueWalk entitled “Compliance Costs Soaring for Financial Institutions,” Condusiv® CEO Jim D’Arezzo said, “Financial firms must find a path to more sustainable compliance costs.”

Speaking to the site’s audience (ValueWalk is a site focused on hedge funds, large asset managers, and value investing) D’Arezzo noted that finance is already an outlier in terms of IT costs. The industry devotes 10.5% of total revenue to IT, more than government, healthcare, retail, or anybody else. It’s also an outlier in terms of IT staff load; on average, each financial industry IT staffer supports only 15.7 users, the fewest of any industry. (Government averages 37.8 users per IT staff employee.)

To ease these difficulties, D’Arezzo recommends that the financial industry consider advanced technologies that provide cost-effective ways to enhance overall system performance. “The only way financial services companies will be able to meet the compliance demands being placed on them, and at the same time meet their efficiency and profitability targets, will be to improve the efficiency of their existing capacity—especially as regards I/O reduction.”

At Condusiv, that’s our business. We’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

 

For an explanation of why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

 

Windows is still Windows Whether in the Cloud, on Hyperconverged or All-flash

by Brian Morin 5. June 2018 04:43

Let me start by stating two facts – facts that I will substantiate if you continue to the end.

Fact #1 - Windows suffers from severe write inefficiencies that dampen overall performance. The holy grail question as to how severe is answered below.

Fact #2 - Windows is still Windows whether running in the cloud, on hyperconverged systems, all-flash storage, or all three. Before you jump to the real-world examples below, let me first explain why.

No matter where you run Windows and no matter what kind of storage environment you run Windows on, Windows still penalizes optimal performance due to severe write inefficiencies in the hand-off of data to storage. Files are always broken down to be excessively smaller than they need to be. Since each piece means a dedicated I/O operation to process as a write or read, this means an enormous amount of noisy, unnecessary I/O traffic is chewing up precious IOPS, eroding throughput, and causing everything to run slower despite how many IOPS are at your disposal.

How much slower?

Now that the latest version of our I/O reduction software is being run across tens of thousands of servers and hundreds of thousands of PCs, we can empirically point out that no matter what kind of environment Windows is running on, there is always 30-40% of I/O traffic that is nothing but mere noise stealing resources and robbing optimal performance.

Yes, there are edge cases in which the inefficiency is as little as 10% but also other edge cases where the inefficiency is upwards of 70%. That being said, the median range is solidly in the 30-40% range and it has absolutely nothing to do with the backend media whether spindle, flash, hybrid, hyperconverged, cloud, or local storage.

Even if running Windows on an all-flash hyperconverged system, SAN or cloud environment with low latency and high IOPS, if the I/O profile isn’t addressed by our I/O reduction software to ensure large, clean, contiguous writes and reads, then 30-40% more IOPS will always be required for any given workload, which adds up to unnecessarily giving away 30-40% of the IOPS you paid for while slowing the completion of every job and query by the same amount.
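As a simple sketch of what that inefficiency costs, here is a quick calculation with a placeholder IOPS budget; only the 30-40% noise range comes from the paragraphs above:

    # Sketch with placeholder numbers: how much of a purchased IOPS budget is consumed
    # by I/O "noise" at the 30-40% inefficiency level described above.
    purchased_iops = 100_000                 # hypothetical all-flash IOPS budget
    for noise in (0.30, 0.40):
        useful = purchased_iops * (1 - noise)
        print(f"{noise:.0%} noise -> only {useful:,.0f} of {purchased_iops:,} IOPS do useful work")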

So what’s going on here? Why is this happening and how?

First of all, the behavior of Windows when it comes to processing write and read input/output (I/O) operations is identical regardless of the storage backend, whether local or networked, and regardless of the media, whether spindles or flash. This is because Windows only ever sees a virtual disk - the logical disk within the file system itself. The OS is abstracted from the physical layer entirely. Windows doesn’t know and doesn’t care if the underlying storage is a local disk or SSD, an array full of SSDs, hyperconverged, or cloud. In the mind of the OS, the logical disk IS the physical disk when, in fact, it is just a logical reference. In the case of enterprise storage, the underlying storage controllers manage where the data physically lives. However, no storage device can dictate to Windows how to write (and subsequently read) in the most efficient manner possible.

This is why many enterprise storage controllers have their own proprietary algorithms to “clean up” the mess Windows gives them, either by buffering or coalescing files on a dedicated SSD or NVRAM tier, or by physically moving pieces of the same file to line up sequentially. That does nothing for the first penalized write, nor for the several penalized reads that follow, because the algorithm first needs to identify a continued pattern before moving blocks. As much as storage controller optimization helps, it’s a far cry from an actual solution because it doesn’t address the root cause: even with backend storage controller optimizations, Windows will still make the underlying server-to-storage architecture execute many more I/O operations than are required to write and subsequently read a file, and every extra I/O takes a measure of time, in the same way that four partially loaded dump trucks will take longer to deliver the full load than one fully loaded dump truck. It bears repeating - no storage device can dictate to Windows how best to write and read files for the healthiest I/O profile that delivers optimum performance, because only Windows controls how files are written to the logical disk. And that singular action is what determines the I/O density (or lack of it) from server to storage.

This occurs because no APIs exist between the Windows OS and the underlying storage system whereby free space at the logical layer can be intelligently synced and consolidated with the physical layer without change-block movement, which would otherwise wear out SSDs and trigger copy-on-write activity that would blow up storage services like replication, thin provisioning, and more.

This means Windows has no choice but to choose the next available allocation at the logical disk layer within the file system itself instead of choosing the BEST allocation to write and subsequently read a file.

The problem is that the next available allocation is only ever the right size on day 1 on a freshly formatted NTFS volume. But as time goes on and files are written and erased and re-written and extended and many temporary files are quickly created and erased, that means the next available space is never the right size. So, when Windows is trying to write a 1MB file but the next available allocation at the logical disk layer is 4K, it will fill that 4K, split the file, generate another I/O operation, look for the next available allocation, fill, split, and rinse and repeat until the file is fully written, and your I/O profile is cluttered with split I/Os. The result is an I/O degradation of excessively small writes and reads that penalizes performance with a “death by a thousand cuts” scenario.
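Here is a toy model of that effect in Python (the free-space gap sizes are invented purely for illustration): writing the same 1MB file costs one I/O on a freshly formatted volume, but many more once the free space is fragmented.

    def count_write_ios(file_size_kb, free_extents_kb):
        """Count how many write I/Os a file needs when filled into free-space gaps in order."""
        ios, remaining = 0, file_size_kb
        for extent_kb in free_extents_kb:
            if remaining <= 0:
                break
            ios += 1                                 # each fragment costs a dedicated I/O
            remaining -= min(extent_kb, remaining)
        return ios

    fresh_volume = [1024]                                 # one contiguous 1 MB gap
    aged_volume = [4, 16, 8, 64, 32, 4, 128, 256, 512]    # fragmented free space (made up)

    print(count_write_ios(1024, fresh_volume))       # 1 write I/O
    print(count_write_ios(1024, aged_volume))        # 9 split write I/Os for the same file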

It’s for this reason that over 2,500 small, midsized, and large enterprises have deployed our I/O reduction software to eliminate all that noisy I/O robbing performance by addressing the root cause problem. Since Condusiv software sits at the storage driver level, it is able to supply patented intelligence to the Windows OS, enabling it to choose the BEST allocation for any file instead of the next available, which is never the right size. This ensures the healthiest I/O profile possible for maximum storage performance on every write and read. Above and beyond that benefit, our DRAM read caching engine (the same engine OEM’d by 9 of the top 10 PC manufacturers) keeps hot reads from traversing the full stack to storage by serving them straight from idle, available DRAM. Customers who add anywhere from 4GB-16GB of memory to key systems with a read bias to get more from that engine will offload 50-80% of all reads from storage, saving even more precious storage IOPS while serving from DRAM, which is 15X faster than SSD. Those who need the most performance possible or simply need to free up more storage IOPS will max out our 128GB threshold and offload 90-99% of reads from storage.
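As a simple illustration of what offloading reads to DRAM does to average read latency, here is a short sketch. The flash latency is a placeholder value; the 15X DRAM-versus-SSD factor and the offload percentages come from the paragraph above:

    # Weighted average read latency: hits are served from DRAM, misses from the SSD/SAN.
    ssd_latency_us = 100.0                   # placeholder flash read latency (microseconds)
    dram_latency_us = ssd_latency_us / 15    # "15X faster than SSD" from the text above

    for hit_rate in (0.50, 0.80, 0.95):
        effective = hit_rate * dram_latency_us + (1 - hit_rate) * ssd_latency_us
        print(f"{hit_rate:.0%} reads from DRAM -> ~{effective:.0f} us average read latency")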

Let’s look at some real-world examples from customers.

Here is VDI in AWS shared by Curt Hapner (CIO, Altenloh Brinck & Co.). 63% of read traffic is being offloaded from underlying storage and 33% of write I/O operations. He was getting sluggish VDI performance, so he bumped up memory slightly on all instances to get more power from our software and the sluggishness disappeared.

Here is an Epicor ERP with SQL backend in AWS from Altenloh Brinck & Co. 39% of reads are being eliminated along with 44% of writes to boost the performance and efficiency of their most mission critical system.

 

Here’s from one of the largest federal branches in Washington running Windows servers on an all-flash Nutanix. 45% of reads are being offloaded and 38% of write traffic.

 

Here is a spreadsheet compilation of different systems from one of the largest hospitality and event companies in Europe who run their workloads in Azure. The extraction of the dashboard data into the CSV shows not just the percentage of read and write traffic offloaded from storage but how much I/O capacity our software is handing back to their Azure instances.

 

To illustrate that we use the software here at Condusiv on our own systems, this dashboard screenshot is from our own Chief Architect (Rick Cadruvi), who uses Diskeeper on his SSD-powered PC. You can see him share his own production data in the recent “live demo” webinar on V-locity 7.0 - https://youtu.be/Zn2QGxBHUzs

As you can see, 50% of reads are offloaded from his local SSD while 42% of write operations have been saved by displacing small, fractured files with large, clean, contiguous files. Not only is that extending the life of his SSD by reducing write amplification, but he has also saved over 6 days of I/O time in the last month.

 

Finally, regarding all-flash SAN storage systems, the full data is in this case study with the University of Illinois who used Condusiv I/O reduction software to more than double the performance of SQL and Oracle sitting on their all-flash arrays: http://learn.condusiv.com/rs/246-QKS-770/images/CS_University-Illinois.pdf?utm_campaign=CS_UnivIll_Case_Study

For a free trial, visit http://learn.condusiv.com/Try-V-locity.html. For best results, bump up memory on key systems if you can and make sure to install the software on all the VMs on the same host. If you have more than 10 VMs, you may want to Contact Us for SE assistance in spinning up our centralized management console to push everything at once – a 20-min exercise and no reboot required.

Please visit www.condusiv.com/v-locity for more than 20 case studies on how our I/O reduction software doubled the performance of mission critical applications like MS-SQL for customers of various environments.

New! Diskeeper 16 Guarantees “Faster than New” Performance for Physical Servers and PCs

by Brian Morin 26. September 2016 09:56

The world’s most popular defragmentation software for physical servers and PCs makes “defrag” a thing of the past and delivers “faster than new” performance by dynamically caching hot reads with idle DRAM.  As a result, Diskeeper® 16 guarantees to solve the toughest application performance issues on physical servers like MS-SQL and guarantees to fix sluggish PCs with faster than new performance or your money back for 90 days – no questions asked.

The market is still catching up to the fact that Diskeeper’s newest patented engine no longer “defrags” but rather proactively eliminates fragmentation with large, sequential writes from Windows to underlying HDDs, SSDs, and SAN storage systems. This eliminates the “death by a thousand cuts” scenario of small, tiny writes and reads that inflates I/Os per second, robs throughput, and shortens the lifespan of HDDs and SSDs alike. However, the biggest new announcement has to do with the addition of DRAM caching – putting idle DRAM to good use by serving hot reads without memory contention or resource starvation.

“Diskeeper 16 with DRAM caching served over 50% of my reads from DRAM and eliminated over 30% of write traffic by preventing fragmentation. Now everything is more responsive!” - David Bruce, Managing Partner, David Bruce & Associates

“Diskeeper 16 with DRAM caching doubled our throughput, so we could backup in half the time.  Our Dell Rapid Recovery backup server is running smoother than ever.” - Curtis Jackson, Network Admin, School City of Hammond

“WOW! Watch it go! I have 44GB of memory in the physical server and Diskeeper is using around 20GB of it to cache!! I can’t imagine having a server without it! Diskeeper 16 is a vastly improved version of Diskeeper!” - Andy Vabulas, Vabulas Enterprises

“Our Symantec app running on a physical server has been notoriously slow for as long as I can remember, but since adding Diskeeper 16 it has improved significantly.” - Josh Currier, Network Infrastructure Manager, Munters Corporation

 “With Diskeeper 16 I can tell my workstation is more responsive with no lag or any type of hesitation. Truly SMART Technology.” - William Krasulak, Systems/Network Admin, Nacci Printing, Inc.

“Our most I/O intensive applications on physical servers needed some help, so we installed Diskeeper 16 with DRAM caching and were amazed by the performance boost!” - Victor Grandmaiter, IT Director, Fort Bend Central Appraisal District

“Diskeeper eliminated 32% of my write traffic by preventing fragmentation and cached 64% of my read traffic within idle memory. This saved my workstation over 20 hours in I/O time after 24 days of testing!” - Lou Goodreau, IT Manager, New England Fishery

“Installed Diskeeper 16 on our worst performing physical servers running ERP with a SQL database and saw an immediate 50% boost!" - Hamid Bouhassoune, Systems Engineer, Global Skincare Company

A top New York clothing brand tried Diskeeper 16 with DRAM caching on their physical servers and saw backup times with Veeam and Backup Exec drop by more than half!

Before Diskeeper Install (date, backup size, transfer rate, duration):

8/7, 10GB, 14MB/s, 1:38
8/8, 11GB, 13MB/s, 1:54

After Diskeeper Install:

8/12, 13GB, 21MB/s, 1:30
8/13, 14GB, 30MB/s, 0:58
8/14, 13GB, 33MB/s, 0:55
8/15, 11GB, 36MB/s, 0:44
8/19, 17GB, 30MB/s, 1:06

 

A Large Illinois Non-Profit tested Diskeeper 16 with DRAM caching on Windows 2012R2 physical servers running CRM and accounting software with a MS-SQL backend. Note – these improvements were almost exclusively from Diskeeper 16’s write optimization engine since idle memory was not available to initiate the new caching engine.

 

See a screenshot of the new dashboard reporting that shows “time saved” from using Diskeeper 16 to eliminate fragmentation and cache reads with idle DRAM.

 

Try Diskeeper 16 with DRAM caching for 30-days -> 

 

 

 
