Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

How to make NVMe storage even faster

by Spencer Allingham 4. September 2018 07:21

This is a blog to complement a vlog that I posted a few weeks ago, in which I demonstrated how to use the intelligent RAM caching technology found in the V-locity® software from Condusiv® Technologies to improve the performance that a computer can get from NVMe flash storage. You can view this video here:

 

A question arose from a couple of long-term customers about whether the V-locity software was still relevant if they started using very fast flash storage solutions. This was a fair question!

The V-locity software is designed to reduce the amount of unnecessary storage I/O traffic that actually has to go out and be processed by the underlying disk storage layer. It not only reduces the amount of I/O traffic, but it optimizes that which DOES have to go out to disk, and moreover, it further reduces the workload on the storage layer by employing a very intelligent RAM caching strategy.

So, given that flash storage is not only becoming more prevalent in today’s compute environments, but can also process storage I/O traffic VERY fast compared to its spinning disk counterparts, handling more I/Os Per Second (IOPS) than ever before, the very sensible question was this:


"Can the use of Condusiv's V-locity software provide a significant performance increase when using very fast flash storage?"


As I was fortunate to have recently implemented some flash storage in my workstation, I was keen to run an experiment to find out.


SPOILER ALERT: For those of you who just want to have the question answered, the answer is a resounding YES!

The test showed beyond doubt that with Condusiv’s V-locity software installed, your Windows computer can process significantly more I/Os Per Second, sustain a much higher data throughput, and give storage I/O-heavy workloads the opportunity to get significantly more work done in the same amount of time – even when using very fast flash storage.

 

For those of you true ‘techies’ who are as geeky as me, read on, and I will describe the testing methodology and results in more detail.

The storage that I now had in my workstation (and am still happily using!) was a 1-terabyte Samsung SM961 Polaris M.2-2280 PCIe 3.0 x4 NVMe solid-state drive (SSD).

 

 Is it as fast as it’s made out to be? Well, in this engineer’s opinion – OMG YES!

 

It makes one hell of a difference when compared to spinning disk drives. This is in part because it’s connected to the computer via a PCI Express (PCIe) bus, as opposed to a SATA bus. The bus is what you connect your disk to in the computer, and different types of buses have different capabilities, such as the speed at which data can be transferred. SATA-connected disks are significantly slower than today’s PCIe-connected storage using an NVMe device interface. There is a good Wikipedia article about this if you want to read more:

https://en.wikipedia.org/wiki/NVM_Express

 

To give you an idea of the improvement though, consider that the Advanced Host Controller Interface (AHCI) used with SATA-connected disks has a single command queue, which can hold up to 32 commands. That’s up to 32 storage requests at a time, and that was okay for spinning disk technology, because the disks themselves could only cope with a certain number of storage requests at a time.

NVMe, on the other hand, doesn’t have one command queue; it has up to 65,535 of them. AND each of those command queues can itself accommodate up to 65,536 commands. That’s a lot more storage requests that can be processed at the same time! This is really important, because flash storage is capable of processing MANY more storage requests in parallel than its spinning disk cousins. Quite simply, NVMe was needed to really make the most of what flash disk hardware can do. You wouldn’t put a kitchen tap (faucet) on the end of a fire hose and expect the same amount of water to flow through it, right? Same principle!
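To put rough numbers on that difference, here is a quick back-of-the-envelope calculation in Python (just arithmetic on the queue limits quoted above):

```python
# Back-of-the-envelope comparison of maximum outstanding commands.
# Queue limits come from the AHCI and NVMe specifications quoted above.

ahci_queues, ahci_depth = 1, 32            # AHCI: one queue, 32 commands
nvme_queues, nvme_depth = 65_535, 65_536   # NVMe: up to 65,535 queues, 65,536 commands each

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI max outstanding commands: {ahci_outstanding:,}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
print(f"NVMe can queue roughly {nvme_outstanding // ahci_outstanding:,}x more commands")
```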

As you can probably tell, I’m quite excited by this boost in storage performance. (I’m strange like that!) And, I know I’m getting a little off topic (apologies), so back to the point!

I had this SUPER-FAST storage solution and needed to prove one way or another if Condusiv’s V-locity software could increase the ability of my computer to process even more workload.

Would my computer be able to process more storage I/Os per Second?

Would my computer be able to process a larger amount of storage I/O traffic (megabytes) every second?

 

Testing Methodology

To answer these questions, I took a virtual machine and cloned it, so that I had two virtual machines that were as identical as I could make them. I then installed Condusiv’s V-locity software on both and disabled V-locity on one of the machines, so that it would process storage I/O traffic just as if V-locity wasn’t installed.

To generate a storage I/O workload, I turned to my old friend IOMETER. For those of you who might not know IOMETER, it is a software utility originally designed by Intel, but now open source and available at SourceForge.net. It is designed as an I/O subsystem measurement tool and is great for generating I/O workloads of different types (very customizable!) and measuring how quickly that workload can be processed. Great for testing networks or, in this case, how fast you can process storage I/O traffic.

I configured IOMETER on both machines with the type of workload that one might find on a typical SQL database server. I KNOW, I know, there is no such thing as a ‘typical’ SQL database, but I wanted a storage I/O profile that was as meaningful as possible, rather than a workload that would just make V-locity look good. Here is the actual IOMETER configuration:

Worker 1 – 16 kilobyte I/O requests, 100% random, 33% Write / 67% Read

Worker 2 – 64 kilobyte I/O requests, 100% random, 33% Write / 67% Read
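If you would like a rough feel for how such a workload is generated without installing IOMETER, here is a minimal Python sketch of the same shape of traffic (16 KB and 64 KB requests, 100% random, 67% reads). It is an illustration only, not a substitute for IOMETER: it is single-threaded and goes through the operating system’s file cache, so its numbers aren’t comparable, and the file size and duration are arbitrary values.

```python
import os, random, time

# Minimal sketch of a 67% read / 33% write, 100% random workload against a
# scratch file, loosely mimicking the IOMETER profile above. Illustrative
# only: unlike IOMETER, it is single-threaded and uses the OS file cache.

PATH = "testfile.bin"
FILE_SIZE = 256 * 1024 * 1024            # 256 MB scratch file (arbitrary size)
BLOCK_SIZES = [16 * 1024, 64 * 1024]     # the two worker request sizes
DURATION = 10                            # seconds to run

with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)                # pre-allocate the scratch file

ops = 0
deadline = time.monotonic() + DURATION
with open(PATH, "r+b") as f:
    while time.monotonic() < deadline:
        size = random.choice(BLOCK_SIZES)
        f.seek(random.randrange(0, FILE_SIZE - size))   # 100% random offset
        if random.random() < 0.67:       # 67% reads...
            f.read(size)
        else:                            # ...33% writes
            f.write(os.urandom(size))
        ops += 1

print(f"{ops / DURATION:,.0f} I/Os per second (cached approximation)")
os.remove(PATH)
```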

Test Results

V-locity Disabled

 

V-locity Enabled

 

Summary

 

 

Conclusion

 

In this lab test, the presence of V-locity reduced the average amount of time required to process storage I/O requests by around 65%, allowing a greater number of storage I/O requests to be processed per second and a greater amount of data to be transferred.
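As a rough sanity check on why a latency reduction translates into more I/Os Per Second, Little’s Law relates the two (IOPS ≈ outstanding I/Os ÷ average latency). In the sketch below, the queue depth and baseline latency are hypothetical illustration values; only the 65% figure comes from the test:

```python
# Rough sanity check using Little's Law: IOPS ~= outstanding I/Os / latency.
# Queue depth and baseline latency are hypothetical illustration values;
# the 65% latency reduction is the figure observed in the test above.

queue_depth = 32                         # hypothetical outstanding I/Os
latency_before = 1.0e-3                  # hypothetical baseline: 1.0 ms
latency_after = latency_before * (1 - 0.65)

iops_before = queue_depth / latency_before
iops_after = queue_depth / latency_after
print(f"{iops_before:,.0f} -> {iops_after:,.0f} IOPS "
      f"({iops_after / iops_before:.1f}x) at the same queue depth")
```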

To prove beyond doubt that it was indeed V-locity that caused the additional storage I/O traffic to be processed, I stopped the V-locity service. This immediately ‘turned off’ all of the RAM caching and other optimization engines that V-locity was providing, and the net result was that the IOPS and throughput dropped back to the baseline as the underlying storage had to start processing ALL of the storage traffic that IOMETER was generating.

What value is there to reducing storage I/O traffic?

The more you can reduce storage I/O traffic that has to go out and be processed by your disk storage, the more storage I/O headroom you are handing back to your environment for use by additional workloads. It means that your current disk storage can now cope with:

• More computers sharing the storage. Great if you have a Storage Area Network (SAN) underpinning your virtualized environment, for example. More VMs running!

• More users accessing and manipulating the shared storage. The more users you have, the more storage I/O traffic is likely to be generated.

• Greater CPU utilization. CPU speeds and processing capacity keep increasing. Now that processing power typically exceeds what most workloads need, V-locity can help your applications become more productive and use more of that processing power by not having to wait so much on the disk storage layer.

 

If you can achieve this without having to replace or upgrade your storage hardware, it not only increases the return on your current storage hardware investment, but also might allow you to keep that storage running for a longer period of time (if you’re not on a fixed refresh cycle).

Sweat the storage asset!

(I hate that term, but you get the idea)

When you do finally need to replace your current storage, perhaps it won’t be as costly as you thought because you’re not having to OVER-PROVISION the storage as much, to cope with all of the excess, unnecessary storage traffic that Condusiv’s V-locity software can eliminate.

I typically see a storage traffic reduction of at least 25% at customer sites.

AND, I haven’t even mentioned the performance boost that many workloads receive from the RAM caching technology provided by Condusiv’s V-locity software. It is worth remembering that as fast as today’s flash storage solutions are, the RAM that you have in your computers is faster! The greater the percentage of read I/O traffic that you can satisfy from RAM instead of the storage layer, the better performing those storage I/O-hungry applications are likely to be.

What type of applications benefit the most?

In the real world, V-locity is not a silver-bullet for all types of workloads, and I wouldn’t insult your intelligence by saying that it was. If you have some workloads that don’t generate a great deal of storage I/O traffic, perhaps a DNS server, or DHCP server, well, V-locity isn’t likely to make a huge difference. That’s my honest opinion as an IT Engineer.

HOWEVER, if you are using storage I/O-hungry applications, then you really should give it a try.

Here are just some examples of the workloads that thousands of V-locity customers are ‘performance-boosting’ with Condusiv’s I/O reduction and RAM caching technologies:

• Database solutions such as Microsoft SQL Server, Oracle, MySQL, SQL Express, and others.
• Virtualization solutions such as Microsoft Hyper-V and VMware.
• Enterprise Resource Planning (ERP) solutions like Epicor.
• Business Intelligence (BI) solutions like IBM Cognos.
• Finance and payroll solutions like Sage Accounting.
• Electronic Health Records (EHR) solutions, such as MEDITECH.
• Customer Relationship Management (CRM) solutions, such as Microsoft Dynamics.
• Learning Management Systems (LMS) solutions.
• Not to mention email servers like Microsoft Exchange AND busy file servers.

 

 

Do you use any of these in your IT environment?

 

There are case studies on the Condusiv web site for all of these workload types (and more), here:

http://www.condusiv.com/knowledge-center/case-studies/default.aspx

 

Try it for yourself

You can experience the full power of Condusiv’s V-locity software for yourself, in YOUR Windows environment, within a couple of minutes. Just go to www.condusiv.com/try and get a copy of the fully-featured 30-day trialware. You can check the dashboard reporting after a week or two and see just how much storage I/O traffic has been eliminated, and more importantly, how much storage time has been saved by doing so.

It really is that simple!

You don’t even need to reboot to make the software work. There is no disruption to live running workloads; you can just install and uninstall at will, and it only takes a minute or so.


You will typically start seeing results just minutes after installing.

I hope that this has been interesting and helpful. If you have any questions about the technologies within V-locity or have any questions about testing, feel free to email me directly at sallingham@condusiv.co.uk.

 

I will be delighted to hear from you!

 

 

Financial Sector Battered by Rising Compliance Costs

by Dawn Richcreek 15. August 2018 08:39

Finance is already an outlier in terms of IT costs. The industry devotes 10.5% of total revenue to IT—and on average, each financial industry IT staffer supports only 15.7 users, the fewest of any industry.

All over the world, financial services companies are facing skyrocketing compliance costs. Almost half the respondents to a recent Accenture survey of compliance officers in 13 countries said they expected 10% to 20% increases, and nearly one in five are expecting increases of more than 20%.

Much of this is driven by international banking regulations. At the beginning of this year, the Common Reporting Standard went into effect. An anti-tax-evasion measure signed by 142 countries, the CRS requires financial institutions to provide detailed account information to the home governments of virtually every sizeable depositor.

Just to keep things exciting, the U.S. government hasn’t signed on to CRS; instead we require banks doing business with Americans to comply with the Foreign Account Tax Compliance Act of 2010. Which requires—surprise, surprise—pretty much the same thing as CRS, but reported differently.

And these are just two examples of the compliance burden the financial sector must deal with. Efficiently, and within a budget. In a recent interview by ValueWalk entitled “Compliance Costs Soaring for Financial Institutions,” Condusiv® CEO Jim D’Arezzo said, “Financial firms must find a path to more sustainable compliance costs.”

Speaking to the site’s audience (ValueWalk is a site focused on hedge funds, large asset managers, and value investing) D’Arezzo noted that finance is already an outlier in terms of IT costs. The industry devotes 10.5% of total revenue to IT, more than government, healthcare, retail, or anybody else. It’s also an outlier in terms of IT staff load; on average, each financial industry IT staffer supports only 15.7 users, the fewest of any industry. (Government averages 37.8 users per IT staff employee.)

To ease these difficulties, D’Arezzo recommends that the financial industry consider advanced technologies that provide cost-effective ways to enhance overall system performance. “The only way financial services companies will be able to meet the compliance demands being placed on them, and at the same time meet their efficiency and profitability targets, will be to improve the efficiency of their existing capacity—especially as regards I/O reduction.”

At Condusiv, that’s our business. We’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

 

For an explanation of why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

 

A Deep Dive Into The I/O Performance Dashboard

by Howard Butler 2. August 2018 08:36

While most users are familiar with the main Diskeeper®/V-locity®/SSDkeeper™ Dashboard view, which focuses on the number of I/Os eliminated and Storage I/O Time Saved, the I/O Performance Dashboard tab takes a deeper look into the performance characteristics of I/O activity. The data shown here is similar in nature to other Windows performance monitoring utilities and provides a wealth of data on I/O traffic streams.

By default, the information displayed is from the time the product was installed. You can easily filter this down to a different time frame by clicking on the “Since Installation” picklist and choosing a different time frame such as Last 24 Hours, Last 7 Days, Last 30 Days, Last 60 Days, Last 90 Days, or Last 180 Days.  The data displayed will automatically be updated to reflect the time frame selected.

 

The first section of the display above is labeled “I/O Performance Metrics”, and shows Average, Minimum, and Maximum values for I/Os Per Second (IOPS), throughput measured in Megabytes per Second (MB/Sec), and application I/O latency measured in milliseconds (msec). Diskeeper, V-locity and SSDkeeper use the Windows high-performance system counters to gather this data, and it is measured down to the microsecond (1/1,000,000 second).

While most people are familiar with IOPS and throughput expressed in MB/Sec, I will give a short description just to make sure. 

IOPS is the number of I/Os completed in 1 second of time. This is a measurement of both read and write I/O operations. MB/Sec is a measurement that reflects the amount of data being worked on and passed through the system. Taken together, they represent speed and throughput efficiency. One thing I want to point out is that the Latency value shown in the above report is not measured at the storage device, but instead is a much more accurate reflection of I/O response time at an application level. This is where the rubber meets the road. Each I/O that passes through the Windows storage driver has a start and completion time stamp. The difference between these two values measures the real-world elapsed time for how long it takes an I/O to complete and be handed back to the application for further processing. Measurements at the storage device do not account for network, host, and hypervisor congestion. Therefore, our Latency value is a much more meaningful value than typical hardware counters for I/O response time or latency. In this display, we also provide meaningful data on the percentage of I/O traffic that is reads versus writes. This helps to better gauge which of our technologies (IntelliMemory® or IntelliWrite®) is likely to provide the greatest benefit.
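The products do this timing in a kernel driver using Windows high-performance counters, but the principle is easy to demonstrate. Here is a simplified, user-mode Python sketch of per-I/O response time measurement; the file name, request size, and request count are arbitrary illustration values:

```python
import os, time

# Simplified illustration of per-I/O response time (IORT) measurement:
# timestamp each request before it is issued and after it completes.
# The products do this in a kernel driver with Windows high-performance
# counters; this user-mode sketch just shows the idea.

PATH = "iort_demo.bin"
with open(PATH, "wb") as f:
    f.write(os.urandom(16 * 1024))       # a 16 KB test file

latencies = []
with open(PATH, "rb") as f:
    for _ in range(1000):
        start = time.perf_counter()      # start stamp
        f.seek(0)
        f.read(16 * 1024)                # one 16 KB read
        latencies.append(time.perf_counter() - start)

print(f"avg {sum(latencies) / len(latencies) * 1e3:.3f} ms, "
      f"min {min(latencies) * 1e3:.3f} ms, max {max(latencies) * 1e3:.3f} ms")
os.remove(PATH)
```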

The next section of the display measures the “Total Workload” in terms of the amount of data accessed for both reads and writes as well as any data satisfied from cache. 

 

Systems with higher workloads compared to other systems in your environment are the ones likely to have higher I/O traffic; they tend to cause more of the I/O blender effect when connected to shared SAN storage or a virtualized environment, and they are prime candidates for the extra I/O capacity relief that Diskeeper, V-locity and SSDkeeper provide.

Now moving into the third section of the display, labeled “Memory Usage”, we see measurements that represent the Total Memory in the system and the total amount of I/O data that has been satisfied from the IntelliMemory cache. The purpose of our patented read caching technology is twofold: satisfy frequently repeated read requests from the cache, and identify the small read operations that tend to cause excessive “noise” in the I/O stream to storage and satisfy those from the cache as well. So, it’s not uncommon for the “Data Satisfied from Cache” compared to the “Total Workload” to be a bit lower than with other types of caching algorithms. Storage arrays tend to do quite well when handed large sequential I/O traffic, but choke when small random reads and writes are part of the mix. Eliminating I/O traffic from going to storage is what it’s all about. The fewer I/Os to storage, the faster and more data your applications will be able to access.

In addition, we show the average, minimum, and maximum values for free memory used by the cache.  For each of these values, the corresponding Total Free Memory in Cache for the system is shown (Total Free Memory is memory used by the cache plus memory reported by the system as free).  The memory values will be displayed in a yellow color font if the size of the cache is being severely restricted due to the current memory demands of other applications and preventing our product from providing maximum I/O benefit.  The memory values will be displayed in red if the Total Memory is less than 3GB.

Read I/O traffic, which is potentially cacheable, can receive an additional benefit by adding more DRAM for the cache and allowing the IntelliMemory caching technology to satisfy a greater amount of that read I/O traffic at the speed of DRAM (10-15 times faster than SSD), offloading it away from the slower back-end storage. This would have the effect of further reducing average storage I/O latency and saving even more storage I/O time.

Additional Note: For machines running SQL Server or Microsoft Exchange, you will likely need to cap the amount of memory that those applications can use (if you haven’t done so already), to prevent them from ‘stealing’ any additional memory that you add to those machines.

It should be noted that the IntelliMemory read cache is dynamic and self-learning. This means you do not need to pre-allocate a fixed amount of memory to the cache or run some pre-assessment tool or discovery utility to determine what should be loaded into cache. IntelliMemory will only use memory that is otherwise free, available, or unused for its cache, and will always leave plenty of memory untouched (1.5GB – 4GB depending on the total system memory) and available for Windows and other applications to use. As demand for memory arises, IntelliMemory will release memory from its cache and give it back to Windows, so there will not be a memory shortage. There is further intelligence within the IntelliMemory caching technology to know in real time precisely what data should be in cache at any moment and the relative importance of the entries already in the cache. The goal is to ensure that the data maintained in the cache delivers the maximum possible reduction in read I/O traffic.

So, there you have it.  I hope this deeper dive explanation provides better clarity to the benefit and internal workings of Diskeeper, V-locity and SSDkeeper as it relates to I/O performance and memory management.

You can download a free 30-day, fully functioning trial of our software and see the new dashboard here: www.condusiv.com/try

Condusiv Launches SSDkeeper Software that Guarantees “Faster than New” Performance for PCs and Physical Servers and Extends Longevity of SSDs

by Brian Morin 17. January 2017 09:30

The company that sold over 100 Million Diskeeper® licenses for hard disk drive systems, now releases SSDkeeper™ to keep solid-state drive systems running longer while performing “faster than new.”

Every Windows PC or physical server fitted with a solid-state drive (SSD) suffers from very small, fractured writes and reads, which dampen optimal SSD performance and ultimately erode the longevity of SSDs through write amplification issues. SSDkeeper’s patented software ensures large, clean, contiguous writes and reads for more payload with every I/O operation, reduces the Program/Erase (P/E) cycles that shorten SSD longevity, and boosts performance even further with its ability to cache hot reads within idle, available DRAM.

Solid-state drives can only handle a finite number of writes before failing. Every write kicks off P/E cycles that shorten SSD lifespan, a problem known as write amplification. By reducing the number of writes required for any given file or workload, SSDkeeper significantly boosts write performance while also reducing the number of P/E cycles that would otherwise have been executed. This enables individuals and organizations to reclaim the write speed of their SSD drives while ensuring the longest life possible.

Patented Write Optimization

SSDkeeper’s patented write optimization engine (IntelliWrite®) prevents excessively small, fragmented writes and reads that rob the performance and endurance of SSDs. SSDkeeper ensures large, clean contiguous writes from Windows, so maximum payload is carried with every I/O operation. By eliminating the “death by a thousand cuts” scenario of many, tiny writes and reads that slow system performance, the lifespan of an SSD is also extended due to reduction in write amplification issues that plague all SSD devices.

Patented Read Optimization

SSDkeeper electrifies Windows system performance further with an additional patented feature: dynamic memory caching (IntelliMemory®). By automatically using idle, available DRAM to serve hot reads, data is served from memory, which is 12-15X faster than SSD, further reducing wear to the SSD device. The real genius in SSDkeeper’s DRAM caching engine is that nothing has to be allocated for cache. All caching occurs automatically. SSDkeeper dynamically uses only the memory that is available at any given moment and throttles according to the needs of the application, so there is never an issue of resource contention or memory starvation. If a system is ever memory-constrained at any point, SSDkeeper's caching engine will back off entirely. However, systems with just 4GB of available DRAM commonly serve 50% of read traffic from cache. It doesn't take much available memory to have a big impact on performance.

Enhanced Reporting

If you ever wanted to know how much Windows inefficiencies were robbing you of system performance, SSDkeeper tracks the time saved due to elimination of small, fragmented writes and the time saved from every read request that is served from DRAM instead of the underlying SSD. Users can leverage SSDkeeper’s built-in dashboard to see what percentage of all write requests are reduced by sequentializing otherwise small, fractured writes and what percentage of all read requests are cached from idle, available DRAM.

SSDkeeper is a lightweight file system driver that runs invisibly in the background with near-zero intrusion on system resources. All optimizations occur automatically in real-time.

While SSDkeeper provides the same core patented functionality and features as the latest Diskeeper® 16 for hard disk drives (minus the defragmentation functions, which apply to hard disk drives only), the benefit to a solid-state drive is different than to a hard disk drive. Hard disk drives do not suffer from the write amplification that reduces longevity. By eliminating excessively small writes, IntelliWrite not only improves write performance but extends endurance as well.

Available in Professional and Server Editions

• SSDkeeper Professional for Windows PCs with SSD drives greatly enhances the performance of corporate laptops and desktops.

• SSDkeeper Server speeds physical server system performance of the most I/O intensive applications such as MS-SQL Server by 2X to 10X depending on the amount of idle, unused memory.

• Options include the Diskeeper Administrator management console to automate network deployment and management across hundreds or thousands of PCs or servers.

• A free 30-day software trial download is available at http://www.condusiv.com/evaluation-software/

• Now available for purchase on our online store: http://www.condusiv.com/purchase/SSDKeeper/

 

Overview of How We Derive Storage I/O Time Saved

by Rick Cadruvi, Chief Architect 11. January 2017 01:00

The latest versions of V-locity® (for virtual servers) and Diskeeper® (for physical servers and PCs) both contain built-in dashboards that show the exact benefit of the product to any one system or group of systems by showing how much and what percentage of read/write traffic is offloaded from storage and how much “I/O Time” that saves.

To understand the computation on “I/O Time Saved,” in its simplest form, the formula is essentially:

       Storage I/O Time Saved = Total I/Os Eliminated * Average I/O Response Time

In essence, if you take the Total I/Os Eliminated from the dashboard Benefits screen and multiply it by the average latency from the I/O Performance dashboard screen, you will generally end up in the ballpark of the “I/O Time Saved.”
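As a worked example of that formula, the numbers below are purely hypothetical; on a real system they would come from the Benefits screen and the I/O Performance screen respectively:

```python
# Worked example of the dashboard formula above. Both input values are
# hypothetical illustration values, not measurements.

total_ios_eliminated = 2_500_000     # hypothetical: I/Os eliminated
avg_io_response_time = 1.2e-3        # hypothetical: 1.2 ms average IORT

io_time_saved = total_ios_eliminated * avg_io_response_time
print(f"Storage I/O Time Saved = {io_time_saved:,.0f} seconds "
      f"(about {io_time_saved / 3600:.2f} hours)")
```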

I/O counts and I/O times are accumulated on a per I/O basis. Every I/O that goes to storage is timed using Windows High Performance Counters for accuracy.  That timing is from when the I/O is sent down the stack until it comes back up. In essence we time I/O response time (IORT) or latency that the application sees, not the storage device.  We also track reads and writes separately as they impact the storage “I/O Time Saved” differently.

The data is accumulated and calculated during periods of time rather than across the entire reporting period. In the long term, that period of time ends up being hourly. Very active I/O periods will have longer IORTs and therefore the amount of I/O storage time saved per I/O eliminated will likely be greater than during relatively light periods. 

If there is a high queue depth, the IORT we time will be larger than the per I/O storage IORT.  We look at the effective IORT the application would see rather than the time the underlying storage takes to process any single I/O.  After all, the user only cares about how long the application took to process an I/O he/she requested, not how long a HDD or SSD took for any single I/O when it got around to processing it.

Let’s talk for a moment about storage “I/O Time Saved” versus clock time because they are not the same and our technologies can, in some cases, save far more storage I/O time than clock time.

If all storage I/O was sequential for the entire instance of the operating system, then the maximum amount of storage “I/O Time Saved” would be the amount of time since installation, and you would expect it to be considerably less as we are unlikely to eliminate ALL I/Os. And you might expect some idle time. Of course, applications do not do pure sequential I/O.  Modern applications are almost always multi-threaded and most computer systems are running multiple applications or instances of them at the same time.  Also, other operations are happening on the system outside of the primary application.  Think of Outlook running in the background while you do some other work on your system. Outlook is constantly receiving updated data.  Windows is also processing lots of I/Os in the background just for it to be able to continue operations.  These I/Os happen in parallel to any I/Os that users may be doing with an application.

In general, there are lots of I/Os that are being processed at the same time.  You would not want to work on a computer system where only a single I/O was being processed at any one point in time as it would be VERY slow.  If the average queue depth would have been 5 without us but 2 with us, that means every time 2 I/Os go through to storage, we would have eliminated 3 I/Os.  The end result would be a storage “I/O Time Saved” of somewhere between 1.5-3x clock time, depending on how the underlying storage processed the I/Os. 
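Here is that queue-depth example worked through in a few lines of Python. The 5 and 2 are the figures from the paragraph above; the serial-versus-overlapped bounds are my reading of “depending on how the underlying storage processed the I/Os”:

```python
# Worked version of the queue-depth example above: average queue depth of
# 5 without the product, 2 with it, so 3 I/Os are eliminated for every
# 2 that still reach storage.

qd_without, qd_with = 5, 2
eliminated = qd_without - qd_with           # 3 I/Os eliminated
per_issued = eliminated / qd_with           # 1.5 eliminated per I/O issued

# If the storage would have fully overlapped the eliminated I/Os, the saved
# time approaches 1.5x clock time; if it would have processed them one after
# another, it approaches 3x, hence the 1.5-3x range quoted above.
print(f"I/Os eliminated per I/O issued: {per_issued:.1f}")
print(f"'I/O Time Saved' multiple of clock time: {per_issued:.1f}x to {eliminated:.1f}x")
```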

Another factor that contributes to the possibility of storage “I/O Time Saved” exceeding clock time is the reduction of split I/Os. Let’s say that without our product, all I/Os actually end up being split into 3 I/Os due to Windows writing files in an excessively small, fragmented manner. After installing our product, by displacing small, tiny writes with large, contiguous writes, each of those I/Os that had to be split into 3 is now completed as a single I/O. If that were the normal case, the storage “I/O Time Saved” for each I/O would be roughly 2x the actual storage I/O time due to the prevention of fragmentation.
