Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Doing it All: The Internet of Things and the Data Tsunami

by Dawn Richcreek 7. August 2018 15:44

“If you’re a CIO today, basically you have no choice. You have to do edge computing and cloud computing, and you have to do them within budgets that don’t allow for wholesale hardware replacement…”

For a while there, it looked like corporate IT resource planning was going to be easy. Organizations would move practically everything to the cloud, lean on their cloud service suppliers to maintain performance, cut back on operating expenses for local computing, and reduce—or at least stabilize—overall cost.

Unfortunately, that prediction didn’t reckon with the Internet of Things (IoT), which, in terms of both size and importance, is exploding.

What’s the “edge”?

It varies. To a telecom, the edge could be a cell phone or a cell tower. To a manufacturer, it could be a machine on a shop floor. To a hospital, it could be a pacemaker. What’s important is that edge computing allows data to be analyzed in near real time, so actions can take place at a speed that would be impossible in a cloud-based environment.

(Consider, for example, a self-driving car. The onboard optics spot a baby carriage in an upcoming crosswalk. There isn’t time for that information to be sent upstream to a cloud-based application, processed, and returned as an instruction to slam on the brakes.)

Meanwhile, the need for massive data processing and analytics continues to grow, creating a kind of digital arms race between data creation and the ability to store and analyze it. In the life sciences, for instance, it’s estimated that only 5% of the data ever created has been analyzed.

Condusiv® CEO Jim D’Arezzo was interviewed by App Development magazine (which publishes news to 50,000 IT pros) on this very topic, in an article entitled “Edge computing has a need for speed.” Noting that edge computing is predicted to grow at a CAGR of 46% between now and 2022, Jim said, “If you’re a CIO today, basically you have no choice. You have to do edge computing and cloud computing, and you have to do them within budgets that don’t allow for wholesale hardware replacement. For that to happen, your I/O capacity and SQL performance need to be optimized. And, given the realities of edge computing, so do your desktops and laptops.”

At Condusiv, we’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

If you want to hear why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

A Deep Dive Into The I/O Performance Dashboard

by Howard Butler 2. August 2018 08:36

While most users are familiar with the main Diskeeper®/V-locity®/SSDkeeper™ Dashboard view, which focuses on the number of I/Os eliminated and Storage I/O Time Saved, the I/O Performance Dashboard tab takes a deeper look into the performance characteristics of I/O activity. The data shown here is similar in nature to other Windows performance monitoring utilities and provides a wealth of data on I/O traffic streams.

By default, the information displayed is from the time the product was installed. You can easily filter this down to a different time frame by clicking on the “Since Installation” picklist and choosing a different time frame such as Last 24 Hours, Last 7 Days, Last 30 Days, Last 60 Days, Last 90 Days, or Last 180 Days.  The data displayed will automatically be updated to reflect the time frame selected.


The first section of the display above, labeled “I/O Performance Metrics,” shows the average, minimum, and maximum values for I/Os per second (IOPS), throughput measured in megabytes per second (MB/Sec), and application I/O latency measured in milliseconds (msecs). Diskeeper, V-locity, and SSDkeeper use the Windows high-performance system counters to gather this data, and it is measured down to the microsecond (1/1,000,000 of a second).

While most people are familiar with IOPS and throughput expressed in MB/Sec, I will give a short description just to make sure. 

IOPS is the number of I/Os completed in one second of time, counting both read and write operations. MB/Sec reflects the amount of data being worked on and passed through the system. Taken together, they represent speed and throughput efficiency.

One thing I want to point out is that the Latency value shown in the above report is not measured at the storage device; it is a much more accurate reflection of I/O response time at the application level. This is where the rubber meets the road. Each I/O that passes through the Windows storage driver gets a start and a completion time stamp, and the difference between these two values measures the real-world elapsed time for an I/O to complete and be handed back to the application for further processing. Measurements at the storage device do not account for network, host, and hypervisor congestion, so our Latency value is far more meaningful than typical hardware counters for I/O response time or latency.

This display also shows the percentage of I/O traffic that is reads versus writes, which helps gauge which of our technologies (IntelliMemory® or IntelliWrite®) is likely to provide the greatest benefit.
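To make the timestamp idea concrete, here is a minimal Python sketch of the same measurement approach. It is not our driver code (which runs inside the Windows storage stack); it simply timestamps an I/O at start and completion and aggregates IOPS, MB/Sec, and latency. The file name sample.bin is a placeholder created just for the example.

```python
import time

class IOStats:
    """Toy aggregator illustrating IOPS, MB/sec, and latency metrics.
    The real products gather this in the Windows storage driver with
    high-resolution performance counters; this is only a sketch."""
    def __init__(self):
        self.completed = 0
        self.bytes_moved = 0
        self.latencies = []                      # seconds, one entry per I/O
        self.window_start = time.perf_counter()

    def record_io(self, start, end, size_bytes):
        # Latency = completion timestamp minus start timestamp, i.e. the
        # real-world elapsed time as seen by the application.
        self.latencies.append(end - start)
        self.completed += 1
        self.bytes_moved += size_bytes

    def summary(self):
        elapsed = time.perf_counter() - self.window_start
        return {
            "iops": self.completed / elapsed,
            "mb_per_sec": self.bytes_moved / elapsed / (1024 * 1024),
            "avg_latency_ms": 1000 * sum(self.latencies) / len(self.latencies),
            "min_latency_ms": 1000 * min(self.latencies),
            "max_latency_ms": 1000 * max(self.latencies),
        }

# Create a small placeholder file so the example is self-contained.
with open("sample.bin", "wb") as f:
    f.write(b"\0" * (64 * 1024))

# Time a single 64 KB read and report the aggregates.
stats = IOStats()
with open("sample.bin", "rb") as f:
    t0 = time.perf_counter()
    data = f.read(64 * 1024)
    t1 = time.perf_counter()
    stats.record_io(t0, t1, len(data))
print(stats.summary())
```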

The next section of the display measures the “Total Workload” in terms of the amount of data accessed for both reads and writes as well as any data satisfied from cache. 


Systems with higher workloads than others in your environment are the ones likely to have higher I/O traffic; they tend to cause more of the I/O blender effect when connected to shared SAN storage or a virtualized environment, and they are prime candidates for the extra I/O capacity relief that Diskeeper, V-locity, and SSDkeeper provide.

Now moving into the third section of the display, labeled “Memory Usage,” we see measurements that represent the total memory in the system and the total amount of I/O data that has been satisfied from the IntelliMemory cache. The purpose of our patented read caching technology is twofold: satisfy frequently repeated read requests from cache, and recognize the small read operations that tend to cause excessive “noise” in the I/O stream to storage and satisfy those from cache as well. So, it’s not uncommon for “Data Satisfied from Cache” as a share of “Total Workload” to be a bit lower than with other types of caching algorithms. Storage arrays tend to do quite well when handed large sequential I/O traffic but choke when small random reads and writes are part of the mix. Eliminating I/O traffic from going to storage is what it’s all about: the fewer I/Os to storage, the faster and the more data your applications will be able to access.
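To illustrate the idea, here is a hypothetical sketch (not IntelliMemory’s patented algorithm) of a read cache that deliberately favors small, frequently repeated reads; the 64 KB “small I/O” threshold is an assumption for illustration only.

```python
from collections import OrderedDict

SMALL_IO = 64 * 1024   # assumed threshold: reads <= 64 KB count as "noisy" small I/O

class SmallReadCache:
    """Hypothetical read cache that prefers small, repeated reads --
    the kind that generate excessive 'noise' against storage."""
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()   # (file, offset) -> cached data

    def read(self, key, length, fetch_from_storage):
        if key in self.entries:            # cache hit: no storage I/O at all
            self.entries.move_to_end(key)  # mark as recently used
            return self.entries[key]
        data = fetch_from_storage()        # cache miss: go to storage
        if length <= SMALL_IO:             # only cache the small, noisy reads
            while self.used + len(data) > self.capacity and self.entries:
                _, old = self.entries.popitem(last=False)  # evict least recent
                self.used -= len(old)
            self.entries[key] = data
            self.used += len(data)
        return data

# Usage: a 4 KB read misses once, then is served from RAM thereafter.
cache = SmallReadCache(capacity_bytes=256 * 1024 * 1024)
data = cache.read(("db.mdf", 0), 4096, lambda: b"\0" * 4096)
```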

In addition, we show the average, minimum, and maximum values for the free memory used by the cache. For each of these values, the corresponding Total Free Memory in Cache for the system is shown (Total Free Memory is memory used by the cache plus memory reported by the system as free). The memory values are displayed in yellow if the size of the cache is being severely restricted by the current memory demands of other applications, preventing our product from providing maximum I/O benefit. They are displayed in red if the Total Memory is less than 3GB.

Read I/O traffic that is potentially cacheable can receive an additional benefit from adding more DRAM for the cache, allowing the IntelliMemory caching technology to satisfy a greater share of that read I/O traffic at the speed of DRAM (10-15 times faster than SSD) and offload it from the slower back-end storage. This further reduces average storage I/O latency and saves even more storage I/O time.
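A quick back-of-envelope calculation shows why. The latency numbers below are illustrative assumptions, not measurements:

```python
# Assumed, illustrative latencies (a DRAM hit roughly 10-15x faster than SSD):
ssd_latency_us = 100   # read served by back-end SSD storage
dram_latency_us = 8    # read served from the DRAM cache

for hit_ratio in (0.0, 0.3, 0.5):
    avg = hit_ratio * dram_latency_us + (1 - hit_ratio) * ssd_latency_us
    print(f"cache hit ratio {hit_ratio:.0%}: average read latency {avg:.0f} us")
# 0% -> 100 us, 30% -> 72 us, 50% -> 54 us
```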

Additional Note: For machines running SQL Server or Microsoft Exchange, you will likely need to cap the amount of memory those applications can use (if you haven’t done so already) to prevent them from ‘stealing’ any additional memory that you add to those machines.
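For SQL Server, the relevant setting is ‘max server memory’. Below is a minimal sketch of applying it from Python with pyodbc; the connection string and the 8192 MB cap are placeholders you would size for your own environment, and setting it through SQL Server Management Studio works just as well.

```python
import pyodbc

# Placeholder connection string -- adjust server, driver, and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "Trusted_Connection=yes",
    autocommit=True)
cur = conn.cursor()

# Cap SQL Server's buffer pool so it cannot absorb all newly added RAM.
# 8192 MB is an illustrative value; leave enough headroom for the cache.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute("EXEC sp_configure 'max server memory (MB)', 8192; RECONFIGURE;")
```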

It should be noted that the IntelliMemory read cache is dynamic and self-learning. This means you do not need to pre-allocate a fixed amount of memory to the cache or run a pre-assessment or discovery utility to determine what should be loaded into it. IntelliMemory will only use memory that is otherwise free or unused for its cache, and it will always leave plenty of memory untouched (1.5GB – 4GB, depending on total system memory) and available for Windows and other applications. As demand for memory rises, IntelliMemory releases memory from its cache back to Windows so there will not be a memory shortage. There is further intelligence in the IntelliMemory caching technology to know, in real time, precisely what data should be in the cache at any moment and the relative importance of the entries already there. The goal is to ensure that the data maintained in the cache delivers the maximum possible reduction in read I/O traffic.
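Conceptually, dynamic sizing against free memory looks something like the simplified sketch below. This is not the product’s actual logic, just an illustration of the behavior described above; the 2 GB reserve is an assumed value inside the 1.5GB – 4GB range, and the sketch uses the third-party psutil package.

```python
import psutil   # third-party: pip install psutil

GB = 1024 ** 3
RESERVE = 2 * GB   # assumed reserve left untouched for Windows and other apps

def target_cache_size(current_cache_bytes):
    """Grow the cache only into otherwise-free memory; shrink it
    as soon as other applications need that memory back."""
    headroom = psutil.virtual_memory().available - RESERVE
    # Positive headroom: the cache may grow into it.
    # Negative headroom: release memory back to Windows right away.
    return max(0, current_cache_bytes + headroom)

# Example: re-evaluate periodically and resize the cache toward the target.
current = 512 * 1024 * 1024
print(f"target cache size: {target_cache_size(current) / GB:.2f} GB")
```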

So, there you have it. I hope this deeper dive provides better clarity into the benefits and internal workings of Diskeeper, V-locity, and SSDkeeper as they relate to I/O performance and memory management.

You can download a free 30-day, fully functioning trial of our software and see the new dashboard here: www.condusiv.com/try

Solving the IO Blender Effect with Software-Based Caching

by Spencer Allingham 5. July 2018 07:30

First, let me explain exactly what the IO Blender Effect is, and why it causes a problem in virtualized environments such as those from VMware or Microsoft’s Hyper-V.



This is typically what storage IO traffic looks like when everything is working well. You have the smallest number of storage IO packets, each carrying a large payload of data down to the storage. Because the data arrives in large chunks, the storage controller has the opportunity to create large stripes across its media, using the fewest storage-level operations before it can acknowledge that the write has been successful.



Unfortunately, all too often the Windows Write Driver is forced to split the data it’s writing into many more, much smaller IO packets. These split IO situations transfer data far less efficiently, adding overhead to each write and each subsequent read. Because the storage controller now receives data in much smaller chunks, it can only create much smaller stripes across its media, meaning many more storage operations are required to process each gigabyte of storage IO traffic.


This is not only true when writing data, but also if you need to read that data back at some later time.

But what does this really mean in real-world terms?

It means that an average gigabyte of storage IO traffic that should take perhaps 2,000 or 3,000 storage IO packets to complete is now taking 30,000 or 40,000 storage IO packets instead. The data transfer has been split into many more, much smaller, fractured IO packets. Each storage IO operation takes a measurable amount of time and system resource to process, so this is bad for performance! It will cause your workloads to run slower than they should, and this will worsen over time unless you perform some time- and resource-costly maintenance.
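The arithmetic behind those packet counts is simple enough to verify:

```python
GIB = 1024 ** 3   # one gigabyte (binary) of storage IO traffic

for packets in (2_000, 40_000):
    print(f"{packets:>6} packets -> about {GIB / packets / 1024:.0f} KB per IO")
# 2000 packets  -> about 524 KB per IO (large, healthy transfers)
# 40000 packets -> about 26 KB per IO (small, fractured transfers)
```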

So, what about the IO Blender Effect?

Well, the IO Blender Effect can amplify the performance penalty (or Windows IO Performance Tax) in a virtualized environment. Here’s how it works…


As the small, fractured IO traffic from several virtual machines passes through the physical host hypervisor (Hyper-V server or VMware ESX server), the hypervisor acts like a blender. It mixes these IO streams, randomizing the storage IO packets, and then sends what is now a chaotic mess of small, fractured, and very random IO streams out to the storage controller.

It doesn’t matter what type of storage you have on the back-end. Whether it’s direct-attached disks in the physical host machine or a Storage Area Network (SAN), this type of storage IO profile couldn’t be less storage-friendly.

The storage is now only receiving data in small chunks at a time and doesn’t understand the relationship between the packets, so it can only create very small stripes across its media. That unfortunately means many more storage operations are required before it can send an acknowledgement of each data transfer back up to the Windows operating system that originated it.
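A toy simulation makes the blending easy to see. Each virtual machine below issues a perfectly sequential stream of 64 KB writes, yet the interleaved order that reaches the storage controller is effectively random (the VM names and offsets are made up for illustration):

```python
import random

# Three VMs, each issuing a nicely sequential stream of 64 KB writes.
streams = {
    f"vm{i}": [(f"vm{i}", offset) for offset in range(0, 10 * 64, 64)]
    for i in range(1, 4)
}

# The hypervisor services whichever VM has I/O ready -- effectively
# interleaving the streams before they reach the storage controller.
blended = []
while any(streams.values()):
    vm = random.choice([v for v in streams if streams[v]])
    blended.append(streams[vm].pop(0))

print(blended[:6])
# e.g. [('vm2', 0), ('vm1', 0), ('vm2', 64), ('vm3', 0), ('vm1', 64), ...]
# Each stream was sequential, but the combined order the storage sees is not.
```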

How can RAM caching alleviate the problem?


Firstly, to be truly effective, RAM caching needs to be done at the Windows operating system layer. This provides the shortest IO path for read requests that can be satisfied from server-side RAM provisioned to each virtual machine. By satisfying as many “Hot Reads” from RAM as possible, not only are those read requests satisfied faster, but they no longer have to go out to storage at all. That means fewer storage IO packets for the hypervisor to blend.

Furthermore, the V-locity® caching software from Condusiv Technologies also employs a patented technology called IntelliWrite®. This intelligently helps the Windows Write Driver make better choices when writing data out to disk, which avoids many of the split IO situations that would then be made worse by the IO Blender Effect. You now get back to that ideal situation of healthy IO; large, sequential writes and reads.

Is RAM caching a disruptive solution?


No! Not at all, if done properly.

Condusiv’s V-locity software for virtualised environments is completely non-disruptive to live, running workloads such as SQL Server, Microsoft Dynamics, Business Intelligence (BI) solutions such as IBM Cognos, and other important workloads such as SAP and Oracle.

In fact, all you need to do to test this for yourself is download a free trialware copy from:

www.condusiv.com/try

Just install it! There are no reboots required, and it will start working in just a couple of minutes. If you decide that it isn’t for you, then uninstall it just as easily. No reboots, no disruption!


I’m a MEDITECH Hospital with SSDs, Is FAL Growth Still an Issue that Risks Downtime?

by Brian Morin 4. December 2017 07:34

Now that many MEDITECH hospitals have gone all-flash for their backend storage, one of the most common questions we field is whether or not there is still downtime risk from the File Attribute List (FAL) growth issue if the data physically lives on solid-state drives (SSDs).

The main reason this question comes up is that MEDITECH requires “defragmentation,” which many admins assume is only relevant to a spinning-disk backend. That misconception couldn’t be further from the truth: the FAL issue has nothing to do with the backend media, but rather with the file system itself. At the same time, defragmentation processes are damaging to solid-state media, which is why MEDITECH hospitals turn to Condusiv’s V-locity® I/O reduction software, which prevents fragmentation from occurring in the first place and has special engines designed for MEDITECH environments to keep the FAL from reaching its size limit and causing unscheduled downtime.

The File Attribute List is a Windows NTFS file metadata structure referred to as the FAL. The FAL structure can point to different types of file attributes, such as security attributes or standard information such as creation and modification dates, and, most importantly, the actual data contained within the file. For example, the FAL keeps track of where all the data is for the file. The FAL actually contains pointers to file records that indicate the location of the file data on the volume. If that data has to be stored at different logical allocations on the volume (i.e., fragmentation), more pointers are required. This in turn increases the size of the FAL. Herein lies the problem: the FAL size has an upper limitation of 256KB, which is comprised of 8192 attribute entries. When that limit is reached, no more pointers can be added, which means NO more data can be added to the file. And, if it is a folder file, which keeps track of all the files that reside under that folder, no more files can be added under that folder file. Once this occurs, the application crashes, leading to a best case scenario of several hours of unscheduled downtime to resolve.
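To put that limit in perspective: 256KB divided by 8,192 entries works out to an average of 32 bytes per attribute entry, so a rough estimate of how close a fragmented file is to the ceiling looks like this (the fragment count below is hypothetical):

```python
FAL_LIMIT_BYTES = 256 * 1024            # NTFS upper limit on FAL size
MAX_ENTRIES = 8192                      # attribute entries at that limit
print(FAL_LIMIT_BYTES // MAX_ENTRIES)   # -> 32 bytes per attribute entry on average

fragments = 6_000                       # hypothetical fragment count for one file
print(f"~{fragments / MAX_ENTRIES:.0%} of the FAL entry budget consumed")
# Once the file reaches 8192 entries, no more data can be added to it.
```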

Although this blog points out MEDITECH customers experiencing this issue, we have seen this FAL problem occur within non-MEDITECH environments like MS-Exchange and MS-SQL, with varying types of backend storage media from HDDs to all-flash arrays. So, what can be done about it?

The logical solution would seem to be: why not just defragment the volume? Wouldn’t that decrease the number of pointers and shrink the FAL? The problem is that traditional defragmentation actually causes the FAL to grow! While it can decrease the number of pointers, it will not decrease the FAL size; in fact, it can cause the FAL to grow even larger, making the problem worse even though you are attempting to remediate it.

The only proprietary solution to this problem is Condusiv’s V-locity® for virtual servers or Diskeeper® Server for physical servers. Both include a special technology called MediWrite®, which helps suppress the issue from occurring in the first place and provides special handling if it has already occurred. MediWrite includes:

>Unique FAL handling: As indicated above, traditional methods of defragmentation cause the FAL to grow even further in size. MediWrite detects when files have FAL size issues and uses proprietary methods to prevent FAL growth. This is the only engine of its kind in the industry.

>Unique FAL-safe file movement: V-locity and Diskeeper’s free space consolidation engines automatically detect FAL size issues and deploy the MediWrite feature to resolve them.

>Unique FAL growth prevention: Along with MediWrite, V-locity and Diskeeper contain another very important technology called IntelliWrite® which automatically prevents new fragmentation from occurring. By preventing new fragmentation from occurring, IntelliWrite minimizes any further FAL size growth issues.

>Unique Offline FAL Consolidation tool: Any MEDITECH hospital that already has an FAL issue can use the embedded offline tool to shrink the FAL-IN-USE size in a very short time (~5 min) as opposed to manual processes that take several hours.

>V-locity and Diskeeper have been endorsed by MEDITECH. Click Here to view.


It’s Diskeeper Groundhog Day

by Brian Morin 12. July 2017 06:08

Every week, I find myself sending the same email to at least one Diskeeper® customer. Almost every time, it’s a new manager/director/VP who joins the company and hears Diskeeper is running on their physical servers or clients. This is how it goes:

Hi Christopher,

I’m the SVP of WW Sales here, and I received news that XYZ company may not be renewing support on your Diskeeper licenses because of concerns about running it on MS-SQL servers attached to SAN storage with SSDs.

Since you own these licenses, I wanted to reach out for a quick 15-min tech conversation, only to make sure you understand what you have in the latest version. Many still have legacy ideas of Diskeeper from when it was a “defrag” product and not applicable to the new world order of SSDs and modern SANs.

I guarantee the current version of our product will offload anywhere from 30-40% of your I/O traffic from your underlying storage to provide a nice performance boost and give some precious I/O headroom back to that subsystem. Many customers offload >50% of I/O by simply adding more memory, enabling them to sweat their storage assets significantly and use those IOPS for other things. It’s a free upgrade while you are active on your maintenance.

As a primer, you can read this case study we published last month with the University of Illinois in which we doubled performance of their SQL and Oracle applications sitting on all-flash arrays. And if you want the short, short summary of what the technology does now, here’s the 2-min video: https://www.youtube.com/watch?v=Ge-49YYPwBM. This is why Gartner named us Cool Vendor of the Year a couple years back.

The good news is that they have heard of Diskeeper. The bad news is that they still associate it with legacy versions that emphasized defragmentation applicable only to spinning disk. For those customers who virtualized and converted their licenses to V-locity®, we don’t run into this issue.

If you are a current Diskeeper customer and have difficulty educating new management, I suggest setting up a tech review between Condusiv and new team members. If you are running all-flash, an alternative approach others have taken is simply replacing their Diskeeper with SSDkeeper® so they don’t run into old “defrag” objections. The core features are identical and both auto-detect if the storage is HDD or SSD and apply the best optimization method.

Keep in mind, since Diskeeper now proactively eliminates excessively small writes and reads in the first place, the whole concept of “defragmentation” is largely dead except in some extenuating circumstances. An example would be a heavily fragmented volume on spinning disk that never had Diskeeper on it and could use a one-time clean-up. A good example is a new customer last week whose full backup was taking over a day to complete! After running Diskeeper on their physical servers, the backup time was cut in half.

Tags:

Defrag | Diskeeper | SAN | SSD, Solid State, Flash
