Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Top 10 Webinar Questions – Our Experts Get Technical

by Marissa Newman 7. January 2020 12:58

As we enter the new year and reflect on the 25 live webinars that we held in 2019, we are thrilled with the level of interaction and thought we’d take a look back at some of the great questions asked during the lively Q&A sessions. Here are the top questions and the responses that our technical experts gave.

 

Q. We run a Windows VM on Microsoft Azure; is your product still applicable?

A. Yes. Whether the Windows system is physical or virtual, it still runs into the I/O tax and the I/O blender effect, both of which degrade system performance. Whether the system is on-premises or in the cloud, V-locity® can optimize it and improve performance.

 

Q. If a server is dedicated to running multiple SQL jobs for different applications, would you recommend installing V-locity?

A. Yes, we would definitely recommend using V-locity. However, the software is not specific to SQL instances, as it looks to improve the I/O performance on any system. SQL just happens to be a sweet spot because of how I/O intensive it is.

 

Q. Will V-locity/Diskeeper® help with the performance of my backup jobs?

A. Yes. We have a lot of customers who buy the software specifically to speed up their backups because their backup windows are running past the time allotted for them. We’ve had some great success stories of customers who have reduced their backup windows by putting our software on their systems.

 

Q. Does the software work in physical environments?

A. Yes. Although we demonstrate the software’s benefits in a virtual environment, the same performance gains can be had on physical systems. The same I/O tax and blender effect that degrade performance on virtual systems can also occur on physical systems. The I/O tax occurs on any Windows system when nice, sequential I/O is broken up into less efficient small, random I/O, and that applies to physical workstation environments as well. The blender effect, which we see when all of those small, random I/Os from multiple VMs have to be sorted by the hypervisor, can occur in physical environments too, for example when multiple physical systems are reading from and writing to different LUNs on the same SAN.

 

Q. What about the safety of this caching? If the system crashes, how safe is my data?

A. The software uses read-only caching, as data integrity is our #1 priority when we develop these products. With read-only caching, the data that’s in our cache is already in your storage. So, if the system unexpectedly goes down (e.g., a power outage), that’s okay, because the data in the cache is already on your storage and completely safe.
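
The principle is easy to see in a toy sketch. Below is a minimal, illustrative read-only cache in Python (not our actual implementation); the storage object and its read_block/write_block methods are hypothetical. Writes always go to storage first, and the cache only ever holds copies of blocks that are already persisted, so losing the cache in a crash loses nothing.

```python
class ReadOnlyCache:
    """Illustrative read-only block cache (hypothetical interfaces, not the product's code)."""

    def __init__(self, storage, capacity_blocks):
        self.storage = storage            # assumed to expose read_block/write_block
        self.capacity = capacity_blocks
        self.cache = {}                   # block_id -> bytes (copies of persisted data only)

    def read(self, block_id):
        if block_id in self.cache:        # cache hit: serve from RAM
            return self.cache[block_id]
        data = self.storage.read_block(block_id)    # miss: fetch from storage
        if len(self.cache) < self.capacity:
            self.cache[block_id] = data             # keep a copy for next time
        return data

    def write(self, block_id, data):
        self.storage.write_block(block_id, data)    # persist first, always
        self.cache.pop(block_id, None)              # drop any stale copy; storage stays the truth
```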

 

Q. How does your read cache differ from SQL Server’s own data cache?

A. SQL Server is not very smart or efficient with how it uses your valuable available memory. It tries to load as much of its databases as it can into whatever memory is available, even though some of those databases, or parts of them, aren’t even being accessed. Most of the time your databases are much larger than the amount of memory you have, so it can never fit everything. Our software is smarter: it determines the best blocks of data to optimize in order to get the best performance gains. Additionally, the software will also cache other noisy I/Os on the system, which can further improve performance on the SQL server.

 

Q. In a virtual environment, does the software get installed on the host or the VMs?

A. The software gets installed on the individual VMs that are running Windows, because that’s where the applications create the I/Os and it’s the best place to start optimizing. That doesn’t necessarily mean it has to be installed on all of the VMs on a host. You can put it just on the VMs that are getting hit the most with I/O activity, but we’ve seen the best performance gains when it is installed on all of the VMs on that host: if you only optimize one VM, the other VMs are still causing performance degradation on that same network. By putting the software on all of them, you’ll get optimal performance all around.

 

Q. Is your product needed if I have SSDs as my storage back-end?

A. Our patented I/O reduction solutions are very relevant in an SSD environment. By reducing random write I/Os to back-end SSDs, we also help mitigate and reduce write amplification issues, and we keep SSDs running at “like new” performance levels. And although SSDs are much faster than HDDs, the DRAM used in the product’s intelligent caching feature is 10x-15x faster than SSDs. We have many published customer use cases showing the benefits of our products on SSD-based systems. Many of our customers have experienced 50%, 100%, even 300% performance gains in an all-flash/SSD environment!

 

Q. Do we need to increase our RAM capacity to utilize your software?

A. That is one of the unique set-it-and-forget-it features of this product. The software uses only memory that is available and not being used at the time, and it gives that memory back if the system or user applications need it. If there’s no available memory on the system, you simply won’t be able to take advantage of the caching. So, if there’s not enough available RAM, we do recommend adding some to take advantage of the caching, but even if you can’t add RAM you’ll still get the benefit of all the other technology in the product. Best practice is to reserve 4-8 GB at a minimum.
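
As a rough illustration of that “use only what’s free” behavior, here is a minimal sketch in Python (not the product’s actual logic) that sizes a cache from the RAM currently available, using the psutil library, while always keeping a reserve for the OS and applications; the cache object and its grow/shrink methods are hypothetical.

```python
import psutil

RESERVE_BYTES = 4 * 1024**3   # always leave roughly 4 GB free, per the best practice above

def target_cache_size():
    """Return how much RAM a cache could reasonably use right now, in bytes (illustrative)."""
    available = psutil.virtual_memory().available
    return max(0, available - RESERVE_BYTES)

def resize_cache(cache):
    """Grow or shrink a hypothetical cache object toward the current target."""
    target = target_cache_size()
    if cache.size_bytes > target:
        cache.shrink_to(target)   # give memory back when the system needs it
    else:
        cache.grow_to(target)     # use idle memory while it is free
```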

 

Q. What teams can benefit most from the software? The SQL Server Team/Network Team/Applications Development Team?

A. The software can really benefit everyone. SQL Servers are usually very I/O intensive, so their performance improves because we’re reducing I/O in the environment, but any I/O-intensive system or application (like a file server or Exchange Server) will benefit. The network team can benefit because the software decreases the traffic that has to go through the network to storage, which increases the bandwidth available to others. And because the software reduces I/O across all Microsoft applications, it really can benefit everyone in the environment.

 

There you have it – our top 10 questions asked during our informative webinars! Have more questions? Check out our FAQs, ask us in the comments below or send an email to info@condusiv.com.

Tags:

Application Performance | SSD, Solid State, Flash

How To Get The Most Out Of Your Flash Storage Or Move To Cloud

by Rick Cadruvi, Chief Architect 2. January 2020 10:41

You just went out and upgraded your storage to all-flash. Or maybe you moved your systems to the cloud, where you can choose the SLA that gets you the performance you want. We can provide you with a secret weapon that will keep you looking like a hero and deliver the real performance you made these choices for.


Let’s start with why you made those choices in the first place. Why did you make the change? Why not just upgrade the aging storage to a new-gen HDD or hybrid storage subsystem? After all, if you’re like most of us, you’re still experiencing explosive growth in data, and HDDs continue to be more cost-effective for whatever data requirements you’re going to need in the future.

 

If you went to all-flash, perhaps it was the decreasing cost that made it more approachable from a budgetary point of view and the obvious gain in speed made it easy to justify.

 

If it was a move to the cloud, there may have been many reasons including:

   •  Not having to maintain the infrastructure anymore

   •  More flexibility to quickly add additional resources as needed

   •  Ability to pay for the SLA you need to match application needs to end user performance

Good choices. So, what can Diskeeper® and V-locity® do to make these choices even better and deliver the expected performance at peak times, when it’s needed most?

 

Let’s start with a brief conversation about I/O bottlenecks.

 

If you have an All-Flash Array, you still have a network connection between your system and your storage array.  If you have local flash storage, system memory is still faster, but your data size requirements make it a limited resource. 

 

If you’re in the cloud, you’re still competing for resources, and at peak times you’ll see slowdowns due to resource contention. Plus, you will experience issues because of file system and operating system overhead.

 

File fragmentation significantly increases the number of I/Os that have to be issued for your applications to process the data they need. Free space fragmentation adds overhead to allocating file space and makes file fragmentation far more likely.
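
A quick back-of-the-envelope calculation shows why. Assume, purely for illustration, a 64 MB file and a storage stack that transfers at most 1 MB per request: read contiguously, the file needs about 64 largely sequential I/Os, but split into 2,000 small fragments it needs at least 2,000 I/Os, most of them random.

```python
FILE_SIZE_MB = 64
MAX_IO_MB = 1          # largest single transfer the stack will issue (illustrative)
FRAGMENTS = 2000       # hypothetical fragment count for the same file

contiguous_ios = FILE_SIZE_MB / MAX_IO_MB          # ~64 large, mostly sequential reads
fragmented_ios = max(FRAGMENTS, contiguous_ios)    # at least one I/O per fragment
print(contiguous_ios, fragmented_ios)              # 64.0 vs 2000 I/Os for the same data
```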

 

Then there are all the I/Os that Windows creates that are not directly related to your application’s data access. And then you have utilities for anti-malware, data recovery, and so on. Trust me, there are LOTS of those.

 

At Condusiv, we’ve watched the dramatic changes in storage and data for a long time.  The one constant we have seen is that your needs will always accelerate past the current generation of technologies you use.  We also handle the issues that aren’t handled by the next generation of hardware.  Let’s take just a minute and talk about that.

 

What about all the I/O overhead created in the background by Windows or your anti-malware and other system utility software packages?  What about the I/Os that your application doesn’t bother to optimize because it isn’t the primary data being accessed?  Those I/Os account for a LOT of I/O bandwidth.  We refer to those as “noisy” I/Os.  They are necessary, but not the data your application is actually trying to process.  And, what about all the I/Os to the storage subsystem from other compute nodes?  We refer to that problem as the I/O Blender Effect.

 

 

Our RAM caching technologies are highly optimized to use a small amount of RAM to eliminate the maximum amount of I/O overhead. They do this dynamically, so that when you need RAM the most, we free it up for your needs; then, when RAM is available, we use it to remove the I/Os causing the most overhead. A small amount of free RAM goes a long way toward reducing the I/O overhead problem, because our caching algorithms look at how to eliminate the most I/O overhead effectively. We don’t use LIFO or FIFO algorithms and simply hope the right I/Os get eliminated. Our algorithm uses empirical data, in real time, to guarantee maximum I/O overhead elimination while using minimal resources.
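
To make that distinction concrete, here is a toy sketch in Python (not our actual algorithm) of “empirical” eviction: instead of evicting whatever arrived first (FIFO) or last (LIFO), each cached block carries a running count of the storage I/Os it has actually saved, and the least useful block is evicted first.

```python
class BenefitScoredCache:
    """Toy illustration: evict the block that has saved the least I/O, not the oldest or newest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}        # block_id -> bytes
        self.saved_ios = {}   # block_id -> number of storage I/Os this block has avoided

    def get(self, block_id, fetch_from_storage):
        if block_id in self.data:
            self.saved_ios[block_id] += 1           # a hit means one storage I/O avoided
            return self.data[block_id]
        block = fetch_from_storage(block_id)        # miss: go to storage
        if len(self.data) >= self.capacity:
            victim = min(self.saved_ios, key=self.saved_ios.get)   # least useful block
            del self.data[victim]
            del self.saved_ios[victim]
        self.data[block_id] = block
        self.saved_ios[block_id] = 0
        return block
```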

 

Defragmenting every file that is fragmented is not reasonable given the data explosion. Plus, you didn’t spend your money just so our software could make your storage look pretty. We knew this long before you ever did. As a result, we created technologies to prevent fragmentation in the first place, and technologies to empirically locate just those files that are causing extra overhead due to fragmentation, so we can address those files only and get the most bang for the buck in terms of I/O density.
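
As a simple illustration of “most bang for the buck”, one could rank files by the extra I/Os their fragmentation forces and then treat only the worst offenders. The Python sketch below is illustrative only: the fragment-counting function is supplied by the caller and is hypothetical (on Windows a file’s extent map can be queried through the file system, but that is outside this sketch).

```python
import os

def worst_offenders(root, get_fragment_count, top_n=20):
    """Rank files under `root` by the extra I/Os their fragmentation forces.

    `get_fragment_count` is a caller-supplied (hypothetical) function returning
    how many extents a file occupies; a contiguous file occupies exactly one.
    """
    scored = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            extra_ios = max(0, get_fragment_count(path) - 1)  # one extra I/O per extra fragment
            if extra_ios:
                scored.append((extra_ios, path))
    return sorted(scored, reverse=True)[:top_n]   # address only the worst offenders
```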

 

Between our caching and file optimization technologies, we will make sure you keep getting the performance you hoped for when you need it the most.  And, of course, you will continue to be the superstar to the end users and your boss.  I call that a Win-Win. 😊

 

Finally, we continue to look into our crystal ball for the next set of I/O performance issues, the ones others aren’t thinking about yet, before they appear in the first place. You can rest assured we will have solutions for those problems long before you ever experience them.

 

##

 

Additional and related resources:

 

Windows is still Windows Whether in the Cloud, on Hyperconverged or All-flash

Why Faster Storage May NOT Fix It

How to make NVMe storage even faster

Trial Downloads

 

Tags:
