Condusiv Technologies Blog



How To Get The Most Out Of Your Flash Storage Or Move To Cloud

by Rick Cadruvi, Chief Architect | 2 January 2020 10:41

You just went out and upgraded your storage to all-flash. Or maybe you moved your systems to the cloud, where you can choose the SLA that gets you the performance you want. Either way, we can provide a secret weapon that will keep you looking like a hero and deliver the real performance you made these choices for.


Let’s start with why you made those choices in the first place. Why did you make the change? Why not just upgrade the aging storage to a new-generation HDD or hybrid storage subsystem? After all, if you’re like most of us, you’re still experiencing explosive growth in data, and HDDs continue to be more cost-effective for whatever data requirements you’ll have in the future.

If you went to all-flash, perhaps it was the decreasing cost that made it more approachable from a budgetary point of view, and the obvious gain in speed that made it easy to justify.

If it was a move to the cloud, there may have been many reasons, including:

   •  Not having to maintain the infrastructure anymore

   •  More flexibility to quickly add resources as needed

   •  The ability to pay for the SLA that matches application needs to end-user performance

Good choices. So, what can Diskeeper® and V-locity® do to make these good choices even better, delivering the expected performance at peak times when it’s needed most?

Let’s start with a brief conversation about I/O bottlenecks.


If you have an all-flash array, you still have a network connection between your system and your storage array. If you have local flash storage, system memory is still faster, but your data-size requirements make memory a limited resource.

If you’re on the cloud, you’re still competing for resources, and at peak times you’ll see slowdowns due to resource contention. Plus, you will experience issues because of file system and operating system overhead.
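
To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The latency figures are illustrative assumptions, not measurements of any particular array or cloud tier; the point is how strongly a RAM cache hit ratio pulls down the average:

    # Illustrative latencies (assumptions, not vendor measurements)
    RAM_US = 0.1             # ~100 ns for a read served from RAM
    REMOTE_FLASH_US = 500.0  # flash reached over a network/fabric hop

    def effective_latency_us(hit_ratio):
        """Average read latency when some reads are served from RAM."""
        return hit_ratio * RAM_US + (1 - hit_ratio) * REMOTE_FLASH_US

    for hit in (0.0, 0.5, 0.9):
        print(f"hit ratio {hit:.0%}: {effective_latency_us(hit):6.1f} us average read")

Even a modest hit ratio pulls the average read latency down sharply, which is why serving the right I/Os from memory matters as much as the raw speed of the storage behind it.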


File fragmentation significantly increases the number of I/Os your applications must issue to process the data they need. Free space fragmentation adds overhead to allocating file space and makes file fragmentation far more likely.
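
To see why fragmentation matters so much, consider a toy model. The file size, transfer limit, and fragment counts below are invented for illustration; the key rule is that a single read request cannot span a fragment boundary:

    import math

    FILE_KB = 512 * 1024   # a 512 MB file (assumed size)
    MAX_IO_KB = 1024       # largest single transfer the stack will issue

    def reads_needed(fragments):
        """Each fragment caps the size of a contiguous transfer."""
        per_fragment_kb = FILE_KB / fragments
        return fragments * max(1, math.ceil(per_fragment_kb / MAX_IO_KB))

    for frags in (1, 100, 4096):
        print(f"{frags:>5} fragments -> {reads_needed(frags):>5} read requests")

In this model a contiguous file needs 512 large reads, while the same file in 4,096 fragments costs 4,096 smaller requests, an 8x jump in I/O count before the storage hardware even enters the picture.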


Then there are all the I/Os that Windows creates that are not directly related to your application’s data access. On top of that, you have utilities for anti-malware, data recovery, and so on. And trust me, there are LOTS of those.

At Condusiv, we’ve watched the dramatic changes in storage and data for a long time. The one constant we have seen is that your needs will always accelerate past the current generation of technologies you use. We also address the issues that the next generation of hardware doesn’t handle. Let’s take just a minute and talk about that.


What about all the I/O overhead created in the background by Windows or your anti-malware and other system utility software packages? What about the I/Os your application doesn’t bother to optimize because they aren’t the primary data being accessed? Those I/Os account for a LOT of I/O bandwidth. We refer to them as “noisy” I/Os: necessary, but not the data your application is actually trying to process. And what about all the I/Os hitting the storage subsystem from other compute nodes? We refer to that problem as the I/O Blender Effect.
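
You can get a rough feel for the noisy-I/O problem on your own machine. This sketch uses the third-party psutil library to total per-process I/O since each process started; it cannot see the I/O Blender Effect across other compute nodes, but it usually shows how much traffic comes from processes other than your application:

    import psutil  # third-party: pip install psutil

    # Sum bytes read and written per process, then list the top talkers.
    # Expect plenty of entries that are not your application.
    totals = []
    for proc in psutil.process_iter(["name"]):
        try:
            io = proc.io_counters()
            totals.append((io.read_bytes + io.write_bytes,
                           proc.info["name"] or "?"))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # some system processes won't let us look

    for nbytes, name in sorted(totals, reverse=True)[:10]:
        print(f"{nbytes / 1024**2:10.1f} MiB  {name}")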


Our RAM caching technologies are highly optimized to use a small amount of RAM to eliminate the maximum amount of I/O overhead. They work dynamically, so when you need RAM the most, we free it up for your needs; when RAM is available, we use it to remove the I/Os causing the most overhead. A small amount of free RAM goes a long way toward reducing the I/O overhead problem, because our caching algorithms focus on eliminating the most I/O overhead effectively. We don’t use LIFO or FIFO algorithms and hope I/Os get eliminated. Our algorithm uses empirical data, in real time, to maximize I/O overhead elimination while using minimal resources.
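
To illustrate the difference between queue-order eviction and eviction driven by observed behavior, here is a toy cache that evicts the block with the fewest recorded hits. It is deliberately simplistic and is not the actual Diskeeper or V-locity algorithm, just a sketch of the idea:

    class FrequencyCache:
        """Toy cache: evict the block with the fewest observed hits."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.hits = {}  # block id -> observed hit count

        def access(self, block):
            if block in self.hits:
                self.hits[block] += 1
                return "hit"
            if len(self.hits) >= self.capacity:
                coldest = min(self.hits, key=self.hits.get)
                del self.hits[coldest]  # evict least-used, not oldest
            self.hits[block] = 1
            return "miss"

    # A hot block survives a stream of one-off blocks that would push it
    # out of a FIFO cache of the same size.
    cache = FrequencyCache(capacity=3)
    workload = ["hot", "a", "hot", "b", "hot", "c", "hot", "d", "hot"]
    hits = sum(cache.access(b) == "hit" for b in workload)
    print(f"{hits} hits out of {len(workload)} accesses")

A FIFO cache eventually evicts “hot” simply because it is oldest; counting hits, even this crudely, tends to keep the block whose eviction would cost the most I/O.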


Defragmenting every fragmented file is no longer reasonable given the explosion in data. Plus, you didn’t spend your money so our software could make file layouts look pretty. We knew this long before you ever did. As a result, we created technologies to prevent fragmentation in the first place, and technologies to empirically locate just those files that cause extra overhead due to fragmentation, so we can address only those files and get the most bang for the buck in terms of I/O density.
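
That selection step can be sketched as a scoring pass: weight each file’s extra per-read I/Os by how often the file is actually read, then work from the top of the list. The file names, fragment counts, and read rates below are invented; on a real system the fragment map would come from the file system itself:

    files = [
        # (name, fragments, reads per day) -- invented sample data
        ("app.log",      9500,   40000),
        ("archive.bak", 12000,       2),
        ("orders.db",     800,  250000),
        ("readme.txt",      1,      10),
    ]

    def overhead_score(fragments, reads_per_day):
        """Extra I/O requests per day attributable to fragmentation."""
        return (fragments - 1) * reads_per_day

    ranked = sorted(files, key=lambda f: overhead_score(f[1], f[2]),
                    reverse=True)
    for name, frags, reads in ranked:
        print(f"{name:12} score = {overhead_score(frags, reads):>12,}")

Note how the rarely read backup scores far below the busy log and database files even though it is the most fragmented; that is the “most bang for the buck” calculation in miniature.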


Between our caching and file optimization technologies, we will make sure you keep getting the performance you hoped for when you need it the most.  And, of course, you will continue to be the superstar to the end users and your boss.  I call that a Win-Win. 😊


Finally, we keep looking into our crystal ball for the next set of I/O performance issues, the ones others aren’t even thinking about yet, so we can solve them before they appear in the first place. You can rest assured we will have solutions for those problems long before you ever experience them.


Additional and related resources:


Windows is still Windows Whether in the Cloud, on Hyperconverged or All-flash

Why Faster Storage May NOT Fix It

How to make NVMe storage even faster

Trial Downloads


