Condusiv Technologies Blog


Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Top 10 Webinar Questions – Our Experts Get Technical

by Marissa Newman 7. January 2020 12:58

As we enter the new year and reflect on the 25 live webinars that we held in 2019, we are thrilled with the level of interaction and thought we’d take a look back at some of the great questions asked during the lively Q&A sessions. Here are the top questions and the responses that our technical experts gave.

 

Q. We run a Windows VM on Microsoft Azure; is your product still applicable?

A. Yes. Whether the Windows system is physical or virtual, it still runs into the I/O tax and the I/O blender effect, both of which degrade system performance. And whether the system is on premises or in the cloud, V-locity® can optimize it and improve performance.

 

Q. If a server is dedicated to running multiple SQL jobs for different applications, would you recommend installing V-locity?

A. Yes, we would definitely recommend using V-locity. However, the software is not specific to SQL instances, as it looks to improve the I/O performance on any system. SQL just happens to be a sweet spot because of how I/O intensive it is.

 

Q. Will V-locity/Diskeeper® help with the performance of my backup jobs?

A. We have a lot of customers that buy the software to increase their backup performance because their backup windows are going past the time they have allotted to do the backup. We’ve had some great success stories of customers that have reduced their backup windows by putting our software on their system.

 

Q. Does the software work in physical environments?

A. Yes. Although we usually demonstrate the software's benefits in a virtual environment, the same performance gains can be had on physical systems. The same I/O tax and blender effect that degrade performance on virtual systems can also occur on physical systems. The I/O tax occurs on any Windows system when nice, sequential I/O is broken up into smaller, less efficient random I/O, which applies to physical workstation environments as well. The blender effect that we see when all of those small, random I/Os from multiple VMs have to be sorted by the hypervisor can occur in physical environments too: for example, when multiple physical systems are reading from and writing to different LUNs on the same SAN.

 

Q. What about the safety of this caching? If the system crashes, how safe is my data?

A. The software uses read-only caching, as data integrity is our #1 priority when we develop these products. With read-only caching, the data that's in our cache is already in your storage. So, if the system unexpectedly goes down (e.g., a power outage), that's okay, because the data in cache is already on your storage and completely safe.
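For readers who want to see why this is safe, here is a minimal sketch of the general read-only (read-through) caching pattern. This is not Condusiv's actual implementation, just an illustration of the principle: writes always land on backing storage first, so losing the cache loses nothing.

```python
# Minimal sketch of the general read-only (read-through) caching pattern.
# NOT Condusiv's implementation -- an illustration of why a read cache
# cannot lose data: writes always hit backing storage before the cache.

class ReadOnlyCache:
    def __init__(self, storage):
        self.storage = storage   # dict standing in for the disk
        self.cache = {}          # RAM cache: block -> data

    def read(self, block):
        if block in self.cache:          # cache hit: serve from RAM
            return self.cache[block]
        data = self.storage[block]       # cache miss: go to "disk"...
        self.cache[block] = data         # ...and populate the cache
        return data

    def write(self, block, data):
        self.storage[block] = data       # the write lands on "disk" FIRST
        self.cache[block] = data         # cache only mirrors what is on disk

disk = {"b1": "old"}
cache = ReadOnlyCache(disk)
cache.write("b1", "new")
cache.cache.clear()                       # simulate a crash wiping the RAM cache
assert cache.read("b1") == "new"          # data was already on disk; nothing lost
```

Because the cache is only ever a copy of blocks that already exist on storage, wiping it costs performance for a while, never data.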

 

Q. How does your read cache differ from SQL Server's own data cache?

A. SQL Server is not very smart or efficient with how it uses your valuable available memory. It tries to load as much of its databases as it can into the available memory, even though some of those databases, or parts of them, aren't even being accessed. Most of the time, your databases are much larger than the amount of memory you have, so it can never fit everything. Our software is smarter in that it can determine the best blocks of data to optimize in order to get the best performance gains. Additionally, the software will also cache other noisy I/Os on the system, which can further improve performance on the SQL server.

 

Q. In a Virtual environment, does the software get installed on the Host or the VMs?

A. The software gets installed on the actual VMs that are running Windows, because that’s where the I/Os are getting created by the applications and the best place to start optimizing. Now, that doesn’t necessarily mean that it has to get installed on all of the VMs on a host. You can put it just on the VMs that are getting hit the most with I/O activity, but we’ve seen the best performance gains if it gets installed on all of the VMs on that host because if you only optimize one VM, you still have the other VMs causing performance degradation issues on that same network. By putting the software on all of them, you’ll get optimal performance all around.

 

Q. Is your product needed if I have SSDs as my storage back-end?

A. Our patented I/O reduction solutions are very relevant in an SSD environment. By reducing random write I/Os to back-end SSDs, we also help mitigate and reduce write amplification issues. We keep SSDs running at “like new” performance levels. And, although SSDs are much faster than HDDs, the DRAM used in the product's intelligent caching feature is 10x-15x faster than SSDs. We have many published customer use cases showing the benefits of our products on SSD-based systems. Many of our customers have experienced 50%, 100%, even 300% performance gains in an all-flash/SSD environment!

 

Q. Do we need to increase our RAM capacity to utilize your software?

A. That is one of the unique Set-It-and-Forget-It features of this product. The software will just use the available memory that's not being used at the time and will give it back if the system or user applications need it. If there's no available memory on the system, you just won't be able to take advantage of the caching. So, if there's not enough available RAM, we do recommend adding some to take advantage of the caching; of course, even if you can't add RAM, you'll still get the advantage of all the other technology. Best practice is to reserve a minimum of 4-8 GB.

 

Q. What teams can benefit most from the software? The SQL Server Team/Network Team/Applications Development Team?

A. The software can really benefit everyone. SQL Servers are usually very I/O intensive, so their performance improves because we're reducing I/O in the environment, but any system or application (like a File Server or Exchange Server) that is I/O intensive will benefit. The network team can benefit because the software decreases the traffic that has to go through the network to storage, which increases bandwidth for others. Because the software improves and reduces I/O across all Microsoft applications, it really can benefit everyone in the environment.

 

There you have it – our top 10 questions asked during our informative webinars! Have more questions? Check out our FAQs, ask us in the comments below or send an email to info@condusiv.com.

Tags:

Application Performance | SSD, Solid State, Flash

What Condusiv’s Diskeeper Does for Me

by Tim Warner, Microsoft Cloud & Datacenter MVP 4. November 2019 05:36

I'm a person who uses what he has. For example, my friend Roger purchased a fancy new keyboard, but only uses it on "special occasions" because he wants to keep the hardware in pristine condition. That isn't me--my stuff wears out because I rely on it and use it continuously.

To this point, the storage on my Windows 10 workstation computer takes a heavy beating because I read and write data to my hard drives every single day. I have quite an assortment of fixed drives on this machine:

·         mechanical hard disk drive (HDD)

·         "no moving parts" solid state drive (SSD)

·         hybrid SSD/HDD drive

Today I'd like to share with you some ways that Condusiv’s Diskeeper helps me stay productive. Trust me--I'm no salesperson. Condusiv isn't paying me to write this article. My goal is to share my experience with you so you have something to think about in terms of disk optimization options for your server and workstation storage.

Diskeeper® or SSDKeeper®?

I've used Diskeeper on my servers and workstations since 2000. How time flies! A few years ago it confused me when Condusiv released SSDkeeper, their SSD optimization tool that works by optimizing your data as it's written to disk.

Specifically, my confusion lay in the fact that you can't have Diskeeper and SSDkeeper installed on the same machine simultaneously. As you saw, I almost always have a mixture of HDD and SSD drives. What am I losing by installing either Diskeeper or SSDkeeper, but not both?

You lose nothing, because Diskeeper and SSDkeeper share most of the same features. Diskeeper, like SSDkeeper, can optimize solid-state disks using IntelliMemory Caching and IntelliWrite, and SSDkeeper, like Diskeeper, can optimize magnetic disks using Instant Defrag. Both products automatically determine the storage type and apply the optimal technology.

Thus, your decision of whether to purchase Diskeeper or SSDkeeper is based on which technology the majority of your disks use, either HDD or SSD.

Allow me to explain what those three product features mean in practice:

·         IntelliMemory®: Uses unallocated system random access memory (RAM) for disk read caching

·         IntelliWrite®: Prevents fragmentation in the first place by writing data sequentially to your hard drive as it's created

·         Instant Defrag™: Uses a Windows service to perform "just in time" disk defragmentation

In Diskeeper, click Settings > System > Basic to verify you're taking advantage of these features. I show you the interface in Figure 1.

 

Figure 1. Diskeeper settings.

What about external drives?

In Condusiv's Top FAQs document you'll note that Diskeeper no longer supports external drives. Their justification for this decision is that their customers generally do not use external USB drives for high-performance, input/output (I/O)-intensive applications.

If you want to run optimization on external drives, you can do that graphically with the Optimize Drives Windows 10 utility, or you can run defrag.exe from an elevated command prompt.

For example, here I am running a fragmentation analysis on my H: volume, an external SATA HDD:

PS C:\users\tim> defrag H: /A

Microsoft Drive Optimizer

Copyright (c) Microsoft Corp.

Invoking analysis on TWARNER1 (H:)...

The operation completed successfully.

Post Defragmentation Report:

         Volume Information:

                Volume size                 = 1.81 TB

                Free space                  = 1.44 TB

                Total fragmented space      = 0%

                Largest free space size     = 1.43 TB

         Note: File fragments larger than 64MB are not included in the fragmentation statistics.

         You do not need to defragment this volume.

PS C:\users\tim>              

Let's look at the numbers!

All Condusiv products make it simple to perform benchmark analyses and run progress reports, and Diskeeper is no exception to this rule. Look at Figure 2--since I rebuilt my Windows 10 workstation and installed Diskeeper in July 2018, I've saved over 20 days of storage I/O time!

Figure 2. Diskeeper dashboard.

Those impressive I/O numbers don't strain credulity when you remember that Diskeeper aggregates I/O values across all my fixed drives, not only one. This time saving is the chief benefit Diskeeper gives me as a working IT professional. The tool gives me back seconds that otherwise I'd spend waiting on disk operations to complete; I then can use that time for productive work instead.

Even more to the point, Diskeeper does this work for me in the background, without my having to remember to run or even schedule defragmentation and optimization jobs. I'm a huge fan of Diskeeper, and I hope you will be.

Recommendation

Condusiv offers a free 30-day trial that you can download and see how much time it can save you:

Diskeeper 30-day trial

SSDkeeper 30-day trial

Note: If you have a virtual environment, you can download a 30-day trial of Condusiv’s V-locity (you can also see my review of V-locity 7).

 

Timothy Warner is a Microsoft Most Valuable Professional (MVP) in Cloud and Datacenter Management who is based in Nashville, TN. His professional specialties include Microsoft Azure, cross-platform PowerShell, and all things Windows Server-related. You can reach Tim via Twitter (@TechTrainerTim), LinkedIn or his website, techtrainertim.com.

  

Do I Really Need V-locity on All VMs?

by Rick Cadruvi, Chief Architect 15. August 2019 04:12

V-locity® customers may wonder, “How many VMs do I need to install V-locity on for optimal results? What kind of benefits will I see with V-locity on one or two VMs versus all the VMs on a host?” 

As a refresher…

It is true that V-locity will likely provide significant benefit on that one VM.  It may even be extraordinary.  But loading V-locity on just one VM on a host with sometimes dozens of VMs won’t give you the biggest bang for your buck. V-locity includes many technologies that address storage performance issues in an extremely intelligent manner.  Part of the underlying design is to learn about the specific loads your system has and intelligently adapt to each specific environment presented to it.   That’s why we created V-locity especially for virtual environments in the first place. 

As you have experienced, the beauty of V-locity is its ability to deal with the I/O Blender Effect.  When there are multiple VMs on a host, or multiple hosts with VMs that use the same back-end storage system (e.g., a SAN) a “blender” effect occurs when all these VMs are sending I/O requests up and down the stack.  As you can guess, it can create huge performance bottlenecks. In fact, perhaps the most significant issue that virtualized environments face is the fact that there are MANY performance chokepoints in the ecosystem, especially the storage subsystem.  These chokepoints are robbing 30-50% of your throughput.  This is the dark side of virtualized systems. 

Look at it this way.  VM “A” may have different resource requirements than VM “B” and so on.  Besides performing different tasks with different workloads, they may have different peak usage periods.  What happens when those peaks overlap?  Worse yet, what happens if several of your VMs have very similar resource requirements and workloads that constantly overlap? 

 

The answer is that the I/O Blender Effect takes over and now VM “A” is competing directly with VM “B” and VM “C” and so on.  The blender pours all those resource desires into a funnel, creating bottlenecks with unpredictable performance results.  What is predictable is that performance will suffer, and likely a LOT.

V-locity was designed from the ground up to intelligently deal with these core issues.  The guiding question in front of us as it was being designed and engineered, was: 

Given your workload and resources, how can V-locity help you overcome the I/O Blender Effect? 

By making sure that V-locity will adapt to your specific workload and having studied what kinds of I/Os amplify the I/O Blender Effect, we were able to add intelligence to specifically go after those I/Os.  We take a global view.  We aren’t limited to a specific application or workload.  While we do have technologies that shine under certain workloads, such as transactional SQL applications, our goal is to optimize the entire ecosystem.  That’s the only way to overcome the I/O Blender Effect.

So, while we can indeed give you great gains on a single VM, V-locity truly gets to shine and show off its purpose when it can intelligently deal with the chokepoints that create the I/O Blender Effect.  That means you should add V-locity to ALL your VMs.  With our no-reboot installation and a V-locity Management Console, it’s fast and easy to cover and manage your environment.

If you have V-locity on all the VMs on your host(s), let us know how it is going! If you don’t yet, contact your account manager who can get you set up!

For an in-depth refresher, watch our 10-min whiteboard video.

 

SysAdmins Discover That Size Really Does Matter

by Spencer Allingham 25. April 2019 03:53

(...to storage transfer speeds...)

 

I was recently asked what could be done to maximize storage transfer speeds in physical and virtual Windows servers. Not the "sexiest" topic for a blog post, I know, but it should be interesting reading for any SysAdmin who wants to get the most performance from their IT environment, or for those IT Administrators who suffer from user or customer complaints about system performance.

 

As it happens, I had just completed some testing on this very subject and thought it would be helpful to share the results publicly in this article.

The crux of the matter comes down to storage I/O size and its effect on data transfer speeds. You can see in this set of results using an NVME-connected SSD (Samsung MZVKW1T0HMLH Model SM961), that the read and write transfer speeds, or put another way, how much data can be transferred each second is MUCH less when the storage I/O sizes are below 64 KB in size:

 

You can see that whilst the transfer rate maxes out at around 1.5 GB per second for writes and around 3.2 GB per second for reads, when the storage I/O sizes are smaller, you don't see disk transfer speeds anywhere near that maximum rate. And that's okay if you're only saving 4 KB or 8 KB of data, but it is definitely NOT okay if you are trying to write a larger amount of data, say 128 KB or a couple of megabytes, and the Windows OS is breaking that down into smaller I/O packets in the background and transferring to and from disk at those much slower rates. This happens far too often, and it means that the Windows OS is hurting efficiency by transferring your data at a much slower rate than it could, or should. That can have a very negative impact on the performance of your most important applications, and yes, they are probably the ones that users are accessing the most and are most likely to complain about.
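A quick back-of-envelope model shows why small I/O sizes cap transfer speed: a device can only complete so many I/O operations per second, so throughput is roughly the smaller of its bandwidth ceiling and IOPS x I/O size. The ~3.2 GB/s read ceiling below is taken from the results above; the 50,000 IOPS ceiling is purely an assumed, illustrative figure, not a measured spec for this drive.

```python
# Back-of-envelope model: throughput = min(bandwidth ceiling, IOPS ceiling x I/O size).
# MAX_BW_GBS comes from the NVMe read results above; MAX_IOPS is an assumed,
# illustrative per-device ceiling, not a measured specification.

MAX_BW_GBS = 3.2     # sequential read ceiling from the results above (GB/s)
MAX_IOPS = 50_000    # assumed IOPS ceiling (illustrative only)

def throughput_gbs(io_size_kb):
    # GB/s achievable if the device is IOPS-limited at this I/O size
    iops_bound = MAX_IOPS * io_size_kb * 1024 / 1e9
    return min(MAX_BW_GBS, iops_bound)

for size_kb in (4, 8, 32, 64, 128):
    print(f"{size_kb:>4} KB I/O -> {throughput_gbs(size_kb):.2f} GB/s")
```

Under these assumptions, 4 KB I/Os crawl along at a fraction of the drive's bandwidth, while the device only reaches its full transfer rate once I/O sizes hit 64 KB, which is exactly the shape of the benchmark results above.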

 

The good news of course, is that the V-locity® software from Condusiv® Technologies is designed to prevent these split I/O situations in Windows virtual machines, and Diskeeper® will do the same for physical Windows systems. Installing Condusiv’s software is a quick, easy and effective fix as there is no disruption, no code changes required and no reboots. Just install our software and you are done!

You can even run this test for yourself on your own machine. Download a free copy of ATTO Disk Benchmark from the web and install it. You can then click its Start button to quickly get a benchmark of YOUR system's data transfer speeds at different I/O sizes. I bet you quickly see that when it comes to data transfer speeds, size really does matter!

Out of interest, I enabled our Diskeeper software (I could have used V-locity instead) so that our RAM caching would assist the speed of the read I/O traffic, and the results were pretty amazing. Instead of the reads maxing out at around 3.2 GB per second, they were now maxing out at around a whopping 11 GB per second, more than three times faster. In fact, the ATTO Disk Benchmark software had to change the graph scale for the transfer rate (X-axis) from 4 GB/s to 20 GB/s, just to accommodate the extra GBs per second when the RAM cache was in play. Pretty cool, eh?

 

Of course, it is unrealistic to expect our software’s RAM cache to satisfy ALL of the read I/O traffic in a real live environment as with this lab test, but even if you satisfied only 25% of the reads from RAM in this manner, it certainly wouldn’t hurt performance!!!
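To put a rough number on that 25% figure: using the post's own read speeds (~3.2 GB/s from the NVMe SSD, ~11 GB/s from the RAM cache), a simple harmonic blend estimates the effective read throughput at a given cache hit rate. This is a deliberate simplification that ignores queuing, latency, and I/O-size effects; it averages time-per-byte, which is the correct way to blend two speeds.

```python
# Rough effective read throughput with a partial RAM-cache hit rate.
# Speeds are taken from the post's figures; the model is a simplification
# (it averages time per byte, ignoring queuing and latency effects).

DISK_GBS = 3.2   # NVMe SSD read speed from the benchmark above
RAM_GBS = 11.0   # RAM-cache read speed from the benchmark above

def effective_gbs(hit_rate):
    # Fraction of each byte's time spent in RAM vs. on disk
    time_per_gb = hit_rate / RAM_GBS + (1 - hit_rate) / DISK_GBS
    return 1 / time_per_gb

print(f"25% hits from RAM: {effective_gbs(0.25):.2f} GB/s")  # ~3.89 GB/s
print(f"75% hits from RAM: {effective_gbs(0.75):.2f} GB/s")  # ~6.83 GB/s
```

Even a modest 25% hit rate lifts effective reads by roughly 20% under these assumptions, and higher hit rates compound quickly, which is consistent with the lab results above.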

If you want to see this for yourself on one of your computers, download the ATTO Disk Benchmark tool from the web, if you haven't already, and as mentioned before, run it to get a benchmark for your machine. Then download and install a free trial copy of Diskeeper for physical clients or servers, or V-locity for virtual machines, from www.condusiv.com/try and run the ATTO Disk Benchmark tool several times. It will probably take a few runs of the test, but you should easily see the point at which the telemetry in Condusiv's software identifies the correct data to satisfy from the RAM cache, as the read transfer rates will increase dramatically. They are no longer confined to the speed of your disk storage, but instead now happen at the speed of RAM. Much faster, even if that disk storage IS an NVMe-connected SSD. And yes, if you're wondering, this does work with SAN storage and all levels of RAID too!

NOTE: Before testing, make sure you have enough “unused” RAM to cache with. A minimum of 4 GB to 6 GB of Available Physical Memory is perfect.

Whether you have spinning hard drives or SSDs in your storage array, the boost in read data transfer rates can make a real difference. Whatever storage you have serving YOUR Windows computers, it just doesn't make sense to allow the Windows operating system to continue transferring data at a slower speed than it should. Now, with easy-to-install, “Set It and Forget It®” software from Condusiv Technologies, you can be sure that you're getting all of the speed and performance you paid for when you purchased your equipment, through larger, more sequential storage I/O and the benefit of intelligent RAM caching.

If you’re still not sure, run the tests for yourself and see.

Size DOES matter!

Can you relate? 906 IT Pros Talk About I/O Performance, Application Troubles and More

by Dawn Richcreek 8. January 2019 03:44

We just completed our 5th annual I/O Performance Survey that was conducted with 906 IT Professionals. This is the industry’s largest study of its kind and the research highlights the latest trends in applications that are driving performance demands and how IT Professionals are responding.

I/O Growth Continues to Outpace Expectations

The results show that organizations are struggling to get the full lifecycle from their backend storage as the growth of I/O continues to outpace expectations. The research also shows that IT Pros continue to struggle with user complaints related to sluggish performance from their I/O intensive applications, especially citing MS-SQL applications.

Comprehensive Research Data

The survey consists of 27 detailed questions designed to identify the impact of I/O growth in the modern IT environment. In addition to multiple choice questions, the survey included optional open responses, allowing a respondent to provide commentary on why they selected a particular answer.  All the individual responses have been included to help readers dive deeply on any question. The full report is available at https://learn.condusiv.com/2019-IO-Performance-Study.html

Summary of Key Findings 

1.    I/O Performance is important to IT Pros: The vast majority of IT Pros consider I/O Performance an important part of their job responsibilities. Over a third of these note that growth of I/O from applications is outpacing the useful lifecycle they expect from their underlying storage. 

2.    Application performance is suffering: Half of the IT Pros responsible for I/O performance cite they currently have applications that are tough to support from a systems performance standpoint. The toughest applications stated were: SQL, SAP, Custom/Proprietary apps, Oracle, ERP, Exchange, Database, Data Warehouse, Dynamics, SharePoint, and EMR/EHR. See page 20 for a word cloud graphic. 

3.    SQL is the top troublesome application: The survey confirms that SQL databases are the top business-critical application platform and are also the environment that generates the most storage I/O traffic. Nearly a third of the IT Pros responsible for I/O performance state that they are currently experiencing staff/customer complaints due to sluggish applications running on SQL. 

4.    Buying hardware has not solved the performance problems: Nearly three-fourths of IT Pros have added new hardware to improve I/O performance. They have purchased new servers with more cores, new all-flash arrays, new hybrid arrays, server-side SSDs, etc. and yet they still have concerns. In fact, a third have performance concerns that are preventing them from scaling their virtualized infrastructures.  

5.    Still planning to buy hardware: About three-fourths of IT Pros are still planning to continue to invest in hardware to improve I/O performance. 

6.    Lack of awareness: Over half of respondents were unaware of the fact that Windows write inefficiencies generate increasingly smaller writes and reads that dampen performance and that this is a software problem that is not solved by adding new hardware. 

7.    Improve performance via software to avoid expensive hardware purchase: The vast majority of respondents felt it would be urgent/important to improve the performance of their applications via an inexpensive I/O reduction software and avoid an expensive forklift upgrade to their compute, network or storage layers. 

Most Difficult to Support Applications

Below is a word cloud representing hundreds of answers, visually showing the application environments IT Pros have the most trouble supporting from a performance standpoint. I think you can see the big ones that pop out!

The full report is available at https://learn.condusiv.com/2019-IO-Performance-Study.html

 

The Simple Software Answer

As much as organizations continue to reactively respond to performance challenges by purchasing expensive new server and storage hardware, our V-locity® I/O reduction software offers a far more efficient path by guaranteeing to solve the toughest application performance challenges on I/O intensive systems like MS-SQL. This means organizations are able to offload 50% of I/O traffic from storage that is nothing but mere noise chewing up IOPS and dampening performance. As soon as we open up 50% of that bandwidth to storage, sluggish performance disappears and now there’s far more storage IOPS to be used for other things.

In just 2 minutes, learn more about how V-locity I/O reduction software eliminates the two big I/O inefficiencies in a virtual environment: 2-min Video: Condusiv® I/O Reduction Software Overview

Try it for yourself, download our free 30-day trial – no reboot required

 
