Condusiv Technologies Blog

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Top 10 Webinar Questions – Our Experts Get Technical

by Marissa Newman 7. January 2020 12:58

As we enter the new year and reflect on the 25 live webinars that we held in 2019, we are thrilled with the level of interaction and thought we’d take a look back at some of the great questions asked during the lively Q&A sessions. Here are the top questions and the responses that our technical experts gave.

 

Q. We run a Windows VM on Microsoft Azure. Is your product still applicable?

A. Yes. Whether the Windows system is physical or virtual, it still runs into the I/O tax and the I/O blender effect, both of which degrade system performance. Whether the system is on-premises or in the cloud, V-locity® can optimize it and improve performance.

 

Q. If a server is dedicated to running multiple SQL jobs for different applications, would you recommend installing V-locity?

A. Yes, we would definitely recommend using V-locity. However, the software is not specific to SQL instances, as it looks to improve the I/O performance on any system. SQL just happens to be a sweet spot because of how I/O intensive it is.

 

Q. Will V-locity/Diskeeper® help with the performance of my backup jobs?

A. We have a lot of customers that buy the software to increase their backup performance because their backup windows are going past the time they have allotted to do the backup. We’ve had some great success stories of customers that have reduced their backup windows by putting our software on their system.

 

Q. Does the software work in physical environments?

A. Yes. Although we demonstrate how the software provides benefits in a virtual environment, the same performance gains can be had on physical systems. The same I/O tax and blender effect that degrade performance on virtual systems can also occur on physical systems. The I/O tax occurs on any Windows system when nice, sequential I/O is broken up into smaller, less efficient random I/O, and this applies to physical workstation environments as well. The blender effect, which we see when all of those small, random I/Os from multiple VMs have to be sorted by the hypervisor, can occur in physical environments too, for example when multiple physical systems are reading from and writing to different LUNs on the same SAN.

 

Q. What about the safety of this caching? If the system crashes, how safe is my data?

A. The software uses read-only caching, as data integrity is our #1 priority when we develop these products. With read-only caching, the data in our cache is already in your storage. So, if the system unexpectedly goes down (e.g., a power outage), that data in cache is already on your storage and completely safe.
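For the curious, here is a minimal PowerShell sketch of why a read-only (read-through, write-through) cache is crash-safe. This illustrates the general pattern only, not Condusiv's implementation; the $storage hashtable stands in for the disk:

$cache   = @{}   # in-RAM cache: block id -> data
$storage = @{}   # stand-in for the disk

function Write-Block($id, $data) {
    $storage[$id] = $data   # the write lands on "disk" first...
    $cache[$id]   = $data   # ...then the cached copy is refreshed
}

function Read-Block($id) {
    if ($cache.ContainsKey($id)) { return $cache[$id] }   # hit: served from RAM
    $cache[$id] = $storage[$id]                           # miss: copy up from "disk"
    return $cache[$id]
}

# If the machine crashes here, $cache simply vanishes; every block it held
# was already in $storage, so nothing is lost.
Write-Block 'b1' 'payload'
Read-Block  'b1'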

 

Q. How does your read cache differ from SQL that has its own data cache?

A. SQL Server is not especially smart or efficient with how it uses your valuable available memory. It tries to load as much of its databases as it can into whatever memory is available, even though some of those databases, or parts of them, aren't being accessed at all. Most of the time your databases are much larger than the amount of memory you have, so everything can never fit. Our software is smarter: it determines the best blocks of data to optimize in order to get the best performance gains. Additionally, the software also caches other noisy I/Os from the system, which can further improve performance on the SQL server.

 

Q. In a Virtual environment, does the software get installed on the Host or the VMs?

A. The software gets installed on the actual VMs running Windows, because that's where the applications create the I/Os and the best place to start optimizing. That doesn't necessarily mean it has to be installed on every VM on a host. You can put it just on the VMs that are hit hardest with I/O activity, but we've seen the best performance gains when it is installed on all of the VMs on that host: if you optimize only one VM, the other VMs are still causing performance degradation issues on that same network. By putting the software on all of them, you'll get optimal performance all around.

 

Q. Is your product needed if I have SSDs as my storage back-end?

A. Our patented I/O reduction solutions are very relevant in an SSD environment. By reducing random write I/Os to back-end SSDs, we also help mitigate and reduce write amplification issues. We keep SSDs running at “like new” performance levels. And although SSDs are much faster than HDDs, the DRAM used in the product's intelligent caching feature is 10x-15x faster than SSDs. We have many published customer use cases showing the benefits of our products on SSD-based systems. Many of our customers have experienced 50%, 100%, even 300% performance gains in an all-flash/SSD environment!

 

Q. Do we need to increase our RAM capacity to utilize your software?

A. That is one of the unique set-it-and-forget-it features of this product. The software uses only the memory that is available at the time and gives it back whenever the system or user applications need it. If there's no available memory on the system, you simply won't be able to take advantage of the caching. So, if there's not enough available RAM, we do recommend adding some to take advantage of the caching, but you'll still get the advantage of all the other technology if you can't add RAM. Best practice is to reserve 4-8GB at a minimum.
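If you want a quick read on how much RAM a system actually has to spare for caching, one simple check using standard Windows instrumentation (not a Condusiv tool) is:

$os = Get-CimInstance Win32_OperatingSystem
$freeGB  = [math]::Round($os.FreePhysicalMemory / 1MB, 1)       # value is reported in KB
$totalGB = [math]::Round($os.TotalVisibleMemorySize / 1MB, 1)
"Available RAM: $freeGB GB of $totalGB GB"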

 

Q. What teams can benefit most from the software? The SQL Server Team/Network Team/Applications Development Team?

A. The software can really benefit everyone. SQL Servers are usually very I/O intensive, so their performance improves because we're reducing I/O in the environment, but any I/O-intensive system or application (like a file server or Exchange server) will benefit. The network team benefits because the software decreases the traffic that has to cross the network to storage, which increases bandwidth for others. Because the software reduces I/O across all Microsoft applications, it really can benefit everyone in the environment.

 

There you have it – our top 10 questions asked during our informative webinars! Have more questions? Check out our FAQs, ask us in the comments below or send an email to info@condusiv.com.

Tags:

Application Performance | SSD, Solid State, Flash

How To Get The Most Out Of Your Flash Storage Or Move To Cloud

by Rick Cadruvi, Chief Architect 2. January 2020 10:41

You just went out and upgraded your storage to all-flash. Or maybe you moved your systems to the cloud, where you can choose the SLA to get the performance you want. Either way, we can provide you with a secret weapon that will keep you looking like a hero and deliver the real performance you made these choices for.


Let’s start with why you made those choices in the first place. Why did you make the change? Why not just upgrade the aging storage to a new-gen HDD or hybrid storage subsystem? After all, if you’re like most of us, you’re still experiencing explosive growth in data, and HDDs continue to be more cost-effective for whatever data requirements you’re going to need in the future.

 

If you went to all-flash, perhaps it was the decreasing cost that made it approachable from a budgetary point of view, while the obvious gain in speed made it easy to justify.

 

If it was a move to the cloud, there may have been many reasons including:

   •  Not having to maintain the infrastructure anymore

   •  More flexibility to quickly add additional resources as needed

   •  Ability to pay for the SLA you need to match application needs to end user performance

Good choices.  So, what can Diskeeper® and V-locity® do to help make these even better choices to provide the expected performance results at peak times when needed most?

 

Let’s start with a brief conversation about I/O bottlenecks.

 

If you have an All-Flash Array, you still have a network connection between your system and your storage array.  If you have local flash storage, system memory is still faster, but your data size requirements make it a limited resource. 

 

If you’re in the cloud, you’re still competing for resources, and at peak times you’ll see slowdowns due to resource contention. Plus, you will experience issues because of file system and operating system overhead.

 

File fragmentation significantly increases the number of I/Os your applications must request to process their data. Free space fragmentation adds overhead to allocating file space and makes file fragmentation far more likely.

 

Then there are all the I/Os that Windows creates that are not directly related to your application’s data access. And then you have utilities for anti-malware, data recovery, and so on. And trust me, there are LOTS of those.

 

At Condusiv, we’ve watched the dramatic changes in storage and data for a long time.  The one constant we have seen is that your needs will always accelerate past the current generation of technologies you use.  We also handle the issues that aren’t handled by the next generation of hardware.  Let’s take just a minute and talk about that.

 

What about all the I/O overhead created in the background by Windows or your anti-malware and other system utility software packages?  What about the I/Os that your application doesn’t bother to optimize because it isn’t the primary data being accessed?  Those I/Os account for a LOT of I/O bandwidth.  We refer to those as “noisy” I/Os.  They are necessary, but not the data your application is actually trying to process.  And, what about all the I/Os to the storage subsystem from other compute nodes?  We refer to that problem as the I/O Blender Effect.

 

 

Our RAM caching technologies are highly optimized to use a small amount of RAM to eliminate the maximum amount of I/O overhead. They do this dynamically: when you need RAM the most, we free it up for your needs; when RAM is available, we use it to remove the I/Os causing the most overhead. A small amount of free RAM goes a long way toward reducing the I/O overhead problem, because our caching algorithms target the I/Os whose elimination saves the most. We don’t use LIFO or FIFO algorithms and simply hope I/Os get eliminated; our algorithm uses empirical data, in real time, to maximize I/O overhead elimination while using minimal resources.
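By way of illustration only (Condusiv’s actual algorithm is proprietary, and these block names and counts are invented), here is a PowerShell sketch of the difference between naive FIFO/LIFO-style caching and empirical, frequency-weighted selection:

# Cache the blocks with the highest observed access counts, not merely
# the most or least recently seen ones (as FIFO/LIFO would).
$accessCounts = @{ block17 = 942; block03 = 15; block88 = 407 }  # reads observed per block
$cacheSlots   = 2

$accessCounts.GetEnumerator() |
    Sort-Object Value -Descending |
    Select-Object -First $cacheSlots |
    ForEach-Object { "cache {0} ({1} reads observed)" -f $_.Key, $_.Value }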

 

Defragmenting every fragmented file is no longer reasonable given the explosion in data. Plus, you didn’t spend your money for our software to make the disk layout look pretty. We saw this coming long before it arrived. As a result, we created technologies to prevent fragmentation in the first place, and technologies to empirically locate just those files that cause extra overhead due to fragmentation, so we can address only those files and get the most bang for the buck in terms of I/O density.

 

Between our caching and file optimization technologies, we will make sure you keep getting the performance you hoped for when you need it the most.  And, of course, you will continue to be the superstar to the end users and your boss.  I call that a Win-Win. 😊

 

Finally, we keep looking into our crystal ball for the next set of I/O performance issues, the ones others aren’t yet thinking about, so we can address them before they appear in the first place. You can rest assured we will have solutions for those problems long before you ever experience them.

 


 

Additional and related resources:

 

Windows is still Windows Whether in the Cloud, on Hyperconverged or All-flash

Why Faster Storage May NOT Fix It

How to make NVMe storage even faster

Trial Downloads

 


Causes and Solutions for Latency

by Kim Amezcua 19. December 2019 04:14

Sometimes a Windows server slows down because the device or its operating system is outdated. Other times, the slowdown is due to physical constraints on retrieving, processing, or transmitting data. There are other causes as well, as we will cover. In any case, the delay between when a command is issued and a response is received is referred to as "latency."

Latency is a measure of time. For example, the latency of a command might be 0.02 seconds. To humans, this seems extraordinarily fast. However, computer processors can execute billions of instructions per second. This means that latencies of even a few millionths of a second, repeated across the millions of operations a busy system performs, can cause visible delays in the operation of a computer or server.
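A back-of-the-envelope calculation shows the scale. Assuming, purely for illustration, a core that executes about 3 billion instructions per second:

$instructionsPerSecond = 3e9      # illustrative figure for one modern core
$ioLatencySeconds      = 0.005    # a single 5 ms disk read
$stalledInstructions   = $instructionsPerSecond * $ioLatencySeconds
"One 5 ms I/O stalls the equivalent of {0:N0} instructions" -f $stalledInstructions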

To figure out how to improve latency, you must first identify its source. There are many possible sources, and for each one there are fixes. Below are two common causes of latency, along with a brief explanation of how to address each. In both cases, I/O latency, where a process sits waiting for an I/O to complete so it can process that I/O's data, is a waste of your computer's processing power.
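On Windows, a quick way to check whether storage is the source is to sample the built-in disk latency counters (the paths shown are for English-language Windows):

Get-Counter -Counter '\PhysicalDisk(_Total)\Avg. Disk sec/Read',
                     '\PhysicalDisk(_Total)\Avg. Disk sec/Write' `
            -SampleInterval 5 -MaxSamples 3
# Sustained averages above roughly 15-20 ms usually point to an I/O bottleneck.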

Data Fragments

Logical data fragments occur when files are written, deleted, and rewritten to a hard drive or solid-state drive.

When files are deleted from a drive, the file data actually still exists on the drive; however, its address in the Windows file system is freed up for reuse. This means that "deleted" files remain on the logical drive until another file is written over them by reusing the address. (This also explains why it is possible to recover lost files.)

When an address is reused, the likelihood that the new file is exactly the same length as the "deleted" file is remote. As a result, little chunks or fragments of space left over from the "deleted" file remain on the logical drive. As a logical drive fills up, new files are sometimes broken up to fit into the available segments. At its worst, a fragmented logical drive contains both old fragments left over from deleted files (free space fragments) and new fragments that were intentionally created (data file fragments).

Logical data fragments can be a significant source of latency in a computer or server. Storing to, and retrieving from, a fragmented logical drive introduces additional steps in searching for and reassembling files around the fragments. For example, rather than reading a file in one or two I/Os, fragmentation can require hundreds, even thousands of I/Os to read or write that same data.
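The arithmetic behind that claim is simple. A hypothetical worked example:

# A 64 MB file read in 1 MB requests takes about 64 sequential I/Os when
# contiguous; split into 2,000 fragments, it needs at least one I/O per
# fragment, and usually a seek for each one as well.
$contiguousIOs = 64 / 1      # 64 I/Os
$fragments     = 2000
"{0} I/Os when contiguous vs at least {1} I/Os when fragmented" -f $contiguousIOs, $fragments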

One way to reduce latency from logical data fragments is to defragment the logical drive, collecting the fragments and making files contiguous. The main disadvantages of defragmenting are that it must be repeated periodically, because the logical drive will inevitably fragment again, and that defragmenting SSDs can cause them to wear out prematurely.
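For reference, Windows ships with a built-in mechanism for this (separate from Diskeeper); from an elevated PowerShell prompt:

Optimize-Volume -DriveLetter C -Analyze -Verbose   # report fragmentation levels only
Optimize-Volume -DriveLetter C -Defrag -Verbose    # defragment an HDD volume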

A better way to reduce latency from fragmentation is to prevent the logical disk from becoming fragmented in the first place. Diskeeper® 18 manages writes so that large, contiguous segments are kept together from the very start, preventing fragments from developing at all.

Limited Resources

No matter how "fast" the components of a computer are, they are still finite and tasks must be scheduled and performed in order. Certain tasks must be put off while more urgent tasks are executed. Although the latency in scheduling is often so short that it is unnoticeable, there will be times when limited resources cause enough of a delay that it hampers the computer or server.

For example, two specifications that are commonly used to define the speed of a computer are processor clock speed and instructions per cycle. Although these numbers climb steadily as technology advances, there will always be situations where the processor has too many tasks to execute and must delay some of them to get them all done.

Similarly, data buses and RAM run at particular speeds, which limit how quickly data can be moved to the processor. These kinds of input/output performance delays can reduce a system’s capacity by more than 50%.

One way to address this latency is the method used by Diskeeper® 18: idle, available DRAM is used to cache hot reads. Caching eliminates the trip all the way out to the storage infrastructure to read the data; remember that DRAM can be 10x-15x faster than SSDs, and many times faster still than HDDs. The result is faster data retrieval; in fact, Windows systems can run faster than when new.

Reducing latency is mostly a matter of identifying the source of latencies and addressing them. By being proactive and preventing fragmentation before it happens and by caching hot reads using idle & available DRAM, Diskeeper® 18 makes Windows computers faster and more reliable.

 

What Condusiv’s Diskeeper Does for Me

by Tim Warner, Microsoft Cloud & Datacenter MVP 4. November 2019 05:36

I'm a person who uses what he has. For example, my friend Roger purchased a fancy new keyboard, but only uses it on "special occasions" because he wants to keep the hardware in pristine condition. That isn't me--my stuff wears out because I rely on it and use it continuously.

To this point, the storage on my Windows 10 workstation computer takes a heavy beating because I read and write data to my hard drives every single day. I have quite an assortment of fixed drives on this machine:

·         mechanical hard disk drive (HDD)

·         "no moving parts" solid state drive (SSD)

·         hybrid SSD/HDD drive

Today I'd like to share with you some ways that Condusiv’s Diskeeper helps me stay productive. Trust me--I'm no salesperson. Condusiv isn't paying me to write this article. My goal is to share my experience with you so you have something to think about in terms of disk optimization options for your server and workstation storage.

Diskeeper® or SSDkeeper®?

I've used Diskeeper on my servers and workstations since 2000. How time flies! A few years ago it confused me when Condusiv released SSDkeeper, their SSD optimization tool that works by optimizing your data as it's written to disk.

Specifically, my confusion lay in the fact that you can't have Diskeeper and SSDkeeper installed on the same machine simultaneously. As you saw, I almost always have a mixture of HDD and SSD drives. What am I losing by installing either Diskeeper or SSDkeeper, but not both?

You lose nothing, because Diskeeper and SSDkeeper share most of the same features. Diskeeper, like SSDkeeper, can optimize solid-state disks using IntelliMemory caching and IntelliWrite; SSDkeeper, like Diskeeper, can optimize magnetic disks using Instant Defrag. Both products automatically determine the storage type and apply the optimal technology.

Thus, your decision of whether to purchase Diskeeper or SSDkeeper is based on which technology the majority of your disks use, either HDD or SSD.

Allow me to explain what those three product features mean in practice:

·         IntelliMemory®: Uses unallocated system random access memory (RAM) for disk read caching

·         IntelliWrite®: Prevents fragmentation in the first place by writing data sequentially to your hard drive as it's created

·         Instant Defrag™: Uses a Windows service to perform "just in time" disk defragmentation

In Diskeeper, click Settings > System > Basic to verify you're taking advantage of these features. I show you the interface in Figure 1.

 

Figure 1. Diskeeper settings.

What about external drives?

In Condusiv's Top FAQs document you'll note that Diskeeper no longer supports external drives. Their justification for this decision is that their customers generally do not use external USB drives for high-performance, input/output (I/O) intensive applications.

If you want to run optimization on external drives, you can do that graphically with the Optimize Drives Windows 10 utility, or you can run defrag.exe from an elevated command prompt.

For example, here I am running a fragmentation analysis on my H: volume, an external SATA HDD:

PS C:\users\tim> defrag H: /A
Microsoft Drive Optimizer
Copyright (c) Microsoft Corp.

Invoking analysis on TWARNER1 (H:)...

The operation completed successfully.

Post Defragmentation Report:

         Volume Information:
                Volume size                 = 1.81 TB
                Free space                  = 1.44 TB
                Total fragmented space      = 0%
                Largest free space size     = 1.43 TB

         Note: File fragments larger than 64MB are not included in the fragmentation statistics.

         You do not need to defragment this volume.

PS C:\users\tim>

Let's look at the numbers!

All Condusiv products make it simple to perform benchmark analyses and run progress reports, and Diskeeper is no exception to this rule. Look at Figure 2--since I rebuilt my Windows 10 workstation and installed Diskeeper in July 2018, I've saved over 20 days of storage I/O time!

Figure 2. Diskeeper dashboard.

Those impressive I/O numbers don't strain credulity when you remember that Diskeeper aggregates I/O values across all my fixed drives, not only one. This time saving is the chief benefit Diskeeper gives me as a working IT professional. The tool gives me back seconds that otherwise I'd spend waiting on disk operations to complete; I then can use that time for productive work instead.

Even more to the point, Diskeeper does this work for me in the background, without my having to remember to run or even schedule defragmentation and optimization jobs. I'm a huge fan of Diskeeper, and I hope you will be.

Recommendation

Condusiv offers free 30-day trials that you can download to see how much time they can save you:

Diskeeper 30-day trial

SSDkeeper 30-day trial

Note: If you have a virtual environment, you can download a 30-day trial of Condusiv’s V-locity (you can also see my review of V-locity 7).

 

Timothy Warner is a Microsoft Most Valuable Professional (MVP) in Cloud and Datacenter Management who is based in Nashville, TN. His professional specialties include Microsoft Azure, cross-platform PowerShell, and all things Windows Server-related. You can reach Tim via Twitter (@TechTrainerTim), LinkedIn or his website, techtrainertim.com.

  

How to Recover Deleted Files from Network Shares

by Dawn Richcreek 17. October 2019 04:09

You may have discovered—and too late—that while you can recover some deleted files from the Windows Recycle Bin on local machines, you cannot recover deleted files (accidentally or otherwise) from network drive shared folders. If you delete a file from a network share, it is gone. If you look in the Recycle Bin, it won’t be there. 

This happens because Windows is designed so that deleted files are captured by the Windows Recycle Bin on local drives only. If a user deletes a file on a server from a network shared folder, it isn't being deleted from the local machine, so the Recycle Bin does not capture it. This is also true of files deleted from attached or removable drives, and files deleted from applications or the Command Prompt. Only files deleted from File Explorer on a machine's local drive will be saved by the Recycle Bin.
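You can verify this behavior in seconds (the share path below is hypothetical):

# Create a file on a network share, then delete it from PowerShell.
Set-Content -Path '\\FILESERVER\Shared\recycle-test.txt' -Value 'test'
Remove-Item -Path '\\FILESERVER\Shared\recycle-test.txt'
# The file is now gone for good: it appears in neither your local
# Recycle Bin nor any bin on the server.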

With some types of software, you might be able to recover an earlier saved version of a file deleted from a network shared folder, which would give you the version prior to the deletion. Failing this, the only other way to recover a file deleted from a network share (without a third-party solution—see below) is to have your system administrator retrieve an earlier saved version of the file from the most recent backup. This will only work if:

 

a) A version of the file was actually backed up

b) You can recall the file name so that the system administrator can find it

c) You can recall with some accuracy the time and date when the file was saved. 

 

This method is, of course, extremely time consuming for the sys admin—and for you, too, if you have to wait. 

Even if the previous version can be retrieved, any work done on the file since the last save is lost forever. 

 

Problem Solved: Undelete

Fortunately, there is a very easy and cost-effective solution to this perpetual issue: Undelete® Instant Data Recovery software from Condusiv. 

1. To permanently solve this problem site-wide, download and install Undelete Server, which is extremely fast and simple, and doesn’t require a reboot to complete the installation (something you really don’t want to have to do on a server running databases or applications requiring constant uptime). 

 

 

2. Following installation, the first thing you’ll notice is that the Windows Recycle Bin has been replaced by the Undelete Recovery Bin. The Recovery Bin will not only capture files deleted from network shares, but also files overwritten on the user’s drive, files deleted between backups, and files deleted from the Command Prompt. 

3. Test it for yourself. Create a test file within a network drive shared folder and delete it. You’ll see that your file has, as you would expect, disappeared from the server as well. 

4. Open the Undelete Recovery Bin. You'll be able to easily navigate to the shared folder from which you deleted the file, and there you'll find it again. (If you are not an admin, see Undelete Client below.)

 

 

5. You can then select that file and recover it back to its original location, or even to a new location. 

 

 

 

 6. You’re done! That’s how easy it is.

 

Undelete Client

The above example demonstrates a user opening Undelete Server on the respective server to recover the file. Users, however, may not have access to the server, but a system administrator can certainly log on and open Undelete Server to recover the file. 

However, once Undelete is installed on a system, a user can open Undelete on the remote network share, follow the above steps, and view and recover their own files.

 

Buy Undelete Instant Data Recovery now, and always be able to recover deleted files from network shares. 

 

Purchase Online Now https://www.condusiv.com/purchase/Undelete/

 

Request a Volume Quote https://learn.condusiv.com/Volume-Licensing-Undelete.html

 

Download a Free trial https://learn.condusiv.com/LP-Trialware-Undelete.html

Tags:

Data Recovery
