Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Fix SQL Server Storage Bottlenecks

by Spencer Allingham 23. October 2018 20:58

No SQL code changes.
No Disruption.
No Reboots.
Simple!

Condusiv V-locity Introduction

Whether running SQL in a physical or virtualized environment, most SQL DBAs would welcome faster storage at a reasonable price.

The V-locity® software from Condusiv® Technologies is designed to provide exactly that, but using the storage hardware that you already own. It doesn't matter if you have direct attached disks, if you're running a tiered SAN, have a tray of SSD storage or are fortunate enough to have an all-flash array; that storage layer can be a limiting factor to your SQL Server database productivity.

The V-locity software reduces the amount of storage I/O traffic that has to go out and be processed by the disk storage layer, and streamlines and optimizes the I/O that still does have to go out to disk.

The net result is that SQL can typically get more transactions completed in the same amount of time, quite simply because on average, it's not having to wait so much on the storage before being able to get on with its next transaction.

V-locity can be downloaded and installed without any disruption to live SQL servers. No SQL code changes are required and no reboots. Just install and typically you'll start seeing results in just a few minutes.

Before we take a more in-depth look at that, I would like to briefly mention that last year, the V-locity software was awarded the Microsoft SQL Server I/O Reliability Certification. This means that whilst providing faster storage access, V-locity didn't adversely affect the required and recommended behaviors that an I/O subsystem must provide for SQL Server, as defined by Microsoft themselves.

Microsoft ran tests for this in Azure, with SQL 2016, and used HammerDB to generate an online transaction processing type workload. Not only was V-locity able to jump through all the hoops necessary to achieve the certification, but it was also able to show an increase of about 30% more SQL transactions in the same amount of time.

In this test, that meant roughly 30% more orders processed.

They probably could have processed more too, if they had allowed V-locity a slightly larger RAM cache size.

To get more information, including best practice for running V-locity on MS SQL servers, easy ways to validate results, customer case studies and more, click here for the full article on LinkedIn.

If you simply want to try V-locity, click here for a free trial.

Use the V-locity software not only to identify those servers that cause storage I/O issues, but also to fix those issues at the same time.

Solving the IO Blender Effect with Software-Based Caching

by Spencer Allingham 5. July 2018 07:30

First, let me explain exactly what the IO Blender Effect is, and why it causes a problem in virtualized environments such as those from VMware or Microsoft’s Hyper-V.



This is typically what storage IO traffic would look like when everything is working well. You have the smallest number of storage IO packets, each carrying a large payload of data down to the storage. Because the data is arriving in large chunks at a time, the storage controller has the opportunity to create large stripes across its media, using the fewest storage-level operations before being able to acknowledge that the write has been successful.



Unfortunately, all too often the Windows Write Driver is forced to split data that it’s writing into many more, much smaller IO packets. These split IO situations cause data to be transferred far less efficiently, and this adds overhead to each write and subsequent read. Now that the storage controller is only receiving data in much smaller chunks at a time, it can only create much smaller stripes across its media, meaning many more storage operations are required to process each gigabyte of storage IO traffic.


This is not only true when writing data, but also if you need to read that data back at some later time.

But what does this really mean in real-world terms?

It means that an average gigabyte of storage IO traffic that should take perhaps 2,000 or 3,000 storage IO packets to complete is now taking 30,000 or 40,000 storage IO packets instead. The data transfer has been split into many more, much smaller, fractured IO packets. Each storage IO operation that has to be generated takes a measurable amount of time and system resource to process, and so this is bad for performance! It will cause your workloads to run slower than they should, and this will worsen over time unless you perform some time- and resource-costly maintenance.
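To put some rough numbers on that, here is a quick back-of-the-envelope sketch in Python, using the same illustrative packet counts as above (the figures are examples from the text, not measurements from any particular system):

    # Back-of-the-envelope illustration of why split IO hurts.
    # The packet counts below are illustrative figures, not measurements.

    GIB = 1024 ** 3          # one gigabyte of application data, in bytes

    def average_io_size_kb(total_bytes: int, io_count: int) -> float:
        """Average payload carried by each storage IO packet, in kilobytes."""
        return total_bytes / io_count / 1024

    healthy_ios = 2_500      # roughly 2,000-3,000 packets for a healthy transfer
    split_ios = 35_000       # roughly 30,000-40,000 packets once the IO is fractured

    print(f"Healthy: {average_io_size_kb(GIB, healthy_ios):,.0f} KB per IO "
          f"across {healthy_ios:,} storage operations")
    print(f"Split:   {average_io_size_kb(GIB, split_ios):,.0f} KB per IO "
          f"across {split_ios:,} storage operations")
    print(f"That is roughly {split_ios / healthy_ios:.0f}x more storage operations "
          f"for the same gigabyte of data.")

With those example numbers, each healthy IO carries roughly 400 KB, while each fractured IO carries only around 30 KB, which is why the storage ends up doing more than ten times the work for the same data.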

So, what about the IO Blender Effect?

Well, the IO Blender Effect can amplify the performance penalty (or Windows IO Performance Tax) in a virtualized environment. Here’s how it works…

 

As the small, fractured IO traffic from several virtual machines passes through the physical host hypervisor (Hyper-V server or VMware ESX server), the hypervisor acts like a blender. It mixes these IO streams, which causes a randomization of the storage IO packets, before sending what is now a chaotic mess of small, fractured and very random IO streams out to the storage controller.

It doesn’t matter what type of storage you have on the back-end. It could be direct attached disks in the physical host machine, or a Storage Area Network (SAN); either way, this type of storage IO profile couldn’t be less storage-friendly.

The storage is now only receiving data in small chunks at a time, and won’t understand the relationship between the packets, so it only has the opportunity to create very small stripes across its media. That unfortunately means many more storage operations are required before it can send an acknowledgement of the data transfer back up to the Windows operating system that originated it.

How can RAM caching alleviate the problem?

 

Firstly, to be truly effective the RAM caching needs to be done at the Windows operating system layer. This provides the shortest IO path for read IO requests that can be satisfied from server-side RAM, provisioned to each virtual machine. By satisfying as many “Hot Reads” from RAM as possible, you now have a situation where not only are those read requests being satisfied faster, but those requests are no longer having to go out to storage. That means fewer storage IO packets for the hypervisor to blend.
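To make the “Hot Reads from RAM” idea concrete, here is a minimal sketch of a generic least-recently-used read cache in Python. It is purely illustrative of the concept of serving repeat reads from memory rather than from storage; it is not V-locity’s actual implementation, and the class and function names are hypothetical:

    # Minimal sketch of the general idea of an OS-layer read cache.
    # Illustrative only; this is not how V-locity itself is implemented.
    from collections import OrderedDict

    class ReadCache:
        """Keep the most recently read blocks in RAM, evicting the coldest first."""

        def __init__(self, capacity_blocks: int):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()          # block number -> data

        def read(self, block_no: int, read_from_disk) -> bytes:
            if block_no in self.blocks:            # "hot read": served from RAM,
                self.blocks.move_to_end(block_no)  # no packet reaches the hypervisor
                return self.blocks[block_no]
            data = read_from_disk(block_no)        # cache miss: one real storage IO
            self.blocks[block_no] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)    # evict the least recently used block
            return data

Every read that hits the cache is one fewer IO packet going through the hypervisor and down to the storage controller, which is exactly the effect described above.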

Furthermore, the V-locity® caching software from Condusiv Technologies also employs a patented technology called IntelliWrite®. This intelligently helps the Windows Write Driver make better choices when writing data out to disk, which avoids many of the split IO situations that would then be made worse by the IO Blender Effect. You now get back to that ideal situation of healthy IO; large, sequential writes and reads.

Is RAM caching a disruptive solution?

 

No! Not at all, if done properly.

Condusiv’s V-locity software for virtualised environments is completely non-disruptive to live, running workloads such as SQL Server, Microsoft Dynamics, Business Intelligence (BI) solutions such as IBM Cognos, or other important workloads such as SAP, Oracle and the like.

In fact, all you need to do to test this for yourself is download a free trialware copy from:

www.condusiv.com/try

Just install it! There are no reboots required, and it will start working in just a couple of minutes. If you decide that it isn’t for you, then uninstall it just as easily. No reboots, no disruption!


Help! I hit “Save” instead of “Save As”!!

by Gary Quan 19. June 2018 06:30

Need to get back to a previous version of a Microsoft Office file before the changes you just made?  Undelete has you covered with its Versioning feature.

Have you or your users ever made some changes to a Word document, Excel spreadsheet, or a PowerPoint presentation, saved it, and then realized later that what was saved did not contain the previous work? For example, and a true story, a CEO was working on a PowerPoint file he needed for a Board of Directors presentation that afternoon. He had worked about 4 hours that morning making changes, and he was careful to periodically save the changes as he worked. The trouble was that his last save had accidentally overwritten a large part of his previous changes. The CEO then panicked, as he had just lost the majority of the 4 hours of work he had put in and was not sure he could redo it in time for his presentation deadline. He immediately called up his IT Manager, who indicated the nightly backups would not help as they would not contain any of the changes he made that morning. The IT Manager then remembered he had Undelete installed on this file server. This was mainly to recover accidentally deleted files, but he recalled a Versioning feature that would allow recovery of previous versions of Microsoft Office files. He was then able to use Undelete to retrieve the previous version of the CEO’s PowerPoint presentation and recover the work he did that morning. The CEO was extremely happy, and the IT Manager was a ‘hero’ to the CEO!

Another very common scenario is users making edits to original files and then selecting “Save” instead of “Save As”, so the original files are now gone. As an example, a customer had a budget file in Excel and several people had accessed it throughout the day. At some point, someone had inadvertently made multiple changes to it for his department, including deleting sections that were not relevant to his department, all the while thinking he was working in his own Save-As copy. Boy, were the other department heads upset! The way our IT Admin customer tells the story, it sounded like a riot was about to erupt! Well, he swooped in just in time, recovered the earlier version in minutes and saved the day. We hear stories daily about Word document overwrites where IT Admins are able to recover the previous versions in just a few minutes, saving users hours of having to recreate their work.

While the most popular functionality of Undelete is the ability to recover accidentally deleted files instantly with the click of a mouse, the Undelete Versioning feature is certainly the runner up, so we wanted to remind users, or prospective users, that it’s also here to save the day for you, too.

The Undelete Versioning feature will automatically save the previous versions of specific file types, including Microsoft Office files. The default is to keep the last 5 versions, but this is configurable. Undelete then allows you to see which versions were saved and when, and to recover them easily. A vital data protection feature to have.
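For readers who like to see the idea in code, here is a conceptual sketch of “keep the last N versions on save” in Python. It only illustrates the general versioning concept; it is not how the Undelete product works internally, and save_with_versions is a hypothetical helper name:

    # Conceptual sketch of "keep the last N versions on save".
    # Illustrative only; not the Undelete product's implementation.
    import shutil
    from pathlib import Path

    def save_with_versions(path: Path, new_content: bytes, keep: int = 5) -> None:
        """Rotate previous copies (report.xlsx.1 is newest) before overwriting."""
        if path.exists():
            # Shift existing versions up one slot: .4 -> .5, ..., .1 -> .2,
            # silently dropping the oldest copy beyond `keep`.
            for i in range(keep - 1, 0, -1):
                older = path.with_name(f"{path.name}.{i}")
                if older.exists():
                    older.replace(path.with_name(f"{path.name}.{i + 1}"))
            shutil.copy2(path, path.with_name(f"{path.name}.1"))
        path.write_bytes(new_content)

The point of the sketch is simply that a “Save” no longer destroys the previous copy: the last few versions are always one rename away.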

If you already have Undelete Server installed on your file servers, check out the Versioning feature. If you have any of your own “hero” stories you would like to share, email custinfo@condusiv.com

If you don’t have Undelete Server or Undelete Pro yet, you can purchase them from your favorite online reseller or you can buy online from our store http://www.condusiv.com/purchase/Undelete/

 

Tags:

Data Protection | Data Recovery | File Recovery | General | Success Stories | Undelete

How to Improve Application Performance by Decreasing Disk Latency like an IT Engineer

by Spencer Allingham 13. June 2018 06:49

You might be responsible for a busy SQL server, for example, or a Web Server; perhaps a busy file and print server, the Finance Department's systems, documentation management, CRM, BI, or something else entirely.

Now, think about WHY these are the workloads that you care about the most?

 

Were YOU responsible for installing the application running the workload for your company? Is the workload being run business-critical, or considered TOO BIG TO FAIL?

Or is it simply because users, or even worse, customers, complain about performance?

 

If the last question made you wince, because you know that YOU are responsible for some of the workloads running in your organisation that would benefit from additional performance, please read on. This article is just for you, even if you don't consider yourself a "Techie".

Before we get started, you should know that there are many variables that can affect the performance of the applications that you care about the most. The slowest, most restrictive of these is referred to as the "Bottleneck". Think of water being poured from a bottle. The water can only flow as fast as the neck of the bottle, the 'slowest' part of the bottle.

Don't worry though, in a computer the bottleneck will pretty much always fit into one of the following categories:

• CPU

• DISK

• MEMORY

• NETWORK

The good news is that if you're running Windows, it is usually very easy to find out which one the bottleneck is in, and here is how to do it (like an IT Engineer):

• Open Resource Monitor by clicking the Start menu, typing "resource monitor", and pressing Enter. Microsoft includes this as part of the Windows operating system and it is already installed.

• Do you see the graphs in the right-hand pane? When your computer is running at peak load, or users are complaining about performance, which of the graphs are 'maxing out'?

This is a great indicator of where your workload's bottleneck is to be found.
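If you prefer to sample those same four resources from a script rather than eyeballing the Resource Monitor graphs, here is a rough sketch using Python's third-party psutil package (an assumption on my part; Resource Monitor itself needs no code at all):

    # Rough script-based equivalent of glancing at Resource Monitor's four graphs.
    # Assumes the third-party psutil package is installed (pip install psutil).
    import psutil

    def snapshot() -> None:
        cpu = psutil.cpu_percent(interval=1)     # % CPU busy over a 1-second sample
        mem = psutil.virtual_memory().percent    # % physical RAM in use
        disk = psutil.disk_io_counters()         # cumulative disk reads/writes
        net = psutil.net_io_counters()           # cumulative network bytes
        print(f"CPU:     {cpu:.0f}% busy")
        print(f"MEMORY:  {mem:.0f}% in use")
        print(f"DISK:    {disk.read_bytes + disk.write_bytes:,} bytes transferred since boot")
        print(f"NETWORK: {net.bytes_sent + net.bytes_recv:,} bytes transferred since boot")

    if __name__ == "__main__":
        snapshot()   # run it while the workload is busy and see which figure is pegged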

 

SO, now you have identified the slowest part of your 'compute environment' (continue reading for more details), what can you do to improve it?

The traditional approach to solving computer performance issues has been to throw hardware at the solution. This could be treating yourself to a new laptop, or putting more RAM into your workstation, or on the more extreme end, buying new servers or expensive storage solutions.

BUT, how do you know when it is appropriate to spend money on new or additional hardware, and when it isn't? Well, the answer is: it isn't appropriate when you can get the performance that you need from the existing hardware infrastructure that you have already bought and paid for. You wouldn't replace your car just because it needed a service, would you?

Let's take disk speed as an example, and look at the Response Time column in Resource Monitor. Make sure you open the monitor full screen, or at least large enough to see the data, then open the Disk Activity section so you can see the Response Time column. Do it now on the computer you're using to read this. (You didn't close Resource Monitor yet, did you?) This is showing the Disk Response Time, or put another way, how long the storage is taking to read and write data. Of course, slower disk speed = slower performance, but what is considered good disk speed and what is bad?

To answer that question, I will refer to a great blog post by Scott Lowe, that you can read here...

https://www.techrepublic.com/blog/the-enterprise-cloud/use-resource-monitor-to-monitor-storage-performance/

In it, the author perfectly describes what to expect from faster and slower Disk Response Times:

"Response Time (ms). Disk response time in milliseconds. For this metric, a lower number is definitely better; in general, anything less than 10 ms is considered good performance. If you occasionally go beyond 10 ms, you should be okay, but if the system is consistently waiting more than 20 ms for response from the storage, then you may have a problem that needs attention, and it's likely that users will notice performance degradation. At 50 ms and greater, the problem is serious."

Hopefully when you checked on your computer, the Disk Response Time is below 20 milliseconds. BUT, what about those other workloads that you were thinking about earlier. What's the Disk Response Times on that busy SQL server, the CRM or BI platform, or those Windows servers that the users complain about?

If the Disk Response Times are often higher than 20 milliseconds, and you need to improve the performance, then it's choice time and there are basically two options:

1. In my opinion as an IT Engineer, the most sensible option is to use storage workload reduction software like Diskeeper for physical Windows computers, or V-locity for virtualised Windows computers. These will reduce Disk Response Times by allowing a good percentage of the data that your applications need to read to come from a RAM cache, rather than from slower disk storage. This works because RAM is much faster than the media in your disk storage. Best of all, the only thing you need to do to try it is download a free copy of the 30-day trial. You don't even have to reboot the computer; just check and see if it is able to bring the Disk Response Times down for the workloads that you care about the most.

2. If you have tried the Diskeeper or V-locity software and you STILL need faster disk access, then, I'm afraid, it's time to start getting quotations for new hardware. It does make sense, though, to take a couple of minutes to install Diskeeper or V-locity first, to see if this step can be avoided. A software solution that removes storage inefficiencies is typically much more cost-effective than having to buy new hardware!

Visit www.condusiv.com/try to download Diskeeper and V-locity now, for your free trial.

 

How to Recover Lost or Deleted Files BEFORE Resorting to Outsourced Data Recovery

by Gary Quan 1. November 2017 05:46

Here’s a nightmare scenario…a user accidentally deletes irreplaceable or valued files from a network share, and there is no way to recover the data because:

• The file was created or modified, then deleted, AFTER the last valid backup/snapshot was taken.

• There is NO valid backup or snapshot to recover the data from.

• There was NO real-time recovery software like Condusiv’s Undelete® already installed on the file server.

• Sending the disk to a professional data recovery center is COSTLY and TIME-CONSUMING.

What do you do? Well, you may be in luck with a little-known feature in Condusiv’s Undelete software product known as “Emergency Undelete.” On NTFS (New Technology File System) formatted volumes, which is the default file system used by Windows, there is a characteristic of how file deletion works that can be leveraged to recover your lost data.

When a file gets deleted from a Windows volume, the data has not yet been physically removed from the drive. The space where that file data was residing is merely marked as “deleted” or available for use. The original data is there and will remain there until that space is overwritten by new data. That may or may not happen for quite a while. By taking the correct steps, there is an extremely good chance that this ‘deleted’ file can still be recovered. This is where Emergency Undelete comes in.

Emergency Undelete can find deleted files that have not yet been over-written by other files and allow you to recover them. To increase your chances of recovering lost data, here are some best practices to follow as soon as the files have been accidentally deleted.

1. Immediately reduce or eliminate any write activity on the volume(s) you are trying to recover the deleted files from. This will improve your chances of recovering the deleted files.

2. Get Condusiv’s Undelete to leverage its Emergency Undelete feature.  Emergency Undelete is part of the Undelete product package.

3. REMEMBER: You want to prevent any write activity on the volume(s) you are trying to recover the deleted files from, so if you are trying to recover lost files from your system volume, then do one of the following:

a. Copy the Undelete product package to that system, but to a different volume than the one you are recovering lost files from. Run the Undelete install package and it will allow you to run Emergency Undelete directly to recover the lost files.


b. If you do not have an extra volume on that system, then place the Undelete product package on a different system, run it and Emergency Undelete will allow you to place the Emergency Undelete package onto a CD or a USB memory stick. You can then place the CD/Memory stick on the system you need to recover from and run it to recover the lost files.


Now, if the lost files do not reside on the system volume, you can just place the Undelete product package on the system volume, run it, and select Emergency Undelete directly to recover the lost files.

4. When recovering the lost files, recover them to a different volume.

These same steps will also work on FAT (File Allocation Table) formatted storage, which is used in many of the memory cards in cameras and phones. So, if some irreplaceable photos or videos were accidentally deleted, you can use these same steps to recover them too. Insert the memory card into your Windows system, then use Emergency Undelete to recover the lost photos.

Emergency Undelete has saved highly valuable Microsoft Office documents and priceless photos for thousands of users. It can help in your next emergency, too.

 

Tags:

Data Protection | Data Recovery | Undelete
