Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Undelete Saves Your Bacon, An In-depth Video Series

by Spencer Allingham 13. May 2019 03:43

Undelete® is a lot more than those simple file recovery utilities that just search through the free space on Windows machines looking for recoverable data. It does much more: it protects files in network shared folders and captures versions of any number of file types.

If you've ever had to rely on restoring from a backup or a snapshot to get a deleted file back, watch now to find out how Undelete makes recovery faster and more convenient on workstations, laptops and Windows servers.

Undelete, the world’s #1 file recovery software, can serve as the first line of defense in your disaster recovery strategy and save your bacon!

“Undelete saved my bacon.” — Ken C, Cleveland State University

Why are some deleted files not in the Windows Recycle Bin?

Were you aware that the Windows Recycle Bin falls short of capturing all file deletions?

Whilst the Recycle Bin is very quick and convenient, it doesn’t capture:

· Files deleted from the Command Prompt

· Files deleted from within some applications

· Files deleted by network users from a Shared Folder

Undelete from Condusiv Technologies can capture ALL deletions, regardless of how they occur.

“It saved our bacon when a file on my system was accidentally deleted from another workstation. That recovery saved hours of work and sold us on the usefulness of the product.”

“Our entire commissions database was saved by the Undelete program. Very happy about that. We would have lost a week of commissions (over 2,000 records easily). We were very grateful that we had your product.” — Frank B, Technical Manager, World Travel, Inc.

Watch this video for a demonstration of why the Recycle Bin falls short and how the Undelete software can pick up the slack and truly become the first line of defense in your disaster recovery strategy. 

What is Undelete File Versioning?

Have you ever accidentally overwritten a Microsoft Word document, spreadsheet or some other file?

Would it be helpful to have several versions of the same file available for recovery in the Windows Recycle Bin? Sorry, but the Recycle Bin can’t do that.

However, the Undelete Recovery Bin can!

“I'm glad I found yours -- it works very well, and the recovery really saved my bacon!” — John

Watch this video to see a demonstration of how capturing several versions of the same file when they get overwritten can really help save time as well as data.

Searching the Undelete Recovery Bin

Recover deleted files quickly and conveniently with Undelete’s easy search functions.

Even if you only know part of the file name, or aren’t sure what folder it was deleted from, see in this video how easy it is to find and recover the file that you need.

“I would recommend undelete as it has saved my bacon a couple of times when I was able to recover something that I deleted by accident.” — Joseph

Inclusion and Exclusion lists in Undelete

Find out how to use Inclusion and Exclusion Lists in the Undelete software to capture only those files that you might actually want to recover, and to exclude all of the files that you don’t care about.

Have you ever needed to get a file back that was deleted during a Windows Update? Probably not, so why have those files take up space in your Recovery Bin?

“It saved my bacon a few times.” — Jason

Watch this to see how configurable the Undelete Recovery Bin is.

Emergency Undelete Software

See a demonstration showing how easy it is to recover deleted files, even BEFORE you install the Undelete software from Condusiv Technologies.

Prevent that awful moment of extreme realization when you delete a file that isn’t backed up.

Oh! And if you’ve found this page because you need to recover a file right now, click here to get the free 30-day trialware of Undelete. We hope this helps you out of the jam!

“It has saved my bacon a couple of times when I was able to recover something that I deleted by accident.”

How to safely delete files before recycling your computer with Undelete

Want to get a new computer, but worry what would happen to your personal data if you recycled your old one, or sold it?

Watch now to see how to securely wipe your files from your computer’s hard drives with SecureDelete®, which is included in the Undelete software from Condusiv Technologies, before recycling your old computer, selling it, or passing it on to a friend.
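For the technically curious, the general idea behind a secure delete is to overwrite the file's contents before removing it, so the old data cannot simply be read back from free space. Below is a minimal Python sketch of that idea; it is purely illustrative (the file path is a placeholder), and it is not the actual SecureDelete algorithm. On SSDs, wear levelling means overwrites may not hit the original cells, which is one reason to use a purpose-built tool.

```python
import os
import secrets

CHUNK = 1 << 20  # overwrite in 1 MiB chunks to keep memory use small

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Illustrative only: overwrite a file with random data, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(CHUNK, remaining)
                f.write(secrets.token_bytes(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # force the overwrite down to the device
    os.remove(path)

# Example (hypothetical path):
# overwrite_and_delete(r"C:\Users\me\Documents\old-tax-return.pdf")
```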

We hope these videos help you navigate Undelete like a pro, and perhaps save your bacon, too!

Watch the series here!

Tags:

Data Protection | Data Recovery | File Protection | File Recovery | General | Undelete

SysAdmins Discover That Size Really Does Matter

by Spencer Allingham 25. April 2019 03:53

(...to storage transfer speeds...)

 

I was recently asked what could be done to maximize storage transfer speeds in physical and virtual Windows servers. Not the "sexiest" topic for a blog post, I know, but it should be interesting reading for any SysAdmin who wants to get the most performance from their IT environment, or for those IT Administrators who suffer from user or customer complaints about system performance.

 

As it happens, I had just completed some testing on this very subject and thought it would be helpful to share the results publicly in this article.

The crux of the matter comes down to storage I/O size and its effect on data transfer speeds. You can see in this set of results, using an NVMe-connected SSD (Samsung SM961, model MZVKW1T0HMLH), that the read and write transfer speeds (in other words, how much data can be transferred each second) are MUCH lower when the storage I/O sizes are below 64 KB:

 

You can see that whilst the transfer rate maxes out at around 1.5 GB per second for writes and around 3.2 GB per second for reads, when the storage I/O sizes are smaller you don’t see disk transfer speeds anywhere near that maximum rate. That’s okay if you’re only saving 4 KB or 8 KB of data, but it is definitely NOT okay if you are trying to write a larger amount of data, say 128 KB or a couple of megabytes, and the Windows OS is breaking it down into smaller I/O packets in the background and transferring to and from disk at those much slower rates. This happens far too often, and it means that the Windows OS is dampening efficiency and transferring your data at a much slower rate than it could or should. That can have a very negative impact on the performance of your most important applications, and yes, they are probably the ones that users access the most and are most likely to complain about.

 

The good news of course, is that the V-locity® software from Condusiv® Technologies is designed to prevent these split I/O situations in Windows virtual machines, and Diskeeper® will do the same for physical Windows systems. Installing Condusiv’s software is a quick, easy and effective fix as there is no disruption, no code changes required and no reboots. Just install our software and you are done!

You can even run this test for yourself on your own machine. Download a free copy of ATTO Disk Benchmark from the web and install it. You can then click its Start button to quickly get a benchmark of how quickly YOUR system transfers data at different I/O sizes. I bet you quickly see that when it comes to data transfer speeds, size really does matter!
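If you would rather script a rough version of this test than use a GUI tool, here is a minimal Python sketch that measures write throughput at different I/O sizes. The file path and sizes are assumptions for illustration. Note that ATTO uses unbuffered I/O for accurate numbers; this simple version only forces writes to the device with fsync, and buffered reads would largely be served from the Windows file cache, so reads are omitted.

```python
import os
import time

PATH = "testfile.bin"        # hypothetical scratch file on the disk under test
TOTAL = 256 * 1024 * 1024    # move 256 MiB at each I/O size

def write_speed(block_size: int) -> float:
    """Write TOTAL bytes in block_size chunks; return throughput in MB/s."""
    buf = os.urandom(block_size)
    start = time.perf_counter()
    with open(PATH, "wb") as f:
        for _ in range(TOTAL // block_size):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reached the disk
    return TOTAL / (time.perf_counter() - start) / 1e6

for kb in (4, 8, 16, 32, 64, 128, 256, 512, 1024):
    print(f"{kb:5d} KB writes: {write_speed(kb * 1024):8.1f} MB/s")

os.remove(PATH)
```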

Out of interest, I enabled our Diskeeper software (I could have used V-locity instead) so that our RAM caching would accelerate the read I/O traffic, and the results were pretty amazing. Instead of the reads maxing out at around 3.2 GB per second, they were now maxing out at a whopping 11 GB per second, more than three times faster. In fact, the ATTO Disk Benchmark software had to change the graph scale for the transfer rate (X-axis) from 4 GB/s to 20 GB/s just to accommodate the extra GBs per second when the RAM cache was in play. Pretty cool, eh?

 

Of course, it is unrealistic to expect our software’s RAM cache to satisfy ALL of the read I/O traffic in a real, live environment as it did in this lab test, but even if you satisfied only 25% of the reads from RAM in this manner, it certainly wouldn’t hurt performance!

If you want to see this for yourself on one of your computers, download the ATTO Disk Benchmark tool, if you haven’t already, and as mentioned before, run it to get a baseline for your machine. Then download and install a free trial copy of Diskeeper for physical clients or servers, or V-locity for virtual machines, from www.condusiv.com/try and run the ATTO Disk Benchmark tool several more times. It will probably take a few runs of the test, but you should easily see the point at which the telemetry in Condusiv’s software identifies the correct data to satisfy from the RAM cache, as the read transfer rates will increase dramatically. They are no longer confined to the speed of your disk storage, but instead now happen at the speed of RAM. Much faster, even if that disk storage IS an NVMe-connected SSD. And yes, if you’re wondering, this does work with SAN storage and all levels of RAID too!

NOTE: Before testing, make sure you have enough “unused” RAM to cache with. A minimum of 4 GB to 6 GB of Available Physical Memory is perfect.
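One quick way to check this from a script (an illustrative approach using the third-party psutil package, not something the Condusiv software requires) is:

```python
import psutil  # third-party: pip install psutil

avail_gb = psutil.virtual_memory().available / (1024 ** 3)
print(f"Available physical memory: {avail_gb:.1f} GB")

if avail_gb < 4:
    print("Under 4 GB available - a RAM cache will have little room to work with.")
```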

Whether you have spinning hard drives or SSDs in your storage array, the boost in read data transfer rates can make a real difference. Whatever storage is serving YOUR Windows computers, it just doesn’t make sense to allow the Windows operating system to continue transferring data at a slower speed than it should. Now, with easy-to-install, “Set It and Forget It®” software from Condusiv Technologies, you can be sure that you’re getting all of the speed and performance you paid for when you purchased your equipment, through larger, more sequential storage I/O and the benefit of intelligent RAM caching.

If you’re still not sure, run the tests for yourself and see.

Size DOES matter!

Cultech Limited Solves ERP and SQL Troubles with Diskeeper 18 Server

by Spencer Allingham 8. October 2018 09:11

Before discovering Diskeeper®, Cultech Limited experienced sluggish ERP and SQL performance, unnecessary downtime, and lost valuable hours each day troubleshooting issues related to Windows write inefficiencies.

For an internationally recognized innovator and premium-quality manufacturer within the nutritional supplement industry, the usual troubleshooting approaches just weren’t cutting it. “We were running a very demanding ERP system on legacy servers and network. A hardware refresh was the first step in troubleshooting our issues. As much as we did see some improvement, it did not solve the daily breakdowns associated with our Sage ERP,” said Rob, IT Manager, Cultech Limited.

After upgrading the network and replacing the ERP and SQL servers without seeing much improvement, Rob dug further into troubleshooting approaches and SQL optimizations. When months of this brought no relief, he continued researching ways to improve performance, knowing that Cultech could not keep interrupting productivity multiple times a day to fix corrupted records. As Rob explains, “I was on support calls with Sage literally day and night to solve issues that occurred daily. Files would not write properly to the database, and I would have to go through the tedious process of getting all users to logout of Sage then manually correct the problem – a 25-min exercise. That might not be a big deal every so often, but I found myself doing this 3-4 times a day at times.”

In doing his research, Rob found Condusiv’s® Diskeeper Server and decided to give it a try after reading customer testimonials on how it had solved similar performance issues. To Cultech’s surprise, after just 24 hours of being installed, they were no longer calling Sage support. “I installed Diskeeper and crossed my fingers, hoping it would solve at least some of our problems. It didn’t just solve some problems, it solved all of our problems. I was calling Sage support daily then suddenly I wasn’t calling them at all,” said Rob. Problems that Rob had been fixing outside of production hours were solved thanks to Diskeeper’s ability to prevent fragmentation from occurring. And in addition to recouping hours a day of downtime during production hours, Cultech was now able to focus that time and energy on innovation and producing quality products.

“Now that we have Diskeeper optimizing our Sage servers and SQL servers, we have it running on our other key systems to ensure peak performance and optimum reliability. Instead of considering Windows write inefficiencies as a culprit after trying all else, I would encourage administrators to think of it first,” said Rob.

Read the full case study                        Download 30-day trial

How to make NVMe storage even faster

by Spencer Allingham 4. September 2018 07:21

This is a blog to complement a vlog that I posted a few weeks ago, in which I demonstrated how to use the intelligent RAM caching technology found in the V-locity® software from Condusiv® Technologies to improve the performance that a computer can get from NVMe flash storage. You can view this video here:

 

A question arose from a couple of long-term customers about whether the V-locity software was still relevant if they started utilizing very fast flash storage solutions. This was a fair question!

The V-locity software is designed to reduce the amount of unnecessary storage I/O traffic that actually has to go out and be processed by the underlying disk storage layer. It not only reduces the amount of I/O traffic, but it optimizes that which DOES have to go out to disk, and moreover, it further reduces the workload on the storage layer by employing a very intelligent RAM caching strategy.

So, given that flash storage is not only becoming more prevalent in today’s compute environments, but can also process storage I/O traffic VERY fast compared to its spinning-disk counterparts and handle more I/Os per second (IOPS) than ever before, the very sensible question was this:


"Can the use of Condusiv's V-locity software provide a significant performance increase when using very fast flash storage?"


As I was fortunate to have recently implemented some flash storage in my workstation, I was keen to run an experiment to find out.


SPOILER ALERT: For those of you who just want to have the question answered, the answer is a resounding YES!

The test showed beyond doubt that with Condusiv’s V-locity software installed, your Windows computer can process significantly more I/Os per second, achieve a much higher data throughput, and give storage-I/O-heavy workloads the opportunity to get significantly more work done in the same amount of time – even when using very fast flash storage.

 

For those of you true ‘techies’ who are as geeky as me, read on, and I will describe the testing methodology and results in more detail.

The storage that I now had in my workstation (and am still happily using!) was a 1-terabyte Samsung SM961 Polaris M.2-2280 PCIe 3.0 x4 NVMe solid-state drive (SSD).

 

 Is it as fast as it’s made out to be? Well, in this engineer’s opinion – OMG YES!

 

It makes one hell of a difference when compared to spinning disk drives. This is in part because it’s connected to the computer via a PCI Express (PCIe) bus, as opposed to a SATA bus. The bus is what you connect your disk to in the computer, and different types of buses have different capabilities, such as the speed at which data can be transferred. SATA-connected disks are significantly slower than today’s PCIe-connected storage using an NVMe device interface. There is a great Wikipedia article about this if you want to read more:

https://en.wikipedia.org/wiki/NVM_Express

 

To give you an idea of the improvement though, consider that the Advanced Host Controller Interface (AHCI) that is used with the SATA connected disks has one command queue, in which it can process 32 commands. That’s up to 32 storage requests at a time, and that was okay for spinning disk technology, because the disks themselves could only cope with a certain number of storage requests at a time.

NVMe on the other hand doesn’t have one command queue, it has 65,535 queues. AND, each of those command queues can themselves accommodate 65,536 commands. That’s a lot more storage requests that can be processed at the same time! This is really important, because flash storage is capable of processing MANY more storage requests in parallel than its spinning disk cousins. Quite simply NVMe was needed to really make the most of what flash disk hardware can do. You wouldn’t put a kitchen tap (faucet) on the end of a fire hose and expect the same amount of water to flow through it, right? Same principle!
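To put those queue numbers in perspective, here is the simple back-of-the-envelope arithmetic:

```python
# AHCI (SATA): one command queue, 32 commands deep
ahci_in_flight = 1 * 32

# NVMe: up to 65,535 I/O queues, each up to 65,536 commands deep
nvme_in_flight = 65_535 * 65_536

print(f"AHCI : {ahci_in_flight:,} commands in flight")       # 32
print(f"NVMe : {nvme_in_flight:,} commands in flight")       # 4,294,901,760
print(f"Ratio: {nvme_in_flight // ahci_in_flight:,}x more")  # 134,215,680x more
```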

As you can probably tell, I’m quite excited by this boost in storage performance. (I’m strange like that!) And, I know I’m getting a little off topic (apologies), so back to the point!

I had this SUPER-FAST storage solution and needed to prove one way or another if Condusiv’s V-locity software could increase the ability of my computer to process even more workload.

Would my computer be able to process more storage I/Os per Second?

Would my computer be able to process a larger amount of storage I/O traffic (megabytes) every second?

 

Testing Methodology

To answer these questions, I took a virtual machine, and cloned it so that I had two virtual machines that were as identical as I could make them. I then installed Condusiv’s V-locity software on both and disabled V-locity on one of the machines, so that it would process storage I/O traffic, just as if V-locity wasn’t installed.

To generate a storage I/O workload, I turned to my old friend IOMETER. For those of you who might not know IOMETER, it is a software utility originally designed by Intel but now open source and available at SourceForge.net. It is designed as an I/O subsystem measurement tool and is great for generating I/O workloads of different types (very customizable!) and measuring how quickly that I/O workload can be processed. Great for testing networks or, in this case, how fast you can process storage I/O traffic.

I configured IOMETER on both machines with the type of workload that one might find on a typical SQL database server. I KNOW, I know, there is no such thing as a ‘typical’ SQL database, but I wanted a storage I/O profile that was as meaningful as possible, rather than a workload that would just make V-locity look good. Here is the actual IOMETER configuration:

Worker 1 – 16 kilobyte I/O requests, 100% random, 33% Write / 67% Read

Worker 2 – 64 kilobyte I/O requests, 100% random, 33% Write / 67% Read
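For readers without IOMETER to hand, the sketch below approximates the same two-worker profile in Python: block-aligned random offsets with roughly 67% reads and 33% writes. It is only a rough stand-in; it is single-threaded and uses buffered I/O, so the OS file cache will flatter the numbers in a way IOMETER's unbuffered workers do not. The file path, working-set size and duration are illustrative assumptions.

```python
import os
import random
import time

PATH = "iotest.bin"        # hypothetical test file
FILE_SIZE = 1024 ** 3      # 1 GiB working set
DURATION = 30              # seconds per worker

def run_worker(block_size: int, read_pct: float = 0.67) -> float:
    """Issue block-aligned random reads/writes for DURATION seconds; return IOPS."""
    buf = os.urandom(block_size)
    blocks = FILE_SIZE // block_size
    ops = 0
    deadline = time.perf_counter() + DURATION
    with open(PATH, "r+b") as f:
        while time.perf_counter() < deadline:
            f.seek(random.randrange(blocks) * block_size)  # 100% random offsets
            if random.random() < read_pct:
                f.read(block_size)   # ~67% reads
            else:
                f.write(buf)         # ~33% writes
            ops += 1
    return ops / DURATION

# Create the test file once, then run both workers
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

print(f"16 KB worker: {run_worker(16 * 1024):,.0f} IOPS")
print(f"64 KB worker: {run_worker(64 * 1024):,.0f} IOPS")
```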

Test Results

V-locity Disabled

 

V-locity Enabled

 

Summary

 

 

Conclusion

 

In this lab test, the presence of V-locity reduced the average time required to process storage I/O requests by around 65%, allowing a greater number of storage I/O requests to be processed per second and a greater amount of data to be transferred.

To prove beyond doubt that it was indeed V-locity that allowed the additional storage I/O traffic to be processed, I stopped the V-locity service. This immediately ‘turned off’ all of the RAM caching and other optimization engines that V-locity was providing, and the net result was that the IOPS and throughput dropped back to their previous levels as the underlying storage had to start processing ALL of the storage traffic that IOMETER was generating.

What value is there to reducing storage I/O traffic?

The more you can reduce storage I/O traffic that has to go out and be processed by your disk storage, the more storage I/O headroom you are handing back to your environment for use by additional workloads. It means that your current disk storage can now cope with:

· More computers sharing the storage. Great if you have a Storage Area Network (SAN) underpinning your virtualized environment, for example. More VMs running!

· More users accessing and manipulating the shared storage. The more users you have, the more storage I/O traffic is likely to be generated.

· Greater CPU utilization. CPU speeds and processing capacity keep increasing. Now that processing power typically far exceeds typical needs, V-locity can help your applications become more productive and use more of that processing power by not having to wait so much on the disk storage layer.

 

If you can achieve this without having to replace or upgrade your storage hardware, it not only increases the return on your current storage hardware investment, but also might allow you to keep that storage running for a longer period of time (if you’re not on a fixed refresh cycle).

Sweat the storage asset!

(I hate that term, but you get the idea)

When you do finally need to replace your current storage, perhaps it won’t be as costly as you thought because you’re not having to OVER-PROVISION the storage as much, to cope with all of the excess, unnecessary storage traffic that Condusiv’s V-locity software can eliminate.

I typically see a storage traffic reduction of at least 25% at customer sites.

AND, I haven’t even mentioned the performance boost that many workloads receive from the RAM caching technology provided by Condusiv’s V-locity software. It is worth remembering that as fast as today’s flash storage solutions are, the RAM that you have in your computers is faster! The greater the percentage of read I/O traffic that you can satisfy from RAM instead of the storage layer, the better performing those storage I/O-hungry applications are likely to be.

What type of applications benefit the most?

In the real world, V-locity is not a silver-bullet for all types of workloads, and I wouldn’t insult your intelligence by saying that it was. If you have some workloads that don’t generate a great deal of storage I/O traffic, perhaps a DNS server, or DHCP server, well, V-locity isn’t likely to make a huge difference. That’s my honest opinion as an IT Engineer.

HOWEVER, if you are using storage I/O-hungry applications, then you really should give it a try.

Here are just some examples of the workloads that thousands of V-locity customers are ‘performance-boosting’ with Condusiv’s I/O reduction and RAM caching technologies:

  • Database solutions such as Microsoft SQL Server, Oracle, MySQL, SQL Express, and others.
  • Virtualization solutions such as Microsoft Hyper-V and VMware.
  • Enterprise Resource Planning (ERP) solutions like Epicor.
  • Business Intelligence (BI) solutions like IBM Cognos.
  • Finance and payroll solutions like Sage Accounting.
  • Electronic Health Records (EHR) solutions, such as MEDITECH.
  • Customer Relationship Management (CRM) solutions, such as Microsoft Dynamics.
  • Learning Management Systems (LMS) solutions.
  • Not to mention email servers like Microsoft Exchange AND busy file servers.

 

 

Do you use any of these in your IT environment?

 

There are case studies on the Condusiv web site for all of these workload types (and more), here:

http://www.condusiv.com/knowledge-center/case-studies/default.aspx

 

Try it for yourself

You can experience the full power of Condusiv’s V-locity software for yourself, in YOUR Windows environment, within a couple of minutes. Just go to www.condusiv.com/try and get a copy of the fully-featured 30-day trialware. You can check the dashboard reporting after a week or two and see just how much storage I/O traffic has been eliminated and, more importantly, how much storage time has been saved by doing so.

It really is that simple!

You don’t even need to reboot to make the software work. There is no disruption to live running workloads; you can just install and uninstall at will, and it only takes a minute or so.


You will typically start seeing results just minutes after installing.

I hope that this has been interesting and helpful. If you have any questions about the technologies within V-locity or have any questions about testing, feel free to email me directly at sallingham@condusiv.co.uk.

 

I will be delighted to hear from you!

 

 

Solving the IO Blender Effect with Software-Based Caching

by Spencer Allingham 5. July 2018 07:30

First, let me explain exactly what the IO Blender Effect is, and why it causes a problem in virtualized environments such as those from VMware or Microsoft’s Hyper-V.



This is typically what storage IO traffic would look like when everything is working well. You have the least number of storage IO packets, each carrying a large payload of data down to the storage. Because the data is arriving in large chunks at a time, the storage controller has the opportunity to create large stripes across its media, using the least number of storage-level operations before being able to acknowledge that the write has been successful.



Unfortunately, all too often the Windows Write Driver is forced to split data that it’s writing into many more, much smaller IO packets. These split IO situations cause data to be transferred far less efficiently, and this adds overhead to each write and subsequent read. Now that the storage controller is only receiving data in much smaller chunks at a time, it can only create much smaller stripes across its media, meaning many more storage operations are required to process each gigabyte of storage IO traffic.


This is not only true when writing data, but also if you need to read that data back at some later time.

But what does this really mean in real-world terms?

It means that an average gigabyte of storage IO traffic that should take perhaps 2,000 or 3,000 storage IO packets to complete is now taking 30,000 or 40,000 storage IO packets instead. The data transfer has been split into many more, much smaller, fractured IO packets. Each storage IO operation that has to be generated takes a measurable amount of time and system resource to process, and so this is bad for performance! It will cause your workloads to run slower than they should, and this will worsen over time unless you perform some time- and resource-costly maintenance.
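As a back-of-the-envelope check on those packet counts (the per-packet sizes here are my own illustrative assumptions, chosen to be consistent with the figures above):

```python
GIB = 1024 ** 3

healthy_packet = 512 * 1024  # ~512 KB per IO when writes stay contiguous
split_packet = 32 * 1024     # ~32 KB per IO once the writes are fractured

print(GIB // healthy_packet)  # 2,048 packets to move 1 GiB
print(GIB // split_packet)    # 32,768 packets to move the same 1 GiB
```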

So, what about the IO Blender Effect?

Well, the IO Blender Effect can amplify the performance penalty (or Windows IO Performance Tax) in a virtualized environment. Here’s how it works…

 

As the small, fractured IO traffic from several virtual machines passes through the physical host hypervisor (Hyper-V server or VMware ESX server), the hypervisor acts like a blender. It mixes these IO streams, randomizing the storage IO packets, before sending what is now a chaotic mess of small, fractured and very random IO streams out to the storage controller.

It doesn’t matter what type of storage you have on the back-end. It could be direct-attached disks in the physical host machine or a Storage Area Network (SAN); either way, this type of storage IO profile couldn’t be less storage-friendly.

The storage is now only receiving data in small chunks at a time, and won’t understand the relationship between the packets, so it now only has the opportunity to create very small stripes across its media, and that unfortunately means many more storage operations are required before it can send an acknowledgement of the data transfer back up to the Windows operating system that originated it.

How can RAM caching alleviate the problem?

 

Firstly, to be truly effective the RAM caching needs to be done at the Windows operating system layer. This provides the shortest IO path for read IO requests that can be satisfied from server-side RAM, provisioned to each virtual machine. By satisfying as many “Hot Reads” from RAM as possible, you now have a situation where not only are those read requests being satisfied faster, but those requests are now not having to go out to storage. That means less storage IO packets for the hypervisor to blend.

Furthermore, the V-locity® caching software from Condusiv Technologies also employs a patented technology called IntelliWrite®. This intelligently helps the Windows Write Driver make better choices when writing data out to disk, which avoids many of the split IO situations that would then be made worse by the IO Blender Effect. You now get back to that ideal situation of healthy IO; large, sequential writes and reads.
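To make the caching idea concrete, here is a toy sketch of an LRU block cache sitting at the operating-system layer: hot reads are answered from RAM and never reach the hypervisor or the storage controller at all. This is purely conceptual and does not represent how V-locity's cache is actually implemented.

```python
from collections import OrderedDict

class HotReadCache:
    """Toy LRU block cache: serve 'hot reads' from RAM instead of storage."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks: OrderedDict[int, bytes] = OrderedDict()

    def read(self, block_no: int, read_from_disk) -> bytes:
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)   # cache hit: no storage IO at all
            return self.blocks[block_no]
        data = read_from_disk(block_no)         # cache miss: one trip to storage
        self.blocks[block_no] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict the least-recently-used block
        return data
```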

Is RAM caching a disruptive solution?

 

No! Not at all, if done properly.

Condusiv’s V-locity software for virtualised environments is completely non-disruptive to live, running workloads such as SQL Server, Microsoft Dynamics, Business Intelligence (BI) solutions such as IBM Cognos, or other important workloads such as SAP, Oracle and the like.

In fact, all you need to do to test this for yourself is download a free trialware copy from:

www.condusiv.com/try

Just install it! There are no reboots required, and it will start working in just a couple of minutes. If you decide that it isn’t for you, then uninstall it just as easily. No reboots, no disruption!

