Condusiv Technologies Blog


Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Big Data Boom Brings Promises, Problems

by Dawn Richcreek 7. September 2018 04:40

By 2020, an estimated 43 trillion gigabytes of data will have been created—300 times the amount of data in existence fifteen years earlier. The benefits of big data, in virtually every field of endeavor, are enormous. We know more, and in many ways can do more, than ever before. But what of the challenges posed by this “data tsunami”? Will the sheer ability to manage—or even to physically house—all this information become a problem?

Condusiv CEO Jim D’Arezzo, in a recent discussion with Supply Chain Brain, commented that “As it has over the past 40 years, technology will become faster, cheaper, and more expansive; we’ll be able to store all the data we create. The challenge, however, is not just housing the data, but moving and processing it. The components are storage, computing, and network. All three need to be optimized; I don’t see any looming insurmountable problems, but there will be some bumps along the road.”

One example is healthcare. Speaking with Healthcare IT News, D’Arezzo noted that there are many new solutions open to healthcare providers today. “But with all the progress,” he said, “come IT issues. Improvements in medical imaging, for instance, create massive amounts of data; as the quantity of available data balloons, so does the need for processing capability.”

Giving health-care providers—and professionals in other areas—the benefits of the data they collect is not always easy. In an interview with Transforming Data with Intelligence, D’Arezzo said, “Data center consolidation and updating is a challenge. We run into cases where organizations do consolidation on a ‘forklift’ basis, simply dumping new storage and hardware into the system as a solution. Shortly thereafter, they often discover that performance has degraded. A bottleneck has been created that needs to be handled with optimization.”

The news is all over it. You are experiencing it. Big data. Big problems. At Condusiv®, we get it.  We’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware. The tsunami of data—we’ve got you covered.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

If you want to hear why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

How to make NVMe storage even faster

by Spencer Allingham 4. September 2018 07:21

This is a blog to complement a vlog that I posted a few weeks ago, in which I demonstrated how to use the intelligent RAM caching technology found in the V-locity® software from Condusiv® Technologies to improve the performance that a computer can get from NVMe flash storage. You can view this video here:

 

A question arose from a couple of long-term customers about whether the V-locity software was still relevant if they started utilizing very fast flash storage solutions. This was a fair question!

The V-locity software is designed to reduce the amount of unnecessary storage I/O traffic that actually has to go out and be processed by the underlying disk storage layer. It not only reduces the amount of I/O traffic, but it optimizes that which DOES have to go out to disk, and moreover, it further reduces the workload on the storage layer by employing a very intelligent RAM caching strategy.

So, given that flash storage is not only becoming more prevalent in today’s compute environments, but can also process storage I/O traffic VERY fast compared to its spinning disk counterparts and handle more I/Os per Second (IOPS) than ever before, the very sensible question was this:


"Can the use of Condusiv's V-locity software provide a significant performance increase when using very fast flash storage?"


As I was fortunate to have recently implemented some flash storage in my workstation, I was keen to run an experiment to find out.


SPOILER ALERT: For those of you who just want the question answered, the answer is a resounding YES!

The test showed beyond doubt that with Condusiv’s V-locity software installed, your Windows computer can process significantly more I/Os per Second, handle a much higher throughput of data, and give storage I/O-heavy workloads the opportunity to get significantly more work done in the same amount of time – even when using very fast flash storage.

 

For those of you true ‘techies’ who are as geeky as me, read on, and I will describe the testing methodology and results in more detail.

The storage that I now had in my workstation (and am still happily using!) was a 1 terabyte SM961 Polaris M.2 2280 PCIe 3.0 x4 NVMe solid-state drive (SSD).

 

 Is it as fast as it’s made out to be? Well, in this engineer’s opinion – OMG YES!

 

It makes one hell of a difference when compared to spinning disk drives. This is in part because it’s connected to the computer via a PCI Express (PCIe) bus, as opposed to a SATA bus. The bus is what you connect your disk to in the computer, and different types of buses have different capabilities, such as the speed at which data can be transferred. SATA-connected disks are significantly slower than today’s PCIe-connected storage using an NVMe device interface. There is a great Wikipedia article about this if you want to read more:

https://en.wikipedia.org/wiki/NVM_Express

 

To give you an idea of the improvement, though, consider that the Advanced Host Controller Interface (AHCI) used with SATA-connected disks has a single command queue, which can hold 32 commands. That’s up to 32 storage requests at a time, and that was okay for spinning disk technology, because the disks themselves could only cope with a limited number of storage requests at a time.

NVMe, on the other hand, doesn’t have one command queue; it has up to 65,535 queues. AND each of those command queues can itself accommodate up to 65,536 commands. That’s a lot more storage requests that can be processed at the same time! This is really important, because flash storage is capable of processing MANY more storage requests in parallel than its spinning disk cousins. Quite simply, NVMe was needed to really make the most of what flash disk hardware can do. You wouldn’t put a kitchen tap (faucet) on the end of a fire hose and expect the same amount of water to flow through it, right? Same principle!
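To put those queue limits side by side, here is a trivial back-of-the-envelope calculation. It uses only the figures quoted above, nothing product-specific:

```python
# Back-of-the-envelope comparison of maximum outstanding commands,
# using the queue limits quoted above (AHCI vs. the NVMe figures).

AHCI_QUEUES = 1            # AHCI exposes a single command queue
AHCI_QUEUE_DEPTH = 32      # ...with room for 32 commands

NVME_QUEUES = 65_535       # NVMe allows up to 65,535 I/O queues
NVME_QUEUE_DEPTH = 65_536  # ...each holding up to 65,536 commands

ahci_max = AHCI_QUEUES * AHCI_QUEUE_DEPTH
nvme_max = NVME_QUEUES * NVME_QUEUE_DEPTH

print(f"AHCI maximum outstanding commands: {ahci_max}")           # 32
print(f"NVMe maximum outstanding commands: {nvme_max:,}")         # 4,294,901,760
print(f"NVMe can queue roughly {nvme_max // ahci_max:,}x more")   # 134,215,680x
```

Even allowing for the fact that no real workload ever fills those queues, the difference in how much parallel work the interface can accept is enormous.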

As you can probably tell, I’m quite excited by this boost in storage performance. (I’m strange like that!) And, I know I’m getting a little off topic (apologies), so back to the point!

I had this SUPER-FAST storage solution and needed to prove one way or another if Condusiv’s V-locity software could increase the ability of my computer to process even more workload.

Would my computer be able to process more storage I/Os per Second?

Would my computer be able to process a larger amount of storage I/O traffic (megabytes) every second?

 

Testing Methodology

To answer these questions, I took a virtual machine, and cloned it so that I had two virtual machines that were as identical as I could make them. I then installed Condusiv’s V-locity software on both and disabled V-locity on one of the machines, so that it would process storage I/O traffic, just as if V-locity wasn’t installed.

To generate a storage I/O traffic workload, I turned to my old friend IOMETER. For those of you who might not know IOMETER, this is a software utility originally designed by Intel, but it is now open source and available at SourceForge.net. It is designed as an I/O subsystem measurement tool and is great for generating I/O workloads of different types (very customizable!) and measuring how quickly that workload can be processed. Great for testing networks or, in this case, how fast you can process storage I/O traffic.

I configured IOMETER on both machines with the type of workload that one might find on a typical SQL database server. I KNOW, I know, there is no such thing as a ‘typical’ SQL database, but I wanted a storage I/O profile that was as meaningful as possible, rather than a workload that would just make V-locity look good. Here is the actual IOMETER configuration:

Worker 1 – 16 kilobyte I/O requests, 100% random, 33% Write / 67% Read

Worker 2 – 64 kilobyte I/O requests, 100% random, 33% Write / 67% Read
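IOMETER itself is the right tool for this kind of measurement, but purely to illustrate the access pattern described above, here is a minimal Python sketch that issues the same mix of random 16 KB and 64 KB requests at a 67% read / 33% write ratio. The file path, file size and request count are arbitrary illustrative choices, not part of the original test, and unlike IOMETER the sketch goes through the normal file cache rather than issuing unbuffered I/O, so it shows the pattern rather than serving as a benchmark.

```python
import os
import random

# Sketch of the IOMETER-style access pattern described above:
# two request sizes (16 KB and 64 KB), 100% random offsets, 67% reads / 33% writes.
# TEST_FILE, FILE_SIZE and NUM_REQUESTS are arbitrary illustrative values.

TEST_FILE = "iotest.dat"
FILE_SIZE = 256 * 1024 * 1024            # 256 MB test file
NUM_REQUESTS = 10_000
REQUEST_SIZES = [16 * 1024, 64 * 1024]   # Worker 1 and Worker 2 request sizes
READ_RATIO = 0.67                        # 67% reads, 33% writes

def run_workload() -> None:
    # Pre-create the test file so random offsets always land inside it.
    if not os.path.exists(TEST_FILE) or os.path.getsize(TEST_FILE) < FILE_SIZE:
        with open(TEST_FILE, "wb") as f:
            f.truncate(FILE_SIZE)

    with open(TEST_FILE, "r+b") as f:
        for _ in range(NUM_REQUESTS):
            size = random.choice(REQUEST_SIZES)
            offset = random.randrange(0, FILE_SIZE - size)
            f.seek(offset)
            if random.random() < READ_RATIO:
                f.read(size)                  # random read
            else:
                f.write(os.urandom(size))     # random write

if __name__ == "__main__":
    run_workload()
```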

Test Results

[IOMETER result screenshots: V-locity Disabled, V-locity Enabled, Summary]

Conclusion

In this lab test, the presence of V-locity reduced the average amount of time required to process storage I/O requests by around 65%, allowing a greater number of storage I/O requests to be processed per second and a greater amount of data to be transferred.

To prove beyond doubt that it was indeed V-locity that caused the additional storage I/O traffic to be processed, I stopped the V-locity service. This immediately ‘turned off’ all of the RAM caching and other optimization engines that V-locity was providing, and the net result was that the IOPS and throughput dropped back to the baseline, as the underlying storage had to start processing ALL of the storage traffic that IOMETER was generating.

What value is there to reducing storage I/O traffic?

The more you can reduce storage I/O traffic that has to go out and be processed by your disk storage, the more storage I/O headroom you are handing back to your environment for use by additional workloads. It means that your current disk storage can now cope with:

• More computers sharing the storage. Great if you have a Storage Area Network (SAN) underpinning your virtualized environment, for example. More VMs running!

• More users accessing and manipulating the shared storage. The more users you have, the more storage I/O traffic is likely to be generated.

• Greater CPU utilization. CPU speeds and processing capacity keep increasing, and processing power now typically outstrips actual needs. V-locity can help your applications become more productive and use more of that processing power by not having to wait so long on the disk storage layer.

 

If you can achieve this without having to replace or upgrade your storage hardware, it not only increases the return on your current storage hardware investment, but also might allow you to keep that storage running for a longer period of time (if you’re not on a fixed refresh cycle).

Sweat the storage asset!

(I hate that term, but you get the idea)

When you do finally need to replace your current storage, perhaps it won’t be as costly as you thought, because you won’t have to OVER-PROVISION the storage as much to cope with all of the excess, unnecessary storage traffic that Condusiv’s V-locity software can eliminate.

I typically see a storage traffic reduction of at least 25% at customer sites.

AND, I haven’t even mentioned the performance boost that many workloads receive from the RAM caching technology provided by Condusiv’s V-locity software. It is worth remembering that as fast as today’s flash storage solutions are, the RAM that you have in your computers is faster! The greater the percentage of read I/O traffic that you can satisfy from RAM instead of the storage layer, the better performing those storage I/O-hungry applications are likely to be.
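The caching logic inside V-locity is the company’s own, so the following is only a generic illustration of the principle: any read request satisfied from a RAM cache never generates a request to the storage layer at all. This minimal least-recently-used (LRU) block cache sketch, with a hypothetical `read_from_storage` callable standing in for the disk, shows the idea.

```python
from collections import OrderedDict

# Generic illustration of a RAM read cache (NOT V-locity's proprietary logic):
# blocks served from RAM never generate a request to the storage layer.

class BlockReadCache:
    def __init__(self, read_from_storage, capacity_blocks: int = 1024):
        self.read_from_storage = read_from_storage  # callable: block_number -> bytes
        self.capacity = capacity_blocks
        self.cache: "OrderedDict[int, bytes]" = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block_number: int) -> bytes:
        if block_number in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_number)     # mark as most recently used
            return self.cache[block_number]
        self.misses += 1
        data = self.read_from_storage(block_number)  # only misses reach the disk
        self.cache[block_number] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)           # evict least recently used
        return data
```

The higher the cache hit ratio, the smaller the fraction of read traffic that ever reaches the disk, which is exactly why serving reads from RAM still pays off even in front of fast flash.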

What type of applications benefit the most?

In the real world, V-locity is not a silver-bullet for all types of workloads, and I wouldn’t insult your intelligence by saying that it was. If you have some workloads that don’t generate a great deal of storage I/O traffic, perhaps a DNS server, or DHCP server, well, V-locity isn’t likely to make a huge difference. That’s my honest opinion as an IT Engineer.

HOWEVER, if you are using storage I/O-hungry applications, then you really should give it a try.

Here are just some examples of the workloads that thousands of V-locity customers are ‘performance-boosting’ with Condusiv’s I/O reduction and RAM caching technologies:

  • Database solutions such as Microsoft SQL Server, Oracle, MySQL, SQL Express, and others.
  • Virtualization solutions such as Microsoft Hyper-V and VMware.
  • Enterprise Resource Planning (ERP) solutions like Epicor.
  • Business Intelligence (BI) solutions like IBM Cognos.
  • Finance and payroll solutions like SAGE Accounting.
  • Electronic Health Records (EHR) solutions, such as MEDITECH.
  • Customer Relationship Management (CRM) solutions, such as Microsoft Dynamics.
  • Learning Management Systems (LMS) solutions.
  • Not to mention email servers like Microsoft Exchange AND busy file servers.

 

 

Do you use any of these in your IT environment?

 

There are case studies on the Condusiv web site for all of these workload types (and more), here:

http://www.condusiv.com/knowledge-center/case-studies/default.aspx

 

Try it for yourself

You can experience the full power of Condusiv’s V-locity software for yourself, in YOUR Windows environment, within a couple of minutes. Just go to www.condusiv.com/try and get a copy of the fully-featured 30-day trialware. You can check the dashboard reporting after a week or two and see just how much storage I/O traffic has been eliminated, and more importantly, how much storage time has been saved by doing so.

It really is that simple!

You don’t even need to reboot to make the software work. There is no disruption to live running workloads; you can just install and uninstall at will, and it only takes a minute or so.


You will typically start seeing results just minutes after installing.

I hope that this has been interesting and helpful. If you have any questions about the technologies within V-locity or have any questions about testing, feel free to email me directly at sallingham@condusiv.co.uk.

 

I will be delighted to hear from you!

 

 

Undelete Can Do That Too?

by Gary Quan 30. August 2017 04:46

You may have already heard countless customers tout the file recovery features of Condusiv’s Undelete® and how IT Pros use it as a recycle bin on file servers for real-time protection, so they don’t have to dig through backups to recover deleted or overwritten files. Although this is Undelete’s primary function, Undelete provides more than just this. 

What most people do not know is that Undelete also provides features to keep your data secure and to give you visibility into who is deleting files from your file servers.

When a file is deleted, many assume that its data is now safe from being seen by others. Not so fast. When data gets deleted on a Windows volume, the data does not actually get removed. The space where that file data was residing is just marked as available for use, but the original file data is still there and will remain there until that space is overwritten by some other file data. That may or may not happen for quite a while. This means that ‘deleted’ file data could still potentially be read.

So, what do you do if you really want your file data gone when you delete it? Undelete has the answer with two features. The first is “SecureDelete.” When a file is deleted, SecureDelete first overwrites the file to help ensure it is unrecoverable. In fact, this is done by overwriting it with a specific bit pattern specified for this purpose by the U.S. National Security Agency (NSA) for the Department of Defense (DOD). The second feature is “Wipe Free Space”, which overwrites any free space on a selected volume, using the same specific bit patterns as SecureDelete, to clear out any previously written data in that free space.
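To make the idea concrete, here is a minimal sketch of the general secure-delete technique: overwrite the file’s on-disk contents before removing it. This is not Undelete’s implementation, and the three passes used here (zeros, ones, random data) are just a common illustrative pattern, not the exact NSA/DoD-specified pattern the product uses.

```python
import os

# Generic illustration of the secure-delete idea described above:
# overwrite a file's existing contents before unlinking it.
# NOT Undelete's implementation; the pass pattern is illustrative only.

def secure_delete(path: str, passes=(b"\x00", b"\xFF", None)) -> None:
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in passes:
            f.seek(0)
            if pattern is None:
                f.write(os.urandom(length))   # final pass: random bytes
            else:
                f.write(pattern * length)     # fixed bit pattern
            f.flush()
            os.fsync(f.fileno())              # push this pass out to disk
    os.remove(path)                           # now unlink the file

# Example: secure_delete("secret_report.docx")
```

Note that on SSDs, wear-levelling can leave stale copies of data in places a simple overwrite never touches, which is one reason a volume-level feature like Wipe Free Space exists alongside per-file secure deletion.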

Now, with these two features, when you delete a file, you know it is now virtually impossible to read/recover any of that data from that volume.

Along with file security, some customers use Undelete as another security precaution: monitoring how many files are being deleted from file shares and by whom. If they ever detect an abnormally high number of files being deleted, that raises a flag for them to investigate further.

Although Undelete is usually purchased to recover files, others use it to securely delete files and trace any deleted files back to the person who deleted them.

Tags:

Data Protection | File Protection | Undelete

Help! I deleted a file off the network drive!!

by Robin Izsak 31. October 2013 08:01

What if the recycle bin on your clients could be expanded to include file servers? And what if you could enable your users to recover their own files with self-service recovery? You would never have to dig through backups to restore files again, or schedule incessant snapshots to protect data.

One of the most persistent—and annoying—help desk calls is to help users recover files accidentally deleted off network drives, or support users who ‘saved over’ a PowerPoint they need for a meeting—in 15 minutes.

There are some pretty serious holes standing between traditional backup and true continuous data protection: First, any data created between backups might not be recoverable. Second, who wants to dig through backups anyway? Third, you’d have to schedule an insane number of snapshots to protect every version of every file. Fourth, the Windows recycle bin doesn’t catch files deleted off a network drive, which is how most of us work in the real world—networks, clouds—not local drives.

Check out our latest guide that explains the gap between backup and the Windows recycle bin, and how to bridge that gap with Undelete® to ensure continuous data protection and self-service file recovery.

Meet the recycle bin for file servers. You’re welcome.

The Next Generation of Real-time Protection and Instant Recovery Software

by Alex Klein 4. September 2012 04:00

Today we announce the worldwide release of Undelete 10 – our real-time data protection and instant data recovery product. With just a touch of a button, Undelete instantly recovers files from Windows servers and workstations – even files that were deleted before Undelete was installed.

Enterprise IT and data have grown immensely since Condusiv first introduced Undelete over 14 years ago. Regardless of the Windows OS you’re running, and whether it be a physical or virtual environment, Undelete is the one piece of software your company can’t afford to be without – it can truly turn your IT department into a team of heroes.

“I came by Undelete when we had a user delete a whole department worth of files”, says Eric Tremelling, who’s the IT Manager at The Legal Aid Society of Palm Beach County. “It did recover all the files for the whole department.  I was so impressed that I purchased the Undelete Server edition to avoid future problems.  In my opinion, Undelete is one of the best programs we have and saves me lots of potential headaches.” 

“We use a backup system that does nightly backups to tape. If something is deleted from the file server by mistake, I have to go back to the tapes, pull the correct tape, then run a restore on the file, provided there was no corruption or problems with the backup tape. Another issue was that the deleted file could be gone before anyone noticed it was missing, and I may not still have a backup that old to recover from. Furthermore, if the file was created during the day before the backup runs at night, it would be gone because it never got backed up yet. The software has gotten us out of many jams.”

 

Have an Undelete story to share? Leave a comment below – we’d love to hear from you!

Undelete 10 features

  • New One-button Search for Recent Files, which allows the user to locate a file deleted within the last 24 hours or the last week with one click.

  • New Search Wizard, a single-pane view that provides a fast and easy way to find a lost file.

  • New Dynamic User Interface for ease of use and quality of experience.

  • Undelete 10 Server – Protects server files, including those deleted by network clients, from a centralized management console.

  • Undelete 10 Desktop Client – Allows connected laptops, workstations and VMs to recover their own files from remote Undelete 10 Server recovery bins.

  • Undelete 10 Professional – Protects locally stored files and allows files to be recovered from remote Undelete Server recovery bins.

  • Undelete 10 Home – Provides comprehensive protection of locally stored files.

When a file is deleted, it is automatically captured and stored in the Undelete Recovery Bin. Undelete 10 captures all the files the Windows Recycle Bin misses, such as those deleted from shared network folders, deleted from commonly used applications, deleted by the Windows command prompt, or replaced when newer versions of a file are saved. Also, if a file is modified several times between backups or shadow copies, those intermediate versions are not normally preserved. With Undelete, these file versions are saved and are recoverable.

The Server, Professional and Client editions of Undelete let you see the contents of Recovery Bins on remote computers like file servers, allowing IT or users to recover their deleted files in seconds anywhere across the network with a single click of a button. It’s no longer necessary to spend hours searching backup tapes or Windows Shadow copies when a user accidentally deletes a file from the server.

Undelete can also restore files previously purged from the Recycle Bin or the Undelete Recovery Bin – even if they were deleted before Undelete was installed.

The Condusiv Undelete 10 “Set it and Forget It”® file recovery system runs on all Windows platforms, including VMware and Microsoft Hyper-V environments. Undelete 10 also supports Exchange, SQL, and SharePoint.

Undelete 10 Server edition supports Windows Server 2008/2008 R2, Windows Server 2003,  Windows XP, Windows Vista, and Windows 7.

Undelete 10 Client, Professional and Home Editions support Windows XP, Windows Vista, and Windows 7.

In compliance with corporate governance or governmental regulatory requirements for secure data deletion, Undelete provides an electronic data shredder: SecureDelete® 2.0. Using a bit pattern specified by the National Security Agency (NSA) for the Department of Defense, SecureDelete not only deletes a file but overwrites the disk space the file previously occupied, making it virtually impossible for anyone to access that data.
