Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

A Blog About Bloggers Who Blog About Us

by Jerry Baldwin 7. February 2013 09:49

On contemplating the impact of his calculating engine, the world’s first computer, Charles Babbage wrote “In turning from the smaller instruments in frequent use to the larger and more important machines, the economy arising from the increase of velocity becomes more striking.” He said that in 1832.

I mention this because the idea holds true today—the bigness of everything, the immediacy of everything, the pace of everything—the greater the increase from one state to another, the more striking the difference. And that’s exactly why—when we put V-locity 4 trialware into the hands of virtualization wizards to test in their lairs—we want them to really, really put it through the wringer. The heavier the workload, the greater the application demand, the more striking the results.

Recently two virtualization pros got their hands on the V-locity 4 30-day trial, set up rigorous testing, and blogged the entire experience:

VMware technical architect amazed by V-locity 4 results

Another virtualization blogger amazed by V-locity 4

 

Tags: Big Data | Hyper-V | IntelliMemory | virtualization | V-Locity | VMware

NEW V-locity 4 VM Accelerator Improves VM Performance by up to 50%

by Jeff Medina 10. December 2012 10:00

Today we are very excited to announce the release of V-locity 4 VM Accelerator. With this latest release, V-locity increases VM and application performance by up to 50% and does so without any additional storage hardware.

Let’s face it - in today’s world of virtual environments, we generate a tremendous amount of data and it’s only the beginning. In fact, findings included in a recent study by IDC titled “Extracting Value from Chaos” predict that in the next ten years we will create 50 times more information and 75 times more files.

This data explosion often leads to I/O bottlenecks in server virtualization. This is because a physical server running multiple virtual machines (VMs) must often carry out far more I/O operations than a server running a single workload, and because typical virtualization environments emulate I/O devices that run less efficiently than native I/O devices.

In essence, virtualization acts like a funnel, combining and mixing many disparate I/O streams and sending what becomes a very random I/O pattern out to the disk. To make matters worse, the more VMs are added, the more the issue is compounded as more I/O is "randomized." All of this has a very negative effect on storage performance, and it renders time-honored techniques such as read-ahead buffers and caching algorithms far less effective than in conventional physical environments.
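
To picture the effect, here is a minimal sketch (illustrative Python, not Condusiv code) of how several perfectly sequential per-VM streams become a near-random pattern once the hypervisor interleaves them:

import random

def vm_stream(start_lba, length):
    """One VM's workload: perfectly sequential logical block addresses."""
    return list(range(start_lba, start_lba + length))

def sequential_fraction(lbas):
    """Fraction of requests that directly follow the previous block."""
    return sum(b == a + 1 for a, b in zip(lbas, lbas[1:])) / (len(lbas) - 1)

streams = [vm_stream(i * 100_000, 1_000) for i in range(8)]  # eight VMs
print(f"single VM: {sequential_fraction(streams[0]):.0%} sequential")

# The hypervisor "funnel": pull the next request from a random VM each time.
blended, cursors = [], [iter(s) for s in streams]
while cursors:
    cursor = random.choice(cursors)
    try:
        blended.append(next(cursor))
    except StopIteration:
        cursors.remove(cursor)

print(f"blended:   {sequential_fraction(blended):.0%} sequential")  # near 0%

Read-ahead and caching heuristics that key off sequential access find almost nothing to work with in the blended stream.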

Storage I/O is the most critical issue in a virtualized environment. It can drive organizations to spend a great deal on storage, purchasing more and more disk spindles yet using only a fraction of their capacity because of performance issues. The outcome is that some applications are deemed unfit for virtualization because of bottlenecks in the storage infrastructure, when a properly tuned storage environment might have accommodated them. So what's the alternative? The answer is V-locity 4 VM Accelerator.

V-locity 4 VM Accelerator provides:

  • Increased application performance up to 50%
  • Up to 50% faster access to frequently accessed files
  • Faster I/O performance without the cost of additional storage hardware
  • Increased VM density per physical server up to 50%
  • Extended hardware lifespan by eliminating unnecessary I/Os
  • Automatic and real-time operation for true “Set It and Forget It®” management 

What makes V-locity 4 so effective is its powerful toolkit of proactive technologies, including IntelliWrite®, V-Aware®, CogniSAN®, InvisiTasking® and the new IntelliMemory® RAM caching technology.

New! IntelliMemory® Caching Technology
IntelliMemory intelligent caching technology caches active data in server memory, improving I/O response time by up to 50% or more while keeping unnecessary I/O operations from ever reaching the network or storage.
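
The caching principle itself is simple to sketch. The following illustrative Python (not IntelliMemory's actual algorithm, whose block selection is proprietary) shows how serving hot blocks from RAM absorbs I/O before it reaches the network or array:

import random
from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache: repeat reads of hot blocks never leave RAM."""
    def __init__(self, capacity_blocks, backend_read):
        self.cache = OrderedDict()        # block -> data, in LRU order
        self.capacity = capacity_blocks
        self.backend_read = backend_read  # called only on a miss
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)  # mark as recently used
            self.hits += 1
            return self.cache[block]
        self.misses += 1                   # only misses reach the storage
        data = self.backend_read(block)
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict the least recently used
        return data

# Skewed workload: 90% of reads go to a small hot set, as with active VM data.
cache = ReadCache(128, backend_read=lambda b: f"data-{b}")
for _ in range(10_000):
    hot = random.random() < 0.9
    cache.read(random.randrange(100) if hot else random.randrange(100_000))
print(f"I/Os absorbed by RAM: {cache.hits / (cache.hits + cache.misses):.0%}")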

Improved! IntelliWrite® Technology
IntelliWrite automatically prevents the operating system from breaking files into pieces and writing those pieces in a performance-penalized manner. This proactive approach improves performance by up to 50% or more while preventing any negative impact on snapshots, replication, data deduplication, or thin provisioning growth. Because this prevention happens at the server level, the network and shared storage simply have fewer I/O operations to transfer and process.
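
IntelliWrite does its work transparently inside Windows, but the underlying idea, giving the file system enough information to choose one contiguous run instead of extending a file piecemeal, can be sketched at the application level. In this hypothetical Python example, declaring the final size before writing gives the NTFS allocator the chance to reserve a single extent:

def write_preallocated(path, chunks, total_size):
    """Reserve the file's full size up front, then fill it in."""
    with open(path, "wb") as f:
        f.truncate(total_size)  # extend to final size: one allocation decision
        f.seek(0)
        for chunk in chunks:    # appends no longer force piecemeal extension
            f.write(chunk)

# Example: write 10 MB in 4 KB chunks without growing the file 4 KB at a time.
chunk = b"\x00" * 4096
write_preallocated("sample.bin", (chunk for _ in range(2_560)), 2_560 * 4096)

The difference is that IntelliWrite applies this kind of foresight automatically, to every write, without any application changes.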

New! Performance Benefit Analyzer
The Performance Benefit Analyzer helps document the performance benefits of V-locity. It samples your current system performance, then compares those results against performance measured after V-locity has been running, producing a detailed report of the specific improvements and benefits to your system.
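
The analyzer produces its report for you automatically; conceptually, the comparison it draws is a simple before-and-after delta over disk counters. A hypothetical Python sketch, with made-up numbers purely to show the arithmetic:

def improvement(before, after, lower_is_better=True):
    """Percent improvement between two averaged counter values."""
    change = (before - after) if lower_is_better else (after - before)
    return 100.0 * change / before

# Made-up sample averages; the real analyzer gathers these for you.
baseline = {"Split IO/Sec": 220.0, "Avg. Disk Queue Length": 3.4}
with_vlocity = {"Split IO/Sec": 40.0, "Avg. Disk Queue Length": 1.0}

for counter, before in baseline.items():
    print(f"{counter}: {improvement(before, with_vlocity[counter]):.0f}% improvement")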

V-Aware® Technology
V-Aware detects external resource usage from other virtual machines on the virtual platform and eliminates resource contention that might slow performance.

CogniSAN® Technology
CogniSAN detects external resource usage within a shared storage system, such as a SAN, and allows for transparent optimization by not competing for resources utilized by other VMs over the same storage infrastructure. And it does this without intruding in any way into SAN-layer operations.

InvisiTasking® Technology
InvisiTasking allows all the V-locity 4 "background" operations within the VM to run with zero resource impact on current production.

Set It and Forget It®
Automatic and real-time operation.

For more details and a FREE trial, visit www.condusiv.com/products/v-locity or call a sales representative at 1-800-829-6468.

The Secret to Optimizing the Big Virtual Data Explosion

by Alex Klein 29. May 2012 09:21

Today, many SMBs and enterprise-level businesses are "taking to the skies" with cloud computing. These companies realize that working in the cloud comes with many benefits, including reduced cost of in-house hardware, ease of implementation and seamless scalability. However, as you will read on and discover, performance-impacting file fragmentation, and the need for defragmentation, still exists in these environments; in fact, it is amplified. It must therefore be addressed with a two-fold proactive and preventative solution.

Let’s face it – we generate a tremendous amount of data and it’s only the beginning. In fact, findings included in a recent study by IDC titled “Extracting Value from Chaos” predict that in the next ten years we will create 50 times more information and 75 times more files. Regardless of destination, most of this data is generated on Windows-based computers, which are known to fragment files. As files are worked with, they get broken up into pieces and scattered to numerous locations across the hard disk, so they become fragmented before ever reaching the cloud. The result is longer access times for those files and degraded system performance.
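
Why do those scattered pieces cost so much time? A back-of-envelope model (illustrative Python with assumed, not measured, drive characteristics) makes the arithmetic plain: on a rotating disk, every extra fragment adds roughly one seek plus rotational latency before the transfer can resume.

SEEK_MS = 8.0           # assumed average seek + rotational latency
THROUGHPUT_MBS = 150.0  # assumed sequential transfer rate

def read_time_ms(file_mb, fragments):
    """Total read time: one seek per fragment, plus the raw transfer."""
    return fragments * SEEK_MS + (file_mb / THROUGHPUT_MBS) * 1000

for fragments in (1, 50, 500):
    print(f"100 MB file in {fragments:>3} fragment(s): "
          f"{read_time_ms(100, fragments):,.0f} ms")
# 1 fragment: ~675 ms; 500 fragments: ~4,667 ms for the very same data.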

So how does the above scenario affect the big picture? To understand this, let’s take a closer look at your cloud environment. Your data, and in many cases, much of your infrastructure, has “gone virtual”. Users are able to access applications and work with their data basically anywhere in the world. In such an atmosphere, where the amount of RAM and CPU power available is dramatically increased and files are no longer stored locally, how can the need for defragmentation still be an issue?

Well, what do you think happens when all this fragmented data comes together? The answer is an alarming amount of fragmented Big Data that’s now sitting on the hard drives of your cloud solution. This causes bottlenecks that can severely impact your mission-critical applications due to the large-scale unnecessary I/O cycles needed to process the broken up information.

At the end of the day, traditional approaches to defragmentation just aren’t going to cut it anymore; it’s going to take the latest software technology, implemented on both sides of the cloud, to get these issues resolved. It starts with software such as Diskeeper 12, installed on every local workstation and server, to prevent fragmentation at its core. Added to this is deploying V-locity software across your virtualized network. This one-two punch of defragmentation software addresses I/O performance concerns, optimizes productivity and will push cloud computing further than you ever thought possible. In these exciting times of emerging new technologies, cloud computing can send your business soaring or keep it grounded; the choice is up to you.

Tags: Big Data | Cloud | Defrag | Diskeeper | virtualization | V-Locity

Evaluating IntelliWrite In Your Environment

by Damian 1. March 2012 10:18

IntelliWrite technology has been around for about two years now, optimizing literally millions of systems worldwide. It seamlessly integrates with Windows, delivering optimized writes upon initial I/O (no need for additional, after-the-fact file movement). What does that translate to? Actual fragmentation prevention.

Interestingly, we do occasionally get asked how it holds up against modern storage technologies:

“Don’t the latest SANs optimize themselves?”

“Do I really need this on my VMs? They aren’t physical hard drives, you realize…”

Or even…

“I don’t need to defragment my SAN-hosted VMs.”

Now, there are some factors which must be considered when you’re looking at optimizing I/O in your infrastructure:

  • I/O from Windows consists of reads and writes abstracted at a higher layer, even when running directly over a bare-metal disk.
  • Due to the way current Windows file systems are structured, I/O can be greatly constrained by file fragmentation, no matter what storage lies underneath it.
  • Fragmentation in Windows means more I/O requests from Windows: even if files are stored perfectly contiguously at the SAN level, Windows still has to send one request per fragment it sees at its own level (see the sketch after this list).
  • File fragmentation is not the same as block-level (read: SAN-level) fragmentation. Many SAN utilities resolve issues of block-level fragmentation admirably; they do not address file fragmentation.
  • Finally, and as noted above, IntelliWrite prevents fragmentation in real time by improving Windows’ “Best Fit” file write logic. This means solving file fragmentation with no additional writes that could create issues for SAN de-duplication or various copy-on-write data redundancy measures.
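
Here is a minimal sketch (a simulation, not a real I/O trace) of the third point above: the number of requests Windows issues tracks the extent map Windows sees, regardless of how the SAN lays the blocks out underneath.

def windows_read_requests(extent_map):
    """NTFS issues one read per extent (fragment) in its own file map."""
    return [(start_cluster, length) for start_cluster, length in extent_map]

contiguous = [(0, 2048)]                          # one 2048-cluster extent
fragmented = [(i * 4096, 8) for i in range(256)]  # same data in 256 extents

print(len(windows_read_requests(contiguous)), "request when contiguous")
print(len(windows_read_requests(fragmented)), "requests when fragmented")
# Even if the SAN stores both layouts identically, it must still receive,
# queue, and complete 256 separate requests in the fragmented case.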

We performed testing with a customer recently in order to validate the benefits of IntelliWrite over cutting-edge storage. This customer’s SAN array is less than a year old, and while we don’t want to go into specifics in order to avoid seeming partial, it’s from one of today’s leading SAN vendors.

Testing involved an apples-to-apples comparison on a production VM hosted on the SAN. A non-random workload was generated three times, recording Windows-level file fragmentation, several PerfMon metrics, and the time to complete the workload. The test was then repeated three times with IntelliWrite enabled on the same VM's test volume.
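
If you want to run a comparable before/after test yourself, the metrics we recorded are standard PerfMon counters. A hedged Python sketch using the built-in Windows typeperf tool (drive letter E: and the sample counts are placeholders for your own test volume and workload duration):

import subprocess

COUNTERS = [
    r"\LogicalDisk(E:)\Split IO/Sec",
    r"\LogicalDisk(E:)\Avg. Disk Queue Length",
]

def capture(outfile, seconds=300):
    """Sample the counters once per second while the workload runs."""
    subprocess.run(
        ["typeperf", *COUNTERS,
         "-si", "1",           # sample interval: 1 second
         "-sc", str(seconds),  # number of samples to collect
         "-f", "CSV", "-o", outfile, "-y"],
        check=True,
    )

capture("baseline.csv")      # workload running, IntelliWrite disabled
capture("intelliwrite.csv")  # same workload, IntelliWrite enabled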

Here were the results:

[Charts: Windows file fragmentation, Split IO/sec, and Avg. Disk Queue Length, before and after IntelliWrite]

The breakdown:

  • Fragmentation reduction with IntelliWrite: 89%
  • Split IO/sec reduction with IntelliWrite: 81%
  • Avg. Disk Queue Length reduction with IntelliWrite: 71%

…and with the improvement in these disk performance metrics, the overall time to complete the same file operations was reduced by 48%.

The conclusion? If you were asking the same sorts of questions posed earlier, evaluate IntelliWrite for yourself. Remember, the results above were recorded on contemporary storage hardware; the older your storage equipment, the greater the improvement in application performance you can expect from investing in optimization. Can you afford not to get maximum performance out of your infrastructure and application investments?

The evaluation is quick and fully transparent. Call today to speak with a representative about evaluating Diskeeper or V-locity in your environment.

Tags: Diskeeper | IntelliWrite | SAN | V-Locity

Webinar: Physical vs. Virtual Bottlenecks: What You Really Need To Know

by Damian 20. February 2012 07:05

Diskeeper Corporation recently delivered a live webinar hosted by Ziff Davis Enterprise. The principal topics covered were:

  • Measuring performance loss in Windows over SAN
  • Identifying client-side performance bottlenecks in private clouds
  • Expanding performance awareness to the client level
  • The greatest and often-overlooked performance issue in a virtual ecosystem

The webinar was co-hosted by:

  • Stephen Deming, Microsoft Partner Solution Advisor
  • Damian Giannunzio, Diskeeper Corporation Field Sales & Application Engineer

Don't miss out on this critical data! If you missed the webinar, you can view the recorded version online here.

Here are some additional, relevant resources:

White Paper: Diskeeper 2011: Improving the Performance of SAN Storage

White Paper: Increasing Efficiency in the IT Environment

White Paper: Inside Diskeeper 2011 with IntelliWrite

White Paper: Running Diskeeper and V-locity on SAN Devices 
