Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

What is unnecessary I/O? Why does it exist?

by Brian Morin 5. November 2013 07:09

Modern IT infrastructures deal with enough I/O traffic as it is. The last thing they need is unnecessary I/O.

It's no surprise that IT struggles with performance problems caused by the tidal wave of data that travels back and forth across the infrastructure in the form of read and write I/O. Organizations that have virtualized find themselves pouring more and more money into the storage backend just to keep up with I/O demand. The negative impact that virtualization has had on the storage layer is certainly felt, but it isn't well understood.

With multiple VMs accessing the same bytes of data, and with the “I/O blender effect” further randomizing the I/O streams from those VMs before they funnel down to storage, a large share of I/O cycles is completely unnecessary. In a world where organizations are already crushed under the weight of I/O demand, the last thing they need from their IT infrastructure is lost cycles spent processing unnecessary I/O.
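To picture what the I/O blender does, here's a minimal sketch (illustrative only, with made-up block addresses): each VM reads its own virtual disk sequentially, but once the hypervisor interleaves those requests into a single queue, the storage array sees a stream that jumps all over the address space.

```python
# Illustrative sketch only (made-up block addresses): three VMs each read
# their own region of the datastore sequentially, but the hypervisor
# interleaves the requests into one queue before they reach shared storage.
import itertools

def vm_stream(start_lba, length):
    """One VM reading its own region of the datastore sequentially."""
    return iter(range(start_lba, start_lba + length))

streams = [vm_stream(start, 5) for start in (0, 10_000, 20_000)]

# Round-robin the per-VM requests down a single queue ("the blender").
blended = list(itertools.chain.from_iterable(zip(*streams)))

print(blended)
# [0, 10000, 20000, 1, 10001, 20001, ...] -- each VM was sequential,
# but the storage array sees large jumps between consecutive requests.
```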

Even though this random I/O chaos can be easily prevented in the virtual machine layer before it ever leaves the gates, organizations continue to invest in more hardware to battle an increasingly complex problem.

Check out this new paper from IDG, Eliminate the Unnecessary: Unnecessary I/O and its Impact on Performance. You'll understand unnecessary I/O, why it matters, and how getting rid of it will solve performance problems—overnight—without more hardware.

Big Data | Cloud | IntelliMemory | IntelliWrite | SAN | virtualization | V-Locity | VMware

Help! I deleted a file off the network drive!!

by Robin Izsak 31. October 2013 08:01

What if the recycle bin on your clients could be expanded to include file servers? And what if you could enable your users to recover their own files with self-service recovery? You would never have to dig through backups to restore files again, or schedule incessant snapshots to protect data.

One of the most persistent—and annoying—help desk calls is helping users recover files accidentally deleted off network drives, or supporting users who ‘saved over’ a PowerPoint they need for a meeting—in 15 minutes.

When it comes to true continuous data protection, there are some pretty serious holes in the usual approaches: First, any data created between backups might not be recoverable. Second, who wants to dig through backups anyway? Third, you'd have to schedule an insane number of snapshots to protect every version of every file. Fourth, the Windows recycle bin doesn't catch files deleted off a network drive—and networks and clouds, not local drives, are how most of us work in the real world.

Check out our latest guide that explains the gap between backup and the Windows recycle bin, and how to bridge that gap with Undelete® to ensure continuous data protection and self-service file recovery.

Meet the recycle bin for file servers. You’re welcome.

NEW V-locity 4 VM Accelerator Improves VM Performance by up to 50%

by Jeff Medina 10. December 2012 10:00

Today we are very excited to announce the release of V-locity 4 VM Accelerator. With this latest release, V-locity increases VM and application performance by up to 50% and does so without any additional storage hardware.

Let’s face it - in today’s world of virtual environments, we generate a tremendous amount of data and it’s only the beginning. In fact, findings included in a recent study by IDC titled “Extracting Value from Chaos” predict that in the next ten years we will create 50 times more information and 75 times more files.

The impact of this data explosion on server virtualization can often lead to I/O bottlenecks. This is because a physical server running multiple virtual machines (VMs) must often carry out far more I/O operations than one server running a single workload, and typical virtualization environments emulate I/O devices that run less efficiently than native I/O devices.

In essence, virtualization acts like a funnel, combining and mixing many disparate I/O streams and sending to the disk what becomes a very random I/O pattern. To make matters worse, the more VMs are added, the more the issue is compounded as more I/O is "randomized." All of this has a very negative effect on storage performance, and renders time-honored techniques such as read-ahead buffers and caching algorithms far less effective than in conventional physical environments.
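Here's a rough, back-of-the-envelope sketch (assumed workloads, not vendor measurements) of why a simple read-ahead buffer loses its punch once several sequential streams are blended into one:

```python
# Rough sketch with assumed workloads (not vendor measurements): a simple
# read-ahead window satisfies almost every request of a sequential stream,
# and almost none once several streams are interleaved together.
def read_ahead_hit_rate(lbas, window=8):
    hits = sum(1 for prev, cur in zip(lbas, lbas[1:]) if 0 < cur - prev <= window)
    return hits / (len(lbas) - 1)

sequential = list(range(300))                         # one workload, one host
blended = [lba for trio in zip(range(0, 100),         # same total work from
                               range(10_000, 10_100), # three VMs, interleaved
                               range(20_000, 20_100)) for lba in trio]

print(f"sequential: {read_ahead_hit_rate(sequential):.0%}")  # 100%
print(f"blended:    {read_ahead_hit_rate(blended):.0%}")     # 0%
```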

Storage I/O is the most critical issue in a virtualized environment. It drives organizations to spend heavily on storage, purchasing more and more disk spindles yet often using only a fraction of their capacity because of performance constraints. The result is that some applications are deemed unsuitable for virtualization due to performance bottlenecks in the storage infrastructure, even though a properly tuned storage environment might have accommodated them. So what’s the alternative? The answer is V-locity 4 VM Accelerator.

V-locity 4 VM Accelerator provides:

  • Increased application performance up to 50%
  • Up to 50% faster access to frequently accessed files
  • Faster I/O performance without the cost of additional storage hardware
  • Increased VM density per physical server up to 50%
  • Extended hardware lifespan by eliminating unnecessary I/Os
  • Automatic and real-time operation for true “Set It and Forget It®” management 

What makes V-locity 4 so effective is its powerful toolkit of proactive technologies, including IntelliWrite®, V-Aware®, CogniSAN®, InvisiTasking® and the new IntelliMemory® RAM caching technology.

New! IntelliMemory® Caching Technology
IntelliMemory intelligent caching technology caches active data in available server memory, improving I/O response time by up to 50% or more while keeping unnecessary I/O operations from ever reaching the network or storage.
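As a rough illustration of the general idea behind RAM caching (a minimal sketch, not how IntelliMemory itself is implemented), a read cache keeps hot blocks in server memory so repeat reads never become I/O to the network or storage:

```python
# Minimal sketch of a RAM read cache (illustration only, not IntelliMemory):
# serve frequently read blocks from memory so they never become I/O
# to the network or the storage backend.
from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity_blocks, read_from_disk):
        self.capacity = capacity_blocks
        self.read_from_disk = read_from_disk   # fallback used on cache misses
        self.cache = OrderedDict()             # LRU order: oldest entry first

    def read(self, lba):
        if lba in self.cache:                  # hot block: no disk I/O at all
            self.cache.move_to_end(lba)
            return self.cache[lba]
        data = self.read_from_disk(lba)        # cold block: one real I/O
        self.cache[lba] = data
        if len(self.cache) > self.capacity:    # evict the least-recently-used block
            self.cache.popitem(last=False)
        return data
```

A real product layers much more on top of this (what to cache, how much memory to borrow, and when to give it back), but the payoff is the same: repeat reads of hot data never leave the server.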

Improved! IntelliWrite® Technology
IntelliWrite automatically prevents the operating system from breaking files into pieces and writing those pieces in a performance-penalized manner. This proactive approach improves performance by up to 50% or more while avoiding any negative impact on snapshots, replication, data deduplication, or thin provisioning growth. Because this prevention happens at the server level, the network and shared storage simply have fewer I/O operations to transfer and process.
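One general technique for keeping a growing file in one piece (shown here purely as an illustration, not as IntelliWrite's actual mechanism) is to tell the filesystem the file's final size up front, so it can reserve one contiguous run of space instead of extending the file piecemeal as each chunk arrives:

```python
# General technique only, not IntelliWrite itself: reserve a file's full size
# before writing so the filesystem can allocate one contiguous extent instead
# of growing the file piecemeal as each chunk arrives.
import os

def write_preallocated(path, chunks, total_size):
    with open(path, "wb") as f:
        f.truncate(total_size)                 # reserve the full extent up front
        if hasattr(os, "posix_fallocate"):     # on Linux, force real allocation
            os.posix_fallocate(f.fileno(), 0, total_size)
        f.seek(0)
        for chunk in chunks:                   # data still arrives over time,
            f.write(chunk)                     # but lands in the reserved space
```

The function name and arguments are hypothetical; the point is simply that allocating once, up front, gives the filesystem a chance to keep the file contiguous.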

New! Performance Benefit Analyzer
The Performance Benefit Analyzer helps document the performance benefits of V-locity. It takes a baseline of your current system performance, then compares those results with performance after V-locity has been running, producing a detailed report of the specific improvements and benefits to your system.

V-Aware® Technology
V-Aware detects external resource usage from other virtual machines on the virtual platform and eliminates resource contention that might slow performance.

CogniSAN® Technology
CogniSAN detects external resource usage within a shared storage system, such as a SAN, and allows for transparent optimization by not competing for resources utilized by other VMs over the same storage infrastructure. And it does this without intruding in any way into SAN-layer operations.

InvisiTasking® Technology
InvisiTasking allows all the V-locity 4 "background" operations within the VM to run with zero resource impact on current production.

Set It and Forget It®
Automatic and real-time operation.

For more details and a FREE trial, visit www.condusiv.com/products/v-locity or call a sales representative at 1-800-829-6468.

The Secret to Optimizing the Big Virtual Data Explosion

by Alex Klein 29. May 2012 09:21

In today’s day and age, many SMBs and enterprise-level businesses are “taking to the skies” with cloud computing. These companies realize that working in the cloud comes with many benefits, including reduced cost of in-house hardware, ease of implementation, and seamless scalability. However, as you will discover below, performance-impacting file fragmentation, and the need for defragmentation, still exists and is actually amplified in these environments, so it must now be addressed with a two-fold proactive and preventative solution.

Let’s face it – we generate a tremendous amount of data and it’s only the beginning. In fact, findings included in a recent study by IDC titled “Extracting Value from Chaos” predict that in the next ten years we will create 50 times more information and 75 times more files. Regardless of destination, most of this data is generated on Windows-based computers, which are known to fragment files. So files become fragmented before they ever reach the cloud: as they are worked with, they get broken up into various pieces and scattered to numerous locations across the hard disk. The result is longer access times for those files and degraded system performance.
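A quick back-of-the-envelope example, using assumed figures rather than measured data, shows why this matters: on a mechanical disk, every extra fragment can add a seek plus rotational delay before the next piece of the file can be read.

```python
# Back-of-the-envelope illustration (assumed figures, not measured data):
# each extra fragment can cost an additional seek + rotational delay.
seek_ms, rotational_ms = 4.0, 3.0   # rough values for a 7200 RPM drive
transfer_ms = 10.0                  # time to actually read the file's data

def read_time_ms(fragments):
    return transfer_ms + fragments * (seek_ms + rotational_ms)

for frags in (1, 50, 500):
    print(f"{frags:>3} fragments: ~{read_time_ms(frags):.0f} ms")
#   1 fragments: ~17 ms
#  50 fragments: ~360 ms
# 500 fragments: ~3510 ms
```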

So how does the above scenario affect the big picture? To understand this, let’s take a closer look at your cloud environment. Your data, and in many cases, much of your infrastructure, has “gone virtual”. Users are able to access applications and work with their data basically anywhere in the world. In such an atmosphere, where the amount of RAM and CPU power available is dramatically increased and files are no longer stored locally, how can the need for defragmentation still be an issue?

Well, what do you think happens when all this fragmented data comes together? The answer is an alarming amount of fragmented Big Data now sitting on the hard drives of your cloud solution. This causes bottlenecks that can severely impact your mission-critical applications due to the large volume of unnecessary I/O cycles needed to process the broken-up information.

At the end of the day, traditional approaches to defragmentation just aren’t going to cut it anymore, and it’s going to take the latest software technology implemented on both sides of the cloud to get these issues resolved. It starts with software such as Diskeeper 12, installed on every local workstation and server, to prevent fragmentation at its core. Added to this is deploying V-locity software across your virtualized network. This one-two punch of defragmentation software addresses I/O performance concerns, optimizes productivity, and will push cloud computing further than you ever thought possible. In these exciting times of emerging new technologies, cloud computing can send your business soaring or keep it grounded - the choice is up to you.

Big Data | Cloud | Defrag | Diskeeper | virtualization | V-Locity
