Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

A Blog About Bloggers Who Blog About Us

by Jerry Baldwin 7. February 2013 09:49

On contemplating the impact of his calculating engine, the world’s first computer, Charles Babbage wrote “In turning from the smaller instruments in frequent use to the larger and more important machines, the economy arising from the increase of velocity becomes more striking.” He said that in 1832.

I mention this because the idea holds true today—the bigness of everything, the immediacy of everything, the pace of everything—the greater the increase from one state to another, the more striking the difference. And that’s exactly why—when we put V-locity 4 trialware into the hands of virtualization wizards to test in their lairs—we want them to really, really put it through the wringer. The heavier the workload, the greater the application demand, the more striking the results.

Recently two virtualization pros got their hands on the V-locity 4 30-day trial, set up rigorous testing, and blogged the entire experience:

VMware technical architect amazed by V-locity 4 results

Another virtualization blogger amazed by V-locity 4

 


Big Data | Hyper-V | IntelliMemory | virtualization | V-Locity | VMware

NEW V-locity 4 VM Accelerator Improves VM Performance by up to 50%

by Jeff Medina 10. December 2012 10:00

Today we are very excited to announce the release of V-locity 4 VM Accelerator. With this latest release, V-locity increases VM and application performance by up to 50% and does so without any additional storage hardware.

Let’s face it - in today’s world of virtual environments, we generate a tremendous amount of data and it’s only the beginning. In fact, findings included in a recent study by IDC titled “Extracting Value from Chaos” predict that in the next ten years we will create 50 times more information and 75 times more files.

The impact of this data explosion on server virtualization can often lead to I/O bottlenecks. This is because a physical server running multiple virtual machines (VMs) must often carry out far more I/O operations than one server running a single workload, and typical virtualization environments emulate I/O devices that run less efficiently than native I/O devices.

In essence, virtualization acts like a funnel, combining and mixing many disparate I/O streams and sending out to the disk what becomes a very random I/O pattern. To make matters worse, every VM that is added compounds the issue, as even more I/O is "randomized." All of this has a very negative effect on storage performance, and it renders time-honored techniques such as read-ahead buffers and caching algorithms far less effective than in conventional physical environments.
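To picture that "I/O blender" in action, here is a minimal, purely illustrative Python sketch (the VM names and block numbers are invented for the example): each guest issues perfectly sequential reads, yet the combined stream that reaches the shared array hops all over the disk.

    import itertools

    def vm_sequential_reads(vm_name, start_block, count):
        """One guest issuing a perfectly sequential read stream."""
        for block in range(start_block, start_block + count):
            yield (vm_name, block)

    # Three guests, each sequential within its own region of the shared datastore.
    streams = [
        vm_sequential_reads("vm-a", 0, 4),
        vm_sequential_reads("vm-b", 10_000, 4),
        vm_sequential_reads("vm-c", 50_000, 4),
    ]

    # The hypervisor services the guests in turn, so each VM's ordering survives,
    # but the combined stream that hits the array jumps across the disk.
    for vm, block in itertools.chain.from_iterable(zip(*streams)):
        print(f"{vm} -> block {block}")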

Storage I/O is the most critical issue in a virtualized environment. It can drive organizations to spend a great deal on storage, purchasing more and more disk spindles yet using only a fraction of their capacity because of performance issues. Worse, some applications get written off as impossible to virtualize because of bottlenecks in the storage infrastructure, when a properly tuned storage environment might have accommodated them. So what's the alternative? The answer is V-locity 4 VM Accelerator.

V-locity 4 VM Accelerator provides:

  • Increased application performance up to 50%
  • Up to 50% faster access to frequently accessed files
  • Faster I/O performance without the cost of additional storage hardware
  • Increased VM density per physical server up to 50%
  • Extended hardware lifespan by eliminating unnecessary I/Os
  • Automatic and real-time operation for true “Set It and Forget It®” management 

What makes V-locity 4 so effective is its powerful toolkit of proactive technologies, including IntelliWrite®, V-Aware®, CogniSAN®, InvisiTasking® and the new IntelliMemory® RAM caching technology.

New! IntelliMemory® Caching Technology
IntelliMemory intelligent caching technology serves active data from server memory, improving I/O response time by up to 50% or more while keeping unnecessary I/O operations from ever reaching the network or storage.
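Condusiv doesn't publish IntelliMemory's internals, but the general shape of a server-side RAM read cache is easy to sketch. The toy below is our own illustration, not the product: it assumes a simple LRU eviction policy and an arbitrary backing-read function.

    from collections import OrderedDict

    class RamReadCache:
        """Toy LRU read cache: hot blocks are served from memory,
        so repeated reads never reach the network or storage."""

        def __init__(self, capacity_blocks, backing_read):
            self.capacity = capacity_blocks
            self.backing_read = backing_read      # function(block_id) -> bytes
            self.cache = OrderedDict()
            self.hits = self.misses = 0

        def read(self, block_id):
            if block_id in self.cache:
                self.hits += 1
                self.cache.move_to_end(block_id)  # mark as most recently used
                return self.cache[block_id]
            self.misses += 1
            data = self.backing_read(block_id)    # only misses touch storage
            self.cache[block_id] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict the least recently used
            return data

    # Repeated reads of "active data" hit RAM instead of the disk.
    cache = RamReadCache(capacity_blocks=2, backing_read=lambda b: f"block-{b}".encode())
    for b in [1, 2, 1, 1, 3, 1]:
        cache.read(b)
    print(cache.hits, cache.misses)               # 3 hits, 3 misses in this toy run

The more of the working set that fits in RAM, the larger the share of reads that never become network or storage I/O at all, which is where the response-time gains come from.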

Improved! IntelliWrite® Technology
IntelliWrite automatically prevents the operating system from breaking files into pieces and writing those pieces in a performance-penalized manner. This proactive approach improves performance up to 50% or more while avoiding any negative impact on snapshots, replication, data deduplication or thin provisioning growth. Because this prevention happens at the server level, the network and shared storage simply have fewer I/O operations to transfer and process.
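The heuristics inside IntelliWrite are proprietary, but the write-time principle the paragraph describes, giving the filesystem a file's full size before writing so it can allocate one contiguous run instead of growing the file piecemeal, can be sketched roughly as follows. The helper names, chunk size and preallocation call are our own illustration, not Condusiv's code (os.posix_fallocate is a Unix call; on Windows the sketch falls back to a plain size hint).

    import os

    CHUNK = b"x" * 65_536          # illustrative 64 KB writes

    def write_with_preallocation(path, chunks):
        """Tell the filesystem the final size up front so it can pick one
        contiguous run, rather than allocating a new extent per append."""
        total = sum(len(c) for c in chunks)
        with open(path, "wb") as f:
            try:
                os.posix_fallocate(f.fileno(), 0, total)   # Unix-only preallocation
            except AttributeError:
                f.truncate(total)                          # fallback: size hint only
            f.seek(0)
            for c in chunks:
                f.write(c)

    def write_incrementally(path, chunks):
        """Grow the file one append at a time; each extension can land wherever
        free space happens to be, which is how files end up in pieces."""
        with open(path, "ab") as f:
            for c in chunks:
                f.write(c)

    write_with_preallocation("prealloc.bin", [CHUNK] * 16)
    write_incrementally("append.bin", [CHUNK] * 16)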

New! Performance Benefit Analyzer
The Performance Benefit Analyzer helps document the performance benefits of V-locity. It samples your current system performance, then compares those results with performance measured after V-locity has been running, producing a detailed report that shows the specific improvements and benefits to your system.
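The analyzer's report format isn't described here, but the before-and-after methodology it implies is straightforward: sample an I/O metric for a baseline window, let the optimization run, sample again and report the delta. A hypothetical sketch (the file path, sample count and timing loop are ours, not the product's):

    import statistics
    import time

    def sample_read_latency(path, samples=50, block=65_536):
        """Average wall-clock time for repeated small reads of one file."""
        timings = []
        with open(path, "rb") as f:
            for _ in range(samples):
                f.seek(0)
                t0 = time.perf_counter()
                f.read(block)
                timings.append(time.perf_counter() - t0)
        return statistics.mean(timings)

    def compare(baseline_s, optimized_s):
        improvement = (baseline_s - optimized_s) / baseline_s * 100
        return (f"baseline {baseline_s * 1e3:.2f} ms, "
                f"after {optimized_s * 1e3:.2f} ms, "
                f"improvement {improvement:.1f}%")

    # Hypothetical usage: measure before enabling V-locity, then again after.
    # before = sample_read_latency(r"D:\data\sample.dat")
    # ... enable the optimization and let it run ...
    # after = sample_read_latency(r"D:\data\sample.dat")
    # print(compare(before, after))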

V-Aware® Technology
V-Aware detects external resource usage from other virtual machines on the virtual platform and eliminates resource contention that might slow performance.

CogniSAN® Technology
CogniSAN detects external resource usage within a shared storage system, such as a SAN, and allows for transparent optimization by not competing for resources utilized by other VMs over the same storage infrastructure. And it does this without intruding in any way into SAN-layer operations.

InvisiTasking® Technology
InvisiTasking allows all the V-locity 4 "background" operations within the VM to run with zero resource impact on current production.

Set It and Forget It®
Automatic and real-time operation.

For more details and a FREE trial, visit www.condusiv.com/products/v-locity or call a sales representative at 1-800-829-6468.

The Secret to Optimizing the Big Virtual Data Explosion

by Alex Klein 29. May 2012 09:21
In today's day and age, many SMBs and enterprise-level businesses are "taking to the skies" with cloud computing. These companies realize that working in the cloud comes with many benefits, including reduced cost of in-house hardware, ease of implementation and seamless scalability. However, as you will read on and discover, performance-impacting file fragmentation, and the need for defragmentation, still exists in these environments and is in fact amplified there. For that reason, it must now be addressed with a two-fold proactive and preventative solution.

Let's face it, we generate a tremendous amount of data and it's only the beginning. In fact, findings included in a recent study by IDC titled "Extracting Value from Chaos" predict that in the next ten years we will create 50 times more information and 75 times more files. Regardless of destination, most of this data is generated on Windows-based computers, which are known to fragment files. So files become fragmented before they ever reach the cloud: as they are worked with, they get broken up into pieces and scattered across numerous locations on the hard disk. The result is that accessing those files takes longer and costs more I/O, which drags down system performance.
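As a back-of-the-envelope illustration (the numbers below are hypothetical, not measurements), consider how the operation count grows once a file sits in thousands of pieces: each fragment needs at least one separate disk operation.

    # Hypothetical arithmetic: the same 100 MB file costs far more disk
    # operations once it has been split into many fragments.
    FILE_SIZE_MB = 100
    IO_SIZE_KB = 64                                       # typical transfer size

    contiguous_ios = FILE_SIZE_MB * 1024 // IO_SIZE_KB    # large sequential reads
    fragments = 5_000                                     # pieces after heavy editing
    fragmented_ios = max(contiguous_ios, fragments)       # at least one I/O per piece

    print(f"contiguous: {contiguous_ios} I/Os, fragmented: {fragmented_ios} I/Os")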

So how does the above scenario affect the big picture? To understand this, let’s take a closer look at your cloud environment. Your data, and in many cases, much of your infrastructure, has “gone virtual”. Users are able to access applications and work with their data basically anywhere in the world. In such an atmosphere, where the amount of RAM and CPU power available is dramatically increased and files are no longer stored locally, how can the need for defragmentation still be an issue?

Well, what do you think happens when all this fragmented data comes together? The answer is an alarming amount of fragmented Big Data that's now sitting on the hard drives of your cloud solution. This causes bottlenecks that can severely impact your mission-critical applications, due to the large-scale unnecessary I/O cycles needed to process the broken-up information.

At the end of the day, traditional approaches to defragmentation just aren't going to cut it anymore, and it's going to take the latest software technology, implemented on both sides of the cloud, to get these issues resolved. It starts with software such as Diskeeper 12, installed on every local workstation and server, to prevent fragmentation at its core. Added to this is deploying V-locity software across your virtualized network. This one-two punch addresses I/O performance concerns, optimizes productivity and will push cloud computing further than you ever thought possible. In these exciting times of emerging new technologies, cloud computing can send your business soaring or keep it grounded; the choice is up to you.


Big Data | Cloud | Defrag | Diskeeper | virtualization | V-Locity

Webinar: Physical vs. Virtual Bottlenecks: What You Really Need To Know

by Damian 20. February 2012 07:05

Diskeeper Corporation recently delivered a live webinar hosted by Ziff Davis Enterprise. The principal topics covered were:

  • Measuring performance loss in Windows over SAN
  • Identifying client-side performance bottlenecks in private clouds
  • Expanding performance awareness to the client level
  • The greatest and often-overlooked performance issue in a virtual ecosystem

The webinar was co-hosted by:

  • Stephen Deming, Microsoft Partner Solution Advisor
  • Damian Giannunzio, Diskeeper Corporation Field Sales & Application Engineer

Don't miss out on this critical data! If you missed the webinar, you can view the recorded version online here.

Here are some additional, relevant resources:

White Paper: Diskeeper 2011: Improving the Performance of SAN Storage

White Paper: Increasing Efficiency in the IT Environment

White Paper: Inside Diskeeper 2011 with IntelliWrite

White Paper: Running Diskeeper and V-locity on SAN Devices 

Space Reclamation, Above and Below

by Damian 7. November 2011 09:29

Thin provisioning is a fairly hot topic in the storage arena, and with good reason. Many zones within the business and enterprise see massive benefit from the scalability of thin provisioning, and it can be a cost saver besides. However, the principle of thin provisioning suffers some unique maladies at both client and storage levels.

Some storage arrays include a feature permitting thin provisioning for their LUNs. This storage-layer thin provisioning occurs below the virtual platform storage stack, and essentially means scalable datastores. Horizontal scaling of datastores adds a new tier of agility to the storage ecosystem that some businesses absolutely require.

LUN thin provisioning shouldn’t be confused with Virtual Disk TP, which works at a file level (not array). Thin provisioned VMs can expand based on pre-determined use cases, adding an extra degree of flexibility to storage density. Intelligently combining TP at multiple tiers yields some pretty neat capacity results.

Datastore thin provisioning has been a source of concern for storage administrators when it comes to recovering from over-provisioning. When virtual disks are deleted or copied away from a datastore, the array itself is never told that those storage blocks are now free. You can see how this leads to needless storage consumption.

vSphere 5 from VMware introduced a solution for this issue. The new vSphere Storage APIs for Array Integration (VAAI) for TP uses the SCSI UNMAP command to tell the storage array that space previously occupied by a VM can be reclaimed. This addresses one aspect of the issue with thin VM growth.

Files are not simply written to a virtual disk; they are also deleted with regularity. Unfortunately, there is no corresponding feature within the virtual platform or Windows to tell the storage array that the blocks freed by those deletions can be recovered, so a thin disk that should have contracted stays inflated. As with the datastore-level issue above, this leads to unnecessary storage waste.

With the release of V-locity 3 in 2011, we introduced a new Automatic Space Reclamation engine. This engine automatically zeroes out “dead” free space within thin virtual disks, without requiring that they be taken offline and with no impact on resource usage. So what does this mean? Thin VMs can be compacted, actually reclaiming the deleted space to the storage array for dynamic use elsewhere. The thin virtual disks themselves are kept slimmed down within datastores, giving more control back to the storage admins governing provisioning.
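Condusiv hasn't published the engine's internals, but the underlying technique the paragraph describes, zeroing "dead" free space so a thin disk can be compacted and the array can reclaim the blocks, is well known and easy to sketch. The file name, chunk size and headroom below are our own illustrative choices, and a production engine would throttle itself rather than write flat out.

    import os
    import shutil

    def zero_free_space(volume, chunk_mb=64, keep_free_mb=512):
        """Fill free space with zeros, then delete the filler file. The zeroed
        blocks are what thin-disk compaction and array-side reclamation can
        hand back to the datastore."""
        filler = os.path.join(volume, "__zero_fill.tmp")
        chunk = b"\0" * (chunk_mb * 1024 * 1024)
        try:
            with open(filler, "wb") as f:
                # Leave a safety margin so the guest never actually runs out of space.
                while shutil.disk_usage(volume).free > keep_free_mb * 1024 * 1024:
                    f.write(chunk)
                    f.flush()
                    os.fsync(f.fileno())
        finally:
            if os.path.exists(filler):
                os.remove(filler)

Administrators sometimes do this by hand with tools such as Sysinternals SDelete's zero-free-space option; the point of an automated engine is to do it continuously, online, and without the resource hit.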

Space Reclamation with V-locity

You can read more about VAAI for TP in vSphere 5 on the VMware blog here.


virtualization | VMware | Windows 7
