Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

VMware Advises on Defrag

by Brian Morin 27. July 2016 01:40

VMware: Defrag or Not?

Dave Lewis sent in a question, “There is such a quandary about disk fragmentation in the VMware environment. One says defrag and another says never. Who's right? This has been a hard subject to track and define.”

I’m going to debunk “defragging” in a minute, but if you read VMware’s own best practice guide on improving performance (found here), page 17 reveals “adding more memory” as the top recommendation while the second most important recommendation is to “defrag all guest machines.”

As much as VMware is aware that fragmentation impacts performance, the real question is how relevant the task of defragging is in today's environment of sophisticated storage services and new media like flash that should never be defragged. First, no storage administrator would defrag an entire "live" disk volume without the tedious task of taking it offline, due to the impact that change-block activity has on services like replication and thin provisioning, which means the problem goes ignored on HDD-based storage systems. Second, organizations that utilize flash can do nothing about the write amplification caused by fragmentation or the slow write performance that results from a surplus of small, fractured writes.

The beauty behind V-locity® I/O reduction software in a virtual environment is that fragmentation is never an issue, because V-locity optimizes the I/O stream at the point of origin to ensure Windows executes writes in the most efficient manner possible. This means large, contiguous, sequential writes to the backend storage for every write and subsequent read, which boosts the performance of both HDD and SSD systems. As well as flash performs on random reads, it chokes badly on random writes. A typical SSD might spec random reads at 300,000 IOPS but drop to 23,000 IOPS for writes, due to the erase cycles and housekeeping that go into every write. This is why some organizations continue to use spindles for write-heavy apps that are sequential in nature.
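To make the read/write asymmetry concrete, here is a quick back-of-the-envelope sketch using the example IOPS figures above. The 4KB operation size is an assumption for illustration; these are the post's example numbers, not a benchmark of any specific drive.

```python
# Illustrative arithmetic only: the IOPS figures are the example numbers
# from the post, not measurements of a particular SSD.
IO_SIZE_KB = 4  # assume small 4 KB random operations

def throughput_mb_s(iops: int, io_size_kb: int = IO_SIZE_KB) -> float:
    """Convert an IOPS figure into sustained MB/s at a fixed I/O size."""
    return iops * io_size_kb / 1024

random_read_iops = 300_000
random_write_iops = 23_000

print(f"random reads:  {throughput_mb_s(random_read_iops):.0f} MB/s")
print(f"random writes: {throughput_mb_s(random_write_iops):.0f} MB/s")
# At the same I/O size, the drive moves roughly 13x less data on random
# writes, which is why the write pattern matters so much on flash.
```

The same IOPS spec sheet can therefore hide a large throughput gap once the workload shifts from reads to small random writes.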

When most people think of fragmentation, they think of it as a physical-layer issue on a mechanical disk. However, in an enterprise environment, Windows is abstracted from the physical layer. The real problem is IOPS inflation: the relationship between I/O and data breaks down, leaving a surplus of tiny I/O requests that chew up performance no matter what storage media is used on the backend. Instead of using a single I/O operation to process a 64KB file, Windows will break it down into smaller and smaller chunks, with each chunk requiring its own I/O operation to process.
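The inflation described above is easy to quantify. A minimal sketch, using the 64KB file from the paragraph and a hypothetical 4KB fragment size chosen purely for illustration:

```python
import math

def io_count(file_kb: int, io_size_kb: int) -> int:
    """Number of I/O operations needed to move a file at a given I/O size."""
    return math.ceil(file_kb / io_size_kb)

# One contiguous 64 KB write or read is a single operation...
print(io_count(64, 64))  # 1 operation
# ...but the same file split into 4 KB fragments costs 16 operations,
# a 16x inflation in IOPS for exactly the same payload.
print(io_count(64, 4))   # 16 operations
```

The payload never changes; only the number of trips to storage does, and every extra trip carries its own per-operation overhead.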

This is bad enough when one virtual server, taxed by Windows write inefficiencies, sends down twice as many I/O requests as it should to process a given workload. Now amplify that same problem across all the VMs on the same host, and the result is a tsunami of unnecessary I/O overwhelming the host and the underlying storage subsystem.

As much as virtualization has been great for server efficiency, the one downside is the complexity it adds to the data path. The result is I/O from Windows that is much smaller, more fractured, and more random than it needs to be. Performance suffers "death by a thousand cuts" from all this tiny I/O, which is then further randomized at the hypervisor.

So instead of taking VMware’s recommendation to “defrag,” take our recommendation to never worry about the issue again and put an end to all the small, split I/Os that are hurting performance the most.


Defrag | Diskeeper | General | virtualization | V-Locity

Top 5 Questions from V-locity and Diskeeper Customers

by Brian Morin 20. April 2016 05:00

After having chatted with 50+ customers the last three months, I’ve heard the same five questions enough times to turn it into a blog entry, and a lot of it has to do with flash:

 

1. Do Condusiv products still “defrag” like in the old days of Diskeeper?

No. Although users can use Diskeeper to manually defrag if they so choose, the core engines in Diskeeper and V-locity have nothing to do with defragmentation or physical disk management. The patented IntelliWrite® engine inside Diskeeper and V-locity adds a layer of intelligence to the Windows operating system, enabling it to improve the sequential nature of I/O traffic with large contiguous writes and subsequent reads, which benefits both SSDs and HDDs. Since I/O is streamlined at the point of origin, fragmentation is proactively prevented from ever becoming an issue in the first place. Although SSDs should never be "defragged," fragmentation prevention has enormous benefits. It means processing a single I/O operation to read or write a 64KB file instead of needing several. This alleviates the IOPS inflation of workloads sent to SSDs and cuts down on the number of erase cycles required to write any given file, improving write performance and extending flash reliability.

 

2. Why is it more important to solve Windows write inefficiencies in virtual environments regardless of flash or spindles on the backend? 

Windows write inefficiencies are a problem in physical environments but an even bigger problem in virtual environments, because multiple instances of the OS sit on the same host, creating a choke point that all I/O must funnel through. It's bad enough when one virtual server, taxed by Windows write inefficiencies, sends down twice as many I/O requests as it should to process a given workload. Now amplify that same problem across all the VMs on the same host, and the result is a tsunami of unnecessary I/O overwhelming the host and the underlying storage subsystem. The performance penalty of all this unnecessary I/O is further exacerbated by the "I/O blender" effect, which mixes and randomizes the I/O streams from all the VMs at the hypervisor before sending storage a very random pattern, the exact type of pattern that chokes flash performance the most: random writes. V-locity's IntelliWrite® engine writes files in a contiguous manner, which significantly reduces the number of I/O operations required to write or read any given file. In addition, IntelliMemory® caches reads from available DRAM. With both engines reducing I/O to storage, the usual requirement of 80K I/O operations to process 1GB drops to 60K at a minimum, and often to 50K or 40K. This is why the typical V-locity customer sees anywhere from 50-100% more throughput regardless of flash or spindles on the backend: all the optimization occurs where the I/O originates.
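The 80K-to-50K figures above translate directly into how much payload each operation carries. A quick sketch of that arithmetic, using only the numbers from the paragraph (purely illustrative):

```python
# Back-of-the-envelope math using the post's figures: 80K I/O operations
# to move 1 GB before optimization, 50K after. Illustrative only.
GB_IN_KB = 1024 * 1024

def avg_io_kb(total_kb: float, ops: int) -> float:
    """Average payload carried per I/O operation."""
    return total_kb / ops

before_ops, after_ops = 80_000, 50_000
print(f"avg I/O size before: {avg_io_kb(GB_IN_KB, before_ops):.1f} KB")  # ~13.1 KB
print(f"avg I/O size after:  {avg_io_kb(GB_IN_KB, after_ops):.1f} KB")   # ~21.0 KB
print(f"I/O reduction: {(before_ops - after_ops) / before_ops:.1%}")     # 37.5% fewer operations
```

Carrying more data per operation means the same workload completes with over a third fewer trips through the hypervisor and storage stack.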

VMware’s own “vSphere Monitoring and Performance Guide” calls for “defragmentation of the file system on all guests” as its top performance best practice tip behind adding more memory. When it comes to V-locity, nothing ever has to be “defragged” since fragmentation is proactively eliminated from ever becoming a problem in the first place.

 

3. How does V-locity help with flash storage? 

One of the most common misconceptions is that V-locity is the perfect complement to spindles but not to flash. That couldn't be further from the truth. The fact is, most V-locity customers run V-locity on top of a hybrid (flash and spindle) array or an all-flash array, because without V-locity, the underlying storage subsystem has to process at least 35% more I/O than necessary for any given workload.

As much as virtualization has been great for server efficiency, the one downside is the complexity introduced to the data path, resulting in I/O characteristics that are much smaller, more fractured, and more random than they need to be. This means flash storage systems process workloads 30-50% slower than they should, because performance suffers death by a thousand cuts from all this small, random I/O that inflates IOPS and chews up throughput. V-locity streamlines I/O to be much more efficient, so twice as much data can be carried with each I/O operation. This significantly improves flash write performance and extends flash reliability by reducing erase cycles. In addition, V-locity establishes a tier-0 caching strategy that uses idle, available DRAM to cache reads. As little as 3GB of available memory drives an average 40% reduction in response time (see source). By optimizing both writes and reads, V-locity drives down the amount of I/O required to process any given workload: instead of needing 80K I/O operations to process a GB of data, users typically need only 50K, or sometimes even less.

For more on how V-locity complements hybrid storage or all-flash storage, listen to the following OnDemand Webinar I did with a flash storage vendor (Nimble) and a mutual customer who uses hybrid storage + V-locity for a best-of-breed approach for I/O performance.

 

4. Is V-locity’s DRAM caching engine starving my applications of precious memory by caching? 

No. V-locity dynamically uses what Windows sees as available memory and throttles back if an application requires more, ensuring there is never an issue of resource contention or memory starvation. V-locity even keeps a buffer so there is never a latency issue in serving memory back. ESG Labs examined the last 3,500 VMs that tested V-locity and noted a 40% average reduction in response time (see source). This technology has been battle-tested over 5 years across millions of licenses with some of the largest OEMs in the industry.
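As an illustration of the "use only idle memory, give it back on demand" behavior described above, here is a toy LRU cache sketch. The `get_available_mb` probe and `floor_mb` buffer are hypothetical stand-ins invented for this example; V-locity's actual engine is proprietary, and this is not its implementation.

```python
from collections import OrderedDict

class ThrottledCache:
    """Toy read cache that only grows into memory the OS reports as free
    and evicts entries the moment free memory drops below a safety floor."""

    def __init__(self, get_available_mb, floor_mb=1024):
        self.get_available_mb = get_available_mb  # probe for free system memory
        self.floor_mb = floor_mb                  # buffer always left for apps
        self.entries = OrderedDict()              # key -> (data, size_mb), LRU order

    def put(self, key, data, size_mb):
        self.shrink()
        # Only admit the block if free memory stays above the floor afterwards.
        if self.get_available_mb() - size_mb >= self.floor_mb:
            self.entries[key] = (data, size_mb)

    def get(self, key):
        hit = self.entries.get(key)
        if hit is not None:
            self.entries.move_to_end(key)  # refresh LRU position on a hit
            return hit[0]
        return None

    def shrink(self):
        # Throttle back: evict oldest entries while free memory is under the floor.
        # (In a real engine, each eviction would actually raise free memory.)
        while self.entries and self.get_available_mb() < self.floor_mb:
            self.entries.popitem(last=False)
```

The point of the sketch is the ordering: the cache checks memory pressure before every admission and sheds entries first, so applications never compete with it for RAM.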

 

5. What is the difference between V-locity and Diskeeper? 

Diskeeper is for physical servers, while V-locity is for virtual servers. Diskeeper is priced per OS instance, while V-locity is now priced per host, meaning V-locity can be installed on any number of virtual servers on that host. Diskeeper Professional is for physical clients. The main feature difference: whereas Diskeeper keeps physical servers and clients running like new, V-locity accelerates applications by 50-300%. While both Diskeeper and V-locity solve Windows write inefficiencies at the point of origin where I/O is created, V-locity goes a step further by caching reads via idle, available DRAM for 50-300% faster application performance. Diskeeper customers who have virtualized can opt to convert their Diskeeper licenses to V-locity licenses to drive value to their virtualized infrastructure.

 

Stay tuned for the next major release of Diskeeper, coming soon, which may inherit similar functionality from V-locity.

The New Age of Application and Storage Performance Software Is Here

by Alex Klein 5. June 2012 03:50

Condusiv Technologies today announced worldwide availability of the next generation in application and storage performance software: Diskeeper 12. Condusiv has been a leader in data performance solutions for millions of Windows®-based systems for over 30 years. From boosting application performance to extending hardware life and reducing IT traffic, Condusiv offerings deliver massive benefits on Windows servers, workstations, and laptops. The latest release in this category is no exception.

Whether you’re running Windows XP or Windows 7, using SSDs or accessing SANs, traditional approaches to defragmentation just aren’t going to cut it anymore. You have to take a new approach: you have to be proactive and you have to be automatic. Simply put, you need Diskeeper 12.

“Condusiv Technologies Corporation, winner six times in a row, is unrelenting in its dominance of this category.” – 2011 Reader’s Choice Award: Best Disk Defragmentation and Drive Monitoring Tool, Redmond Magazine

When files are created, deleted, or modified, they can be broken up and scattered around a volume instead of written in one place. This makes retrieving information like trying to read a book whose pages are out of order, and it can quickly overwork the operating system and storage devices.

The best cure for a problem is to prevent it from occurring in the first place. Diskeeper 12 prevents fragmentation at the Windows level, allowing an application and storage system to write or read at peak performance – with one contiguous access – improving drive performance while extending the drive’s useful life.

All editions of Diskeeper 12 feature the breakthrough IntelliWrite® technology, which prevents the vast majority (up to 85% or more) of fragmentation from ever occurring.

InvisiTasking® technology has been redesigned in Diskeeper 12 to be more assertive in I/O active environments while still maintaining invisible processing. The enhancements will allow Diskeeper to accomplish more defragmentation and resolve it faster (e.g., Instant Defrag™), during typical production workloads.

In addition, Diskeeper 12 adds a host of new features:

- HyperBoot® (New): HyperBoot technology has been incorporated into Diskeeper to improve system boot time.

- CogniSAN™ (New): Detects external resource usage within a shared storage system, such as a SAN, and allows for transparent optimization by never competing for resources utilized by other systems over the same storage infrastructure, without intruding in any way into SAN-layer operations. (Server editions only)

- Disk Health (New): Monitors hard disks’ S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) data to produce a disk health report, warn of critical problems or imminent disk failure, and send alerts by email.

- System Monitoring (New): Collects system environment activity and reports on key elements, including statistical data about system I/O usage, disk state, and Diskeeper effectiveness. Users can optionally send the data to Condusiv Technologies for analysis, providing a summary of the gathered statistics for system performance monitoring purposes.

- Space Reclamation engine (New): Allows the user to manually or automatically zero out unused space on thin-provisioned volumes on SAN and disk array storage.

- Enhanced HyperFast® with TRIM: A proven solid state drive optimizer, providing faster performance and a longer lifespan for SSDs.

- Titan Defrag Engine™ technology: The most powerful defrag engine ever built. Designed to meet ever-growing storage demands on servers, Titan defragments volumes with massive amounts of data rapidly and thoroughly. Titan is included in the Server edition.

- Terabyte Volume Engine® technology: Rapidly defragments multi-terabyte volumes. Included in the Diskeeper 12 Professional edition, this engine keeps desktop systems running at top speed as their storage capacity increases.

 

Figure 1: A glimpse of the new look and feel in Diskeeper.


Experts discuss built-in defragmentation and the superior merits of Diskeeper optimization

by Dawn Richcreek 27. January 2012 09:18

Recently, there’s been a lot of talk about built-in defragging systems. Is Windows® 7 the best option? In the latest issue of Processor Magazine, experts weigh in, making the case for Diskeeper’s optimization in the enterprise. Read the whole article here: http://www.processor.com/articles//P3402/11p02/11p02.pdf?guid

Diskeeper Corporation at Interop New York 2011

by Damian 10. October 2011 02:59

We’ve just returned from the Interop Expo in New York, and what a show! The recent release of V-locity® 3 was extremely well received and interest in its innovations was very high. The Diskeeper Corporation booth was constantly attended by groups of CIOs and storage administrators eager to hear about the benefits of the new virtual platform optimizer.

The lion’s share of energy and buzz at the show surrounded virtualization and cloud computing. Leading vendors across these markets, as well as storage, networking, and information security, exhibited for large groups of virtual admins and IT executives. Shows like Interop are critical for decision makers to stay apprised of the ever-evolving IT infrastructure landscape, and they are excellent opportunities to learn what is truly needed to grow and maintain a virtual environment that fires on all cylinders.

In addition to being asked by numerous IT analysts about the innovations underlying the incredible advantages of V-locity 3, I was interviewed by TMC (Technology Marketing Corporation) about it.

The need to meet higher Service Level Agreements and reduce Total Cost of Ownership for shared storage has reached a new plateau in virtualized networks and private clouds—precisely what V-locity 3 addresses best.

If you’re reading this and you were at the event, we’d love to hear about your experiences at Interop this year.

Diskeeper Corporation will be exhibiting at the Gartner Symposium in Orlando, FL next week. If you’re planning on attending this IT Expo, stop by the booth to hear firsthand about how V-locity 3 is improving virtual systems in a whole new way.


Events | virtualization | V-Locity
