Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Case Study: Non-Profit Eliminates Frustrating Help Desk Calls, Boosts Performance, and Extends Useful Hardware Lifecycle

by Marissa Newman 9. September 2019 11:47

When PathPoint was faced with user complaints and productivity issues related to slow performance, the non-profit organization turned to Condusiv’s I/O reduction software to not only optimize their physical and virtual infrastructure but to extend their hardware lifecycles, as well. 

As technology became more relevant to PathPoint’s growing organization and mission of providing people with disabilities and young adults the skills and resources to set them up for success, the IT team had to find a solution to make the IT infrastructure as efficient as possible. That’s when the organization looked into Diskeeper® as a solution for their physical servers and desktops.

“Now when we are configuring our workstations and laptops, the first thing we do is install Diskeeper. We have several lab computers that we don’t put the software on and the difference is obvious in day-to-day functionality. Diskeeper has essentially eliminated all help desk calls related to sluggish performance,” reported Curt Dennett, PathPoint’s VP of Technology and Infrastructure.

Curt also found that workstations with Diskeeper installed have a 5-year lifecycle, versus the lab computers without Diskeeper, which only last 3 years; he has seen similar results on his physical servers running full production workloads. Curt observed, “We don’t need to re-format machines running Diskeeper nearly as often. As a result, we gained back valuable time for other important initiatives while securing peak performance and longevity out of our physical hardware assets. With limited budgets, that has truly put us at ease.”

When PathPoint expanded into the virtual realm, Curt looked at V-locity® for their VMs and, after reviewing the benefits, brought the software into the rest of their environment. The organization found that with the powerful capabilities of Diskeeper and V-locity, they were able to offload 47% of I/O traffic from storage, resulting in a much faster experience for their users.

The use of V-locity and Diskeeper is now the standard for PathPoint. Curt concluded, “The numbers are impressive but what’s more for me, is the gut feeling and the experience of knowing that the machines are actually performing efficiently. I wouldn’t run any environment without these tools.”

 

Read the full case study

 

Try V-locity FREE for yourself – no reboot is needed

Caching Is King

by Gary Quan 29. July 2019 06:43

Caching technology has been around for quite some time, so why is Condusiv’s patented IntelliMemory® caching so unique that it outperforms other caching technology and has been licensed by other top OEM PC and Storage vendors? There are a few innovations that make it stand above the others. 

The first innovation is the technology that determines what data to put, and keep, in cache for the best performance gains on each system. Simple caching methods place recently read data into the cache in the hope that it will be read again and can then be served from cache. That works, but it is far from efficient or optimal. IntelliMemory takes a more heuristic approach using two main factors. First, in the background, it determines which data is being read most often, to ensure a high cache hit rate. Second, using analytics, IntelliMemory knows that certain data patterns provide better performance gains than others. Combining these two factors, IntelliMemory uses your valuable memory resources to get the optimal caching performance gain for each individual system.
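To make the contrast with simple recency-based caching concrete, here is a rough Python sketch of a read cache that admits and evicts blocks based on a score combining read frequency with a per-block gain weight. It is only an illustration of the idea described above; the class, the scoring rule, and names like gain_weight are invented for this example and are not IntelliMemory’s actual algorithm.

from collections import defaultdict

class WeightedReadCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.read_count = defaultdict(int)   # how often each block has been read
        self.cache = {}                      # block_id -> cached data

    def gain_weight(self, block_id):
        # Hypothetical analytics hook: some access patterns (e.g. small random
        # reads) benefit more from RAM caching than large sequential reads do.
        return 1.0

    def score(self, block_id):
        return self.read_count[block_id] * self.gain_weight(block_id)

    def read(self, block_id, fetch_from_storage):
        self.read_count[block_id] += 1
        if block_id in self.cache:
            return self.cache[block_id]        # cache hit: no storage I/O
        data = fetch_from_storage(block_id)    # cache miss: go to storage
        self._admit(block_id, data)
        return data

    def _admit(self, block_id, data):
        if len(self.cache) < self.capacity:
            self.cache[block_id] = data
            return
        # Evict the lowest-scoring resident block, but only if the new block
        # scores higher -- frequently read, high-value data stays in cache.
        victim = min(self.cache, key=self.score)
        if self.score(block_id) > self.score(victim):
            del self.cache[victim]
            self.cache[block_id] = data

Unlike a plain “cache whatever was read last” policy, a block here only displaces cached data once it is read often enough, and is valuable enough, to out-score the coldest resident block, which is what keeps the hit rate high.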

Another important innovation is dynamically determining how much of the system’s valuable memory to use. Unlike some caching technologies that require you to allocate a specific amount of memory for caching, IntelliMemory automatically uses only what is available and not being used by other system and user processes. And if any system or user process needs the memory, IntelliMemory dynamically gives it back, so there is never a memory contention issue. In fact, IntelliMemory always leaves a buffer of memory available, at least 1.5 GB at a minimum. For example, if there is 4 GB of available memory in the system, IntelliMemory will use at most 2.5 GB of it, dynamically release it if any other processes need it, and then use it again when it becomes available. That’s one reason we trademarked the phrase Set It and Forget It®.
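That sizing rule can be sketched in a few lines of Python. The psutil call, the fixed 1.5 GB floor, and the idea of recomputing a target size on demand are assumptions made for this illustration; this is not how IntelliMemory itself is implemented.

import psutil

RESERVED_BYTES = int(1.5 * 1024**3)   # always leave at least 1.5 GB untouched

def target_cache_size():
    # Base the cache budget on memory nobody else is currently using.
    available = psutil.virtual_memory().available
    return max(0, available - RESERVED_BYTES)

# Example: with 4 GB of available memory this returns roughly 2.5 GB. If other
# processes later claim memory, the next call returns a smaller target and the
# cache would shrink to match; when memory frees up again, the target grows back.
print(f"Cache may use up to {target_cache_size() / 1024**3:.2f} GB right now")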

Developments like these put IntelliMemory caching above all others.  That’s why, when combined with our patented IntelliWrite® technology, we’ve helped millions of customers achieve 30-50% or more performance gains on their Windows systems.  Frankly, some people think it’s magic, but if you’ll pardon my assertion, it’s really just innovative thinking.

V-locity I/O Reduction Software Put to the Test on 3500 VMs

by Brian Morin 17. March 2016 04:18

We commonly say that the expected performance gain from V-locity® I/O reduction software is 50-300% faster application performance, but that 50-300% represents quite a range. Where a given system lands correlates with how badly it is taxed by the I/O inefficiencies in virtual environments that V-locity subsequently streamlines. While some workloads experience 300% throughput gains, other workloads in the same environment see 50% gains.

While there is already plenty of V-locity performance validation in the 15 published case studies that all reveal a doubling in VM performance, we wanted to get an idea of what V-locity delivers on average across a large scale. So we decided to take off our “rose-colored glasses” of what we think our software does and handed the last 3,450 VMs that tested V-locity over to ESG Lab, who examined the raw data from over 100 sites and published the findings in this report.

Here are the key findings:

- Reduced read I/O to storage. ESG Lab calculated that 55% of systems saw a 50% reduction in the number of read I/Os serviced by the underlying storage.

- Reduced write I/O to storage. As a result of I/O density increases, ESG Lab witnessed a 33% reduction in write I/Os across 27% of the systems. In addition, 14% of systems experienced a 50% or greater reduction in write I/O from VM (virtual machine) to storage.

- Increased throughput. ESG Lab witnessed throughput improvements of 50% or more for 43% of systems, 100% or more for 29% of systems, and as much as 300% for 8% of systems.

- Decreased I/O response time. ESG Lab calculated that systems with 3 GB of available DRAM achieved a 40% reduction in response time across all I/O operations.

- Increased IOPS. ESG Lab found that 25% of systems saw IOPS increase by 50% or more.

 

The key takeaway from this analysis is the sizeable performance loss virtualized organizations suffer from I/O inefficiencies, and how readily that loss can be recovered by V-locity streamlining I/O at the guest level on Windows VMs. Whereas most organizations typically respond to I/O performance issues by taking the brute-force approach of throwing more expensive hardware at the problem, V-locity demonstrates the efficiencies organizations can achieve at a fraction of the cost of new hardware by simply solving the root-cause problem first.

Tags: SAN | virtualization | V-locity

Largest-Ever I/O Performance Study

by Brian Morin 28. January 2016 09:10

Over the last year, 2,654 IT professionals took our industry-first I/O Performance Survey, making it the largest I/O performance survey of its kind. The key findings reveal an I/O performance struggle for virtualized organizations, as 77% of all respondents indicated I/O performance issues after virtualizing. The full 17-page report is available for download at http://learn.condusiv.com/2015survey.html.

Key findings in the survey include:

- More than a third of respondents (36%) are currently experiencing staff or customer complaints regarding sluggish applications running on MS SQL or Oracle

- More than a quarter of respondents (28%) are so limited by I/O bottlenecks that they have reached an "I/O ceiling" and are unable to scale their virtualized infrastructure

- To improve I/O performance since virtualizing, 51% purchased a new SAN, 8% purchased PCIe flash cards, 17% purchased server-side SSDs, 27% purchased storage-side SSDs, 16% purchased more SAS spindles, and 6% purchased a hyper-converged appliance

- In the coming year, to remediate I/O bottlenecks, 25% plan to purchase a new SAN, 8% plan to purchase a hyper-converged appliance, 10% will purchase SAS spindles, 16% will purchase server-side SSDs, 8% will purchase PCIe flash cards, 27% will purchase storage-side SSDs, and 35% will purchase nothing

- Over 1,000 applications were named when respondents were asked to identify the two most challenging applications to support from a systems performance standpoint. Everything in the top 10 was an application running on top of a database

- 71% agree that improving the performance of one or two applications via inexpensive I/O reduction software to avoid a forklift upgrade is either important or urgent for their environment

As much as virtualization has provided cost savings and improved efficiency at the server level, those cost savings are typically traded off for backend storage infrastructure upgrades to handle the new IOPS requirements from virtualized workloads. This is because the resulting I/O characteristics are much smaller, more fractured, and more random than they need to be. Virtualization adds complexity to the data path via the “I/O blender” effect, which randomizes I/O from disparate VMs, while Windows write inefficiencies at the logical disk layer erode the relationship between I/O and data, generating a flood of small, fractured I/O. The compounding effect of the I/O blender and Windows write inefficiencies is “death by a thousand cuts” for system performance, creating the perfect trifecta for poor performance: small, fractured, random I/O.
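A toy Python example makes the blender effect easy to picture: each guest issues a perfectly sequential stream of reads, but once the hypervisor multiplexes the per-VM queues onto the shared data path, the storage array receives an interleaved pattern that is effectively random. The VM names and block offsets below are invented purely for illustration.

from itertools import zip_longest

vm_streams = {
    "VM-A": [0, 1, 2, 3],          # sequential reads within VM-A's virtual disk
    "VM-B": [100, 101, 102, 103],  # sequential reads within VM-B's virtual disk
    "VM-C": [200, 201, 202, 203],  # sequential reads within VM-C's virtual disk
}

# Round-robin the per-VM queues the way a shared data path would:
blended = [io for batch in zip_longest(*vm_streams.values())
           for io in batch if io is not None]
print(blended)   # [0, 100, 200, 1, 101, 201, ...] -- no longer sequential at the SAN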

Since native virtualization does nothing out of the box to solve this problem, organizations are left with little choice but to accept the loss of throughput from these inefficiencies and to overbuy and overprovision storage for performance, since their workloads are twice as IOPS-dependent as they actually need to be…except for Condusiv customers, who use V-locity® I/O reduction software to get 50-300% faster application performance on the hardware they already have by solving this root-cause problem at the VM OS layer.

Note - Respondents from companies with fewer than 100 employees were excluded, so results would not be skewed by the low end of the SMB market.

3 Min Video on SAN Misconceptions Regarding Fragmentation

by Brian Morin 23. June 2015 08:56

In just 3 minutes, George Crump, Sr. Analyst at Storage Switzerland, explains the real problem around fragmentation and SAN storage, debunks misconceptions, and describes what organizations are doing about it. It should be noted that even though he is speaking about the Windows OS on physical servers, the problem is the same for virtual servers connected to SAN storage. Watch ->

In conversations we have with SAN storage administrators and even storage vendors, it usually takes some time for someone to realize that performance-robbing Windows fragmentation does occur, but the problem is not what you think. It has nothing to do with the physical layer under SAN management or latency from physical disk head movement. 

When people think of fragmentation, they typically think in the context of physical blocks on a mechanical disk. However, in a SAN environment, the Windows OS is abstracted from the physical layer. The Windows OS manages the logical disk software layer and the SAN manages how the data is physically written to disk or solid-state.

What this means is that the SAN device has no control or influence over how data is written to the logical disk. In the video, George Crump describes how fragmentation is inherent to the fabric of Windows and what actually happens when a file is written to the logical disk in a fragmented manner: I/Os become fractured, and it takes more I/O than necessary to process any given file. As a result, SAN systems are overloaded with small, fractured, random I/O, which dampens overall performance. The I/O overhead from a fragmented logical disk impacts SAN storage populated with flash just as much as storage populated with disk.

The video doesn’t have time to go into why this actually happens, so here is a brief explanation: 

Since the Windows OS takes a one-size-fits-all approach to all environments, it does not take a file’s size into account when writing or extending the file. Instead of looking for an allocation within the logical disk that is the proper size, it simply looks for the next available allocation. If that available space is not large enough, the OS splits the file, looks for the next available allocation, fills it, and splits again until the whole file is written. The resulting problem in a SAN environment, with flash or disk, is that a dedicated I/O operation is required to process every piece of the file. In George’s example, it could take 25 I/O operations to process a file that could otherwise have been processed with a single I/O. We see customer examples of severe fragmentation where a single file has been fractured into thousands of pieces at the logical layer. It’s akin to pouring molasses on a SAN system.
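Here is a small Python simulation of that “next available allocation, split as needed” behavior. The free-space map and block counts are invented for the example, and the function sketches the general first-fit idea rather than the actual Windows/NTFS allocator.

def write_file(free_extents, file_blocks):
    # free_extents: list of (start, length) gaps on the logical disk.
    fragments = []
    remaining = file_blocks
    for i, (start, length) in enumerate(free_extents):
        if remaining == 0:
            break
        used = min(length, remaining)            # fill the gap, split the file
        fragments.append((start, used))
        free_extents[i] = (start + used, length - used)
        remaining -= used
    return fragments

# A logical disk whose free space is scattered across small gaps:
gaps = [(10, 4), (30, 4), (50, 4), (90, 13)]
extents = write_file(gaps, file_blocks=25)
print(extents)                 # [(10, 4), (30, 4), (50, 4), (90, 13)]
print(len(extents), "I/Os instead of 1 to process this file")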

Since a defragmentation process only deals with the problem after-the-fact and is not an option on a modern, production SAN without taking it offline, Condusiv developed its patented IntelliWrite® technology within both Diskeeper® and V-locity® that prevents I/Os from fracturing in the first place. IntelliWrite provides intelligence to the Windows OS to help it find the proper size allocation within the logical disk instead of the next available allocation. This enables files to be written (and read) in a more contiguous and sequential manner, so only minimum I/O is required of any workload from server to storage. This increases throughput on existing systems so organizations can get peak performance from the SSDs or mechanical disks they already have, and avoid overspending on expensive hardware to combat performance problems that can be so easily solved.
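For contrast, a companion sketch of size-aware allocation: if the allocator knows the file’s size up front, it can pick a gap large enough to hold the whole file, so the data lands in a single extent and can be processed with one I/O. This only illustrates the general idea of writing files contiguously as described above; the function and the free-space map are invented and are not IntelliWrite itself.

def write_file_size_aware(free_extents, file_blocks):
    # Prefer the first gap that can hold the entire file in one extent.
    for i, (start, length) in enumerate(free_extents):
        if length >= file_blocks:
            free_extents[i] = (start + file_blocks, length - file_blocks)
            return [(start, file_blocks)]
    return None   # no single gap is big enough; a real allocator would fall back

gaps = [(10, 4), (30, 4), (50, 4), (90, 30)]
print(write_file_size_aware(gaps, file_blocks=25))   # [(90, 25)] -> one extent, one I/O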
