Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Condusiv’s V-locity Technology Was Recently Certified as Citrix Ready

by Dawn Richcreek 11. September 2019 09:51

We are proud to announce that Condusiv’s V-locity® I/O reduction software has been certified as Citrix Ready®. The Citrix Ready program helps customers identify third-party solutions that enhance virtualization, networking and cloud computing solutions from Citrix Systems, Inc. V-locity, our innovative and dynamic alternative to costly hardware overhauls, has completed a rigorous verification process to ensure compatibility with Citrix solutions, providing confidence in joint solution efficiency and value. The Citrix Ready program makes it easy for customers to identify complementary products and results-driven solutions that can enhance Citrix environments and increase productivity.

Verified Performance Improvements of 50 Percent or More

To obtain the Citrix Ready certification, we ran IOMeter benchmark tests—an industry-standard tool for testing I/O performance—on a Windows 10 system powered by Citrix’s XenDesktop Virtual Delivery Agent (VDA).

The IOMeter benchmark utility was set up to run 5 different tests with variations in the following parameters:

 •  Different read/write packet sizes (512 bytes to 64 KB)
 •  Different read/write ratios, e.g. 50% reads/50% writes, 75% reads/25% writes
 •  Different mixes of random and sequential I/Os

The tests showed dramatic improvements with V-locity enabled versus disabled. With V-locity enabled, performance improved by around 50% on average. In one test case, IOPS (I/Os per second) increased from 2,903 to 5,525, an improvement of roughly 90%.
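
For anyone who wants to check the arithmetic, the improvement percentages above are simple before/after comparisons. Here is a minimal sketch in Python using the IOPS figures from the test case just cited; the 50% average is the same calculation applied across all five runs:

```python
def percent_improvement(before: float, after: float) -> float:
    """Percentage gain going from a baseline measurement to an optimized one."""
    return (after - before) / before * 100

# IOPS from the test case above: V-locity disabled vs. enabled
baseline_iops = 2903
optimized_iops = 5525

print(f"{percent_improvement(baseline_iops, optimized_iops):.0f}% improvement")  # ~90%
```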

This chart shows the detailed test results of the 5 test variations:

[Chart: IOMeter results for each of the 5 test variations, V-locity disabled vs. enabled]

We also compared the results shown on the V-locity Dashboard while running the same IOMeter benchmark, first with V-locity disabled and then with it enabled, and found some additional improvements.

With V-locity enabled, over 8 million I/Os were eliminated from having to go through the network and storage to be satisfied, which greatly increased the I/O capacity of the system. Because the latency of these ‘eliminated’ I/Os is known, another improvement to highlight is that V-locity saved more than an hour of storage I/O time.
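
To make the “more than an hour” figure concrete: the saved time is simply the number of eliminated I/Os multiplied by the latency each one would otherwise have incurred. A minimal sketch, where the 0.5 ms average latency is an assumed, illustrative value rather than a measured one:

```python
eliminated_ios = 8_000_000   # I/Os that never had to hit the network or storage (from the dashboard)
avg_latency_ms = 0.5         # assumed average latency per eliminated I/O (illustrative only)

saved_hours = eliminated_ios * avg_latency_ms / 1000 / 3600
print(f"Storage I/O time saved: {saved_hours:.1f} hours")  # ~1.1 hours with these assumptions
```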

Additionally, the workload (amount of data read/written) increased from 169 GB to 273 GB, meaning 60% more work was being done in the same amount of time.

Customers can be confident that V-locity has successfully passed an exhaustive series of tests established by Citrix. The V-locity technology works effectively with Citrix solutions and can provide customers with performance gains of 50% or more on their heaviest workloads. V-locity allows customers to “set it and forget it,” meaning that once it is installed, systems will instantly improve with little to no maintenance.

Our CEO, Jim D’Arezzo, noted, “We are proud to partner with Citrix Systems. It’s important to remember that most I/O performance issues are caused by the operating system, particularly in the Windows environment. When compared to a hardware upgrade, the software solutions Condusiv offers are far more effective—both in terms of cost and result—in increasing overall system performance. We offer customers intelligent solutions that now combine our V-locity with Citrix XenDesktop. We can’t wait to continue to work with the trusted partners in the Citrix Ready ecosystem.”

Download free 30-day trial of V-locity

Caching Is King

by Gary Quan 29. July 2019 06:43

Caching technology has been around for quite some time, so why is Condusiv’s patented IntelliMemory® caching so unique that it outperforms other caching technologies and has been licensed by top OEM PC and storage vendors? There are a few innovations that make it stand above the others.

The first innovation is the technology that determines what data to put and keep in cache for the best performance gains on each system. Simple caching methods place recently read data into the cache in the hope that it will be read again and can then be satisfied from cache. That works, but it is far from efficient or optimal. IntelliMemory takes a more heuristic approach using two main factors. First, in the background, it determines which data is being read most often, to ensure a high cache hit rate. Second, using analytics, IntelliMemory knows that certain data patterns will provide better performance gains than others. Combining these two factors, IntelliMemory uses your valuable memory resources to get the optimal caching performance gains for each individual system.
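
Condusiv hasn’t published the IntelliMemory algorithm itself, but the general idea of weighting blocks by both read frequency and an analytics-derived benefit estimate can be illustrated with a simple, hypothetical scoring sketch (the field names and numbers below are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class BlockStats:
    reads: int             # how often this block has been read recently
    benefit_weight: float  # hypothetical analytics-derived estimate of the gain from caching it

def cache_score(stats: BlockStats) -> float:
    """Toy score: frequently read blocks with a high expected benefit rank first."""
    return stats.reads * stats.benefit_weight

# Keep the highest-scoring blocks that fit in the memory set aside for caching.
candidates = {
    "block_a": BlockStats(reads=120, benefit_weight=1.8),
    "block_b": BlockStats(reads=300, benefit_weight=0.4),
    "block_c": BlockStats(reads=90,  benefit_weight=2.5),
}
ranked = sorted(candidates, key=lambda name: cache_score(candidates[name]), reverse=True)
print(ranked)  # ['block_c', 'block_a', 'block_b']
```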

Another important innovation is the dynamic determination of how much of the system’s valuable memory resource to use. Unlike some caching technologies that require you to allocate a specific amount of memory for caching, IntelliMemory automatically uses just what is available and not being used by other system and user processes. And if any system or user process needs the memory, IntelliMemory dynamically gives it back, so there is never a memory contention issue. In fact, IntelliMemory always leaves a buffer of available memory, at least 1.5 GB at a minimum. For example, if there is 4 GB of available memory in the system, IntelliMemory will use at most 2.5 GB of it and will dynamically release it if any other processes need it, then use it again when it becomes available. That’s one reason we trademarked the phrase Set It and Forget It®.
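
The sizing behavior described above reduces to a one-line rule: use whatever memory is currently free, minus a reserved buffer of at least 1.5 GB. A minimal sketch using the 4 GB example from this paragraph:

```python
RESERVED_BUFFER_GB = 1.5  # memory the cache always leaves untouched, per the description above

def cache_budget_gb(available_gb: float) -> float:
    """Memory the cache may use right now; shrinks to zero as other processes claim memory."""
    return max(0.0, available_gb - RESERVED_BUFFER_GB)

print(cache_budget_gb(4.0))  # 2.5 -- matches the example above
print(cache_budget_gb(1.0))  # 0.0 -- nothing is taken when memory is tight
```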

Developments like these put IntelliMemory caching above all others.  That’s why, when combined with our patented IntelliWrite® technology, we’ve helped millions of customers achieve 30-50% or more performance gains on their Windows systems.  Frankly, some people think it’s magic, but if you’ll pardon my assertion, it’s really just innovative thinking.

V-locity 6.0 Solves Death by a Thousand Cuts in Virtual Environments

by Brian Morin 12. August 2015 08:04

If you haven’t already heard the pre-announcement buzz on V-locity® 6.0 I/O reduction software that made a splash in the press, it’s being released in a couple of weeks. To understand why it’s significant, and why it’s an unprecedented 3X FASTER than its predecessor, is to understand the biggest factor that dampens application performance in virtual environments: increasingly small, fractured, and random I/O. That kind of I/O profile is akin to pouring molasses on compute and storage systems. Processing I/O with those characteristics makes systems work much harder than necessary to process any given workload. Virtualized organizations stymied by sluggish performance in their most I/O-intensive applications suffer in large part from a problem that we call “death by a thousand cuts”: I/O that is smaller, more fractured, and more random than it needs to be.

Organizations tend to overlook solving the problem and reactively attempt to mask the problem with more spindles or flash or a forklift storage upgrade. Unfortunately, this approach wastes much of any new investment in flash since optimal performance is being robbed by I/O inefficiencies at the Windows OS layer and also at the hypervisor layer.

V-locity® version 6 has been built from the ground up to help organizations solve their toughest application performance challenges without new hardware. This is accomplished by optimizing the I/O profile for greater throughput while also targeting the smallest, random I/O, which is cached in available DRAM to reduce latency and rid the infrastructure of the kind of I/O that penalizes performance the most.

Although much is made about V-locity’s patented IntelliWrite® engine that increases I/O density and sequentializes writes, special attention was put into V-locity’s DRAM read caching engine (IntelliMemory®), which is now 3X more efficient in version 6 due to changes in the behavioral analytics engine that focuses on "caching effectiveness" instead of "cache hits."

Leveraging available server-side DRAM for caching is very different from leveraging a dedicated flash resource for cache, whether that be PCI-e or SSD. Although DRAM isn’t capacity intensive, it is exponentially faster than a PCI-e or SSD cache sitting below it, which makes it ideal as the first caching tier in the infrastructure. The trick is in knowing how to best use a capacity-limited but blazing fast storage medium.

Commodity algorithms that simply look at characteristics like access frequency might work for capacity-intensive caches, but they don’t work for DRAM. V-locity 6.0 determines the best use of DRAM for caching purposes by collecting a wide range of data points (storage access, frequency, I/O priority, process priority, type of I/O, nature of I/O (sequential or random), time between I/Os), then leverages its analytics engine to identify which storage blocks will benefit the most from caching, which also reduces "cache churn," the repeated recycling of cache blocks. By prioritizing the smallest, random I/O to be served from DRAM, V-locity eliminates the most performance-robbing I/O from traversing the infrastructure. Administrators don’t need to be concerned about carving out precious DRAM for caching purposes, as V-locity dynamically leverages available DRAM. With a mere 4GB of RAM per VM, we’ve seen gains from 50% to well over 600%, depending on the I/O profile.
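
The analytics engine itself is proprietary, but the admission policy described above (favor the blocks behind small, random, frequent I/O rather than simply counting cache hits) can be sketched generically. The field names and weights below are illustrative assumptions, not V-locity’s actual telemetry:

```python
from dataclasses import dataclass

@dataclass
class IoProfile:
    avg_size_kb: float       # average I/O size observed for this block
    random_ratio: float      # 0.0 = fully sequential, 1.0 = fully random
    accesses_per_min: float  # how often the block is touched

def admission_score(p: IoProfile) -> float:
    """Toy 'caching effectiveness' score: small, random, busy blocks benefit DRAM caching the most."""
    size_factor = 1.0 / max(p.avg_size_kb, 1.0)   # smaller I/O -> bigger win per byte cached
    return size_factor * (0.5 + p.random_ratio) * p.accesses_per_min

small_random = IoProfile(avg_size_kb=4, random_ratio=0.9, accesses_per_min=200)
large_sequential = IoProfile(avg_size_kb=64, random_ratio=0.1, accesses_per_min=200)

# The small, random workload wins the DRAM cache slot in this toy model.
print(admission_score(small_random) > admission_score(large_sequential))  # True
```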

With V-locity 5, we examined data from 2576 systems that tested V-locity and shared their before/after data with Condusiv servers. From that raw data, we verified that 43% of all systems experienced greater than 50% reduction in latency on reads due to IntelliMemory. While that’s a significant number in its own right by simply using available DRAM, we can’t wait to see how that number jumps significantly for our customers with V-locity 6.

Internal Iometer tests reveal that the latest version of IntelliMemory in V-locity 6.0 is 3.6X faster when processing 4K blocks and 2.0X faster when processing 64K blocks.

Jim Miller, Senior Analyst, Enterprise Management Associates had this to say, "V-locity version 6.0 makes a very compelling argument for server-side DRAM caching by targeting small, random I/O - the culprit that dampens performance the most. This approach helps organizations improve business productivity by better utilizing the available DRAM they already have. However, considering the price evolution of DRAM, its speed, and proximity to the processor, some organizations may want to add additional memory for caching if they have data sets hungry for otherworldly performance gains."

Finally, one of our customers, Rich Reitenauer, Manager of Infrastructure Management and Support, Alvernia University, had this to say, "Typical IT administrators respond to application performance issues by reactively throwing more expensive server and storage hardware at them, without understanding what the real problem is. Higher education budgets can't afford that kind of brute-force approach. By trying V-locity I/O reduction software first, we were able to double the performance of our LMS app sitting on SQL, stop all complaints about performance, stop the application from timing out on students, and avoid an expensive forklift hardware upgrade."

For more on the I/O Inefficiencies that V-locity solves, read Storage Switzerland’s Briefing on V-locity 6.0 ->

$2 Million Cancelled

by Brian Morin 22. July 2014 08:52

CHRISTUS Health cancelled a $2 Million order.

Just before they pulled the trigger on a $2 Million storage purchase to improve the performance of their electronic health records application (MEDITECH®), they evaluated V-locity® I/O reduction software.

We actually heard the story first hand from the NetApp® reseller in the deal at a UBM Xchange conference. He thought he had closed the $2 Million deal only to find out that CHRISTUS was doing some testing with V-locity. After getting the news that the storage order would not be placed, he met us at Xchange to find out more about V-locity since "this V-locity stuff is for real."

After an initial conversation with anyone about V-locity, the first response is generally the same – skepticism. Can software alone really accelerate the applications in my virtual environment? Since we are conditioned to think only new hardware upgrades can solve performance bottlenecks, organizations end up with spiraling data center costs without any other option except to throw more hardware at the problem.

CHRISTUS Health, like many others, approached us with the same skepticism. But after virtualizing 70+ servers for their EHR application, they noticed a severe performance hit from the “I/O blender” effect. They needed a solution to solve the problem, not just more hardware to medicate the problem on the backend.

Since V-locity comes with an embedded performance benchmark that provides the I/O profile of any VM workload, it makes it easy to see a before/after comparison in real-world environments.

After evaluation, not only did CHRISTUS realize they were able to double their medical records performance, but after trying V-locity on their batch billing job, they dropped a painful 20-hour job down to 12 hours.

In addition to performance gains, V-locity also provides a special benefit to MEDITECH users by eliminating excessive file fragmentation that can cause the File Attribute List (FAL) to reach its size limit and degrade performance further or even threaten availability.

Tom Swearingen, the manager of Infrastructure Services at CHRISTUS Health said it best. "We are constantly scrutinizing our budget, so anything that helps us avoid buying more storage hardware for performance or host-related infrastructure is a huge benefit."

Read the full case study – CHRISTUS Health Doubles Electronic Health Record Performance with V-locity I/O Reduction Software

The Real Meaning of Disruption

by Jerry Baldwin 25. March 2014 07:26

Disruption is a popular word these days. Is it the replacement for innovation, which we’ve overused into pointlessness over the past ten years? Maybe, but disruption means more: it carries the weight of shaking up the status quo, of not only creating something new but creating something that shifts models and opens our eyes to possibility.

We talk about disruption like it’s a new thing, cooked up by a team of marketers in a conference room somewhere. But its roots lie in a theory introduced by an Austrian economist in 1942. Joseph Schumpeter captured the world’s attention in only six pages, where he clarified the free market’s muddled way of delivering progress. He called this theory Creative Destruction.

I don’t have a PhD in economics so I’ll dodge the minutiae, but the overall meaning of the theory is what resonates—as it should to anyone engaged in business today. “Creative destruction is the same process of industrial mutation that revolutionizes the economic structure from within, incessantly destroying the old one and incessantly creating a new one.”

Simply put, to continually move forward we must be willing to embrace the destruction of what came before.

Economists often use the Montgomery Ward example. MW started as a mail-order company in the nineteenth century. With no cars or trucks in those days, and most Americans living in small rural pockets, it was too costly to ship products to widely dispersed local stores. That model would result in a high cost to the consumer, making goods too expensive for the average buyer. So instead MW turned its enormous Chicago warehouse (complete with a railroad running through it) into its hub, and used the already well-established US mail to deliver products directly to customers at a low cost.

And a successful model it was. With a high volume of sales, MW could charge lower prices. By the 1890s Montgomery Ward was the world’s largest retailer. But all that came to an end. Why? Because over time Americans were moving to urban centers and could afford a higher standard of living.

Robert Wood was an MW exec, and may well be the first adopter of Big Data on record. He analyzed data, studied the census, and saw the shift in American lifestyle. He presented his findings to leadership, suggesting that selling goods through a chain of urban department stores would replace the mail-order model as the most profitable path for retail.

So MW did what any unmovable enterprise would do. They fired him.

Meanwhile James Cash Penney recognized the same trends as Robert Wood, and it wasn’t long before J.C. Penney stores put a serious dent in MW’s profits. The mail-order giant was late to the party and couldn’t change direction fast enough. And Robert Wood? He went to work for Sears, who also took a big bite out of MW’s market share.

Remind you of anything? Netflix and Blockbuster. Blockbuster was the established enterprise, staring streaming revenue in the face, yet unable to let go of profits from the rental market. Netflix is the newcomer—the creative destructor—free from the ball and chain of a dying business model, free to focus 100% on new streaming revenue. And we all know the end of that story.

We also know that business is anything but stagnant; there are waves and cycles, and the same is true of companies. It’s very difficult (impossible?) for an established enterprise to turn around, to be truly disruptive, and to compete with newcomers. But if you’re an organization that can stomach the destruction of what came before to create new growth and opportunity, you might stand a chance.

Condusiv is unique. We’re a software company with a 30-year history as an established player in file system management and caching technologies. But in the spirit of disruption—of creative destruction—we’ve shifted all our focus and resources to V-locity, our flagship product that in itself signifies disruption in how organizations run their data centers. A 100% software approach to solving today’s application performance issues at the root on the VM or OS layer, without requiring any additional hardware.

When you embrace creative destruction you must ceaselessly devalue existing wealth in order to clear the ground for the creation of new wealth.

It’s true, our discussions may be very different from those had in other conference rooms. But that’s what disruption should be about—that’s how Condusiv can deliver performance in a software solution at 1/10th the cost of the hardware alternative. While others go for margin erosion, trying to win business by saving customers 20 cents on every dollar spent, we help customers save 90 cents on every dollar.

As a channel partner, we allow you to fulfill your mission to your customers with added value, while also protecting your bottom line with a generous software margin that grows your profit over razor thin commodity hardware margins. You win. Your customer wins. A new market is emerging where hardware becomes the 2nd option for improved application performance.

Welcome to our conference room.
