Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Cost-Effective Solutions for Healthcare IT Deficiencies

by Jim D’Arezzo, CEO 26. August 2019 05:22

Managing healthcare these days is as much about managing data as it is about managing patients themselves.  The tsunami of data washing over the healthcare industry is a result of technological advancements and regulatory requirements coming together in a perfect storm.  But when it comes to saving lives, the healthcare industry cannot allow IT deficiencies to become the problem rather than the solution.

The healthcare system generates about a zettabyte (a trillion gigabytes) of data each year, with sources including electronic health records (EHRs), diagnostics, genetics, wearable devices and much more. While this data can help improve our health, reduce healthcare costs and predict diseases and epidemics, the technology used to process and analyze it is a major factor in its value.

According to a recent report from International Data Corporation, the volume of data processed in the overall healthcare sector is projected to increase at a compound annual growth rate of 36 percent through 2025, significantly faster than in other data-intensive industries such as manufacturing (30 percent projected CAGR), financial services (26 percent) and media and entertainment (25 percent).
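To put those growth rates in perspective, here is a rough back-of-the-envelope sketch in Python. The sector CAGRs are the figures cited above; the seven-year horizon and the idea of a fixed baseline are assumptions for illustration only.

def growth_multiple(cagr, years):
    """Total growth factor after compounding `cagr` annually for `years` years."""
    return (1 + cagr) ** years

sectors = {  # projected CAGRs from the IDC figures cited above
    "healthcare": 0.36,
    "manufacturing": 0.30,
    "financial services": 0.26,
    "media and entertainment": 0.25,
}
YEARS = 7  # assumed horizon, roughly 2018 through 2025
for sector, cagr in sectors.items():
    print(f"{sector}: ~{growth_multiple(cagr, YEARS):.1f}x the data volume in {YEARS} years")
# Healthcare at 36% compounds to roughly 8.6x over seven years, versus about
# 6.3x for manufacturing at 30%.

Whatever the exact baseline, the point is that a few extra percentage points of annual growth compound into a very different infrastructure burden within just a few years.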

Healthcare faces many challenges, but one that cannot be ignored is information technology. Without adequate technology to handle this growing tsunami of often-complex data, medical professionals and scientists can’t do their jobs. And without that, we all pay the price.

Electronic Health Records

Over the last 30 years, healthcare organizations have moved toward digital patient records, with 96 percent of U.S. hospitals and 78 percent of physicians’ offices now using EHRs, according to the National Academy of Medicine. A recent report from market research firm Kalorama Information states that the EHR market topped $31.5 billion in 2018, up 6 percent from 2017.

Ten years ago, Congress passed the Health Information Technology for Economic and Clinical Health (HITECH) Act and invested $40 billion in health IT implementation.

The adoption of EHRs is supposed to be a solution, but instead it is straining an overburdened healthcare IT infrastructure. This is largely because of the lack of interoperability among the more than 700 EHR providers. Healthcare organizations, primarily hospitals and physicians’ offices, end up with duplicate EHR data that requires extensive (not to mention non-productive) search and retrieval, which degrades IT system performance.

More Data, More Problems

IT departments are struggling to keep up with demand.  Like the proverbial Dutch boy with his finger in the dike, IT staff can barely hold back the sheer amount of data, much less meet the performance demands of users.

We can all relate to this problem.  All of us are users of massive amounts of data.  We also have little patience for slow downloads, uploads, processing or wait times for systems to refresh. IT departments are generally measured on three fundamentals: the efficacy of the applications they provide to end users, uptime of systems and speed (user experience).  The applications are getting more robust, systems are generally more reliable, but speed (performance) is a constant challenge that can get worse by the day.

From an IT investment perspective, improvements in technology have given us much faster networks, much faster processing and huge amounts of storage.  Virtualization of the traditional client-server IT model has provided massive cost savings.  And new hyperconverged systems can also improve performance in certain instances.  Cloud computing has given us economies of scale.

But costs will not easily be contained as the mounting waves of data continue to pound against the IT breakwaters.   

Containing IT Costs

Traditional thinking about IT investments goes like this.  We need more compute power; we buy more systems.  We need faster network speeds; we increase network bandwidth and buy the hardware that goes with it.  We need more storage; we buy more hardware.  Costs continue to rise proportionate to the demand for the three fundamentals (applications, uptime and speed).

However, there are solutions that can help contain IT costs.  Data Center Infrastructure Management (DCIM) software has become an effective tool for analyzing and then reducing the overall cost of IT.  In fact, the U.S. government’s Data Center Optimization Initiative claims to have saved nearly $2 billion since 2016.

Other solutions that don’t require new hardware to improve performance and extend the life of existing systems are also available. 

What is often overlooked is that processing and analyzing data is dependent on the overall system’s input/output (I/O) performance, also known as throughput. Many large organizations performing data analytics require a computer system to access multiple and widespread databases, pulling information together through millions of I/O operations. The system’s analytic capability is dependent on the efficiency of those operations, which in turn is dependent on the efficiency of the computer’s operating environment.

In the Windows environment especially (which runs about 80% of the world’s computers), I/O performance degrades progressively over time. This degradation, which can cut the system’s overall throughput capacity by 50 percent or more, happens in any storage environment. Windows undermines optimum performance because of inefficiencies in the way the server hands data off to storage. This occurs in any data center, whether in the cloud or on premises, and it gets worse in a virtualized computing environment.  In a virtual environment, the multitude of systems all sending I/O up and down the stack to and from storage creates tiny, fractured, random I/O, resulting in a “noisy” environment that slows down application performance.  Left untreated, it only worsens with time.
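To see why tiny, fractured I/O hurts so much, consider a simple experiment you can run yourself. The Python sketch below (the file name, payload and block sizes are arbitrary choices, not anything specific to Windows internals or to any particular product) writes the same amount of data once as many 4 KiB operations and once as fewer 1 MiB operations; it is the per-operation overhead that adds up.

import os, time
PAYLOAD = 64 * 1024 * 1024            # 64 MiB of data in total (arbitrary test size)
SMALL, LARGE = 4 * 1024, 1024 * 1024  # 4 KiB blocks vs 1 MiB blocks

def timed_write(path, block_size):
    """Write PAYLOAD bytes in block_size chunks and return the elapsed seconds."""
    block = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(PAYLOAD // block_size):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # push the data all the way to storage
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

small_t = timed_write("io_test.tmp", SMALL)   # 16,384 small I/Os
large_t = timed_write("io_test.tmp", LARGE)   # 64 large I/Os
print(f"4 KiB writes: {small_t:.2f}s   1 MiB writes: {large_t:.2f}s")

On most storage the small-block run is noticeably slower even though the payload is identical, and that is before a virtualized stack fractures the I/O any further.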

Even experienced IT professionals mistakenly think that new hardware will solve these problems. Since data is so essential to running organizations, they are tempted to throw money at the problem by buying expensive new hardware.  While additional hardware can temporarily mask this degradation, targeted software can improve system throughput by 30 to 50 percent or more.  Software like this has the advantage of being non-disruptive (no ripping and replacing hardware), and it can be transparent to end users as it works in the background.  Thus, a software solution can handle more data by eliminating overhead, increase performance at a far lower cost and extend the life of existing systems.

With the tsunami of data threatening IT, solutions like these should be considered in order to contain healthcare IT costs.


Download V-locity - I/O Reduction Software  

Tags:

Application Performance | EHR

Do I Really Need V-locity on All VMs?

by Rick Cadruvi, Chief Architect 15. August 2019 04:12

V-locity® customers may wonder, “How many VMs do I need to install V-locity on for optimal results? What kind of benefits will I see with V-locity on one or two VMs versus all the VMs on a host?” 

As a refresher…

It is true that V-locity will likely provide significant benefit on a single VM.  It may even be extraordinary.  But loading V-locity on just one VM on a host that may have dozens of VMs won’t give you the biggest bang for your buck. V-locity includes many technologies that address storage performance issues in an extremely intelligent manner.  Part of the underlying design is to learn about the specific loads your system has and intelligently adapt to each specific environment presented to it.  That’s why we created V-locity especially for virtual environments in the first place.

As you have experienced, the beauty of V-locity is its ability to deal with the I/O Blender Effect.  When there are multiple VMs on a host, or multiple hosts with VMs that use the same back-end storage system (e.g., a SAN), a “blender” effect occurs as all these VMs send I/O requests up and down the stack.  As you can guess, it can create huge performance bottlenecks. In fact, perhaps the most significant issue virtualized environments face is that there are MANY performance chokepoints in the ecosystem, especially the storage subsystem.  These chokepoints are robbing 30-50% of your throughput.  This is the dark side of virtualized systems.

Look at it this way.  VM “A” may have different resource requirements than VM “B” and so on.  Besides performing different tasks with different workloads, they may have different peak usage periods.  What happens when those peaks overlap?  Worse yet, what happens if several of your VMs have very similar resource requirements and workloads that constantly overlap? 

 

The answer is that the I/O Blender Effect takes over and now VM “A” is competing directly with VM “B” and VM “C” and so on.  The blender pours all those resource desires into a funnel, creating bottlenecks with unpredictable performance results.  What is predictable is that performance will suffer, and likely a LOT.
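A toy model makes the overlap problem concrete. In the Python sketch below, the three VMs, their workloads and the 10,000-IOPS back-end capacity are all invented numbers; the point is only that demand that is perfectly manageable per VM becomes a bottleneck when the peaks coincide.

BACKEND_IOPS = 10_000  # assumed capacity of the shared SAN
vm_demand = {          # invented IOPS demand for each hour of the day
    "VM A (order entry)":   [2_000] * 8 + [6_000] * 4 + [3_000] * 12,
    "VM B (reporting SQL)": [1_000] * 8 + [5_000] * 6 + [1_000] * 10,
    "VM C (file services)": [1_500] * 10 + [4_000] * 4 + [1_500] * 10,
}
for hour in range(24):
    total = sum(demand[hour] for demand in vm_demand.values())
    if total > BACKEND_IOPS:
        print(f"hour {hour:02d}: combined demand of {total} IOPS exceeds the backend by {total - BACKEND_IOPS}")
# Each VM fits comfortably on its own; it is the overlap of their peaks that
# turns the shared back end into a bottleneck.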

V-locity was designed from the ground up to intelligently deal with these core issues.  The guiding question in front of us as it was being designed and engineered was:

Given your workload and resources, how can V-locity help you overcome the I/O Blender Effect? 

By making sure that V-locity will adapt to your specific workload and having studied what kinds of I/Os amplify the I/O Blender Effect, we were able to add intelligence to specifically go after those I/Os.  We take a global view.  We aren’t limited to a specific application or workload.  While we do have technologies that shine under certain workloads, such as transactional SQL applications, our goal is to optimize the entire ecosystem.  That’s the only way to overcome the I/O Blender Effect.

So, while we can indeed give you great gains on a single VM, V-locity truly gets to shine and show off its purpose when it can intelligently deal with the chokepoints that create the I/O Blender Effect.  That means you should add V-locity to ALL your VMs.  With our no-reboot installation and a V-locity Management Console, it’s fast and easy to cover and manage your environment.

If you have V-locity on all the VMs on your host(s), let us know how it is going! If you don’t yet, contact your account manager who can get you set up!

For an in-depth refresher, watch our 10-minute whiteboard video

 

Improve Performance Without Getting Stuck with Unnecessary and Expensive Hardware

by Marissa Newman 14. August 2019 04:15

We recently completed a case study with a top construction company that deployed V-locity® I/O reduction software to improve speed on Citrix and SQL applications and avoid unnecessary hardware costs.

Performance on Teichert’s Citrix and SQL servers running critical applications was beginning to suffer due to the growth of data, an increased number of users, and infrastructure virtualization. When user complaints about the slowness of the live order-entry system, long-running queries, and poor system performance in general reached a peak, Steve Lomax, Teichert’s Senior Windows Systems Administrator, began to look at various solutions to optimize the infrastructure.

Having already been a Diskeeper customer, the company decided to evaluate Condusiv’s V-locity® I/O reduction software and was pleased to find that it no longer needed to implement any of the other solutions it had been considering to optimize performance. Steve explained, “We were surprised to learn that Condusiv had advanced their software to such a degree that they were now proactively optimizing writes in-line, cutting out a ton of write I/O that was previously hammering our SAN. We upgraded from a NetApp tiered storage solution…to a hybrid-flash Tintri T850 and experienced the same results from V-locity regardless of the brand or type of storage we had underlying our data center.”

The company saw major improvement across the board, particularly on their Citrix and SQL servers. On their 30 busiest servers, V-locity eliminated 41% of write I/O traffic and offloaded 50% of read I/O traffic from storage thanks to DRAM read caching.
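For readers curious about the mechanics behind that second number, here is a minimal, generic sketch of a DRAM read cache in Python (a toy LRU model, not Condusiv’s actual implementation; the cache size and access pattern are invented). Repeat reads of hot blocks are answered from memory, so only the misses ever reach the storage back end.

from collections import OrderedDict
import random

class LRUReadCache:
    """Toy LRU read cache: hits are served from memory, misses go to storage."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # block number -> cached data
        self.hits = self.misses = 0
    def read(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1                 # this read has to touch storage
            self.blocks[block] = b"..."      # placeholder for the block's data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)

cache = LRUReadCache(capacity_blocks=1_000)
random.seed(0)
for _ in range(100_000):
    # Invented, skewed access pattern: a small hot set is read over and over.
    block = random.randint(0, 500) if random.random() < 0.6 else random.randint(0, 50_000)
    cache.read(block)
print(f"hit ratio: {cache.hits / (cache.hits + cache.misses):.0%} of reads never touched storage")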

Deploying V-locity to their virtual infrastructure eliminated the need to spend money on additional hardware and other third-party performance solutions. Steve said, “We were looking at other solutions that could give us a performance boost…[but] were able to skip all that because V-locity solved it with a simple install right into the VM and that’s it. You’re done. I would recommend V-locity to anyone that doesn’t want to be stuck with unnecessary hardware costs and needs a solution to add an extra layer of enhanced performance.”

Steve concluded by describing the positive impacts brought on by V-locity’s performance gains. “We’ve seen at least a 50-60% drop in performance-related issues…General frustration is gone, users are more productive and efficient, customers don’t have to wait long periods of time to process orders, and the IT department is freed up to focus on more important initiatives relating to the company’s mission and core values.”

 

Read the full case study

 

Try V-locity FREE for yourself – no reboot is needed

Overcoming the I/O Blender Effect with V-locity

by Rick Cadruvi, Chief Architect 5. August 2019 05:23

You’ve decided that you want to try out V-locity® software – kick the tires so to speak.  You’ll just load it on one of your Virtual Machines and see how it goes.  What kind of benefit will you see?

It is true that V-locity will likely provide significant benefit on that one VM.  It may even be extraordinary.  But loading V-locity on just one VM on a host that may have dozens of VMs won’t give you the biggest bang for your buck. V-locity includes many technologies that address storage performance issues in an extremely intelligent manner.  Part of the underlying design is to learn about the specific loads your system has and intelligently adapt to each specific environment presented to it.  That’s why we created a product especially for virtual environments in the first place.

The beauty of V-locity is its ability to deal with something called the I/O Blender Effect.  This is the dark side of virtualized systems.  When there are multiple VMs on a host, or multiple hosts with VMs that use the same back-end storage system (e.g., a SAN), a “blender” effect occurs as all these VMs send I/O requests up and down the stack.  As you can guess, it can create huge performance bottlenecks. In fact, perhaps the most significant issue virtualized environments face is that there are MANY performance chokepoints in the ecosystem, especially the storage subsystem.  These chokepoints are robbing 30-50% of your throughput.  That’s what V-locity can recover.

Look at it this way.  VM “A” may have different resource requirements than VM “B” and so on.  Besides performing different tasks with different workloads, they may have different peak usage periods.  What happens when those peaks overlap?  Worse yet, what happens if several of your VMs have very similar resource requirements and workloads that constantly overlap? 

 

The answer is that the I/O Blender Effect takes over and now VM “A” is competing directly with VM “B” and VM “C” and so on.  The blender pours all those resource desires into a funnel, creating bottlenecks with unpredictable performance results.  What is predictable is that performance will suffer, and likely a LOT.
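If you want to picture what the SAN actually sees, the following Python sketch (hypothetical VM names and block ranges, chosen only for illustration) interleaves three perfectly sequential per-VM streams the way a shared storage path would.

import random

def vm_stream(name, start_block, count):
    """One VM reading its own virtual disk sequentially."""
    return [(name, start_block + i) for i in range(count)]

streams = [vm_stream("VM A", 100_000, 8),
           vm_stream("VM B", 200_000, 8),
           vm_stream("VM C", 300_000, 8)]
# The hypervisor and storage stack service whichever request arrives next,
# so each VM's order survives, but the streams interleave unpredictably.
pending = [list(s) for s in streams]
blended = []
random.seed(1)
while any(pending):
    queue = random.choice([p for p in pending if p])
    blended.append(queue.pop(0))
print("What each VM thinks it sent:", [s[:3] for s in streams])
print("What the SAN actually sees :", blended[:9])
# Consecutive requests at the SAN jump between distant block ranges, so the
# sequential pattern each VM relied on is gone, and so is its performance.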

Enter V-locity.  V-locity was designed from the ground up to intelligently deal with these core issues.  The guiding question in front of us as it was being designed and engineered was:

Given your workload and resources, how can V-locity help you overcome the I/O Blender Effect?

By making sure that V-locity will adapt to your specific workload and having studied what kinds of I/Os amplify the I/O Blender Effect, we were able to add intelligence to specifically go after those I/Os.  We take a global view.  We aren’t limited to a specific application or workload.  While we do have technologies that shine under certain workloads, such as transactional SQL applications, our goal is to optimize the entire ecosystem.  That’s the only way to overcome the I/O Blender Effect.

So, while we can indeed give you great gains on a single VM, V-locity truly gets to shine and show off its purpose when it can intelligently deal with the chokepoints that create the I/O Blender Effect.  That means you should add V-locity to ALL your VMs.  With our no-reboot installation and a V-locity Management Console, it’s fast and easy to cover and manage your environment.

And yes, this same I/O Blender Effect can occur in your physical environment with multiple physical systems all accessing different LUNs on the same SAN. Our Diskeeper® software is the answer here.

Go ahead and try V-locity on the VMs that are in the most competition for resources and you’ll be amazed at the benefits.  The chokepoints aren’t obvious or right in front of your face, but they are real, and V-locity is the answer.  After that, just add V-locity to all your VMs, then sit back and see how smart you were to so easily improve throughput across your ecosystem.

Video: Condusiv I/O Reduction Software Overview

Download a 30-day Free Trial

 
