Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Cost-Effective Solutions for Healthcare IT Deficiencies

by Jim D’Arezzo, CEO 26. August 2019 05:22

Managing healthcare these days is as much about managing data as it is about managing patients themselves.  The tsunami of data washing over the healthcare industry is a result of technological advancements and regulatory requirements coming together in a perfect storm.  But when it comes to saving lives, the healthcare industry cannot allow IT deficiencies to become the problem rather than the solution.

The healthcare system generates about a zettabyte (a trillion gigabytes) of data each year, with sources including electronic health records (EHRs), diagnostics, genetics, wearable devices and much more. While this data can help improve our health, reduce healthcare costs and predict diseases and epidemics, the technology used to process and analyze it is a major factor in its value.

According to a recent report from International Data Corporation, the volume of data processed in the overall healthcare sector is projected to increase at a compound annual growth rate of 36 percent through 2025, significantly faster than in other data-intensive industries such as manufacturing (30 percent projected CAGR), financial services (26 percent) and media and entertainment (25 percent).

Healthcare faces many challenges, but one that cannot be ignored is information technology. Without adequate technology to handle this growing tsunami of often-complex data, medical professionals and scientists can’t do their jobs. And without that, we all pay the price.

Electronic Health Records

Over the last 30 years, healthcare organizations have moved toward digital patient records, with 96 percent of U.S. hospitals and 78 percent of physicians’ offices now using EHRs, according to the National Academy of Medicine. A recent report from market research firm Kalorama Information states that the EHR market topped $31.5 billion in 2018, up 6 percent from 2017.

Ten years ago, Congress passed the Health Information Technology for Economic and Clinical Health (HITECH) Act and invested $40 billion in health IT implementation.

The adoption of EHRs is supposed to be a solution, but instead it is straining an overburdened healthcare IT infrastructure. This is largely because of the lack of interoperability among the more than 700 EHR providers. Healthcare organizations, primarily hospitals and physicians’ offices, end up with duplicate EHR data that requires extensive (not to mention non-productive) search and retrieval, which degrades IT system performance.

More Data, More Problems

IT departments are struggling to keep up with demand.  Like the proverbial Dutch boy with his finger in the dike, IT staff can barely manage the sheer amount of data, much less the performance demands of users.

We can all relate to this problem.  All of us are users of massive amounts of data.  We also have little patience for slow downloads, uploads, processing or wait times for systems to refresh. IT departments are generally measured on three fundamentals: the efficacy of the applications they provide to end users, uptime of systems and speed (user experience).  The applications are getting more robust, systems are generally more reliable, but speed (performance) is a constant challenge that can get worse by the day.

From an IT investment perspective, improvements in technology have given us much faster networks, much faster processing and huge amounts of storage.  Virtualization of the traditional client-server IT model has provided massive cost savings.  And new hyperconverged systems can improve performance as well in certain instances.  Cloud computing has given us economies of scale. 

But costs will not easily be contained as the mounting waves of data continue to pound against the IT breakwaters.   

Containing IT Costs

Traditional thinking about IT investments goes like this.  We need more compute power; we buy more systems.  We need faster network speeds; we increase network bandwidth and buy the hardware that goes with it.  We need more storage; we buy more hardware.  Costs continue to rise proportionate to the demand for the three fundamentals (applications, uptime and speed).

However, there are solutions that can help contain IT costs.  Data Center Infrastructure Management (DCIM) software has become an effective tool for analyzing and then reducing the overall cost of IT.  In fact, the US government Data Center Optimization Initiative claims to have saved nearly $2 billion since 2016.

Other solutions that don’t require new hardware to improve performance and extend the life of existing systems are also available. 

What is often overlooked is that processing and analyzing data is dependent on the overall system’s input/output (I/O) performance, also known as throughput. Many large organizations performing data analytics require a computer system to access multiple and widespread databases, pulling information together through millions of I/O operations. The system’s analytic capability is dependent on the efficiency of those operations, which in turn is dependent on the efficiency of the computer’s operating environment.

In the Windows environment especially (which runs about 80% of the world’s computers), I/O performance degradation progresses over time. This degradation, which can lower the system’s overall throughput capacity by 50 percent or more, happens in any storage environment: Windows handles the handoff of data to storage inefficiently, and that inefficiency penalizes optimum performance. This occurs in any data center, whether in the cloud or on premises, and it gets worse in a virtualized computing environment.  In a virtual environment, the multitude of systems all sending I/O up and down the stack to and from storage creates tiny, fractured, random I/O that results in a “noisy” environment that slows down application performance.  Left untreated, it only worsens with time.
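To make the small-I/O penalty concrete, here is a minimal Python sketch (an illustration only, not Condusiv code or method): it writes the same amount of data once as large blocks and once as many small blocks, and the small-block run typically shows noticeably lower throughput. The file name and sizes are arbitrary, and the test only illustrates per-operation overhead, not fragmentation or virtualization effects themselves.

# Minimal sketch (not Condusiv's method): compare throughput of large sequential
# writes against the same amount of data written as many small writes.
# File name and sizes are arbitrary illustration values.
import os, time

PATH = "io_test.bin"          # hypothetical scratch file
TOTAL = 256 * 1024 * 1024     # 256 MB of data in both cases

def timed_write(block_size):
    data = os.urandom(block_size)
    start = time.perf_counter()
    with open(PATH, "wb") as f:
        for _ in range(TOTAL // block_size):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())          # force the data out to storage
    return TOTAL / (1024 * 1024) / (time.perf_counter() - start)

print(f"1 MB writes: {timed_write(1024 * 1024):.1f} MB/s")
print(f"4 KB writes: {timed_write(4096):.1f} MB/s")
os.remove(PATH)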

Even experienced IT professionals mistakenly think that new hardware will solve these problems. Since data is so essential to running organizations, they are tempted to throw money at the problem by buying expensive new hardware.  While additional hardware can temporarily mask this degradation, targeted software can improve system throughput by 30 to 50 percent or more.  Software like this has the advantage of being non-disruptive (no ripping and replacing hardware), and it can be transparent to end users as it is added in the background.  Thus, a software solution can handle more data by eliminating overhead, increase performance at a far lower cost and extend the life of existing systems. 

With the tsunami of data threatening IT, solutions like these should be considered in order to contain healthcare IT costs.


Download V-locity - I/O Reduction Software  

Tags:

Application Performance | EHR

Can you relate? 906 IT Pros Talk About I/O Performance, Application Troubles and More

by Dawn Richcreek 8. January 2019 03:44

We just completed our 5th annual I/O Performance Survey, conducted with 906 IT Professionals. This is the industry’s largest study of its kind, and the research highlights the latest trends in applications that are driving performance demands and how IT Professionals are responding.

I/O Growth Continues to Outpace Expectations

The results show that organizations are struggling to get the full lifecycle from their backend storage as the growth of I/O continues to outpace expectations. The research also shows that IT Pros continue to struggle with user complaints related to sluggish performance from their I/O intensive applications, especially citing MS-SQL applications.

Comprehensive Research Data

The survey consists of 27 detailed questions designed to identify the impact of I/O growth in the modern IT environment. In addition to multiple-choice questions, the survey included optional open responses, allowing a respondent to provide commentary on why they selected a particular answer.  All the individual responses have been included to help readers dive deeply into any question. The full report is available at https://learn.condusiv.com/2019-IO-Performance-Study.html

Summary of Key Findings 

1.    I/O Performance is important to IT Pros: The vast majority of IT Pros consider I/O Performance an important part of their job responsibilities. Over a third of these note that growth of I/O from applications is outpacing the useful lifecycle they expect from their underlying storage. 

2.    Application performance is suffering: Half of the IT Pros responsible for I/O performance cite they currently have applications that are tough to support from a systems performance standpoint. The toughest applications stated were: SQL, SAP, Custom/Proprietary apps, Oracle, ERP, Exchange, Database, Data Warehouse, Dynamics, SharePoint, and EMR/EHR. See page 20 for a word cloud graphic. 

3.    SQL is the top troublesome application: The survey confirms that SQL databases are the top business-critical application platform and are also the environment that generates the most storage I/O traffic. Nearly a third of the IT Pros responsible for I/O performance state that they are currently experiencing staff/customer complaints due to sluggish applications running on SQL. 

4.    Buying hardware has not solved the performance problems: Nearly three-fourths of IT Pros have added new hardware to improve I/O performance. They have purchased new servers with more cores, new all-flash arrays, new hybrid arrays, server-side SSDs, etc. and yet they still have concerns. In fact, a third have performance concerns that are preventing them from scaling their virtualized infrastructures.  

5.    Still planning to buy hardware: About three-fourths of IT Pros are still planning to continue to invest in hardware to improve I/O performance. 

6.    Lack of awareness: Over half of respondents were unaware of the fact that Windows write inefficiencies generate increasingly smaller writes and reads that dampen performance and that this is a software problem that is not solved by adding new hardware. 

7.    Improve performance via software to avoid expensive hardware purchase: The vast majority of respondents felt it would be urgent/important to improve the performance of their applications via an inexpensive I/O reduction software and avoid an expensive forklift upgrade to their compute, network or storage layers. 

Most Difficult to Support Applications

Below is a word cloud representing hundreds of answers to visually show the application environments IT Pros are having the most trouble supporting from a performance standpoint. I think you can see the big ones that pop out!

The full report is available at https://learn.condusiv.com/2019-IO-Performance-Study.html

 

The Simple Software Answer

As much as organizations continue to reactively respond to performance challenges by purchasing expensive new server and storage hardware, our V-locity® I/O reduction software offers a far more efficient path by guaranteeing to solve the toughest application performance challenges on I/O intensive systems like MS-SQL. This means organizations are able to offload 50% of I/O traffic from storage, traffic that is nothing but noise chewing up IOPS and dampening performance. As soon as that 50% of bandwidth to storage is opened up, sluggish performance disappears and far more storage IOPS are available for other things.

In just 2 minutes, learn how V-locity I/O reduction software eliminates the two big I/O inefficiencies in a virtual environment: 2-min Video: Condusiv® I/O Reduction Software Overview

Try it for yourself, download our free 30-day trial – no reboot required

 

This is Why a Top Performing School Recommended Condusiv Software and Doubled Performance

by Dawn Richcreek 6. December 2018 05:04

Lawnswood School is one of the top-performing educational institutions in the UK. Its IT environment supports a workload split between approximately 1200 students and 200 staff.  With 1400 people and various programs and files to support, the school was in search of something to extend the life of its hardware and increase performance, and turned to Condusiv®’s V-locity® and Diskeeper® to do just that for its students and staff.

"Condusiv's V-locity software eliminated almost 50% of all storage I/O requests from having to be dealt with by the disk storage (SAN) layer, and that meant that when replacing the old SAN storage with a 'like-for-like' HP MSA 2040, V-locity gave me the confidence to make the purchase without having to over-spend to over-provision the storage in order to cope with all the excess unnecessary storage I/O traffic that V-locity efficiently eliminates,"said Noel Reynolds, IT Manager at Lawnswood School. Before upgrading his SAN, Noel was able to extend the life of the HP MSA 2000 SAN for 8 years “thanks to Condusiv’s I/O reduction software”.

Knowing how well the IT environment was performing at Lawnswood School, another school reached out to Noel for help, as their IT environment was almost identical, but suffering from slow and sluggish performance. They also had three VMware hosts of the same specification, the older HP MSA 2000 SAN storage and workloads that were pretty much identical. Noel Reynolds noted that: "They were almost a 'clone' school."

He continued: "I did the usual checks to discover why it wasn't working well, such as upgrading the firmware and checking the disks for errors, and found nothing wrong other than bad storage performance. After comparing the storage latency, I found that Lawnswood School's disk storage was 20 times faster, even though the hardware, software and workload types were pretty much identical."

"We identified six of the 'most hit' servers and installed Condusiv's software on them. Within 24 hours, we saw a 50% boost in performance. Visibly improved performance had been returned to the users, and this really helped the end user experience.

A great example of a real-world solution." Noel concluded

 

Read the full case study                        Download 30-day trial

Tags:

Defrag | Diskeeper | Disruption, Application Performance, IOPS | EHR | General | SAN | Success Stories | virtualization | V-Locity

Industry-first FAL Remediation and Improved Performance for MEDITECH

by Gary Quan 6. November 2018 03:19

When someone mentions heavy fragmentation on a Windows NTFS Volume, the first thing that usually comes to mind is performance degradation. While performance degradation is certainly bad, what’s worse is application failure when the application gets this error.

 

Windows Error - "The requested operation could not be completed due to a file system limitation"

 

That is exactly what happens in severely fragmented environments. These are show-stoppers that can stop a business in its tracks until the problem is remediated. We have had users report this issue to us on SQL databases, Exchange server databases, and cases involving MEDITECH EHR systems.

In fact, because of this issue, MEDITECH requires all 5x and 6x customers to address this issue and has endorsed both Condusiv® Technologies’ V-locity® and Diskeeper® I/O reduction software for “...their ability to reduce disk fragmentation and eliminate File Attribute List (FAL) saturation. Because of their design and feature set, we have also observed they accelerate application performance in a measurable way,” said Mike Belkner, Associate VP, Technology, MEDITECH.

Some refer to this extreme fragmentation problem as the “FAL Size Issue,” and here is why. In the Windows NTFS file system, as files grow in size and complexity (i.e., more and more fragmented data), they can be assigned additional metadata structures. One of these metadata structures is called the File Attribute List (FAL). The FAL structure can point to different types of file attributes, such as security attributes or standard information like creation and modification dates and, most importantly, the actual data contained within the file. In the extremely fragmented case, the FAL keeps track of where all the fragmented data for the file resides: it contains pointers indicating the location of the file data (fragments) on the volume.

As more fragments accumulate in a file, more pointers to the fragmented data are required, which in turn increases the size of the FAL. Herein lies the problem: the FAL has an upper size limit of 256KB. When that limit is reached, no more pointers can be added, which means no more data can be added to the data file. And, if it is a folder file, no more files can be added under that folder. Applications using these files stop in their tracks, which is not what users want, especially in EHR systems.
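To get a feel for the scale involved, here is a rough back-of-the-envelope Python sketch. The 256KB ceiling comes from the discussion above; the bytes per attribute-list entry and the number of fragments each extension record can map are assumed illustration values, and the real figures vary from volume to volume.

# Back-of-the-envelope sketch of FAL saturation. The 256 KB ceiling is from the
# article; the per-entry size (~32 bytes) and the fragments each extension MFT
# record can map (~150) are rough assumptions for illustration only.
FAL_LIMIT = 256 * 1024        # bytes, upper bound on the attribute list
ENTRY_SIZE = 32               # assumed bytes per attribute-list entry
FRAGS_PER_RECORD = 150        # assumed fragments mapped per extension record

max_entries = FAL_LIMIT // ENTRY_SIZE
max_fragments = max_entries * FRAGS_PER_RECORD
print(f"~{max_entries:,} attribute-list entries before the 256 KB limit")
print(f"~{max_fragments:,} fragments before the file can no longer grow")

Under these assumptions the ceiling works out to roughly a million fragments in a single file, which is exactly the kind of extreme fragmentation seen in busy database and EHR volumes.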

If a FAL reaches the size limit, the only resolution has been to bring the volume offline, which can mean bringing the system down, then copying the file to a different location (a different volume is recommended), deleting or renaming the original file, making sure there is sufficient contiguous free space on the original volume, rebooting the system to reset the free space cache, and then copying the file back. This is not a quick cycle, and if the file is large, the process can take hours to complete, which means the system remains offline for hours while the problem is being resolved.
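As a rough outline of that manual cycle, here is a hedged Python sketch. The paths are hypothetical, and the steps described above that a simple copy cannot perform, such as taking the volume offline and rebooting to reset the free space cache, are left as comments.

# Hedged sketch of the manual remediation cycle described above (copy out,
# rename the original, copy back once free space is contiguous). Paths are
# hypothetical; in practice the volume must be taken offline first, and the
# reboot step cannot be scripted from here.
import shutil, os

SRC = r"E:\MEDITECH\bigfile.dat"        # hypothetical FAL-saturated file
STAGING = r"F:\staging\bigfile.dat"     # a different volume is recommended

shutil.copy2(SRC, STAGING)                               # 1. copy the file off the volume
assert os.path.getsize(SRC) == os.path.getsize(STAGING)  # 2. sanity-check the copy
os.rename(SRC, SRC + ".old")                             # 3. rename (or delete) the original
# 4. ensure sufficient contiguous free space, then reboot to reset the free space cache
shutil.copy2(STAGING, SRC)                               # 5. copy the file back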

You would think the logical solution would be: why not just defragment those files? The problem is that traditional defragmentation utilities can cause the FAL size to grow. While defragmentation can decrease the number of pointers, it will not decrease the FAL size. In fact, due to limitations within the file system, traditional methods of defragmenting files cause the FAL size to grow even larger, making the problem worse even though you are attempting to remediate it. This is true of all traditional defragmenters, including the built-in defragmenter that comes with Windows. So what can be done about it?

The Solution

Condusiv Technologies has introduced a new technology to address this FAL size issue, available only in the latest V-locity® and Diskeeper® product lineup. This new technology, called MediWrite™, contains features to help suppress the issue from occurring in the first place, give sufficient warning if it is occurring or has occurred, plus tools to quickly and efficiently reduce the FAL size offline. It includes the following:

Unique FAL handling: As indicated above, traditional methods of defragmentation can cause the FAL size to grow even further. MediWrite will detect when files are having FAL size issues and will use an exclusive method of defragmentation that helps stem the FAL growth. An industry first! It will also automatically determine how often to process these files according to their FAL size severity.

Enhanced Free space consolidation engine: One indirect cause of FAL size growth is the extreme free space fragmentation found in these cases. A new Free Space method has been developed to handle these extreme cases.

Unique FAL growth prevention: Along with MediWrite, V-locity and Diskeeper contain another very important technology called IntelliWrite® which automatically prevents new fragmentation from occurring. By preventing fragmentation from occurring, IntelliWrite minimizes any further FAL size growth issues.

Unique Offline FAL Consolidation tools: The above technologies help stop the FAL size from growing any larger, but due to file system restrictions, they cannot shrink or reduce the FAL size online. To do this, Condusiv developed proprietary offline tools that will reduce the FAL-IN-USE size in minutes.  This is extremely helpful for companies that already have a FAL size issue before installing our software. With these tools, the user can reduce the FAL-IN-USE size back down to 100KB, 50KB, or smaller and feel completely safe from the maximum FAL size limit. The reduction process itself takes less than 5 minutes, which means the system only needs to be taken offline for minutes, far better than the hours needed with the manual Windows copy method described above.

FAL size Alerts: MediWrite will dynamically scan the volumes for any FAL sizes that have reached a certain limit (the default is a conservative 50% of the maximum size) and will create an Alert indicating this has occurred. The Alert is also recorded in the Windows Event log, and the user has the option to be notified by email when it happens.

 

For information, case studies, white papers and more, visit  http://www.condusiv.com/solutions/meditech-solutions/
