Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Financial Sector Battered by Rising Compliance Costs

by Dawn Richcreek 15. August 2018 08:39

Finance is already an outlier in terms of IT costs. The industry devotes 10.5% of total revenue to IT—and on average, each financial industry IT staffer supports only 15.7 users, the fewest of any industry.

All over the world, financial services companies are facing skyrocketing compliance costs. Almost half the respondents to a recent Accenture survey of compliance officers in 13 countries said they expected 10% to 20% increases, and nearly one in five are expecting increases of more than 20%.

Much of this is driven by international banking regulations. At the beginning of this year, the Common Reporting Standard went into effect. An anti-tax-evasion measure signed by 142 countries, the CRS requires financial institutions to provide detailed account information to the home governments of virtually every sizeable depositor.

Just to keep things exciting, the U.S. government hasn’t signed on to CRS; instead, it requires banks doing business with Americans to comply with the Foreign Account Tax Compliance Act of 2010, which requires—surprise, surprise—pretty much the same thing as CRS, but reported differently.

And these are just two examples of the compliance burden the financial sector must deal with. Efficiently, and within a budget. In a recent interview by ValueWalk entitled “Compliance Costs Soaring for Financial Institutions,” Condusiv® CEO Jim D’Arezzo said, “Financial firms must find a path to more sustainable compliance costs.”

Speaking to the site’s audience (ValueWalk is a site focused on hedge funds, large asset managers, and value investing) D’Arezzo noted that finance is already an outlier in terms of IT costs. The industry devotes 10.5% of total revenue to IT, more than government, healthcare, retail, or anybody else. It’s also an outlier in terms of IT staff load; on average, each financial industry IT staffer supports only 15.7 users, the fewest of any industry. (Government averages 37.8 users per IT staff employee.)

To ease these difficulties, D’Arezzo recommends that the financial industry consider advanced technologies that provide cost-effective ways to enhance overall system performance. “The only way financial services companies will be able to meet the compliance demands being placed on them, and at the same time meet their efficiency and profitability targets, will be to improve the efficiency of their existing capacity—especially as regards I/O reduction.”

At Condusiv, that’s our business. We’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

 

For an explanation of why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

 

Doing it All: The Internet of Things and the Data Tsunami

by Dawn Richcreek 7. August 2018 15:44

“If you’re a CIO today, basically you have no choice. You have to do edge computing and cloud computing, and you have to do them within budgets that don’t allow for wholesale hardware replacement…”

For a while there, it looked like corporate IT resource planning was going to be easy. Organizations would move practically everything to the cloud, lean on their cloud service suppliers to maintain performance, cut back on operating expenses for local computing, and reduce—or at least stabilize—overall cost.

Unfortunately, that prediction didn’t reckon with the Internet of Things (IoT), which, in terms of both size and importance, is exploding.

What’s the “edge?”

It varies. To a telecom, the edge could be a cell phone or a cell tower. To a manufacturer, it could be a machine on a shop floor. To a hospital, it could be a pacemaker. What’s important is that edge computing allows data to be analyzed in near real time, so actions can be taken at a speed that would be impossible in a cloud-based environment.

(Consider, for example, a self-driving car. The onboard optics spot a baby carriage in an upcoming crosswalk. There isn’t time for that information to be sent upstream to a cloud-based application, processed, and an instruction returned before the car needs to slam on the brakes.)
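
To put rough numbers on that scenario (the speed and latency figures below are illustrative assumptions, not measurements from the article), here is a quick back-of-the-envelope calculation in Python:

    # Illustrative arithmetic only: the speed and latency values are assumptions.
    def distance_traveled(speed_mph: float, wait_ms: float) -> float:
        # Meters a vehicle covers while waiting on a decision.
        speed_mps = speed_mph * 0.44704            # mph -> meters per second
        return speed_mps * (wait_ms / 1000.0)

    city_speed_mph = 30.0        # assumed urban speed
    cloud_round_trip_ms = 150.0  # assumed network + cloud processing time
    edge_decision_ms = 10.0      # assumed on-board decision time

    print(f"Cloud round trip: {distance_traveled(city_speed_mph, cloud_round_trip_ms):.1f} m traveled")
    print(f"On-board (edge):  {distance_traveled(city_speed_mph, edge_decision_ms):.1f} m traveled")

At 30 mph, a 150 ms round trip means the car covers roughly two meters before an instruction comes back; an on-board decision cuts that to a fraction of a meter.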

Meanwhile, the need for massive data processing and analytics continues to grow, creating a kind of digital arms race between data creation and the ability to store and analyze it. In the life sciences, for instance, it’s estimated that only 5% of the data ever created has been analyzed.

Condusiv® CEO Jim D’Arezzo was interviewed by App Development magazine (which publishes news to 50,000 IT pros) on this very topic, in an article entitled “Edge computing has a need for speed.” Noting that edge computing is predicted to grow at a CAGR of 46% between now and 2022, Jim said, “If you’re a CIO today, basically you have no choice. You have to do edge computing and cloud computing, and you have to do them within budgets that don’t allow for wholesale hardware replacement. For that to happen, your I/O capacity and SQL performance need to be optimized. And, given the realities of edge computing, so do your desktops and laptops.”
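
As a quick sanity check on what that growth rate implies (a sketch; the only figure taken from the article is the 46% CAGR, and the four-year window is an approximation of the "now to 2022" timeframe):

    # Back-of-the-envelope: what a 46% CAGR implies over roughly 2018-2022.
    cagr = 0.46
    years = 4  # approximate window from the article

    growth_multiple = (1 + cagr) ** years
    print(f"A {cagr:.0%} CAGR over {years} years is roughly a {growth_multiple:.1f}x increase.")
    # Roughly 4.5x growth in edge computing over the period.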

At Condusiv, we’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

If you want to hear why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

I’m a MEDITECH Hospital with SSDs, Is FAL Growth Still an Issue that Risks Downtime?

by Brian Morin 4. December 2017 07:34

Now that many MEDITECH hospitals have gone all-flash for their backend storage, one of the most common questions we field is whether or not there is still downtime risk from the File Attribute List (FAL) growth issue if the data physically lives on solid-state drives (SSDs).

The main reason this question comes up is that MEDITECH requires “defragmentation,” which most admins assume is only necessary on a spinning-disk backend. That assumption couldn’t be further from the truth: the FAL issue has nothing to do with the backend media but rather with the file system itself. Traditional defragmentation processes are also damaging to solid-state media, which is why MEDITECH hospitals turn to Condusiv’s V-locity® I/O reduction software, which prevents fragmentation from occurring in the first place and has special engines designed for MEDITECH environments to keep the FAL from reaching its size limit and causing unscheduled downtime.

The File Attribute List is a Windows NTFS file metadata structure referred to as the FAL. The FAL structure can point to different types of file attributes, such as security attributes or standard information such as creation and modification dates, and, most importantly, the actual data contained within the file. For example, the FAL keeps track of where all the data is for the file. The FAL actually contains pointers to file records that indicate the location of the file data on the volume. If that data has to be stored at different logical allocations on the volume (i.e., fragmentation), more pointers are required. This in turn increases the size of the FAL. Herein lies the problem: the FAL size has an upper limit of 256KB, which is comprised of 8192 attribute entries. When that limit is reached, no more pointers can be added, which means NO more data can be added to the file. And, if it is a folder file, which keeps track of all the files that reside under that folder, NO more files can be added under that folder file. Once this occurs, the application crashes, leading to a best case scenario of several hours of unscheduled downtime to resolve.
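
A quick bit of arithmetic on those limits (a sketch using only the 256KB cap and the 8192-entry figure cited above; actual FAL entries vary in size, so treat the per-entry value as an approximation):

    # Rough arithmetic on the FAL limit described above.
    fal_cap_bytes = 256 * 1024   # upper limit on FAL size
    max_entries = 8192           # attribute entries at that limit

    bytes_per_entry = fal_cap_bytes / max_entries
    print(f"~{bytes_per_entry:.0f} bytes per attribute entry")   # ~32 bytes

    # Every additional run of fragmented data consumes more pointer space,
    # so a heavily fragmented file burns through the 8192-entry budget quickly.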

Although this blog post focuses on MEDITECH customers experiencing this issue, we have seen the FAL problem occur in non-MEDITECH environments such as MS-Exchange and MS-SQL, with backend storage media ranging from HDDs to all-flash arrays. So, what can be done about it?

The logical solution would seem to be: why not just defragment the volume? Wouldn’t that decrease the number of pointers and shrink the FAL? The problem is that traditional defragmentation actually causes the FAL to grow in size! While it can decrease the number of pointers, it will not decrease the FAL size; in fact, it can cause the FAL to grow even larger, making the problem worse even though you are attempting to remediate it.

The only proprietary solution to this problem is Condusiv’s V-locity® for virtual servers or Diskeeper® Server for physical servers. Both include a special technology called MediWrite®, which helps keep this issue from occurring in the first place and provides special handling if it has already occurred. MediWrite includes:

>Unique FAL handling: As indicated above, traditional methods of defragmentation will cause the FAL to grow even further in size. MediWrite will detect when files have FAL size issues and will use proprietary methods to prevent FAL growth. This is the only engine of its kind in the industry.

>Unique FAL safe file movement: V-locity and Diskeeper’s free space consolidation engines automatically detect FAL size issues and automatically deploy the MediWrite feature to resolve them.

>Unique FAL growth prevention: Along with MediWrite, V-locity and Diskeeper contain another very important technology called IntelliWrite® which automatically prevents new fragmentation from occurring. By preventing new fragmentation from occurring, IntelliWrite minimizes any further FAL size growth issues.

>Unique Offline FAL Consolidation tool: Any MEDITECH hospital that already has an FAL issue can use the embedded offline tool to shrink the FAL-IN-USE size in a very short time (~5 min) as opposed to manual processes that take several hours.

>V-locity and Diskeeper have been endorsed by MEDITECH. Click Here to view.

 

 

MEDITECH Hospital Speeds EHR & MS-SQL with V-locity® I/O Reduction Software

by Brian Morin 28. August 2017 10:06

Community Medical Center (CMC) had one initial requirement – find a FAL remediation solution for their MEDITECH electronic health record (EHR) application to maintain 24/7 availability and avoid downtime. What surprised them the most was the extent of the performance boost from using V-locity I/O reduction software.

“Our doctors and clinicians were losing too much time on basic tasks like waiting on medical images to load, or scanning images, or even just navigating from screen to screen within the application. The easy answer is to buy new server and storage hardware; however, that’s also a very expensive answer. When you’re a small hospital, you need to squeeze every last drop of performance out of your existing infrastructure. Since we don’t have the budget luxury of doing hardware refreshes every three years, we need to get at least five years or more from our storage backend,” said Joe Buckminster, IT Director, Community Medical Center.

Buckminster continued, “We initially purchased V-locity I/O reduction software to meet an availability requirement, but what surprised us the most was how much value it added to our aging storage infrastructure by offloading a significant amount of I/O traffic. Not only did we get an immediate performance boost for MEDITECH, but we soon realized that we needed to try V-locity on our other Tier-1 applications like NextGen, MS-SQL, MS Exchange, Citrix XenApp, and others.”

Joe identified 35 key virtual servers that ran an assortment of different applications, like NextGen EHR (supported by an MS-SQL database), MS Exchange, Citrix XenApp, GE Centricity Perinatal, and others. In aggregate, V-locity offloaded 43% of all read traffic from storage and 29% of write traffic. With well over half a billion I/Os eliminated from going to storage, the median latency savings meant an aggregate of 157 days of cumulative storage I/O time saved across all the servers over a three-month period. When examining the last 24 hours from CMC’s single heaviest workload on an MS-SQL server, V-locity offloaded 48,272,115 I/O operations from storage (48% of read traffic / 47% of write traffic) – a savings of seven hours in storage I/O time.
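
Working backward from those figures (the I/O count and the seven hours of saved storage time come from the case study; the per-I/O latency below is derived by simple division, not measured):

    # Average storage latency implied by the reported numbers.
    offloaded_ios = 48_272_115    # I/Os served from memory instead of storage (24 hours)
    saved_seconds = 7 * 3600      # ~seven hours of storage I/O time saved

    avg_latency_ms = saved_seconds / offloaded_ios * 1000
    print(f"Implied latency avoided per offloaded I/O: ~{avg_latency_ms:.2f} ms")
    # Roughly half a millisecond for every I/O that never had to reach the storage layer.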

“There’s no way we would have achieved a 5-year lifecycle on our storage system without V-locity offloading so much I/O traffic from that subsystem. We had no idea how many I/O operations from virtual server to storage were essentially wasted activity due to Windows write inefficiencies chewing up IOPS or hot data that is more effectively served from available DRAM,” said Buckminster.

 

To read the full story on how V-locity I/O reduction software boosted their EHR and MS-SQL performance, read here: http://learn.condusiv.com/rs/246-QKS-770/images/CS-Community-Medical.pdf

Tags:

MEDITECH | V-Locity

FAL Remediation and Improved Performance for MEDITECH

by Gary Quan 28. November 2016 12:11

When someone mentions heavy fragmentation on a Windows NTFS Volume, the first thing that usually comes to mind is performance degradation. While performance degradation is certainly bad, what’s worse is application failure when the application gets this error.

 

Windows Error - “The requested operation could not be completed due to a file system limitation”

 

That is exactly what happens in severely fragmented environments. These are show-stoppers that can stop a business in its tracks until the problem is remediated. We have had users report this issue to us on SQL databases, Exchange server databases, and cases involving MEDITECH EMR systems.

Some refer to this problem as the “FAL Size Issue” and here is why. In the Windows NTFS file system, as files grow in size and complexity (i.e., more and more fragmented data), they can be assigned additional metadata structures. One of these metadata structures is called the File Attribute List (FAL). The FAL structure can point to different types of file attributes, such as security attributes or standard information such as creation and modification dates and, most importantly, the actual data contained within the file. In the extremely fragmented file case, the FAL will keep track of where all the fragmented data is for the file. The FAL actually contains pointers indicating the location of the file data (fragments) on the volume. As more fragments accumulate in a file, more pointers to the fragmented data are required, which in turn increases the size of the FAL. Herein lies the problem: the FAL has an upper size limit of 256KB. When that limit is reached, no more pointers can be added, which means NO more data can be added to the data file. And, if it is a folder file, NO more files can be added under that folder file.

If a FAL reaches the size limitation, the only resolution has been to bring the volume offline, which can mean bringing the system down, then copying the file to a different location (a different volume is recommended), deleting or renaming the original file, making sure there is sufficient contiguous free space on the original volume, rebooting the system to reset the free space cache, and then copying the file back. This is not a quick cycle, and if the file is large, the process can take hours to complete, which means the system will remain offline for hours while the issue is being resolved.
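
For illustration only, here is a minimal sketch of the copy-out/copy-back portion of that manual workaround (the paths are hypothetical, and taking the volume offline, confirming contiguous free space, and rebooting remain operator steps that no script should automate):

    # Sketch of the copy-out / copy-back steps described above. Hypothetical paths.
    import shutil
    from pathlib import Path

    affected = Path(r"E:\MEDITECH\data\folder_file.dat")   # file whose FAL has hit the limit
    staging = Path(r"F:\staging\folder_file.dat")          # copy target on a *different* volume

    shutil.copy2(affected, staging)                               # 1. copy the file to another volume
    affected.rename(affected.with_name(affected.name + ".old"))   # 2. rename (or delete) the original
    # 3. operator: verify contiguous free space on E:, then reboot to reset the free space cache
    shutil.copy2(staging, affected)                               # 4. copy the file back as a freshly laid-out file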

You would think that the logical solution would be: why not just defragment those files? The problem is that traditional defragmentation utilities can cause the FAL size to grow. While they can decrease the number of pointers, they will not decrease the FAL size. In fact, due to limitations within the file system, traditional methods of defragmenting files cause the FAL size to grow even larger, making the problem worse even though you are attempting to remediate it. This is true of all other defragmenters, including the built-in defragmenter that comes with Windows. So what can be done about it?

 

The Solution

Condusiv Technologies has introduced a new technology to address this FAL size issue, unique to the latest version of V-locity® for virtual servers and Diskeeper® for physical servers. This new technology, called MediWrite™, contains features that help suppress this issue from occurring in the first place, give sufficient warning if it is occurring or has already occurred, plus tools to quickly and efficiently reduce the FAL size. It includes the following:

Unique FAL handling: As indicated above, traditional methods of defragmentation can cause the FAL size to grow even further. MediWrite will detect when files are having FAL size issues and will use a proprietary method of defragmentation that keeps the FAL from growing in size. An industry first!

Unique FAL growth prevention: Along with MediWrite, V-locity and Diskeeper contain another very important, patented technology called IntelliWrite® which automatically prevents new fragmentation from occurring. By preventing fragmentation from occurring, IntelliWrite minimizes any further FAL size growth issues.

Unique Offline FAL Consolidation tools: The above technologies help stop the FAL size from growing any larger, but due to file system restrictions, the FAL size cannot be shrunk or reduced online. To do this, Condusiv developed proprietary offline tools that reduce the FAL-IN-USE size in minutes. This is extremely helpful for companies that already have a FAL size issue before installing our software. With these tools, the user can reduce the FAL-IN-USE size back down to 100KB, 50KB, or smaller and stay well clear of the maximum FAL size limit. The reduction process itself takes less than 5 minutes, which means the system only needs to be taken offline for minutes rather than the hours required by the Windows copy method described above.

FAL size Alerts: MediWrite dynamically scans the volumes for any FAL that has reached a certain size threshold (the default is a conservative 50% of the maximum) and creates an Alert when this occurs. The Alert is also recorded in the Windows Event log, and the user has the option to be notified by email.

 

For more information, see our MEDITECH Solution Brief

 

 

 

Tags:

Diskeeper | Disruption, Application Performance, IOPS | General | IntelliMemory | IntelliWrite | MEDITECH | V-Locity
