Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Finance Company Deploys V-locity I/O Reduction Software for Blazing Fast VDI

by Dawn Richcreek 27. November 2018 05:27

When the New Mexico Mortgage Finance Authority decided to better support their users by moving away from using physical PCs and migrating to a virtual desktop infrastructure, the challenge was to ensure the fastest possible user experience from their Horizon View VDI implementation.

“Anytime an organization starts talking about VDI, the immediate concern in the IT shop is how well we will be able to support it from a performance standpoint to ensure a pristine end user experience. Although supported by EMC VNXe flash storage with high IOPS, one of our primary concerns had to do with Windows write inefficiencies that chew up a large percentage of flash IOPS unnecessarily. When you’re rolling out a VDI initiative, the one thing you can’t afford to waste is IOPS,” said Joseph Navarrete, CIO, MFA.

After Joseph turned to Condusiv’s “Set-It-and-Forget-It®” V-locity® I/O reduction software and bumped up the memory allocation for his VDI instances, V-locity offloaded 40% of I/O from storage, resulting in a much faster VDI experience for his users. When he demoed V-locity on his MS-SQL server instances, it eliminated 39% of his read I/O traffic from storage via DRAM read caching and another 40% of write I/O operations by solving Windows write inefficiencies at the source.

After seeing the performance boost and the increased efficiency of his hardware stack, Joseph made sure V-locity was running across all of his systems, including MS-Exchange, SharePoint, and more.

“With V-locity I/O reduction software running on our VDI instances, users no longer have to wait extra time. The same is now true for our other mission-critical applications like MS-SQL. The dashboard within the V-locity UI provides all the necessary analytics about our environment and a view into what the software is actually doing for us. The fact that all of this runs quietly in the background with near-zero overhead and no longer requires a reboot to install or upgrade makes the software truly ‘set and forget,’” said Navarrete.

 

Read the full case study | Download 30-day trial

Industry-first FAL Remediation and Improved Performance for MEDITECH

by Gary Quan 6. November 2018 03:19

When someone mentions heavy fragmentation on a Windows NTFS volume, the first thing that usually comes to mind is performance degradation. While performance degradation is certainly bad, what’s worse is outright application failure when the application hits this error:

 

Windows Error: “The requested operation could not be completed due to a file system limitation”

 

That is exactly what happens in severely fragmented environments. These are show-stoppers that can halt a business in its tracks until the problem is remediated. Users have reported this issue to us on SQL databases, Exchange server databases, and MEDITECH EHR systems.

In fact, MEDITECH requires all 5x and 6x customers to address this issue, and it has endorsed both Condusiv® Technologies’ V-locity® and Diskeeper® I/O reduction software for “...their ability to reduce disk fragmentation and eliminate File Attribute List (FAL) saturation. Because of their design and feature set, we have also observed they accelerate application performance in a measurable way,” said Mike Belkner, Associate VP, Technology, MEDITECH.

Some refer to this extreme fragmentation problem as the “FAL Size Issue,” and here is why. In the Windows NTFS file system, as files grow in size and complexity (i.e., accumulate more and more fragmented data), they can be assigned additional metadata structures. One of these is the File Attribute List (FAL). The FAL can point to different types of file attributes, such as security attributes, standard information like creation and modification dates, and, most importantly, the actual data contained within the file. In the extremely fragmented case, the FAL keeps track of where all of the file’s fragmented data resides: it contains pointers indicating the location of each fragment on the volume. As more fragments accumulate in a file, more pointers are required, which in turn increases the size of the FAL.

Herein lies the problem: the FAL has an upper size limit of 256KB. When that limit is reached, no more pointers can be added, which means no more data can be added to the file. And if it is a folder file, no more files can be added under that folder. Applications using these files stop in their tracks, which is the last thing users want, especially in EHR systems.
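To get a rough sense of whether a particular file is at risk, you can count its extents (fragments), since every extent consumes pointer space in the FAL. Below is a minimal sketch in Python, assuming a Windows host where the built-in fsutil file queryextents command is available and the script runs from an elevated prompt; the file path and the alert threshold are hypothetical, not official limits.

    # Minimal sketch: count a file's extents as a rough proxy for FAL pressure.
    # Assumes Windows, fsutil on the PATH, and an elevated prompt.
    import subprocess

    def count_extents(path):
        """Return the number of extents fsutil reports for the given file."""
        result = subprocess.run(
            ["fsutil", "file", "queryextents", path],
            capture_output=True, text=True, check=True,
        )
        # fsutil prints one "VCN: ... Clusters: ... LCN: ..." line per extent.
        return sum(1 for line in result.stdout.splitlines()
                   if line.strip().startswith("VCN"))

    path = r"D:\Data\large_database_file.mdf"  # hypothetical file to inspect
    extents = count_extents(path)
    print(f"{path}: {extents} extents")
    if extents > 100_000:  # illustrative threshold only
        print("Heavily fragmented; this file's FAL may be approaching the 256KB limit.")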

When a FAL reaches the size limit, the only remediation has been to bring the volume offline (which can mean bringing the system down), copy the file to a different location (a different volume is recommended), delete or rename the original file, make sure there is sufficient contiguous free space on the original volume, reboot the system to reset the free space cache, and then copy the file back. This is not a quick cycle, and if the file is large, the process can take hours to complete, which means the system stays offline for hours while you attempt to resolve the problem.
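For illustration only, the file-shuffling portion of that manual cycle looks roughly like the sketch below. The paths are hypothetical, and the reboot in the middle is a manual step no script can absorb.

    # Illustrative sketch of the manual FAL remediation cycle described above.
    # Paths are hypothetical; the reboot between the two phases is manual.
    import os
    import shutil

    src = r"D:\Data\saturated_file.dat"        # file whose FAL has hit the limit
    staged = r"E:\Staging\saturated_file.dat"  # a different volume is recommended

    # Phase 1 (volume taken offline to users):
    shutil.copy2(src, staged)                  # copy the file off the affected volume
    os.rename(src, src + ".old")               # rename (or delete) the original
    # ...verify sufficient contiguous free space, then reboot to reset the free space cache...

    # Phase 2 (after the reboot):
    shutil.copy2(staged, src)                  # the fresh copy starts with a small FAL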

You would think the logical solution would be to simply defragment those files. The problem is that traditional defragmentation utilities can actually cause the FAL to grow: defragmenting may decrease the number of pointers, but it does not decrease the FAL size, and due to limitations within the file system, traditional methods of defragmenting files drive the FAL size even larger, making the problem worse even as you attempt to remediate it. This is true of all conventional defragmenters, including the one built into Windows. So what can be done about it?

The Solution

Condusiv Technologies has introduced a new technology to address this FAL size issue, unique to the latest V-locity® and Diskeeper® product lineup. This technology, called MediWrite™, contains features to help prevent the issue from occurring in the first place, give sufficient warning if it is occurring or has occurred, plus tools to quickly and efficiently reduce the FAL size offline. It includes the following:

Unique FAL handling: As indicated above, traditional methods of defragmentation can cause the FAL size to grow even further. MediWrite detects when files are having FAL size issues and uses an exclusive method of defragmentation that helps stem FAL growth, an industry first. It also automatically determines how often to process these files according to the severity of their FAL size.

Enhanced free space consolidation engine: One indirect cause of FAL size growth is the extreme free space fragmentation found in these cases. A new free space consolidation method has been developed to handle these extreme cases.

Unique FAL growth prevention: Along with MediWrite, V-locity and Diskeeper contain another very important technology called IntelliWrite®, which automatically prevents new fragmentation from occurring. By preventing fragmentation as files are written, IntelliWrite minimizes any further FAL size growth.

Unique offline FAL consolidation tools: The above technologies stop the FAL from growing any larger, but due to file system restrictions, the FAL size cannot be shrunk or reduced online. To do this, Condusiv developed proprietary offline tools that reduce the FAL-IN-USE size in minutes. This is extremely helpful for companies that already have a FAL size issue before installing our software. With these tools, the user can bring the FAL-IN-USE size back down to 100KB, 50KB, or smaller and feel completely safe from the maximum FAL size limit. The reduction process itself takes less than five minutes, so the system only needs to be taken offline for minutes rather than the hours required by the manual Windows copy method.

FAL size alerts: MediWrite dynamically scans the volumes for any FAL that has reached a certain size (the default is a conservative 50% of the maximum) and creates an alert when this occurs. The alert is also recorded in the Windows Event Log, and the user has the option of being notified by email.

 

For information, case studies, white papers and more, visit http://www.condusiv.com/solutions/meditech-solutions/

Fix SQL Server Storage Bottlenecks

by Spencer Allingham 23. October 2018 20:58

No SQL code changes.
No Disruption.
No Reboots.
Simple!

 


 

 

Whether running SQL in a physical or virtualized environment, most SQL DBAs would welcome faster storage at a reasonable price.

The V-locity® software from Condusiv® Technologies is designed to provide exactly that, using the storage hardware you already own. It doesn't matter if you have direct-attached disks, are running a tiered SAN, have a tray of SSD storage, or are fortunate enough to have an all-flash array; that storage layer can be a limiting factor in your SQL Server database productivity.

The V-locity software reduces the amount of storage I/O traffic that has to go out and be processed by the disk storage layer, and streamlines the I/O that still has to go out to disk.

The net result is that SQL can typically complete more transactions in the same amount of time, quite simply because, on average, it spends less time waiting on storage before it can get on with its next transaction.

V-locity can be downloaded and installed without any disruption to live SQL servers. No SQL code changes are required, and no reboots. Just install it, and you'll typically start seeing results within a few minutes.
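If you want a simple, non-disruptive way to validate results for yourself, you can baseline per-file storage latency before and after installation by sampling SQL Server's sys.dm_io_virtual_file_stats DMV. The sketch below is a minimal example, assuming Python with the pyodbc package, a trusted connection, and a reachable SQL Server instance; the server name here is hypothetical.

    # Minimal sketch: baseline per-file read/write latency from SQL Server's DMVs.
    # Assumes pyodbc is installed; the server name is hypothetical.
    import pyodbc

    QUERY = """
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
    ORDER BY avg_read_ms DESC;
    """

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=MYSQLSERVER;Trusted_Connection=yes;"
    )
    for row in conn.cursor().execute(QUERY):
        print(row.database_name, row.physical_name, row.avg_read_ms, row.avg_write_ms)

Run the same query again a few days after installing and compare: lower average stall times per read and write indicate the storage layer is doing less work per transaction.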

Before we take a more in-depth look at that, I would like to briefly mention that last year the V-locity software was awarded the Microsoft SQL Server I/O Reliability Certification. This means that, whilst providing faster storage access, V-locity didn't adversely affect the required and recommended behaviors that an I/O subsystem must provide for SQL Server, as defined by Microsoft.

Microsoft ran the tests in Azure, with SQL Server 2016, using HammerDB to generate an online transaction processing (OLTP) workload. Not only did V-locity jump through all the hoops necessary to achieve the certification, it also showed about 30% more SQL transactions completed in the same amount of time.

In this test, that meant roughly 30% more orders processed.

They probably could have processed more too, if they had allowed V-locity a slightly larger RAM cache size.

To get more information, including best practices for running V-locity on MS SQL servers, easy ways to validate results, customer case studies and more, click here for the full article on LinkedIn.

If you simply want to try V-locity, click here for a free trial.

Use the V-locity software not only to identify the servers causing storage I/O issues, but to fix those issues at the same time.

Cultech Limited Solves ERP and SQL Troubles with Diskeeper 18 Server

by Spencer Allingham 8. October 2018 09:11

Before discovering Diskeeper®, Cultech Limited experienced sluggish ERP and SQL performance, unnecessary downtime, and lost valuable hours each day troubleshooting issues related to Windows write inefficiencies.

For an internationally recognized innovator and premium-quality manufacturer in the nutritional supplement industry, the usual troubleshooting approaches just weren’t cutting it. “We were running a very demanding ERP system on legacy servers and a legacy network. A hardware refresh was the first step in troubleshooting our issues. As much as we did see some improvement, it did not solve the daily breakdowns associated with our Sage ERP,” said Rob, IT Manager, Cultech Limited.

When upgrading the network and replacing the ERP and SQL servers brought little improvement, Rob dug deeper into troubleshooting approaches and SQL optimizations. After months of this with no relief, he kept researching, knowing that Cultech could not continue to interrupt productivity multiple times a day to fix corrupted records. As Rob explains, “I was on support calls with Sage literally day and night to solve issues that occurred daily. Files would not write properly to the database, and I would have to go through the tedious process of getting all users to log out of Sage and then manually correct the problem – a 25-minute exercise. That might not be a big deal every so often, but I found myself doing this 3-4 times a day at times.”

In doing his research, Rob found Condusiv’s® Diskeeper Server and decided to give it a try after reading customer testimonials on how it had solved similar performance issues. To Cultech’s surprise, within just 24 hours of installation, they were no longer calling Sage support. “I installed Diskeeper and crossed my fingers, hoping it would solve at least some of our problems. It didn’t just solve some problems, it solved all of our problems. I was calling Sage support daily, then suddenly I wasn’t calling them at all,” said Rob. Problems that Rob had been fixing outside of production hours were solved thanks to Diskeeper’s ability to prevent fragmentation from occurring. In addition to recouping hours of downtime a day during production hours, Cultech could now focus that time and energy on innovation and producing quality products.

“Now that we have Diskeeper optimizing our Sage servers and SQL servers, we have it running on our other key systems to ensure peak performance and optimum reliability. Instead of considering Windows write inefficiencies as a culprit after trying all else, I would encourage administrators to think of it first,” said Rob.

Read the full case study | Download 30-day trial

Big Data Boom Brings Promises, Problems

by Dawn Richcreek 7. September 2018 04:40

By 2020, an estimated 43 trillion gigabytes of data will have been created—300 times the amount of data in existence fifteen years earlier. The benefits of big data, in virtually every field of endeavor, are enormous. We know more, and in many ways can do more, than ever before. But what of the challenges posed by this “data tsunami”? Will the sheer ability to manage—or even to physically house—all this information become a problem?

Condusiv CEO Jim D’Arezzo, in a recent discussion with Supply Chain Brain, commented that “As it has over the past 40 years, technology will become faster, cheaper, and more expansive; we’ll be able to store all the data we create. The challenge, however, is not just housing the data, but moving and processing it. The components are storage, computing, and network. All three need to be optimized; I don’t see any looming insurmountable problems, but there will be some bumps along the road.”

One example is healthcare. Speaking with Healthcare IT News, D’Arezzo noted that there are many new solutions open to healthcare providers today. “But with all the progress,” he said, “come IT issues. Improvements in medical imaging, for instance, create massive amounts of data; as the quantity of available data balloons, so does the need for processing capability.”

Giving health-care providers—and professionals in other areas—the benefits of the data they collect is not always easy. In an interview with Transforming Data with Intelligence, D’Arezzo said, “Data center consolidation and updating is a challenge. We run into cases where organizations do consolidation on a ‘forklift’ basis, simply dumping new storage and hardware into the system as a solution. Shortly thereafter, they often discover that performance has degraded. A bottleneck has been created that needs to be handled with optimization.”

The news is all over it. You are experiencing it. Big data. Big problems. At Condusiv®, we get it. We’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance, without the need to purchase a single box of new hardware. The tsunami of data? We’ve got you covered.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

If you want to hear why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.
