Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Industry-first FAL Remediation and Improved Performance for MEDITECH

by Gary Quan 6. November 2018 03:19

When someone mentions heavy fragmentation on a Windows NTFS volume, the first thing that usually comes to mind is performance degradation. While performance degradation is certainly bad, what's worse is outright application failure when the application hits this error:

 

Windows Error - "The requested operation could not be completed due to a file system limitation"

 

That is exactly what happens in severely fragmented environments. It is a show-stopper that can stop a business in its tracks until the problem is remediated. We have had users report this issue on SQL databases, Exchange server databases, and MEDITECH EHR systems.

In fact, MEDITECH requires all of its 5.x and 6.x customers to address this issue and has endorsed both Condusiv® Technologies’ V-locity® and Diskeeper® I/O reduction software for “...their ability to reduce disk fragmentation and eliminate File Attribute List (FAL) saturation. Because of their design and feature set, we have also observed they accelerate application performance in a measurable way,” said Mike Belkner, Associate VP, Technology, MEDITECH.

Some refer to this extreme fragmentation problem as the “FAL Size Issue,” and here is why. In the Windows NTFS file system, as files grow in size and complexity (i.e., more and more fragmented data), they can be assigned additional metadata structures. One of these metadata structures is called the File Attribute List (FAL). The FAL structure can point to different types of file attributes, such as security attributes, standard information like creation and modification dates, and, most importantly, the actual data contained within the file. In the case of an extremely fragmented file, the FAL keeps track of where all the fragmented data for the file is located: it contains pointers indicating the location of the file data (fragments) on the volume. As more fragments accumulate in a file, more pointers to the fragmented data are required, which in turn increases the size of the FAL. Herein lies the problem: the FAL has an upper size limit of 256KB. When that limit is reached, no more pointers can be added, which means NO more data can be added to the data file. And, if it is a folder file, NO more files can be added under that folder. Applications using these files stop in their tracks, which is not what users want, especially in EHR systems.
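As a rough illustration of why that 256KB ceiling matters, here is a back-of-envelope sketch in Python. The per-entry size is an assumption made only for illustration (attribute list entries are variable length, and a single entry can map many extents); it is not an NTFS constant or a Condusiv figure.

```python
# Back-of-envelope illustration only: the per-entry size below is an assumed
# figure (attribute list entries are variable length), not an NTFS constant.
FAL_MAX_BYTES = 256 * 1024        # the 256KB FAL ceiling described above
ASSUMED_ENTRY_BYTES = 32          # rough size of one pointer entry (assumption)

def remaining_entries(current_fal_bytes, entry_bytes=ASSUMED_ENTRY_BYTES):
    """Estimate how many more pointer entries fit before the FAL is saturated."""
    return max(0, (FAL_MAX_BYTES - current_fal_bytes) // entry_bytes)

# A FAL that has already grown to 200KB has roughly 1,792 entries of headroom:
print(remaining_entries(200 * 1024))
```

The exact numbers will differ in practice, but the point stands: once the FAL fills, there is no room left for pointers, and the file can no longer grow.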

If a FAL reaches the size limit, the only resolution has been to bring the volume offline, which can mean bringing the system down, then copying the file to a different location (a different volume is recommended), deleting or renaming the original file, making sure there is sufficient contiguous free space on the original volume, rebooting the system to reset the free space cache, then copying the file back. This is not a quick cycle, and if the file is large, the process can take hours to complete, which means the system remains offline for hours while you attempt to resolve it.
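For clarity, here is a minimal sketch of that manual workaround in Python. The paths are hypothetical, this is not a Condusiv tool, and the offline and reboot steps cannot be scripted inline, so they appear only as comments.

```python
import shutil
from pathlib import Path

# Sketch of the manual copy-off/copy-back workaround described above.
# Prerequisites not shown: take the volume/application offline first, and
# verify sufficient contiguous free space on the original volume.
source = Path(r"D:\Data\huge_fragmented_file.dat")      # file whose FAL is saturated (hypothetical)
staging = Path(r"E:\Staging\huge_fragmented_file.dat")  # a different volume is recommended

shutil.copy2(source, staging)                  # 1. copy the file to a different location
source.rename(source.with_suffix(".dat.old"))  # 2. rename (or delete) the original
# 3. reboot the system here to reset the free space cache
shutil.copy2(staging, source)                  # 4. copy the file back as a fresh file
```

With a large file, steps 1 and 4 alone can take hours, which is exactly why this cycle is so painful.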

You would think the logical solution would be: why not just defragment those files? The problem is that traditional defragmentation utilities can actually cause the FAL size to grow. While defragmentation can decrease the number of pointers, it will not decrease the FAL size; in fact, due to limitations within the file system, traditional methods of defragmenting files cause the FAL to grow even larger, making the problem worse even though you are attempting to remediate it. This is true of other defragmenters as well, including the built-in defragmenter that comes with Windows. So what can be done about it?

The Solution

Condusiv Technologies has introduced a new technology to address this FAL size issue, unique to the latest V-locity® and Diskeeper® product lineup. This new technology, called MediWrite™, contains features to help suppress the issue from occurring in the first place, give sufficient warning if it is occurring or has occurred, plus tools to quickly and efficiently reduce the FAL size offline. It includes the following:

Unique FAL handling: As indicated above, traditional methods of defragmentation can cause the FAL size to grow even further. MediWrite will detect when files are having FAL size issues and will use an exclusive method of defragmentation that helps stem the FAL growth. An industry first! It will also automatically determine how often to process these files according to their FAL size severity.

Enhanced Free space consolidation engine: One indirect cause of FAL size growth is the extreme free space fragmentation found in these cases. A new Free Space method has been developed to handle these extreme cases.

Unique FAL growth prevention: Along with MediWrite, V-locity and Diskeeper contain another very important technology called IntelliWrite®, which automatically prevents new fragmentation from occurring. By preventing fragmentation in the first place, IntelliWrite minimizes any further FAL size growth issues.

Unique Offline FAL Consolidation tools: The above technologies help stop the FAL size from growing any larger, but due to file system restrictions, the FAL cannot be shrunk or reduced online. To do this, Condusiv developed proprietary offline tools that will reduce the FAL-IN-USE size in minutes. This is extremely helpful for companies that already have a FAL size issue before installing our software. With these tools, the user can reduce the FAL-IN-USE size back down to 100KB, 50KB, or smaller and feel completely safe from the maximum FAL size limit. The reduction process itself takes less than 5 minutes, which means the system only needs to be taken offline for minutes rather than the hours needed with the standard Windows copy method.

FAL size Alerts: MediWrite will dynamically scan the volumes for any FAL sizes that have reached a certain limit (the default is a conservative 50% of the maximum size) and will create an Alert indicating this has occurred. The Alert is also recorded in the Windows Event log, and the user has the option to be notified by email when this occurs.
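Conceptually, the alerting logic boils down to a simple threshold check against the 256KB maximum. The sketch below is an illustration only, not MediWrite's implementation; how the FAL size is measured is left as a caller-supplied function, since Windows offers no simple public one-liner for it.

```python
import logging

# Illustrative sketch of a FAL-size threshold alert (not MediWrite's code).
FAL_MAX_BYTES = 256 * 1024
DEFAULT_THRESHOLD = 0.50   # mirrors the conservative 50%-of-maximum default above

def check_fal_alert(path, get_fal_size, threshold=DEFAULT_THRESHOLD):
    """Return True (and log a warning) if a file's FAL has crossed the threshold."""
    size = get_fal_size(path)
    if size >= FAL_MAX_BYTES * threshold:
        logging.warning("FAL alert: %s FAL is %d bytes (%.0f%% of the 256KB limit)",
                        path, size, 100.0 * size / FAL_MAX_BYTES)
        return True
    return False

# Example with a dummy measurement function standing in for real tooling:
check_fal_alert(r"D:\Data\big_folder", get_fal_size=lambda p: 180 * 1024)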

 

For information, case studies, white papers and more, visit  http://www.condusiv.com/solutions/meditech-solutions/

When It Really NEEDS To Be Deleted

by Dawn Richcreek 26. October 2018 04:53

In late May of this year, the European Union formally adopted an updated set of rules about personal data privacy called the General Data Protection Regulation. Condusiv CEO Jim D’Arezzo, speaking with Marketing Technology Insights, said, “Penalties for noncompliance with GDPR are severe. They can be as much as 4% of an offending company’s global turnover, up to a total fine of 20 million.” 

A key provision of GDPR is the right to be forgotten, which enables any European citizen to have his or her name and identifying data permanently removed from the archives of any firm holding that data in its possession. One component of the right to be forgotten, D’Arezzo notes, is called “right to erasure,” which requires that the data be permanently deleted, i.e. irrecoverable.

Recently, the EU government has begun cracking down on international enterprises, attempting to extend the EU’s right-to-erasure laws to all websites, regardless of where the traffic originates. Many affected records consist not of fields or records in a database, but of discrete files in formats such as Excel or Word. 

So to stay compliant with GDPR (and with the EU being the world's largest market and twenty million euros being a lot of money, you will want to stay compliant), you need to be able to delete a file to the point that you can't get it back. On the other hand, files get deleted by accident or mistake all the time; unless you want to permanently cripple your data archive, you also need to be able to get those files back quickly and easily.

In other words, you need a two-edged sword. For Windows-based systems, that’s exactly what’s provided by our Undelete® product line. Up to a point, any deleted file or version of an Office file can be easily restored, even if it was deleted before Undelete was installed.

If, however—as in the case of a confirmed “right to erasure” request—you need to delete it forever, you use Undelete’s SecureDelete® feature. Using specific bit patterns specified by the US National Security Agency, SecureDelete will overwrite the file to help make it unrecoverable. A second feature, Wipe Free Space, will overwrite any free space on a selected volume, using the same specific bit patterns, to clear out any previously written data in that free space.
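As a rough illustration of the general overwrite-before-delete idea (this is not Undelete's SecureDelete implementation, and the bit patterns shown are placeholders, not the NSA-specified ones), a sketch might look like this:

```python
import os

# Generic illustration of overwrite-before-delete. The pattern list is an
# assumption for demonstration only. Real tools also contend with journaling,
# shadow copies, and SSD wear leveling, which a simple user-mode overwrite
# cannot guarantee to defeat.
PATTERNS = [b"\x00", b"\xFF", b"\xA5"]   # illustrative passes only

def overwrite_and_delete(path, block=1024 * 1024):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in PATTERNS:
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = pattern * min(block, remaining)
                f.write(chunk)
                remaining -= len(chunk)
            f.flush()
            os.fsync(f.fileno())          # push each pass out to disk
    os.remove(path)                       # finally remove the directory entry
```

Wipe Free Space applies the same idea to the unallocated space of a volume rather than to a single file.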

So with Undelete, you’re covered both ways. Customers buy it for its recovery abilities: you need to be able to hit the “oops” button and get a file back. But it can also handle the job when you need to make sure a file is gone.

 

"No matter how redundant my backups are, how secure our security is, I will always have the one group of users that manage to delete that one critical file. I have found Undelete to be an invaluable tool for just such an occasion. This software has saved us both time and money. When we migrated from a Novell Infrastructure, we needed to find a solution that would allow us to restore ‘accidentally’ deleted data from a network share. Since installing Undelete on all my servers, we have had no lost data due to accidents or mistakes."
– Juan Saldana II, Network Supervisor, Keppel AmFELS

 

For Undelete help with servers or virtual systems, click Undelete Server

To save money with Undelete on business PCs, click Undelete Professional

You can purchase Undelete immediately online or download a free 30-day trial.

Tags:

File Protection | File Recovery | General | Undelete | Windows 7 | Windows 8 | Windows Server 2012

Fix SQL Server Storage Bottlenecks

by Spencer Allingham 23. October 2018 20:58

No SQL code changes.
No Disruption.
No Reboots.
Simple!

 

Video: Condusiv V-locity Introduction

Whether running SQL in a physical or virtualized environment, most SQL DBAs would welcome faster storage at a reasonable price.

The V-locity® software from Condusiv® Technologies is designed to provide exactly that, but using the storage hardware that you already own. It doesn't matter if you have direct attached disks, if you're running a tiered SAN, have a tray of SSD storage or are fortunate enough to have an all-flash array; that storage layer can be a limiting factor to your SQL Server database productivity.

The V-locity software reduces the amount of storage I/O traffic that has to go out and be processed by the disk storage layer, and streamlines or optimizes the I/O that still does have to go out to disk.

The net result is that SQL can typically get more transactions completed in the same amount of time, quite simply because on average, it's not having to wait so much on the storage before being able to get on with its next transaction.
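To illustrate the general principle (this is a toy model, not V-locity's engine): reads served from a RAM cache never reach the disk layer, so the storage back end sees fewer I/Os for the same workload.

```python
# Toy read cache: cache hits are satisfied from RAM and never become disk I/O.
class ReadCache:
    def __init__(self, backend_read):
        self.backend_read = backend_read   # function that actually hits storage
        self.cache = {}
        self.hits = 0
        self.backend_ios = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1                 # served from RAM, no disk I/O
            return self.cache[block_id]
        self.backend_ios += 1              # this one does go out to storage
        data = self.backend_read(block_id)
        self.cache[block_id] = data
        return data

# 1,000 logical reads over a 100-block working set generate only 100 backend I/Os:
cache = ReadCache(backend_read=lambda b: b"x" * 4096)
for i in range(1000):
    cache.read(i % 100)
print(cache.backend_ios, "backend I/Os instead of 1000")
```

The fewer I/Os the storage layer has to service, the sooner it can respond to the ones that really do have to reach it.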

V-locity can be downloaded and installed without any disruption to live SQL servers. No SQL code changes are required and no reboots. Just install and typically you'll start seeing results in just a few minutes.

Before we take a more in-depth look at that, I would like to briefly mention that last year, the V-locity software was awarded the Microsoft SQL Server I/O Reliability Certification. This means that whilst providing faster storage access, V-locity didn't adversely affect the required and recommended behaviors that an I/O subsystem must provide for SQL Server, as defined by Microsoft themselves.

Microsoft ran tests for this in Azure, with SQL 2016, and used HammerDB to generate an online transaction processing type workload. Not only was V-locity able to jump through all the hoops necessary to achieve the certification, it was also able to show about 30% more SQL transactions completed in the same amount of time.

In this test, that meant roughly 30% more orders processed.

They probably could have processed more too, if they had allowed V-locity a slightly larger RAM cache size.

To get more information, including best practices for running V-locity on MS SQL servers, easy ways to validate results, customer case studies and more, click here for the full article on LinkedIn.

If you simply want to try V-locity, click here for a free trial.

Use the V-locity software not only to identify the servers causing storage I/O issues, but also to fix those issues at the same time.

Cultech Limited Solves ERP and SQL Troubles with Diskeeper 18 Server

by Spencer Allingham 8. October 2018 09:11

Before discovering Diskeeper®, Cultech Limited experienced sluggish ERP and SQL performance, unnecessary downtime, and lost valuable hours each day troubleshooting issues related to Windows write inefficiencies.

For an internationally recognized innovator and premium-quality manufacturer within the nutritional supplement industry, the usual troubleshooting approaches just weren’t cutting it. “We were running a very demanding ERP system on legacy servers and network. A hardware refresh was the first step in troubleshooting our issues. As much as we did see some improvement, it did not solve the daily breakdowns associated with our Sage ERP,” said Rob, IT Manager, Cultech Limited.

After upgrading the network and replacing the ERP and SQL servers without seeing much improvement, Rob dug further into troubleshooting approaches and SQL optimizations. After months of this with no relief, he continued to research ways to improve performance, knowing that Cultech could not keep interrupting productivity multiple times a day to fix corrupted records. As Rob explains, “I was on support calls with Sage literally day and night to solve issues that occurred daily. Files would not write properly to the database, and I would have to go through the tedious process of getting all users to logout of Sage then manually correct the problem – a 25-min exercise. That might not be a big deal every so often, but I found myself doing this 3-4 times a day at times.”

In doing his research, Rob found Condusiv’s® Diskeeper Server and decided to give it a try after reading customer testimonials on how it had solved similar performance issues. To Cultech’s surprise, after just 24 hours of being installed, they were no longer calling Sage support. “I installed Diskeeper and crossed my fingers, hoping it would solve at least some of our problems. It didn’t just solve some problems, it solved all of our problems. I was calling Sage support daily then suddenly I wasn’t calling them at all,” said Rob. Problems that Rob had been fixing outside of production hours were solved thanks to Diskeeper’s ability to prevent fragmentation from occurring. And in addition to recouping hours a day of downtime during production hours, Cultech was now able to focus that time and energy on innovation and producing quality products.

“Now that we have Diskeeper optimizing our Sage servers and SQL servers, we have it running on our other key systems to ensure peak performance and optimum reliability. Instead of considering Windows write inefficiencies as a culprit after trying all else, I would encourage administrators to think of it first,” said Rob.

Read the full case study                        Download 30-day trial

Why Faster Storage May NOT Fix It

by Rick Cadruvi, Chief Architect 20. September 2018 04:58

 

With the myriad of possible hardware solutions to storage I/O performance issues, the question people are starting to ask is something like:

         If I just buy newer, faster Storage, won’t that fix my application performance problem?

 The short answer is:

         Maybe Yes (for a while), Quite Possibly No.

I know – not a satisfying answer.  For the next couple of minutes, I want to take a 10,000-foot view of just three issues that affect I/O performance to shine some technical light on the question and hopefully give you a more satisfying answer (or maybe more questions) as you look to discover IT truth.  There are other issues, but let’s spend just a moment looking at the following three:

1. Non-Application I/O Overhead

2. Data Pipelines

3. File System Overhead

These three issues by themselves can create I/O bottlenecks that degrade your applications by 30-50% or more.

Non-Application I/O Overhead:

One of the most commonly overlooked performance issues is that an awful lot of I/Os are NOT application generated.  Maybe you can add enough DRAM and go to an NVMe direct-attached storage model and get your application data cached at an 80%+ rate.  Of course, you still need to process Writes and the NVMe probably makes that a lot faster than what you can do today.  But you still need to get it to the Storage.  And, there are lots of I/Os generated on your system that are not directly from your application.  There are also lots of application-related I/Os that are not targeted for caching – they’re simply non-essential overhead I/Os to manage metadata and such.  People generally don’t think about the management layers of the computer and application that have to perform Storage I/O just to make sure everything can run.  Those I/Os hit the data path to Storage along with the I/Os your application has to send to Storage, even if you have huge caches.  They get in the way and make your application-specific I/Os stall and slow down responsiveness.

And let’s face it, a full Hyper-Converged, NVMe-based storage infrastructure sounds great, but there are lots of issues with that besides the enormous cost.  What about data redundancy and localization?  That brings us to issue #2.

Data Pipelines: 

Since your data is exploding and you’re pushing 100s of Terabytes, perhaps Petabytes and in a few cases maybe even Exabytes of data, you’re not going to get that much data on your one server box, even if you didn’t care about hardware/data failures.  

Like it or not, you have an entire infrastructure of Servers, Switches, SANs, whatever.  Somehow, all that data needs to get to and from the application and wherever it is stored.  And if you add Cloud storage into the mix, it gets worse. At some point the data pipes themselves become the limiting factor.  Even with Converged infrastructures, and software technologies that stage data for you where it is supposedly needed most, data needs to be constantly shipped along a pipe that is nowhere close to the speed of access that your new high-speed storage can handle.  Then add lots of users and applications simultaneously beating on that pipe and you can quickly start to visualize the problem.

If this wasn’t enough, there are other factors and that takes us to issue #3.

File System Overhead:

You didn’t buy your computer to run an operating system.  You bought it to manipulate data.  Most likely, you don’t even really care about the actual application.  You care about doing some kind of work.  Most people use Microsoft Word to write documents.  I did to draft this blog.  But I didn’t really care about using Word.  I cared about writing this blog and Word was something I had, I knew how to use and was convenient for the task.  That’s your application, but manipulating the data is your real conquest.  The application is a tool to allow you to paint a beautiful picture of your data, so you can see it and accomplish your job better.

The Operating System (let’s say Windows), is one of a whole stack of tools between you, your application and your data.  Operating Systems have lots of layers of software to manage the flow from your user to the data and back.  Storage is a BLOB of stuff.  Whether it is spinning hard drives, SSDs, SANs, cloud-based storage, or you name it, it is just a canvas where the data can be stored.  One of the first strokes of the brush that will eventually allow you to create that picture you want from your data is the File System.  It brings some basic order.  You can see this by going into Windows File Explorer and perusing the various folders.  The file system abstracts that BLOB into pieces of data in a hierarchical structure with folders, files, file types, information about size/location/ownership/security, etc... you get the idea.  Before the painting you want to see from your data emerges, a lot of strokes need to be placed on the canvas and a lot of those strokes happen from the Operating and File Systems.  They try to manage that BLOB so your Application can turn it into usable data and eventually that beautiful (we hope) picture you desire to draw. 

Most people know there is an Operating System and those of you reading this know that Operating Systems use File Systems to organize raw data into useful components.  And there are other layers as well, but let’s focus.  The reality is there are lots of layers that have to be compensated for.  Ignoring file system overhead and focusing solely on application overhead is ignoring a really big Elephant in the room.

Let’s wrap this up and talk about the initial question.  If I just buy newer, faster Storage won’t that fix my application performance?  I suppose if you have enough money you might think you can.  You’ll still have data pipeline issues unless you have a very small amount of data, little if any data/compute redundancy requirements and a very limited number of users.  And yet, the File System overhead will still get in your way. 

When SSDs were starting to come out, Condusiv® worked with several OEMs to produce software to handle obvious issues like the fact that writes were slower and re-writes were limited in number. In doing that work, one of our surprise discoveries was that once you got beyond a certain level of file system fragmentation, the file system overhead of trying to collect/arrange the small pieces of data made a huge impact regardless of how fast the underlying storage was.  Just making sure data wasn’t broken down into too many pieces each time a need to manipulate it came along provided truly measurable, and in some instances incredible, performance gains.
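To make that concrete with purely illustrative numbers (these are assumptions for the sake of arithmetic, not Condusiv measurements): the same logical read costs far more I/O requests when a file is shredded into many fragments, no matter how fast each individual request completes.

```python
# Illustrative arithmetic only: the file size, transfer size and fragment count
# below are assumptions, not measurements.
file_size = 1 * 1024**3          # a 1GB file
max_io = 1 * 1024**2             # assume up to 1MB transferred per request

contiguous_ios = file_size // max_io                  # 1,024 requests if contiguous
fragments = 50_000                                    # a severely fragmented file
fragmented_ios = max(fragments, file_size // max_io)  # at least one request per fragment

print(contiguous_ios, fragmented_ios)   # 1024 vs 50000 requests for the same data
```

Every one of those extra requests carries file system and device overhead, which is why the gap persists even on very fast storage.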

Then there is that whole issue of I/Os that have nothing to do with your data/application. We also discovered a path to finding and eliminating I/Os that, while not obvious, made substantial differences in performance, because we could remove them from the flow, allowing the I/Os your application wants to perform to happen without the noise.  Think of traffic jams.  Have you ever driven in stop-and-go traffic and noticed there aren’t any accidents or other distractions to account for such slowness?  It’s just too many vehicles on the road with you.  What if you could get all the people who were just out for a drive off the road?  You’d get where you want to go a LOT faster.  That’s what we figured out how to do.  And it turns out no one else is focused on that - not the Operating System, not the File System, and certainly not your application. 

And then you got swamped with more data.  Okay, so you’re in an industry where regulations forced that decision on you.  Either way, you get the point.  There was a time when 1GB was more storage than you would ever need.  Not too long ago, 1TB was the ultimate.  Now that embedded SSD on your laptop is 1TB.  Before too long, your phone will have 1TB of storage.  Mine has 128GB, but hey I’m a geek and MicroSD cards are cheap.  My point is that the explosion of data in your computing environment strains File System Architectures.  The good news is that we’ve built technologies to compensate for and fix limitations in the File System.

Let me wrap this up by giving you a 10,000-foot view of us and our software.  The big picture is we have been focused on Storage Performance for a very long time and at all layers.  We’ve seen lots of hardware solutions that were going to fix Storage slowness.  And we’ve seen that about the time a new generation comes along, there will be reasons it will still not fix the problem.  Maybe it does today, but tomorrow you’ll overtax that solution as well.  As computing gets faster and storage gets denser, your needs/desires to use it will grow even faster.  We are constantly looking into the crystal ball knowing the future presents new challenges.  We know by looking into the rear-view mirror, the future doesn’t solve the problem, it just means the problems are different.  And that’s where I get to have fun.  I get to work on solving those problems before you even realize they exist.  That’s what turns us on.  That’s what we do, and we have been doing it for a long time and, with all due modesty, we’re really good at it! 

So yes, go ahead and buy that shiny new toy.  It will help, and your users will see improvements for a time.  But we’ll be there filling in those gaps and your users will get even greater improvements.  And that’s where we really shine.  We make you look like the true genius you are, and we love doing it.

  

 
