Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Is Fragmentation Robbing SAN Performance?

by Brian Morin 16. March 2015 09:39

This month Condusiv® announced the most significant development in the Diskeeper® product line to date – expanding our patented fragmentation prevention capabilities beyond local server storage and direct-attached storage (DAS) to now include Storage Area Networks, making it the industry's first real-time fragmentation prevention solution for SAN storage.

Typically, as soon as we mention "fragmentation" and "SAN" in the same sentence, an 800-pound gorilla walks into the room and we meet some resistance, as there is an assumption that RAID controllers and technologies within the SAN mitigate the problem of fragmentation at the physical layer.

As good a job as SAN technologies do of managing blocks at the physical layer, the real reason SAN performance degrades over time has nothing to do with the physical disk layer; it is the fragmentation inherent to the Windows file system at the logical disk software layer.

In a SAN environment, the physical layer is abstracted from the Windows OS, so Windows doesn't even see the physical layer at all – that’s the SAN's job. Windows references the logical disk layer at the file system level.

Fragmentation is inherent to the fabric of Windows. When Windows writes a file, it does not know the ultimate size of the file or whether it will be extended later, so it breaks the file apart into multiple pieces, with each piece allocated to its own address at the logical disk layer. Therefore, the logical disk becomes fragmented BEFORE the SAN even receives the data.

How does a fragmented logical disk create performance problems? Unnecessary IOPS (input/output operations per sec). If Windows sees a file existing as 20 separate pieces at the logical disk level, it will execute 20 separate I/O commands to process the whole file. That’s a lot of unnecessary I/O overhead to the server and, particularly, a lot of unnecessary IOPS to the underlying SAN for every write and subsequent read.
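
To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python. It is an illustration only, with hypothetical file and transfer sizes (not Condusiv code); it simply models the idea that each fragment at the logical disk layer needs at least one I/O request of its own.

import math

def read_requests(file_size: int, fragments: int, max_transfer: int = 1024 * 1024) -> int:
    # Estimate the read requests needed for a file, assuming (for illustration)
    # that a request cannot span a fragment boundary and that no single
    # request moves more than max_transfer bytes.
    fragment_size = file_size / fragments
    return fragments * math.ceil(fragment_size / max_transfer)

MB = 1024 ** 2
GB = 1024 ** 3

# A contiguous 1 MB file can be read with a single request...
print(read_requests(1 * MB, fragments=1))       # -> 1

# ...but the same file split into 20 pieces needs 20 requests.
print(read_requests(1 * MB, fragments=20))      # -> 20

# The effect compounds on larger files: a 1 GB file scattered across
# 10,000 small extents needs roughly 10x the requests of the contiguous case.
print(read_requests(1 * GB, fragments=1))       # -> 1024
print(read_requests(1 * GB, fragments=10_000))  # -> 10000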

Diskeeper 15 Server prevents fragmentation from occurring in the first place at the file system layer. That means Windows will write files in a more contiguous or sequential fashion to the logical disk. Instead of breaking a file into 20 pieces that need 20 separate I/O operations for every write and subsequent read, it will write that file in a more contiguous fashion so only minimal I/O is required.

Perhaps the best way to illustrate this is with a traffic analogy. Bottlenecks occur where freeways intersect. You could say the problem is not enough lanes (throughput) or that the cars are too slow (IOPS), but we're saying the easiest problem to solve is the fact that there is only one person per car!

By eliminating the Windows I/O "tax" at the source, organizations achieve greater I/O density, improved throughput, and less I/O required for any given workload – by simply filling the “car” with more people. Fragmentation prevention at the top of the technology stack ultimately means systems can process more data in less time.

When openBench Labs tested Diskeeper Server, they found throughput increased 1.3X, from 75.1 MB/sec to 100 MB/sec. A manufacturing company saw its average I/O density increase from 24 KB to 45 KB, eliminating 400,000 I/Os per server per day, and the IT Director said it "eliminated any lag during peak operation."

Many administrators are led to believe they need to buy more IOPS to improve storage performance when, in fact, the Windows I/O tax has made them more IOPS-dependent than they need to be because much of their workload is fractured I/O. By writing files in a more sequential fashion, the number of I/Os required to process a GB of data drops significantly, so more data can be processed in less time.
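
As a rough sanity check on the I/O density figures above, the short calculation below (hedged back-of-the-envelope arithmetic, not vendor data) shows how the number of I/Os needed to move a gigabyte falls as the average I/O size grows from 24 KB to 45 KB.

GB = 1024 ** 3
KB = 1024

def ios_per_gb(avg_io_size_kb: float) -> int:
    # I/O operations required to move 1 GB at a given average I/O size.
    return round(GB / (avg_io_size_kb * KB))

before = ios_per_gb(24)   # ~43,700 I/Os per GB at a 24 KB average
after = ios_per_gb(45)    # ~23,300 I/Os per GB at a 45 KB average

print(f"24 KB average: {before:,} I/Os per GB")
print(f"45 KB average: {after:,} I/Os per GB")
print(f"reduction: {1 - after / before:.0%}")  # roughly 47% fewer I/Os per GB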

Keep in mind, this is true not just for SANs with HDDs but for SSDs as well. In a SAN environment, the Windows OS isn't aware of the physical layer or the storage media being used. The I/O overhead from splitting files apart at the logical disk means just as many unnecessary IOPS go to SSD as to HDD; the SSD simply processes that inefficient I/O more quickly than a hard disk drive would.

Diskeeper 15 Server is not a "defrag" utility. It doesn't compete with the SAN for management of the physical layer by instructing the RAID controllers on how to manage the data. Diskeeper's patented proactive approach is the perfect complement to a SAN, ensuring only productive I/O is processed from server to storage to keep physical servers and SAN storage running like new.

With organizations spending tens of thousands of dollars on server and storage hardware and even hundreds of thousands of dollars on large SSD deployments, why give 25% or more performance over to fragmentation when it can be prevented altogether for a mere $400 per physical server at our lowest volume tier?

Try Diskeeper 15 Server for 30 Days ->

Four Reasons to Migrate from Diskeeper Server to V-locity Server

by Robert Woolery 30. July 2013 08:19

Still on Diskeeper Server? Here are four reasons to consider migrating to V-locity Server:

1. High performance. Whereas Diskeeper® Server, highlighted by IntelliWrite® technology, keeps Windows servers running like new, V-locity® Server goes a step beyond split I/O elimination with the inclusion of a server-side caching engine (IntelliMemory) for performance boosts of 50% or more. With frequently-accessed data dynamically cached within available server resources, hot data no longer trudges the full distance from server to storage and back, consuming unnecessary bandwidth.

With IntelliWrite preventing split I/Os on write requests and IntelliMemory caching active data on reads, this holistic approach to I/O optimization accelerates the entire IT infrastructure, since unnecessary I/O traffic is now eliminated before it is pushed through server, network and storage. (A minimal sketch of this read-caching idea appears after the list below.)
 

2. Network storage. Whereas Diskeeper Server is ideal for local server storage or direct-attached storage (DAS), V-locity Server is designed for network storage (SAN/NAS) since all I/O optimization occurs at the Windows OS layer, leaving the storage device untouched. With IntelliWrite, V-locity Server proactively eliminates split I/Os as close to the application as possible, and by caching active data within available server memory, IntelliMemory eliminates even more unnecessary I/O—preventing I/O traffic from traveling the full distance to storage and back. Since the storage subsystem is now processing considerably less I/O, bottlenecks are eliminated and more bandwidth is available. 

3. Solid-state storage. Already running solid-state in your storage arrays or server PCI-E? V-locity sits at the top of the technology stack at the Windows OS layer so the entire infrastructure—regardless of vendor—reaps the benefit of I/O optimization downstream. V-locity is proactive—meaning it prevents the surplus of unnecessary I/O from ever being created in the first place. This way, your SSD or HDD media isn’t dealing with the I/O mess after it has already wreaked havoc on your environment.

4. Benefit analysis. Unlike Diskeeper, V-locity comes with an embedded performance benchmark that allows users to see the before/after benefit of V-locity in their real-world environment and share the outcome with stakeholders prior to any kind of purchase commitment. This single-page report provides metrics like workload comparison, I/Os per second, latency, and more. 
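
Before wrapping up, here is the minimal read-cache sketch referenced in reason 1. It illustrates only the general pattern of serving hot blocks from server memory; it is not IntelliMemory's actual design, and the read_from_storage placeholder and cache size are hypothetical.

from collections import OrderedDict

def read_from_storage(block_id: int) -> bytes:
    # Placeholder for a slow read that travels all the way to SAN/NAS storage.
    return b"..."

class ReadCache:
    # Tiny LRU read cache: hot blocks are served from server memory, so
    # repeat reads never travel to the storage layer at all.

    def __init__(self, capacity_blocks: int = 1024):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()

    def read(self, block_id: int) -> bytes:
        if block_id in self.blocks:
            # Cache hit: refresh recency and skip the trip to storage.
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        # Cache miss: fetch from storage and keep a copy in memory.
        data = read_from_storage(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        return data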

For high performance in environments that leverage advanced storage technologies, V-locity Server is the best bet to maximize your existing hardware investment and eliminate performance bottlenecks overnight.

Windows 8 Released

by Alex Klein 29. October 2012 05:35

Microsoft officially released the next version of Windows last week – Windows 8. While this new release contains various technological advancements, issues with I/O performance and their effect on Windows systems still remain.

Every I/O operation that occurs takes a measurable amount of time. There's no such thing as an instant I/O request, and simply put, the more I/Os necessary, the longer it will take for Windows to complete a particular task.

To understand why this is still an issue on Windows 8 and even Windows Server 2012, let's explore a bit deeper. When data is written within the Windows file system, it is naturally written in a non-optimized way. Thus, when an application requests the data, the initial I/O request generally gets broken down into many additional requests (called split I/Os), which increases the time necessary to retrieve the information. As this activity occurs day after day, more and more I/O requests are required, increasingly impacting the performance of your servers and workstations.
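
If you want to gauge how much of your own traffic is being split, Windows exposes a built-in "Split IO/Sec" performance counter. The sketch below, which assumes a C: volume and uses the typeperf tool that ships with Windows, samples that counter from Python.

import subprocess

# Collect ten one-second samples of split-I/O activity on the C: volume.
counter = r"\LogicalDisk(C:)\Split IO/Sec"
result = subprocess.run(
    ["typeperf", counter, "-si", "1", "-sc", "10"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # CSV output: timestamp followed by split I/Os per second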

The Windows built-in optimization tool, which is set to run on a weekly basis, attempts to handle the mounting I/O traffic, but that’s after you’ve already experienced the performance impact in the first place. For example, say I’m working on a project on a Tuesday afternoon – how is running the built-in optimization utility on Wednesday going to address this concern?

Proactive Windows I/O acceleration is the key to successful operations and improved response time for users, which is why Condusiv Technologies created our Diskeeper product. Diskeeper's InvisiTasking and IntelliWrite technologies help prevent the vast majority of extra I/O requests from occurring and do so without taking any additional resources from the system or other applications. This ensures that the fewest possible I/Os travel to storage and allows your applications to run that much faster.

 
In fact, recent independent testing by openBench Labs shows up to 98% fewer I/O requests, server throughput increased by 130%, and data throughput up to 5X faster on workstations. You can read more of this report here.

SSDs and Defrag

by Alex Klein 3. August 2012 06:32

We recently responded to a forum post on our YouTube channel regarding SSDs and Defragmentation - you can view the video here: http://www.youtube.com/watch?v=hznCSqb4Mzg


Below are some "before and after" graphs that provide proof that fragmentation affects SSDs:

[Before-and-after performance graphs]

Defrag | Diskeeper | SSD, Solid State, Flash | Windows 7

The Secret to Optimizing the Big Virtual Data Explosion

by Alex Klein 29. May 2012 09:21
In today's day and age, many SMBs and enterprise-level businesses are "taking to the skies" with cloud computing. These companies realize that working in the cloud comes with many benefits – including reduced cost of in-house hardware, ease of implementation and seamless scalability. However, as you will discover below, performance-impacting file fragmentation and the need for defragmentation still exist and are actually amplified in these environments. Based on these factors, fragmentation must now be addressed with a two-fold proactive and preventative solution.

Let's face it – we generate a tremendous amount of data, and it's only the beginning. In fact, findings in a recent IDC study titled "Extracting Value from Chaos" predict that in the next ten years we will create 50 times more information and 75 times more files. Regardless of destination, most of this data is generated on Windows-based computers, which are known to fragment files. Files therefore become fragmented before they even reach the cloud: as they are worked with, they get broken up into pieces and scattered to numerous locations across the hard disk. The result is increased time to access these files and degraded system performance.

So how does the above scenario affect the big picture? To understand this, let’s take a closer look at your cloud environment. Your data, and in many cases, much of your infrastructure, has “gone virtual”. Users are able to access applications and work with their data basically anywhere in the world. In such an atmosphere, where the amount of RAM and CPU power available is dramatically increased and files are no longer stored locally, how can the need for defragmentation still be an issue?

Well, what do you think happens when all this fragmented data comes together? The answer is an alarming amount of fragmented Big Data that’s now sitting on the hard drives of your cloud solution. This causes bottlenecks that can severely impact your mission-critical applications due to the large-scale unnecessary I/O cycles needed to process the broken up information.

At the end of the day, traditional approaches to defragmentation just aren't going to cut it anymore, and it's going to take the latest software technology implemented on both sides of the cloud to get these issues resolved. It starts with software, such as Diskeeper 12, installed on every local workstation and server to prevent fragmentation at its core. Added to this is deploying V-locity software across your virtualized network. This one-two punch of defragmentation software addresses I/O performance concerns, optimizes productivity and will push cloud computing further than you ever thought possible. In these exciting times of emerging new technologies, cloud computing can send your business soaring or keep it grounded - the choice is up to you.

Big Data | Cloud | Defrag | Diskeeper | virtualization | V-Locity
