Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

For and By

by karen 23. August 2011 04:29

Diskeeper has long been a popular complement to design and engineering software systems, including at the most popular design and engineering software company itself:

“Diskeeper indeed provided the ROI we expected. We are very satisfied. I have been using your product on my personal desktop and at work for the past few versions. It always proved to be a valuable product for transparent defrag and overall performance. 

Prior to installing Diskeeper our systems were almost unresponsive. This easy deployment was done right away as Diskeeper was much needed. It is a fine product that we rely upon.  It was reported to me that the automatic defrag did not take any visible resources or produce any slower performance which was really appreciated. 

The automatic defrag also greatly helped overall performance of file system access. One of the folders has millions of files in it with no subfolders, and it was almost inaccessible before the Diskeeper implementation due to the fragmentation level being very high."

- Marc-Andre, Autodesk   

Autodesk, Inc., is a leader in 3D design, engineering and entertainment software. Customers across the manufacturing, architecture, building, construction, and media & entertainment industries—including the last 16 Academy Award winners for Best Visual Effects—use Autodesk software to design, visualize, and simulate their ideas before they’re ever built or created. From blockbuster visual effects and buildings that create their own energy to electric cars and the batteries that power them, the work of our 3D software customers is everywhere you look. Since its introduction of AutoCAD software in 1982, Autodesk continues to develop the broadest portfolio of state-of-the-art 3D software for global markets.

Tags:

Diskeeper | Success Stories

Two Benefits of V-locity on Virtual Platforms

by karen 16. August 2011 03:36

A major concern with guest operating systems in a virtualized environment is the resource contention introduced as each virtual guest operates independently of the others.

With respect to storage, Diskeeper Corporation’s V-locity 3.0 virtual platform disk optimizer mitigates over-utilization in two ways:

First, each V-locity Guest installation coordinates resource scheduling with a centrally installed V-locity Host Agent. These guests automatically discover the V-locity Host Agent after installation. Once connected in this fashion, V-locity Guests cooperate to complete their defragmentation tasks in the manner most efficient for the virtual server's resources as a whole.

Second, the automatic zeroing of free space feature ensures that unused space on virtual drives is zeroed out and compacted in such a way that when a virtual guest is migrated to a different virtual server via VMotion, only the allocated data is transferred.  This speeds up the VMotion process and decreases the load on the shared storage subsystem.
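The effect of free-space zeroing on a migration can be illustrated with a small sketch. This is not V-locity's implementation; the block size and data layouts below are invented for illustration. The idea is that a sparse-aware copier skips all-zero blocks, so zeroing stale data in free space directly reduces the number of blocks that must cross the wire:

```python
# Illustrative sketch: a sparse-aware copy transfers only non-zero blocks,
# so zero-filling unused space shrinks what a migration has to move.

BLOCK = 4096  # assumed block size for this example


def blocks_to_transfer(image: bytes) -> int:
    """Count non-zero blocks, i.e. blocks a sparse-aware copy must move."""
    zero = bytes(BLOCK)
    count = 0
    for off in range(0, len(image), BLOCK):
        if image[off:off + BLOCK].ljust(BLOCK, b"\x00") != zero:
            count += 1
    return count


# A 10-block virtual disk where deleted-file residue still occupies 6 blocks:
stale = b"x" * BLOCK * 6 + bytes(BLOCK * 4)
# The same disk after zero-filling free space: only 2 blocks of live data:
zeroed = b"x" * BLOCK * 2 + bytes(BLOCK * 8)

print(blocks_to_transfer(stale))   # 6 blocks must move
print(blocks_to_transfer(zeroed))  # 2 blocks must move
```

In the sketch, zeroing cuts the transfer from 6 blocks to 2, which is the same reason a VMotion of a zeroed-and-compacted guest moves only allocated data.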

"RainWorx specializes in Online Auction Software. Most of our hosted web sites belong to our Auction Software customers. As such, every auction listing (for each customer, on each server) has one or more images associated with it, so we have a very high volume of images being uploaded, which could potentially create a lot of fragmentation."

Bill Moller

RainWorx

Tags:

Success Stories | V-Locity

Diskeeper Dramatically Shortened Backup Times and Kept Them Short

by Colleen Toumayan 5. April 2011 05:22

Daiwa Odakyu Construction Co., Ltd.  

Problem 

Daiwa Odakyu Construction Co., Ltd backed up 15 servers every day using Symantec Backup Exec, but the backups grew increasingly slow. The system was originally designed to finish a backup within 3 hours at most, yet the time grew to 8 hours. They thought additional hardware was needed but had no budget for it. They then discovered the cause of the latency was fragmentation. First they tried to resolve the fragments with the built-in defragmenter, but it resolved only 20%. Next they tried free defrag software; it reached 35% and shortened backup time by more than an hour, but two days later the backup time was back to 8 hours.

Solution 

Mr. Kawata, manager of the Information System Group at Daiwa Odakyu, found Diskeeper® on the internet and downloaded trialware from SOHEI. Diskeeper resolved all the fragmentation and shortened backup times to 1.3 hours. What's more, IntelliWrite® prevented up to 99% of fragmentation. In their trial, running the system without Diskeeper added 1 hour a day to backup times.

 

                           Backup time (hours)   Number of fragments    Fragments prevented
Initial design             3.0                   -                      -
Fragmented                 8.5                   more than 10,000,000   -
After defrag by Diskeeper  1.3                   250                    3,000,000–7,200,000
Improvement                85%                   99.9975%               (99% fragmentation prevention rate)

“I never imagined fragmentation could have such a big impact on a volume that was only 50% used. The power of Diskeeper is astonishing. The massive fragmentation I had given up trying to resolve is now easily defragmented and prevented by Diskeeper, and this completely eliminates the backup latency.

“The trial without Diskeeper, which added 1 hour a day to backup time, alarmed me. As the number of files increases, backups cause massive fragmentation and the situation becomes dangerous. An investment of less than $1,000 in Diskeeper is worth more than $10,000 in servers. I think Diskeeper is vital for file servers and backup servers."

“I’m thankful to the developers of this surprising software.” 

System: 

HP DL380G4, Xeon E5430, 4 GB memory, 410 GB SAS RAID 10 (146 GB SAS × 6)

HP StorageWorks MSA60, 3.41 TB SATA RAID 10 (750 GB SATA × 10)

 

Tags:

Defrag | Diskeeper | Success Stories

Fragmentation and Data Corruption

by Michael 31. March 2011 04:54

Diskeeper (data performance for physical systems) and V-locity (optimization for virtual systems) are designed to deliver performance, reliability, longer life and energy savings. Increased performance and saved energy from our software are relatively easy to empirically test and validate. Longer life is a matter of minimizing wear and tear on hard drives (MTTF) and providing an all around better experience for users so they can continue to be productive with aging equipment (rather than frequent hardware refreshes).

Reliability is far more difficult to pinpoint as the variables involved are difficult, if not impossible, to isolate in test cases. We have overwhelming anecdotal evidence from customers in surveys, studies, and success stories that application hangs, freezes, crashes, and the sort are all remedied or reduced with Diskeeper and/or V-locity.

However, there is a reliability "hard ceiling" in the NTFS file system: a point at which fragments/file attributes become so numerous that reliability is jeopardized. In NTFS, files that hit the proverbial "fan," and spray out into hundreds of thousands or millions of fragments, result in a mess that is, well... stinky.

In short, fragmentation can become so severe that it ultimately ends up in data loss/corruption. A Microsoft Knowledge Base article describes this phenomenon. I've posted it below for reference:

A heavily fragmented file in an NTFS file system volume may not grow beyond a certain size, due to an implementation limit in the structures that are used to describe the allocations.

In this scenario, you may experience one of the following issues:

When you try to copy a file to a new location, you receive the following error message:
In Windows Vista or in later versions of Windows
The requested operation could not be completed due to a file system limitation
In versions of Windows that are earlier than Windows Vista
insufficient system resources exist to complete the requested service
When you try to write to a sparse file, Microsoft SQL Server may log an event in the Application log that resembles the following:
In Windows Vista or in later versions of Windows
Event Type: Information
Event Source: MSSQLSERVER

Description: ...
665(The requested operation could not be completed due to a file system limitation.) to SQL Server during write at 0x000024c8190000, in filename...
In versions of Windows that are earlier than Windows Vista
Event Type: Information
Event Source: MSSQLSERVER

Description: ...
1450(Insufficient system resources exist to complete the requested service.) to SQL Server during write at 0x000024c8190000, in file with handle 0000000000000FE8 ...
When a file is very fragmented, NTFS uses more space to save the description of the allocations that is associated with the fragments. The allocation information is stored in one or more file records. When the allocation information is stored in multiple file records, another structure, known as the ATTRIBUTE_LIST, stores information about those file records. The number of ATTRIBUTE_LIST_ENTRY structures that the file can have is limited.

We cannot give an exact file size limit for a compressed or a highly fragmented file. An estimate would depend on using certain average sizes to describe the structures. These, in turn, determine how many structures fit in other structures. If the level of fragmentation is high, the limit is reached earlier. When this limit is reached, you receive the following error message:

Windows Vista or later versions of Windows:
STATUS_FILE_SYSTEM_LIMITATION The requested operation could not be completed due to a file system limitation

Versions of Windows that are earlier than Windows Vista:
STATUS_INSUFFICIENT_RESOURCES insufficient system resources exist to complete the requested service

Compressed files are more likely to reach the limit because of the way the files are stored on disk. Compressed files require more extents to describe their layout. Also, decompressing and compressing a file increases fragmentation significantly. The limit can be reached when write operations occur to an already compressed chunk location. The limit can also be reached by a sparse file. This size limit is usually between 40 gigabytes (GB) and 90 GB for a very fragmented file.  

WORKAROUND
For files that are not compressed or sparse, the problem can be lessened by running Disk Defragmenter. Running Disk Defragmenter will not resolve this problem for compressed or sparse files.
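The KB article's 40–90 GB estimate can be roughed out with back-of-envelope arithmetic: the maximum file size is (how many extents the metadata can describe) × (average extent size). Every constant in the sketch below is an illustrative assumption, not a documented NTFS value, but it shows why heavier fragmentation (smaller average fragments) lowers the ceiling:

```python
# Back-of-envelope model of the NTFS size ceiling for fragmented files.
# All numbers here are illustrative assumptions, not NTFS constants.

mapping_pair_bytes = 6          # assumed avg bytes to describe one extent
file_record_payload = 1024      # assumed usable bytes per 1 KB file record
attr_list_entries_max = 10_000  # assumed cap on ATTRIBUTE_LIST entries

# Extents per file record, and the total extents the metadata can describe:
extents_per_record = file_record_payload // mapping_pair_bytes  # ~170
max_extents = attr_list_entries_max * extents_per_record        # ~1.7 million


def max_file_size_gb(avg_fragment_kb: float) -> float:
    """Ceiling = number of describable extents x average extent size."""
    return max_extents * avg_fragment_kb / (1024 * 1024)


# Smaller average fragments mean the extent budget is exhausted at a
# smaller file size -- the more fragmented the file, the lower the limit:
print(round(max_file_size_gb(56), 1))  # ~90.8 GB with 56 KB fragments
print(round(max_file_size_gb(25), 1))  # ~40.5 GB with 25 KB fragments
```

With these made-up parameters the ceiling lands in the same 40–90 GB band the KB article cites, purely as a consequence of the fixed extent-description budget.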

Tags:

Defrag | Diskeeper | Success Stories | V-Locity

All Around the World (part deux)

by Colleen Toumayan 18. February 2011 07:07

 

“I am the person who proposed Diskeeper a few years ago in our company, because we had people complaining about slow machines. Most of the time the problem was related to hard disks that were reading/writing non-stop. I tried the built-in defragmenter a few times; it helped reduce the slowness, but only for a short time. So I looked for a better product and found Diskeeper.

I contacted Diskeeper UK and we had the pleasure of dealing with an employee who arranged an evaluation version of Diskeeper and Diskeeper Administrator for us to test in our company.

We have a high number of computers, 16,000, which are now running smoothly. IntelliWrite does a good job preventing fragmentation. The number of calls about slow machines has dropped, although we never had real measurements of Diskeeper's performance. I am very curious about Diskeeper 2011 and what more it can bring over this version. Diskeeper works and it is a good product. The price is also very good."

Marc Vanderhaegen, SNCB (Société nationale des Chemins de fer belges)

Desktop Management

Brussels, Belgium

Tags:

IntelliWrite | Success Stories
