Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Major Performance Improvement after Running Diskeeper on DFS server

by Karen 4. May 2011 05:53

“We are now running Diskeeper 2011 EnterpriseServer on our DFS server, which houses all our corporate documents. This server holds about 500GB of files and was heavily fragmented prior to installing Diskeeper.

“Users were experiencing latency issues, login delays (roaming profiles) and sometimes apps would freeze upon saving to that server. After running Diskeeper for a few weeks, there was a major performance improvement. We went from getting several calls a week prior to the Diskeeper install, down to a couple of calls a month. 

“The other server is a backup repository that houses our SQL backups, BackupExec files and other archived info. This server has over 10TB of storage. Since running Diskeeper, the overall performance of the server and backup times improved greatly.” 

Sandy Hyde, Senior Systems Analyst, Business Support Services, Family Insurance Solutions

Tags:

Defrag | Diskeeper

Diskeeper 2011 - Software So Evolutionary, Where Can They Go From Here?

by Colleen Toumayan 26. April 2011 04:34

Diskeeper 2011 was covered on Wugnet. Howard Sobel stated:

“They introduced technology that slowed down and prevented fragmentation in Diskeeper 2010 so I thought it was impossible to improve on the concept of defrag much more. Not so! By increasing the efficiency of their algorithms, they have decreased the wear and tear on your hard disk and your computing performance while it works in the background. Decreasing the overall disk activity also decreases your electrical consumption. This may not be a huge savings on "your" electric bill but consider how much savings this amounts to in datacenters where thousands of hard disks are used. So being GREEN doesn't mean paying a penalty. In fact, the opposite is true for Diskeeper. I would go as far as declaring this the software utility "Product of the Year" if we didn't have 8 more months to go in 2011.”

The full article is located here: http://www.wugnet.com/tips/this_week.asp

 

Tags:

Defrag | Diskeeper | IntelliWrite

Nice article on CTOEdge about Diskeeper 2011

by Colleen Toumayan 18. April 2011 15:39

Michael Vizard, industry leader and IT editor, wrote an article on Diskeeper 2011, stating:

"But as more applications begin to share the same IT infrastructure thanks to the advent of virtualization and cloud computing, the more fragmentation becomes an I/O performance optimization issue."

 The full article is located here: http://www.ctoedge.com/content/intelligent-disk-defragmentation

Tags:

Defrag | Diskeeper | Diskeeper TV

Diskeeper Dramatically Shortened Backup Times and Kept Them Minimized

by Colleen Toumayan 5. April 2011 05:22

Daiwa Odakyu Construction Co., Ltd.  

Problem 

Daiwa Odakyu Construction Co., Ltd. backed up 15 servers every day using Symantec Backup Exec, but backup latency became a problem. The system was originally designed to finish backups within 3 hours at most, yet the backup window had grown to 8 hours. They assumed additional hardware was needed but had no budget for it. They then discovered that the cause of the latency was fragmentation. First they tried to resolve the fragments with the built-in defragmenter, but it handled only 20% of them. Next they tried a free defrag utility; it got to 35% and shortened the backup time by more than 1 hour, but two days later the backup time was back to 8 hours.

Solution 

Mr. Kawata, manager of the Information System Group at Daiwa Odakyu, found Diskeeper® on the internet and downloaded trialware from SOHEI. Diskeeper resolved all of the fragmentation and shortened backup times to 1.3 hours. What’s more, IntelliWrite® prevented up to 99% of fragmentation before it occurred. In their trial, running the system without Diskeeper added 1 hour a day to backup times.

 

                            Backup time (hours)   Number of fragments    Fragments prevented
Initial design              3.0                   -                      -
Fragmented                  8.5                   More than 10,000,000   -
After defrag by Diskeeper   1.3                   250                    3,000,000–7,200,000
Improvement                 85%                   99.9975%               (99% fragmentation prevention rate)
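The improvement figures in the bottom row follow directly from the rows above. As a quick sanity check, here is a throwaway C snippet (purely illustrative; it uses only the backup times and fragment counts reported in the table):

```c
#include <stdio.h>

int main(void)
{
    /* Figures taken from the table above */
    double before_hours = 8.5, after_hours = 1.3;           /* backup time    */
    double before_frags = 10000000.0, after_frags = 250.0;  /* fragment count */

    /* (8.5 - 1.3) / 8.5 = 0.847..., which rounds to the 85% shown */
    printf("backup time improvement: %.0f%%\n",
           (before_hours - after_hours) / before_hours * 100.0);

    /* (10,000,000 - 250) / 10,000,000 = 0.999975, i.e. 99.9975% */
    printf("fragment reduction: %.4f%%\n",
           (before_frags - after_frags) / before_frags * 100.0);
    return 0;
}
```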

“I couldn’t have estimated that fragmentation would have such a big impact on a volume that was only 50% used. The incredible power of Diskeeper surprised me. The massive fragmentation I had given up trying to resolve is now easily defragmented and prevented by Diskeeper, and that completely eliminates the backup latency.

“The trial without Diskeeper, which added 1 hour a day to the backup time, scared me. As the number of files increases, backups cause massive fragmentation and the situation becomes dangerous. A less than $1,000 investment in Diskeeper is worth more than $10,000 of servers. I think Diskeeper is vital for file servers and backup servers.”

“I’m thankful to the developers of this surprising software.” 

System: 

HP DL380 G4, Xeon E5430, 4GB memory, 410GB SAS RAID 10 (146GB SAS × 6)

HP StorageWorks MSA60, 3.41TB SATA RAID 10 (750GB SATA × 10)

 

Tags:

Defrag | Diskeeper | Success Stories

Fragmentation and Data Corruption

by Michael 31. March 2011 04:54

Diskeeper (data performance for physical systems) and V-locity (optimization for virtual systems) are designed to deliver performance, reliability, longer life, and energy savings. The increased performance and energy savings from our software are relatively easy to test and validate empirically. Longer life is a matter of minimizing wear and tear on hard drives (improving MTTF) and providing an all-around better experience for users so they can stay productive on aging equipment (rather than requiring frequent hardware refreshes).

Reliability is far more difficult to pinpoint, as the variables involved are difficult, if not impossible, to isolate in test cases. We have overwhelming anecdotal evidence from customers in surveys, studies, and success stories that application hangs, freezes, crashes, and the like are all remedied or reduced with Diskeeper and/or V-locity.

However, there is a reliability "hard ceiling" in the NTFS file system: a point at which fragments and file attributes become so numerous that reliability is jeopardized. In NTFS, files that hit the proverbial "fan" and spray out into hundreds of thousands or even millions of fragments make a mess that is, well... stinky.

In short, fragmentation can become so severe that it ultimately ends up in data loss/corruption. A Microsoft Knowledge Base article describes this phenomenon. I've posted it below for reference:

A heavily fragmented file in an NTFS file system volume may not grow beyond a certain size, because of an implementation limit in the structures that are used to describe the allocations.

In this scenario, you may experience one of the following issues:

When you try to copy a file to a new location, you receive the following error message:
In Windows Vista or in later versions of Windows
The requested operation could not be completed due to a file system limitation
In versions of Windows that are earlier than Windows Vista
Insufficient system resources exist to complete the requested service
When you try to write to a sparse file, Microsoft SQL Server may log an event in the Application log that resembles the following:
In Windows Vista or in later versions of Windows
Event Type: Information
Event Source: MSSQLSERVER

Description: ...
665(The requested operation could not be completed due to a file system limitation.) to SQL Server during write at 0x000024c8190000, in filename...
In versions of Windows that are earlier than Windows Vista
Event Type: Information
Event Source: MSSQLSERVER

Description: ...
1450(Insufficient system resources exist to complete the requested service.) to SQL Server during write at 0x000024c8190000, in file with handle 0000000000000FE8 ...
When a file is very fragmented, NTFS uses more space to save the description of the allocations that is associated with the fragments. The allocation information is stored in one or more file records. When the allocation information is stored in multiple file records, another structure, known as the ATTRIBUTE_LIST, stores information about those file records. The number of ATTRIBUTE_LIST_ENTRY structures that the file can have is limited.

We cannot give an exact file size limit for a compressed or a highly fragmented file. An estimate would depend on using certain average sizes to describe the structures. These, in turn, determine how many structures fit in other structures. If the level of fragmentation is high, the limit is reached earlier. When this limit is reached, you receive the following error message:

Windows Vista or later versions of Windows:
STATUS_FILE_SYSTEM_LIMITATION The requested operation could not be completed due to a file system limitation

Versions of Windows that are earlier than Windows Vista:
STATUS_INSUFFICIENT_RESOURCES Insufficient system resources exist to complete the requested service

Compressed files are more likely to reach the limit because of the way the files are stored on disk. Compressed files require more extents to describe their layout. Also, decompressing and compressing a file increases fragmentation significantly. The limit can be reached when write operations occur to an already compressed chunk location. The limit can also be reached by a sparse file. This size limit is usually between 40 gigabytes (GB) and 90 GB for a very fragmented file.  

WORKAROUND
For files that are not compressed or sparse, the problem can be lessened by running Disk Defragmenter. Running Disk Defragmenter will not resolve this problem for compressed or sparse files.
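As a practical aside (this is not part of the KB article, and not a claim about how Diskeeper works internally): you can get a rough read on how close a file is to this territory by counting its allocation runs with the documented FSCTL_GET_RETRIEVAL_POINTERS control code. Below is a minimal sketch; error handling is pared down, and the run count is only an approximation of the on-disk fragment count:

```c
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

/* Count a file's allocation runs via FSCTL_GET_RETRIEVAL_POINTERS. */
int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    HANDLE h = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    ULONGLONG buf[8192];                     /* 64 KB, 8-byte aligned */
    RETRIEVAL_POINTERS_BUFFER *rp = (RETRIEVAL_POINTERS_BUFFER *)buf;
    STARTING_VCN_INPUT_BUFFER in = { 0 };    /* start at virtual cluster 0 */
    unsigned long long runs = 0;
    DWORD bytes;

    for (;;) {
        BOOL ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                                  &in, sizeof(in), rp, sizeof(buf), &bytes, NULL);
        if (!ok && GetLastError() != ERROR_MORE_DATA)
            break;                           /* resident/empty file, or an error */
        if (rp->ExtentCount == 0)
            break;
        runs += rp->ExtentCount;
        /* resume after the last run returned in this batch */
        in.StartingVcn = rp->Extents[rp->ExtentCount - 1].NextVcn;
        if (ok)
            break;                           /* only ERROR_MORE_DATA means keep going */
    }

    printf("%s: %llu allocation run(s)\n", argv[1], runs);
    CloseHandle(h);
    return 0;
}
```

A file whose run count is climbing into the hundreds of thousands is headed toward the ATTRIBUTE_LIST limit described above and is worth addressing before it gets there.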

Tags:

Defrag | Diskeeper | Success Stories | V-Locity
