Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

The Student Enrollment Blues

by Scott Thomas 21. January 2014 06:58

It was a bad day for Steve Bettoni, Tech Support and IT Procurement Officer of Stockport College. Steve is responsible for the infrastructure that supports the college's enrollment apps, SAP, and Exchange, so you can imagine his level of anxiety on enrollment day—the busiest day for any educational institution—when the system crashed and the finance director got involved.

Steve's team had already spent hours, days, and weeks troubleshooting helpdesk calls related to poor application performance. The team virtualized to get more from legacy systems, but with the increase in I/O traffic, application response time began to suffer.

Steve heard about Condusiv’s V-locity acceleration software and its ability to solve the toughest application performance challenges without adding new hardware. A bit skeptical, he decided to evaluate the solution—and saw enrollment processing times drop from 15 minutes down to 5. Convinced of what V-locity could do for their applications, he deployed it on other VMs and saw a 522% improvement with SAP and an 85% improvement with Exchange.

Read the full story: Stockport College case study.

When Big Data Hurts

by Robin Izsak 5. December 2013 03:38

I recently spoke with Bell Mobility's Adam Moore, a member of the organization's OSS Systems Integration Team. Bell Mobility is Bell Canada's wireless division, employing a multitude of analysts who eat, sleep, and breathe Big Data. They capture metrics and run analytics on call failures, call drops, and call volume—helping the company provide better service to their customers.

Sure, we hear a lot about Big Data these days, but we need these catchphrases to talk about abstract concepts. And Big Data is as important as it is abstract: it represents a smarter way to do business, to create value from all this data we have, and to make better decisions. Big Data enables Bell Mobility directors to pinpoint inefficiencies and see where optimization is needed to maintain optimal services to a broad customer base.

So on our call, Adam told me that things had slowed down. His users were dealing with longer and longer SQL query times, which was impacting their ability to do their jobs. Faced with significant data growth and a need for faster delivery of that data to meet SLAs with their users, Adam's team needed a solution to escalating performance problems, like right now. 

In assessing their options, Adam and team conducted an evaluation of V-locity® VM™. The results? A 61% reduction in I/O to the SAN, which led to 98% faster data processing times. And backups? "They used to run at 10MB per minute and sometimes didn't complete at all. Now they run at 60-120MB per minute and complete consistently." 

Read more about the team's success with V-locity in the Bell Mobility case study.

Tags:

Big Data | Channel | Cloud | General | Hyper-V | IntelliMemory | IntelliWrite | SAN | SSD, Solid State, Flash | Success Stories | virtualization | V-Locity | VMware

SSDs and Defrag

by Alex Klein 3. August 2012 06:32

We recently responded to a question posted on our YouTube channel regarding SSDs and defragmentation - you can view the video here: http://www.youtube.com/watch?v=hznCSqb4Mzg


Below are some "before and after" graphs that show how fragmentation affects SSDs:

[Before-and-after performance graphs omitted]

Tags:

Defrag | Diskeeper | SSD, Solid State, Flash | Windows 7

Big News! Diskeeper Corporation and SanDisk Enter Into Strategic Partnership

by Damian 21. February 2012 07:56

Diskeeper Corporation is pleased to announce that we have recently entered into a worldwide, exclusive agreement with SanDisk. SanDisk will license Diskeeper's industry-leading caching software solutions for solid state drives (SSDs), and will provide these solutions both as standalone software products and bundled with SanDisk's SSD products for client computing applications.

Here's what Diskeeper's CEO had to say: "We see our alliance with SanDisk as a critical driver to accelerate adoption of SSD computing applications. The exceptional performance and endurance of SanDisk's SSDs paired with Diskeeper's ExpressCache and NowOn products offer consumer OEM customers industry-leading performance optimization for Ultrabooks and other computer platforms."

Check out the full article from the NY Times here.

 

Setting the Record Straight - Windows 7 Fragmentation, SSDs, and You

by Howard Butler 21. January 2012 14:50

In today’s well-connected world of electronics and instant communications, I received a text from a friend asking if I had seen the recent PC World magazine (February, 2012). He said it had some tidbits concerning a few of my favorite subjects: system performance, defragmentation, and SSDs. I located a copy here at the office and found the article. As I read the first line, I realized the debate on the virtues of defragmentation, especially on SSDs, will go on indefinitely, because almost no one talks about the issue with supporting hard facts and numbers. Most articles rehash ideas and opinions long since debunked. They continue to surface because very few truly understand the intricacies of the Windows NTFS file system and of the storage media, whether rotating magnetic hard disks or electronic solid state disks.

So let’s set the record straight… Fragmentation is exponentially more of a problem with today’s data explosion. Defragmenting once a week still leaves the user experiencing slowdowns from fragmentation’s degradation effects, and it doesn’t address the issue at the moment files are first written. And yes, never run a traditional defrag on SSDs.

NTFS file and free space fragmentation happens far more frequently than you might guess. It has the potential to happen as soon as you install the operating system. It can happen when you install applications or system updates, access the internet, download and save photos, create e-mail, office documents, and so on. It is a normal occurrence and behavior of the computer system, but it does have a negative effect on overall application and system performance. As fragmentation happens, the computer system and underlying storage perform more work than necessary. Each I/O request takes a measurable amount of time. Even in SSD environments there is no such thing as an “instant” I/O request. Any time an application asks to read or write data and that request is split into additional I/O requests, more work has to be done, and that extra work causes a delay at that very moment. Whoever thought that defragmenting once a month or weekly was good enough simply didn’t understand fragmentation.

Disk drives have gotten faster over the years, but so have CPUs. In fact, the speed gap between hard disks and CPUs has actually widened. This means that applications can get plenty of CPU cycles, but they are still starving to get data from storage. What’s more, the amount of data being stored has increased dramatically. Just think of all those digital photos taken and shared over the holidays. Each photo used to be approximately 1MB in size; now they exceed 15MB, and some go way beyond that. Video editing, rendering, and storage of digital movies have also become quite popular, and as a result applications are manipulating hundreds of gigabytes of data. With a typical disk cluster size of 4KB, a 15MB file could potentially be fragmented into nearly 4,000 extents, meaning nearly 4,000 separate disk I/O requests to read or write the file. No matter what type of storage, it will simply take longer to complete the operation.
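To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python. The 4KB cluster and 15MB file come from the scenario above; the per-request latencies are illustrative assumptions, not measurements:

# Worst-case extent count for a fragmented file, per the scenario above.
# The per-request latencies below are illustrative assumptions only.
FILE_SIZE = 15 * 1024 * 1024   # 15MB photo
CLUSTER_SIZE = 4 * 1024        # 4KB NTFS cluster

# Worst case: every cluster lands in its own extent.
max_extents = FILE_SIZE // CLUSTER_SIZE
print(f"Worst-case extents: {max_extents}")  # 3840 -- "nearly 4,000"

# Each extent costs one I/O request, so per-request overhead multiplies.
for label, per_io_ms in [("HDD, ~10 ms per request", 10.0),
                         ("SSD, ~0.1 ms per request", 0.1)]:
    print(f"{label}: fragmented read ~{max_extents * per_io_ms:,.0f} ms "
          f"of request overhead alone")

The exact figures vary wildly by device; the point is simply that thousands of requests, each with nonzero overhead, add up no matter how fast the medium is.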

Suppose I choose to do some editing of my family videos on a Tuesday evening. Even the built-in defragmentation tool in Windows 7 doesn’t do me much good, because it isn’t scheduled to run until Wednesday morning at 1:00am. That also means quite a bit of fragmentation has built up since it last ran the previous week. Maybe I’ll run it manually, but that can take quite a while, and I’ve wasted time I would rather have spent on my project. Unfortunately, the Windows built-in defragmentation utility doesn’t prevent fragmentation, so even after running it manually I will still wind up with fragmentation and slow access to my newly created files.

I’ve often thought about why Wednesday at 1:00am was chosen as the time to schedule defragmentation. Why isn’t it scheduled all the time? Because there could be system resource conflicts that interfere with getting the task done, or the defragmentation process may have difficulty throttling back under a variety of conditions. Regardless, this wait-a-week approach to cleaning up fragmentation doesn’t really help me when I need it most.
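As an aside, if you're curious when your own machine's scheduled defrag run is due, a quick sketch like this can ask the Task Scheduler (Windows only; the task path below is the standard one for the built-in defragmenter, but verify it on your system):

# Query Windows Task Scheduler for the built-in defrag task's schedule
# and last/next run times. Windows only.
import subprocess

result = subprocess.run(
    ["schtasks", "/Query", "/V", "/FO", "LIST",
     "/TN", r"\Microsoft\Windows\Defrag\ScheduledDefrag"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)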

As pointed out in the article, the built-in defragmenter does not have the technology to properly deal with fragmentation and SSDs. The physical placement of data on an SSD doesn’t matter the way it does on magnetic HDDs; with an SSD there is no rotational latency or seek time to contend with. Many experts therefore assume that fragmentation is no longer a problem, but application data access speed isn’t defined in those terms alone. Each and every I/O request takes a measurable amount of time. SSDs are fast, but they are not instantaneous. The Windows NTFS file system does not behave any differently because the underlying storage is an SSD rather than an HDD, so fragmentation still occurs. Preventing and eradicating fragmentation reduces the number of I/O requests, which speeds up application data response time and improves the overall lifespan of the SSD. In essence, it makes for more sequential I/O operations, which generally outperform random ones.
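The sequential-versus-scattered point is easy to see for yourself. Here is a rough micro-benchmark sketch; the scratch file name is made up, and because the OS page cache serves the second pass, it mostly demonstrates per-request overhead rather than true device seeks:

# Read the same data as one sequential pass vs. thousands of scattered
# 4KB requests (simulating a badly fragmented file). Results are
# machine-dependent; this is an illustration, not a rigorous benchmark.
import os, random, time

PATH = "scratch.bin"          # hypothetical scratch file
SIZE = 64 * 1024 * 1024       # 64MB
CHUNK = 4 * 1024              # 4KB "extents"

with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

offsets = list(range(0, SIZE, CHUNK))
random.shuffle(offsets)       # scatter the read order

with open(PATH, "rb") as f:
    t0 = time.perf_counter()
    f.read()                  # one large sequential read
    sequential = time.perf_counter() - t0

    t0 = time.perf_counter()
    for off in offsets:       # 16,384 small scattered reads
        f.seek(off)
        f.read(CHUNK)
    scattered = time.perf_counter() - t0

print(f"sequential: {sequential:.3f}s, scattered: {scattered:.3f}s")
os.remove(PATH)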

In addition, SSDs require that old data be erased before new data is written over it, rather than simply overwriting the old information as on HDDs. This doubles the wear and tear and can cause major issues with the performance and lifespan of the SSD. Most SSD manufacturers have very sophisticated wear-leveling technologies to help with this. The principal issue is write speed degradation due to free space fragmentation: small free spaces scattered across the SSD cause the NTFS file system to write a file in fragmented pieces to those small available free spaces. The effect is more random I/O traffic, which is slower than sequential operations.
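To illustrate the free-space side of this, here is a toy model of how a file written into scattered free-space holes ends up split into many write extents; the gap sizes are invented for illustration:

# Toy model: a new file must be written into whatever free gaps exist.
# Scattered small gaps force many small writes; one large gap allows one.
def write_extents(file_kb, free_gaps_kb):
    """Greedily fill free gaps; return how many write extents are used."""
    remaining, extents = file_kb, 0
    for gap in free_gaps_kb:
        if remaining <= 0:
            break
        remaining -= min(gap, remaining)
        extents += 1
    if remaining > 0:
        raise ValueError("not enough free space")
    return extents

file_kb = 15 * 1024                     # a 15MB file
scattered = [64] * 300                  # 300 free gaps of 64KB each
contiguous = [15 * 1024]                # one 15MB free gap

print(write_extents(file_kb, scattered))   # 240 fragmented writes
print(write_extents(file_kb, contiguous))  # 1 sequential write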

I think I have clearly made my point. The built-in defragmenter in Windows 7 is not a solution for either the consumer/home user or the enterprise business user. Data access speeds are far more critical in the business world, where time is money. In the enterprise environment there are generally many more files, used by a higher number of users, accessing data across shared storage such as a SAN. Even virtual platforms benefit from the same points covered here. This is why robust solutions such as Diskeeper exist. More information about Diskeeper and the superior technology it offers can be found at http://www.diskeeper.com.
