Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

The Biggest Missed Culprit in SQL Performance Troubleshooting

by Brian Morin 18. February 2015 09:53

"We didn't know how much of our SQL performance was being dampened by the nasty 'I/O blender' effect….."

As it turned out, it was HALF. 

That's right. Their systems were processing HALF as many MB/sec as they should have been, due to the noise of all their VM workloads meeting and mixing at the point of the hypervisor. The first thing the "I/O blender" effect does is tax throughput, so your application performance becomes far more dependent on storage IOPS than it needs to be.

Read the full story of how I.B.I.S., Inc. doubled the performance of their CRM and ERP systems by eliminating the I/O blender effect ->

So what is the "I/O blender" effect and how is it taxing application performance? 

The "I/O blender" effect is a phenomena specific to a virtual server environment where the I/O streams from disparate VMs are "funneled" together at the point of the hypervisor before sending out to storage a very random I/O stream that penalizes overall application performance.

Every organization that has virtualized has experienced this pain. They virtualized their applications only to discover mounting I/O pressure on the backend storage infrastructure. This was the unintended consequence of virtualization: organizations save costs at the compute layer, only to trade those savings away on backend storage, where a forklift upgrade becomes necessary to handle the new random I/O demand.

In the case of I.B.I.S., Inc., their IT Director wanted to look into this problem a little further to see what could be done before reactively buying more storage hardware for improved performance.

"We wanted to try V-locity® I/O reduction software first to see if it could tackle the root cause problem as advertised at the VM level where I/O originates," said Kevin Schmidt, IT Director.

While most IT departments lack monitoring tools that show exactly how much performance is dampened by the "I/O blender" effect, V-locity comes with an embedded benchmark that gives a before/after picture of I/O reduction and demonstrates how much performance improves by combating this problem at the Windows operating system layer.

As it turned out, I.B.I.S., Inc.'s heaviest SQL workloads saw a 120% improvement in data throughput. Before V-locity, it took 82,000 I/Os to process 1GB of data. After V-locity, that number was cut to 29,000 I/Os per GB. Due to the increase in I/O density, instead of taking 0.78 minutes to process 1GB, it now takes only 0.36 minutes.
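The arithmetic behind those figures is easy to verify; this quick sketch (numbers from the case study, the rounding notes are mine) reproduces the reported gains:

```python
# Figures reported in the I.B.I.S., Inc. case study.
ios_per_gb_before, ios_per_gb_after = 82_000, 29_000
min_per_gb_before, min_per_gb_after = 0.78, 0.36

# Fewer I/Os to move the same data means higher I/O density.
io_cut = 1 - ios_per_gb_after / ios_per_gb_before
print(f"I/Os per GB reduced by {io_cut:.0%}")            # ~65%

# Throughput gain: minutes per GB before vs. after.
speedup = min_per_gb_before / min_per_gb_after
print(f"{speedup:.2f}x throughput ({speedup - 1:.0%} improvement)")
# ~2.17x, i.e. ~117%, consistent with the ~120% headline figure
```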

"Since we're no longer dealing with so many small split I/Os and random I/O streams, V-locity has enabled our CRM and ERP systems to process twice the amount of data in the same amount of time. The best part is that we didn't have to spend a single dime on expensive new hardware to get that performance," said Schmidt.

Read the full case study ->

Tags: Disruption, Application Performance, IOPS, virtualization, V-locity

Storage VMotion and GOS fragmentation

by Michael 3. December 2010 06:57

I ran a test here internally to make a point about what does, or more specifically "does not", happen when you VMotion/SVMotion a Windows Guest OS (GOS). We wanted to demonstrate that, while VMware is copying the VM to another host/storage, it does nothing about the internal fragmentation of files in Windows.

We felt this was a valuable demonstration as one of the old (1980s) ways to "fix" fragmentation was to copy off the files/backup, reformat the volume, and then copy back/restore. This offered a degree of success, but required taking the data offline in order to get rid of most of the fragmentation. On a side note, backing up/copying fragmented files takes a lot longer than it would on contiguous and ordered files.

Anyway, S/VMotion is such a cool feature because it works on live VMs. If the VMDK movement somehow did align/reorder files in Windows, it could be a great solution to Windows file system fragmentation! So here's how we tested...

1. Set up two ESX 4.1 servers with iSCSI storage and vCenter with SVMotion capability.

2. Create a Windows 7 VM with a 20 GB thin-provisioned virtual disk on one ESX server's storage (e.g., Storage1).

3. Using an internal tool, create moderate fragmentation on the virtual disk (80k fragments, average fragments per file around 3.0, around 50% free space).

4. Install V-locity with all features (defrag, IntelliWrite, etc.) disabled. This is just so we can run a fragmentation analysis and save the reports.

5. Save the "Before SVMotion" analysis report, and then stop the V-locity Windows service (to make sure it is entirely inactive).

6. Using SVMotion, move the live VM to the other ESX server's storage (e.g., Storage2).

7. Once the move is complete, restart the V-locity Windows service and perform an "After SVMotion" analysis (a rough way to reproduce this before/after comparison with built-in Windows tools is sketched after this list).

8. Save this job report.
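For anyone who wants to reproduce a rough version of this before/after comparison without V-locity's analysis reports, Windows' built-in defrag.exe has an analysis-only mode (/A). A minimal sketch, assuming English-language output and a report label that may differ across Windows versions:

```python
import re
import subprocess

def fragmented_files(volume="C:"):
    """Run defrag's analysis-only mode and pull the fragmented-file count
    out of the report. Needs an elevated prompt on Windows 7 and later.
    The regex assumes English output, and the exact label can vary by
    Windows version, so adjust it to match your own report."""
    report = subprocess.run(
        ["defrag", volume, "/A", "/V"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Fragmented files\s*=\s*([\d,]+)", report)
    return int(match.group(1).replace(",", "")) if match else None

before = fragmented_files()   # inside the guest, before the SVMotion
# ... trigger the Storage VMotion from vCenter and wait for it to finish ...
after = fragmented_files()    # inside the guest, after the move completes
print(f"fragmented files before: {before}, after: {after}")
```

If SVMotion did anything about in-guest fragmentation, the two counts would differ noticeably.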

We saw what we expected, given that S/VMotion leverages Changed Block Tracking (CBT) technology and is block-based, not file-based. I attached the report so you can see the side-by-side analysis data: files in Windows are not defragmented by an SVMotion. Now, that's not to say possible fragmentation of the VMDK files themselves (on VMFS datastores) was not affected, but that's a topic for another post.

Help Your Enterprise Solve Problems Created By New Technologies

by Colleen Toumayan 8. October 2010 05:13

Much has changed in the data center, and yet much remains the same. There’s greater reliance on storage network systems, and virtualization is leveraging more performance from fewer systems.

But at the same time, the majority of server and data center storage remains based on hard drives. File fragmentation is still a concern, too. In fact, fragmentation creates even more complications in the age of SANs and VMs.  

A new article in Processor Magazine details this. Read it here.
